Inflammation Induces TDP-43 Mislocalization and Aggregation TAR DNA-binding protein 43 (TDP-43) is a major component in aggregates of ubiquitinated proteins in amyotrophic lateral sclerosis (ALS) and frontotemporal lobar degeneration (FTLD). Here we report that lipopolysaccharide (LPS)-induced inflammation can promote TDP-43 mislocalization and aggregation. In culture, microglia and astrocytes exhibited TDP-43 mislocalization after exposure to LPS. Likewise, treatment of the motoneuron-like NSC-34 cells with TNF-alpha (TNF-α) increased the cytoplasmic levels of TDP-43. In addition, the chronic intraperitoneal injection of LPS at a dose of 1mg/kg in TDP-43A315T transgenic mice exacerbated the pathological TDP-43 accumulation in the cytoplasm of spinal motor neurons and it enhanced the levels of TDP-43 aggregation. These results suggest that inflammation may contribute to development or exacerbation of TDP-43 proteinopathies in neurodegenerative disorders. Introduction Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disorder characterized by the loss of motor neurons in the brain and spinal cord, causing progressive muscle weakness and typically leading to death by paralysis within a few years.Mutations in over twenty genes are known to be associated with familial forms of ALS [1][2] which account for 10% of all ALS cases.In both familial and sporadic ALS, degenerating neurons are known to present an abnormal accumulation of cytoplasmic inclusions containing ubiquitinated proteins [3].TAR DNAbinding protein (TDP-43) has been identified as a major component of cytoplasmic inclusions in sporadic and most familial ALS cases, as well as in frontotemporal lobar dementia (FTLD) with ubiquitinated inclusions, coupling these two diseases as TDP-43 proteinopathies [4][5][6][7][8][9].Various dominant mutations in TDP-43 have also been linked with familial cases of both ALS and FTLD, confirming the importance of TDP-43 in the pathology of these diseases [10][11][12][13][14][15][16]. Previously, we reported that levels of messenger RNA (mRNA) and protein for TDP-43 and nuclear factor κ B (NF-κB) p65 were higher in the spinal cord of ALS patients than of control individuals [23].Surprisingly, TDP-43 was found to interact with NF-κB p65 in glia and neurons of ALS patients and of transgenic mice overexpressing human wild-type or mutant TDP-43 species.NF-κB is a key component of the innate immune response.This led us to investigate the potential effects of NF-κB activation by inflammatory stimuli on TDP-43 redistribution in various cultured cells including microglia, astrocytes and neurons.It is well established that dysfunction glial cells can contribute to motor neuron damage [24][25][26].Moreover, it is noteworthy that ALS patients exhibit increased levels of lipopolysaccharides (LPS) in the blood as well as an up-regulation of LPS/TLR-4 signaling associated genes in peripheral blood monocytes [27][28]. 
Here, we report that LPS exposure induced cytoplasmic redistribution of TDP-43 in cultured microglia and astrocytes.Similarly, NF-κB activation in motor neuron-like cell line NSC-34 by TNF-α enhanced TDP-43 cytoplasmic level.We also tested the in vivo effect of chronic LPS administration in transgenic mice expressing genomic fragment of human TDP-43 A315T gene (hTDP-43 A315T ) [11][12].Interestingly, the chronic LPS treatment enhanced the cytoplasmic mislocalization and aggregation of TDP-43 in the spinal cord of TDP-43 A315T transgenic mice.These results suggest that chronic brain inflammation may contribute to TDP-43 proteinopathies. Animals used The heterozygous transgenic mouse line expressing the human mutant TDP-43 A315T (hTDP-43 A315T ) has been generated and characterized by us [29,23].All experimental procedures were approved by the Laval University Animal Care Ethics Committee and are in accordance with the Guide to the Care and Use of Experimental Animals of the Canadian Council on Animal Care. Astroglia cultures Primary astroglial cultures from brain tissues of neonatal (P2-P3) mice were prepared as described previously [30].In brief, the brain tissues were stripped of their meninges and minced with scissors under a dissecting microscope in Dulbecco's modified Eagle medium (DMEM).After trypsinization (0.25% trypsin-EDTA (Life Technologies), 10 min, 37°C, 5% CO 2 ), the tissue was triturated.The cell suspension was washed in glial culture medium (DME supplemented with 10% FBS, 1 mM l-glutamine, 1 mM Na pyruvate, 100 U/ml penicillin, and 100 mg/ml streptomycin, non-essential amino acids (all from Life Technologies) and cultured at 37°C, 5% CO 2 in 25 cm 2 Falcon tissue culture flasks (BD, one brain per flask) coated with 10 mg/ml poly-d-lysine (PDL; Sigma-Aldrich) for overnight and then rinsed thoroughly with sterile distilled water.Four to five days later medium was changed and supplemented with 5ng/ml of mouse recombinant macrophage colony stimulating factor (M-CSF, R&D) and every second day thereafter, for a total culture time of 10-14 days.At the moment of tissue dissection the genotype of each pup was unknown.Each brain was then dissected and cultured separately (one brain/25 cm 2 flask).The transgenic pups were identified by DNA extraction from tails and PCR amplification of the human TARDBP gene.After 10-14 days, when the genotype of each pup was known, cultures with the same genotype were pulled together during the separation of astrocytes from microglia. 
Separation of astrocytes from microglia The procedure followed to separate astrocytes from microglia is based in previous published protocols [30].In brief, microglia were shaken off the primary mixed brain glial cell cultures in an orbital shaker set at 200 rpm, 37°C, for 3h.Microglia cells in suspension were collected and re-plated for expansion in glial culture medium supplemented with M-CSF and that had been conditioned on astrocyte cultures.Medium in the microglia cultures was changed every 5-7 days.Cells that remained attached after the orbital shaking, which were mainly a monolayer of astrocytes, were dissociated with 0.5% Trypsin-EDTA diluted in DMEM ratio 1:3 for 15 min at 37°C 5%CO2.In order to eliminate microglia cells contaminating the monolayer of astrocytes, dissociated cells were re-plated and microglia cells were let to attach to the surface of the culture flask for 1h, while astrocytes were still in suspension.Astrocytes were then collected and re-plated with glial culture medium.Medium in astrocytes cultures was changed every 2-3 days.Medium collected from astrocyte cultures were filtered through 0.2 um filters (Millipore) and kept at 4°C.Astrocyte-conditioned glial culture medium was used to maintain the microglia cultures.After expanded for 10-14 days, microglia were re-plated at a density of 20,000 cell/cm 2 and astrocytes at 40,000 cell/cm 2 in 16-well chamber slides (Thermo scientific) for immunocytochemical analysis and 6-well-plates for protein or RNA extraction.Astrocytes and microglia cultures were treated with LPS (Sigma) at different concentration and for different periods as indicated in figures and results section. Intraperitoneal LPS injection in mice To trigger a systemic innate immune response in the CNS, presymptomatic 6-month-old hTDP-43 A315T mice and their non-transgenic (wild-type) littermates received intraperitoneal (i.p.) injection of LPS (1 mg/kg of body weight; from Escherichia coli; serotype 055:B5; Sigma, Saint Louis, MO) diluted in 100 μl of vehicle (Veh) solution (sterile pyrogen-free saline).Mice were i.p. injected once a week for duration of two months.Mice were not exhibiting overt phenotypes due to LPS injection.Control mice were given same volume of saline. Preparation of spinal cord sections for immunohistochemistry After two months of systemic injections, animals were deeply anesthetized by i.p. injection of pentobarbital (50mg/kg) and then rapidly perfused transcardially with 0.9% saline, followed by 4% paraformaldehyde in 0.1 M borax buffer, pH 9.5, at 4°C.Spinal cords were rapidly removed from the animals.Dissected spinal cord tissues were postfixed for 24 h in 4% paraformaldehyde and equilibrated in a solution of PBS-sucrose (20%) for 48 h.Spinal cord tissues were cut in 25 μm thick sections with a Leica frozen microtome and kept in a cryoprotective solution at -20°C.Immunohistochemistry was performed on 25μm-thick sections.TDP-43 was probed (1:200) with Rabbit polyclonal (ProteinTech) or (1:200) mouse monoclonal (Abnova) antibodies.Neu N was immunostained with (1: 500) monoclonal (Invitrogen) antibody. 
Quantitative real-time PCR

Real-time RT-PCR was performed with a LightCycler 480 (Roche) sequence detection system using LightCycler SYBR Green I at the Quebec Genomics Centre. Total RNA was extracted from cell cultures using TRIZOL reagent (Life Technologies) and treated with DNase (QIAGEN) to remove genomic DNA contamination. Total RNA was then quantified using a Nanodrop spectrophotometer, and its purity was verified with a Bioanalyzer 2100 (Agilent Technologies). Gene-specific primers were designed using the GeneTools software (Biotools Inc.). Three genes, Atp5, Hprt1 and GAPDH, were used as internal control genes. The mRNA copy number of each analyzed gene was divided by the mRNA copy number of the housekeeping gene Atp5 in the same sample, in order to take into account variations in sample
size.Then, in order to get the fold changes in the expression of each gene with the LPS-treatment, levels of mRNA in the LPS-treated sample were divided by the levels of mRNA in the respective untreated control.Fold change equal to 1 indicate no effect of LPS in the gene expression, while fold changes higher than 1 indicate that the LPS treatment led to a higher expression of the analyzed gene. Sub-cellular fractionation For sub-cellular fractionation, cells were lysed by a hypotonic buffer (10 mM HEPES-KOH pH 7.6, 10 mM KCl, 1.5 mM MgCl2, 1 mM EDTA, 1 mM EGTA, 0.5 mM DTT, Halt phosphatase inhibitor cocktail (Thermo Scientific) and protease inhibitor (Sigma)) for 30 min on ice.Cells were further broken down by passing 30 times though a 22.5/27 G needle.Membranes were separated from the remaining cellular components by centrifugation at 3000 rpm, 10 min., 4°C.Pellet was then resuspended in extraction buffer (20 mM HEPES-KOH pH 7.6, 25% v/v glycerol, 0.5 mM NaCl, 1.5 mM MgCl2, 1 mM EDTA, 1 mM EGTA, 0.5 mM DTT, Halt phosphatase inhibitor cocktail (Thermo Scientific) and protease inhibitor (Sigma)) and incubated for 60 min at 4°C with rotation.Nuclear fraction was then separated from the cytosolic fraction by ultra centrifugation at 30,000g for 30 min at 4°C.Pellet containing the cytosolic fraction was resuspended in RIPA buffer.Protein concentration was estimated using the Bradford method. Soluble and Insoluble fractionation It was done as previously described by Hart et al, with some modifications as described below [31].Frozen spinal cords of LPS or saline-injected mice were homogenized in ice-cold homogenisation buffer (NP40 lysis buffer) containing 20 mM Tris-HCl, PH 7.4, 150mM NaCL, 1% NP40, 5mM EDTA, 1 mM DTT, 10% glycerol, 1 mM EGTA), freshly supplemented with protease inhibitor cocktail (Sigma) and phosphatase inhibitors (10 mM NaF, 1 mM b-glycerophosphate, 1 mM Na3VO4).Lysates were rotated for 30 minutes at 4°C and then centrifuged at 4°C for 20 minutes at 15,800 g.Supernatants containing salt-soluble fraction were transferred to new tubes.To remove carryovers the pellet was washed once in homogenisation buffer and resuspended in Urea buffer (homogenisation buffer with 8 M urea, supplemented with protease and phosphatase inhibitor) followed by sonication.After spinning the lysate at 4°C for 20 minutes, the supernatant was removed as insoluble fraction.Proteins were quantified by Lowry method. 
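To make the normalization and fold-change arithmetic described in the quantitative real-time PCR section above concrete, here is a minimal sketch in Python. The copy numbers, sample names and two-sample layout are invented for illustration; this is not the authors' analysis code.

```python
# Minimal sketch of the qRT-PCR calculation described above: each gene's copy
# number is divided by the Atp5 housekeeping copy number in the same sample,
# and the fold change is the normalized LPS-treated value divided by the
# normalized untreated value (fold change ~1 means no LPS effect).
# All numbers below are illustrative, not data from the study.

samples = {
    # sample name: {gene: mRNA copy number}
    "astro_untreated": {"hTDP43": 1.2e4, "mTDP43": 0.9e4, "Atp5": 2.0e5},
    "astro_LPS":       {"hTDP43": 1.3e4, "mTDP43": 1.0e4, "Atp5": 2.2e5},
}

def normalized(sample: dict, gene: str, housekeeping: str = "Atp5") -> float:
    """Copy number of `gene` relative to the housekeeping gene in one sample."""
    return sample[gene] / sample[housekeeping]

def fold_change(treated: dict, control: dict, gene: str) -> float:
    """Normalized expression in the treated sample over the untreated control."""
    return normalized(treated, gene) / normalized(control, gene)

for gene in ("hTDP43", "mTDP43"):
    fc = fold_change(samples["astro_LPS"], samples["astro_untreated"], gene)
    print(f"{gene}: fold change = {fc:.2f}")  # values near 1 mean no LPS effect
```

The same divide-by-reference pattern applies, in outline, to the immunoblot densitometry in the Western blots section that follows, with the actin band intensity playing the role of the housekeeping gene.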
Western blots Total protein was extracted from cells using the RIPA buffer (50 mM TRIS-HCl pH7.4, 1 mM EDTA pH 8.0, 150 mM NaCl, 0,1%SDS, 1% NP-40, Halt phosphatase inhibitor cocktail (Thermo Scientific) and protease inhibitor (Sigma)).Protein concentration was estimated using the Bradford method.After denaturation, protein samples were resolved by 12% sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto nitrocellulose membrane (Schleicher & Schuell, Dassel, Germany).Membranes were incubated in blocking solution (5% milk, 0.1% Tween-20 in Tris-buffered saline (TBS) solution) for 1h at room temperature.Membranes were then incubated with a primary antibody diluted in blocking solution at 4°C overnight.After rinsing 3 times with 0.1% Tween-20 in TBS solution, membranes were incubated in a horseradish peroxidase-conjugated secondary antibody made in goat (Jackson ImmunoResearch Laboratories Inc., Baltimore, PA, USA) diluted 1:5000 in blocking solution.After rinsing 3 times with 0.1% Tween-20 in TBS solution, immunoreactive proteins were visualized by chemiluminescence with the Renaissance kit (PerkinElmer Life Sciences, Waltham, MA, USA).Primary antibodies used were the following: (1) Anti human TDP-43 monoclonal antibody (1: 1000, Abnova) (2) anti-TDP43 rabbit polyclonal antibody (1:4000, ProteinTech #10782-2-AP), which reacts with both human (transgenic) and mouse (endogenous) proteins and (3) anti-actin mouse monoclonal (1:25000, Chemicon).Bands intensity was measured using Image J 1.45p (NIH, USA).The intensity of all TDP-43 bands was divided by the intensity of the respective actin band to take in account the differences in the protein loading.Fold change is the ratio of the band intensity in the LPS-treated cultures over the band intensity in the respective untreated controls, after normalized by the intensity of the correspondent actin bands. No change in mRNA expression for TDP-43 due to LPS treatment Primary astrocytes and microglia were prepared from neonatal mice as described in Materials and Methods.As schematic representation of the approaches is shown in S1 Fig. 
Morphological changes of astrocytes and microglia in response to LPS treatment confirmed the activation of these cells. As expected, microglia from C57Bl6 (non-transgenic) mice and from hTDP-43 A315T transgenic mice went from a 'resting' form with long branching processes and a small cell body, in the absence of LPS, to a large amoeboid form when treated with LPS (Fig 1A). LPS treatment of astrocytes or microglia did not cause significant changes in mRNA levels for the endogenous TDP-43 or the human TDP-43 mutant, as detected by real-time quantitative RT-PCR (Fig 1B).

Higher levels of TDP-43 protein in LPS-activated astrocytes and microglia

Although TDP-43 mRNA levels were not altered by LPS treatment, immunoblotting revealed higher amounts of TDP-43 protein in astrocytes and microglia treated with LPS (Fig 2A-2D). We carried out the immunoblots with a widely used polyclonal antibody against TDP-43 [4,5,8,6,7,29,23], which reacts with both mouse and human TDP-43 species and yields a band at 43 kDa corresponding to the unmodified TDP-43.

Accumulation of TDP-43 protein in the cytoplasm of LPS-activated astrocytes and microglia

To test the effect of LPS on the distribution of TDP-43 between cytoplasm and nucleus in glial cells, primary glial cultures from non-transgenic and transgenic hTDP-43 A315T mice were treated with 1 μg/ml of LPS for 1 day. The sub-cellular localization of TDP-43 in astrocytes and microglia was analyzed by immunocytochemistry. Cell cultures were double stained for TDP-43 and GFAP (astrocyte marker) or CD11b (microglia marker). Cytoplasmic TDP-43 immunofluorescence increased in LPS-treated astrocytes (Fig 3), and most LPS-treated microglia also presented an altered cytoplasmic distribution of TDP-43, with punctate aggregates of TDP-43 resembling those described in samples from patients with ALS and FTLD [4,7]. Although some TDP-43-positive cytoplasmic punctate structures were also observed in LPS-treated microglial cells from non-transgenic mice, the punctate staining was less intense than in LPS-treated microglial cells from transgenic hTDP-43 A315T mice.

Subcellular fractionation of astrocyte cultures was also performed, and nuclear and cytoplasmic fractions were analyzed by Western blot (Fig 5). TDP-43 was detected using a polyclonal antibody (Proteintech #10782-2-AP) which reacts with both human (transgenic) and mouse (endogenous) TDP-43. The intensity of TDP-43 bands was divided by the intensity of the respective actin band to take into account differences in protein loading. As shown in Fig 5, treatment with LPS resulted in a loss of TDP-43 protein from the nuclear fraction and increased levels of TDP-43 in the cytoplasmic fraction.
As shown in Fig 6, nuclear and cytosolic TDP-43 staining intensities were measured in microglial cultures using the Imaris software on images taken with the confocal microscope. The results show that LPS treatment caused a reduction in nuclear TDP-43 immunostaining and an increase in cytoplasmic TDP-43 immunostaining.

Treatment with TNF-α increases cytoplasmic TDP-43 in NSC-34 cells

The cellular/molecular events that may lead to an abnormal distribution of TDP-43 in neuronal cells are largely unknown. We investigated whether inflammatory stimuli could promote mislocalization of TDP-43 from the nucleus to the cytoplasm in the motor neuron-like NSC-34 cell line. Non-transfected NSC-34 cells and NSC-34 cells stably transfected with hTDP-43WT-HA were treated with recombinant mouse TNF-α at a concentration of 10 ng/mL for 6 h. Cells not treated with TNF-α served as controls. Following treatment, the subcellular distribution of TDP-43 was determined by Western blot analysis complemented by immunocytochemistry. As expected, the majority of TDP-43 was located in the nucleus. However, TNF-α treatment induced cytoplasmic mislocalization of TDP-43 in non-transfected cells as well as in cells stably transfected with hTDP-43WT-HA (Fig 7). Thus, the results suggest that activation of the TNF-α/NF-κB signaling pathway can induce abnormal cytoplasmic mislocalization and aggregation of TDP-43 in neuronal cells.

Chronic LPS administration exacerbated cytoplasmic mislocalization and aggregation of TDP-43 in hTDP-43 A315T transgenic mice

To address whether LPS treatment may also induce TDP-43 mislocalization and aggregation in vivo, 6-month-old non-transgenic mice and hTDP-43 A315T transgenic mice were chronically i.p.
injected once a week with LPS (1 mg/kg) or vehicle solution, starting at 6 months of age, for a period of two months. In ALS, TDP-43 does not remain in its normal nuclear location, but instead forms insoluble aggregates in both the nucleus and cytoplasm of affected neurons [4,7]. In order to analyze the presence of insoluble TDP-43 aggregates after LPS treatment, soluble and insoluble protein fractions of spinal cord lysates were prepared as described earlier [31]. No change was detected in the amount of soluble TDP-43 in the spinal cord after chronic LPS treatment of non-transgenic and hTDP-43 A315T transgenic mice (fold change ~1) (Fig 8C and 8D). In contrast, levels of insoluble TDP-43 were increased after chronic LPS treatment in the spinal cord of non-transgenic and hTDP-43 A315T transgenic mice (fold change > 1) (Fig 8C and 8D). The effect of LPS treatment on TDP-43 aggregation was more pronounced in the hTDP-43 A315T transgenic mice than in non-transgenic mice (Fig 8D).

To further confirm the effect of LPS treatment on the abnormal distribution of TDP-43, spinal cord sections from LPS- and vehicle-treated mice were immunostained with a monoclonal antibody specific for human TDP-43. As revealed by immunostaining, spinal neurons from hTDP-43 A315T transgenic mice exhibited more cytoplasmic TDP-43 immunostaining when injected with LPS than with vehicle. LPS treatment increased the number of neuronal cells with nuclear depletion of TDP-43 and with cytoplasmic TDP-43 aggregates (Fig 9). These in vivo results are in agreement with our in vitro cell culture findings that activation of the NF-κB pathway by inflammatory stimuli (LPS or TNF-α) can promote cytoplasmic mislocalization and aggregation of TDP-43.

Discussion

The results presented here demonstrate that inflammatory stimuli such as TNF-α or LPS can promote cytoplasmic mislocalization and aggregation of TDP-43 in glial and neuronal cells, a proteinopathy similar to what has been observed in ALS cases [7]. Several factors are known to trigger TDP-43 redistribution from the nucleus to the cytoplasm with formation of protein aggregates, including axotomy, cell stressors, and over-expression or mutation of the TDP-43 gene [17,21]. This is the first report of inflammation being a factor that can contribute to TDP-43 proteinopathy.

In this study, we took advantage of transgenic mice bearing a genomic fragment encoding the human mutant TDP-43 A315T linked to familial ALS [11][12]. This transgenic mouse model was previously characterized [23,29]. Unlike other transgenic mice overexpressing TDP-43 species under the control of strong neuronal gene promoters [32][33][34][35], the hTDP-43 A315T mice overexpress the TDP-43 transgene at moderate levels and ubiquitously, under its own promoter. Thus, it was possible to derive from the hTDP-43 A315T transgenic mice primary cultures of astrocytes and microglia expressing the TDP-43 A315T mutant. LPS treatment increased the total amount of TDP-43 protein in both microglia and astrocyte cultures (Fig 2), but without corresponding increases at the mRNA level (Fig 1). Moreover, LPS treatment of microglia and astrocytes enhanced the cytoplasmic mislocalization of TDP-43. In microglia, LPS exposure also led to the formation of cytoplasmic punctate TDP-43 aggregates (Fig 4).
Furthermore, in vivo evidence for an involvement of inflammation in TDP-43 pathology was provided by chronic LPS administration to hTDP-43 A315T transgenic mice starting at 6 months of age. During aging, these transgenic mice exhibit cognitive and motor impairments, as well as progressive formation of the cytoplasmic TDP-43 aggregates characteristic of ALS [29]. After two months of LPS treatment, the hTDP-43 A315T mice exhibited higher levels of insoluble TDP-43 than vehicle-treated mice (Fig 8C and 8D). In addition, measurement of TDP-43 immunostaining intensity in spinal motor neurons revealed that LPS treatment increased the cytoplasmic to nuclear ratio of TDP-43 immunostaining (Fig 8A and 8B). Immunofluorescence microscopy of spinal cord sections with a monoclonal antibody against human TDP-43 showed that LPS treatment resulted in nuclear TDP-43 depletion and in cytoplasmic TDP-43 aggregates (Fig 9). There is evidence that such abnormal cytosolic TDP-43 aggregates can be toxic [36,37].

In summary, the in vitro and in vivo results presented here suggest that inflammation, induced by stimuli of NF-κB signaling such as TNF-α or LPS, may be a mediator of TDP-43 proteinopathy, which constitutes a pathological hallmark of ALS and FTLD [4][5][6][7][8][9]. Chronic immune activation is a common feature of neurodegenerative disorders including ALS. The sources of inflammation in ALS remain to be defined. Notably, ALS patients have increased levels of LPS in the blood as well as an up-regulation of LPS/TLR-4 signaling-associated genes in peripheral blood monocytes [27][28]. The upregulation of TDP-43 levels detected in ALS is a phenomenon that may also enhance the NF-κB response to inflammatory stimuli [23]. In disease, neurons may be damaged as a side effect of inflammation and proinflammatory cytokines. Of particular interest is our finding that TNF-α, a cytokine produced by activated microglial cells, may activate the NF-κB pathway in motor neurons to promote cytoplasmic mislocalization and aggregation of TDP-43 (Fig 7). Apart from the possibility that innate immune activation of microglial cells can damage neurons and aggravate neuronal TDP-43 proteinopathy via TNF-α release, it is remarkable that LPS treatment of cultured microglial cells, even from normal mice, triggered abnormal cytoplasmic aggregation of TDP-43. This raises the possibility that activated microglia might also constitute a source of insoluble TDP-43 aggregates for the seeding and prion-like propagation of TDP-43 aggregates.
Fig 1.No change in mRNA expression for TDP-43 species due to LPS treatment.(A) Representative images of primary astrocytes and microglia cultures double stained for Iba 1(in red, marker of microglia) and GFAP (in green, marker of astrocytes).Nuclei of all cells were stained with DAPI (in blue).Using this double immunostaining we confirmed that we achieved a good separation of astrocytes from microglia, and resulting cultures were ~90% pure in one of the cell types.(B) Total RNA was extracted from LPS-treated (1ug/ml LPS for 1 day) and untreated cultures of astrocytes and microglia.Samples of total RNA were then subjected to real-time quantitative RT-PCR for human TDP-43 (hTDP-43, transgene) and mouse TDP-43 (mTDP-43, endogenous gene).Number of copies of hTDP-43 and mTDP-43 were normalized with the house-keeping gene Atp5 mRNA, in order to take in account variations in samples size.Levels of RNA in the LPS-treated sample were divided by the levels of mRNA in the respective untreated control, in order to get the fold changes in the expression with the LPS-treatment.Fold change equal to 1 indicate no effect of LPS in the expression of the gene, which was seen in most of the cultures analyzed for both mouse and human TDP-43. Fig 2.Higher levels of TDP-43 protein in LPS-activated astrocytes and microglia.Cultured astrocytes and microglia from transgenic (hTDP-43 A315T ) and non-transgenic litters were treated for 1 day with LPS at 1 μg/ml or 0.1 μg/ml.Total protein was extracted from LPS -treated and untreated cultures and analysed by immunoblotting.TDP-43 was detected using the polyclonal antibody (Proteintech # 10782-AP) which reacts with both human (transgenic) and mouse (endogenous) proteins.TDP-43 bands were normalized against actin to take into account the difference in protein loading.The exposure times are actually different between transgenic and non-transgenic blots.Due to higher quantities of TDP-43 (human plus mouse) in the transgenic cell cultures the exposure time was only 30 seconds whereas for the non-transgenic cultures it was 4 minutes.Fold change is the ratio of band intensity in LPS-treated cultures over the band intensity in the respective untreated control.(A) Representative western blot of TDP-43 from LPS treated or untreated astrocyte culture.(B) Quantitative analysis of western blot showed that levels of total TDP-43 was higher in LPS treated transgenic astrocyte culture than nontransgenic culture.(C) Representative western blot of TDP-43 from LPS treated or untreated microglia culture.(D) Quantitative analysis of western blot showed that levels of total TDP-43 were also higher in LPS-treated microglia from transgenic TDP-43 A315T mice than from C57BL6 mice. Fig 3 . Fig 3. Cytoplasmic increase of TDP-43 in LPS-activated astrocytes.Representative images of astrocytes from hTDP-43 A315T transgenic and nontransgenic littermates, treated or not treated with LPS (1 μg/ml) for one day.Astrocytes are double stained for TDP-43 (red) and GFAP (green) while nuclei are stained with DAPI (blue).Immunofluorescence for TDP-43 increased in the cytoplasm of LPS-treated astrocytes in both non transgenic (2 nd panel) and transgenic culture (4 th panel) as compared to the untreated control (1 st and 3 rd panel).However, the overall cytoplasmic increase of TDP-43 was more pronounced in LPS-treated astrocytes from hTDP-43 A315T transgenic mice than in LPS-treated astrocytes from C57Bl6 mice.Arrows point on cells presenting a decrease in nuclear TDP-43.Scale bar 100μm. Fig 4 . Fig 4. 
Cytoplasmic aggregates of TDP-43 in LPS-activated microglia.Representative images of microglia from hTDP-43 A315T transgenic and control littermates, treated or not treated with LPS (1μg/ml) for one day.Microglial cultures were double stained for polyclonal TDP-43 (red) and CD11b (green) whereas the nuclei were stained with DAPI (blue).No cytoplasmic aggregates of TDP-43 were found in untreated microglia from control C57Bl6 mice (1 st panel) or from hTDP-43 A315T transgenic mice (3 rd panel).Treatment with LPS resulted in formation of small cytoplasmic TDP-43 aggregates in microglia from control C57Bl6 mice (2 nd panel) and from hTDP-43 A315T transgenic mice (4 th panel).However, TDP-43 punctate aggregates were more abundant and of higher intense staining in LPS-treated microglia (4 th panel) from hTDP-43 A315T transgenic when compared to microglia from C57Bl6 mice (2 nd panel).Arrowheads point to cells with cytoplasmic aggregates of TDP-43. Fig 5 . Fig 5. Translocation of TDP-43 from the nucleus to the cytoplasm in astrocytes and microglia with LPS-treatment.(A) Nuclear and cytoplasmic protein fractions of astrocytes were analyzed by Western blot.The intensity of the 43kDa TDP-43 bands was divided by the intensity of the respective actin band to take in account the differences in the protein loading.Fold change is the ratio of the 43 kDa band intensity in the LPS-treated cultures over the band intensity in the respective untreated controls, after normalized by the intensity of the correspondent actin bands.doi:10.1371/journal.pone.0140248.g005 Fig 6 . Fig 6.LPS treatment caused changes in nuclear and cytosolic TDP-43 immunostaining intensities in microglia cultures.The staining intensities were measured using the software Imaris on images taken with the confocal microscope Olympus IX81 and the software Olympus Fluoview ver 3.1a.More than 50 cells were analysed per cultures, 3 cultures of transgenic microglia and 3 cultures of non-transgenic microglia (N = 3).Fold changes were calculated by dividing the average intensities in the LPS-treated cultures over the average intensities in the untreated cultures.Graphic bars represent average and standard error of the mean values calculated with software GraphPad Prism 5. doi:10.1371/journal.pone.0140248.g006 Fig 8 . Fig 8. Chronic LPS treatment exacerbated cytoplasmic mislocalization and aggregation of TDP-43 in hTDP-43 A315T transgenic mice.Nontransgenic and transgenic mice expressing hTDP-43 A315T mutant at 6 months of age were i.p. 
injected with LPS (1mg/kg) or vehicle solution once per week up to 2 months.(A) Representative images of spinal cord sections of non-transgenic and hTDP-43 A315T transgenic mice, vehicle and LPS treated were double stained with polyclonal TDP-43 (green) and Neu N (red).(B) Measurement of cytoplasmic to nuclear ratio of TDP-43 staining showed increased ratio in LPS treated transgenic as well as non-transgenic mice (N = 4 per group, 8 sections of spinal cord from 4 different animals for each group, *<0.05, **<0.001 by student's t test.)(C) Chronic treatment of hTDP-43 A315T transgenic mice with LPS enhanced levels of insoluble TDP-43 in spinal cord.Total protein was extracted from spinal cords of LPS or vehicle-treated mice and sub-fractionated into insoluble and soluble fractions.Sub-fractionated samples were then analyzed by Western blot using the polyclonal TDP-43 antibody.(D) The intensity of TDP-43 bands were divided by the intensity of the respective actin band to take in account the differences in the protein loading and then the fold change was calculated.The fold change is the ratio of the band intensity in the LPS-treated mice over the band intensity in the respective vehicle-treated controls.Fold changes are all higher than 1, indicating that LPS treatment led to increased levels of TDP-43 protein.Groups were compared using t-test.*p value = 0.03 by student's t test (N = 4 per group, spinal cord from 4 different animals were used to extract protein for each group).Scale bar 50μm.doi:10.1371/journal.pone.0140248.g008
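The quantification summarised in the legends of Figs 6 and 8 (per-cell cytoplasmic and nuclear staining intensities, their ratio, and a Student's t test between treated and control groups) can be sketched as follows. The intensity values are simulated stand-ins, and the use of SciPy's ttest_ind is an assumption made for illustration rather than a description of the Imaris/GraphPad workflow used in the study.

```python
# Sketch of the image-quantification arithmetic used in Figs 6 and 8:
# per-cell cytoplasmic and nuclear TDP-43 staining intensities give a
# cytoplasmic-to-nuclear ratio, and LPS/vehicle groups are compared with
# Student's t test. All intensities below are invented.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def cyto_nuclear_ratios(cyto: np.ndarray, nuc: np.ndarray) -> np.ndarray:
    """Per-cell cytoplasmic / nuclear mean intensity."""
    return cyto / nuc

# Hypothetical per-cell mean intensities (arbitrary units), >50 cells per group.
vehicle_cyto = rng.normal(20, 4, 60); vehicle_nuc = rng.normal(80, 10, 60)
lps_cyto     = rng.normal(35, 6, 60); lps_nuc     = rng.normal(55, 9, 60)

veh_ratio = cyto_nuclear_ratios(vehicle_cyto, vehicle_nuc)
lps_ratio = cyto_nuclear_ratios(lps_cyto, lps_nuc)

t, p = stats.ttest_ind(lps_ratio, veh_ratio)
print(f"mean ratio vehicle = {veh_ratio.mean():.2f}, LPS = {lps_ratio.mean():.2f}")
print(f"Student's t test: t = {t:.2f}, p = {p:.3g}")
```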
THE GRANULE SIZE DISTRIBUTION INFLUENCE IN NANOCOMPOSITES ON OPTICAL AND MAGNETOOPTICAL SPECTRA

We have investigated the size effect (quasi-classical size effect) in nanocomposites. It is shown that the size effect can change the amplitude, form and sign of the optical and magnetooptical spectra. We have derived formulas for the size effect and discussed the application of granule size distributions for a corrected description of the optical and magnetooptical properties. Taking the granule size distribution into account is essential: it allows the optical and magnetooptical spectra of nanocomposites to be described better, especially in the near IR region governed by intraband electron transitions.

Introduction

Optical and magnetooptical features of nanocomposites are closely connected with size effects [1][2][3]. Describing the optical and magnetooptical spectra of these structures is therefore an important problem, especially in the near IR region, where intraband electron transitions dominate. It has been shown that the size effect can change the amplitude, form and sign of the optical and magnetooptical spectra [2,4]. Calculations that include the distribution of granule sizes in the size effect allow the magnetooptical spectra of layer-by-layer sputtered (Co40Fe40B20)x(SiO2)100-x nanocomposites to be described qualitatively. Nanocomposites are heterogeneous magnetic materials in which ferromagnetic particles are embedded in a para- or diamagnetic dielectric matrix. Nanocomposites are interesting because of the presence of a percolation threshold, near which the electrical, optical and magneto-optical properties change significantly. This structure is a good example for the consideration of this phenomenon [1,2]. Effective medium approaches [1] are used to describe the properties of ferromagnetic nanocomposites. It is necessary to take into account that scattering at the surface of the granules, which leads to the quasi-classical size effect, modifies both the diagonal component ε_xx = ε_1 − iε_2 and the non-diagonal component ε_xy of the dielectric permittivity tensor of a granule, if the average granule size (radius r_0) is comparable with the electron mean free path l. The size effect has a significant influence on the optical and magneto-optical properties of nanocomposites, particularly in the near infrared region of the spectrum, by changing the amplitude, shape and sign of the spectra. The present work discusses the influence of the granule size distribution on the optical and magneto-optical properties of nanocomposites.

Experimental data

The "amorphous ferromagnet - insulator" nanocomposite (Co41Fe39B20)x(SiO2)100-x was obtained at Voronezh State Technical University by ion-beam sputtering in an argon atmosphere. This complex granule composition, Co41Fe39B20, was chosen because the amorphous structure of such a ferromagnet is stable at room temperature. The specimens were produced by ion-beam sputtering in an argon atmosphere, using a vacuum chamber with three ion-beam sources [5].
A composite alloy target was used for the deposition. The Co41Fe39B20 alloy target was manufactured by induction vacuum melting. Cobalt with a purity of 99.98% was used for the preparation of the alloys; the amounts of carbonyl iron and boron correspond to the alloy composition. The composite target was an alloy target attached to a surface plate of single-crystal quartz/alumina. The thickness of the obtained film samples was 0.15-6.5 µm. The composition of the resulting composites was controlled by electron probe X-ray microanalysis. The electrical and magnetoresistive properties of the obtained samples were studied by a two-probe potentiometric method. The measurement of the TKE was carried out by the dynamic method, which consists in detecting the small changes in the intensity of the reflected light when the sample is remagnetized by an alternating magnetic field [see, e.g. 6]. All other experimental details are given in [6].

Calculations and discussion

We treat the granule size distribution in the following way [1]. The granules are considered as spherical particles of radius r_0. The free time of flight of electrons in a granule (τ_part) is shorter than the corresponding time in the massive sample (τ_bulk) because of collisions with the granule surface [1]:

1/τ_part = 1/τ_bulk + v_F/r_0,   (1)

where v_F is the Fermi velocity. Substituting τ_part for τ_bulk in the Drude-Lorentz expression for the intraband conductivity yields the diagonal components of the permittivity tensor with the size-effect influence, ε_mod (equation (2)), where ω is the frequency of the light and ω_p is the plasma frequency. The non-diagonal components of the permittivity tensor with the size-effect influence are obtained in the same way (equation (3)); they involve the saturation magnetization, R_gr, the coefficient of the anomalous Hall effect in the granule, τ_bulk, the free time of the electrons in the bulk material, τ_gr, the free time of the electrons in the granule, ρ_bulk, the resistivity of the bulk material, and ρ_gr, the resistivity of the granules [4].
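A minimal numerical sketch of the size correction above, assuming equation (1) for the granule relaxation time and a generic Drude form for the intraband diagonal permittivity; the parameter values are illustrative and are not the fitted values used in the paper.

```python
# Sketch of the size-effect correction: the relaxation time in a granule
# follows 1/tau_part = 1/tau_bulk + v_F/r0 (equation (1)), and an
# illustrative Drude intraband permittivity is evaluated with tau_part
# instead of tau_bulk. Parameters below are assumptions, not paper values.

import numpy as np

def tau_part(tau_bulk: float, v_fermi: float, r0: float) -> float:
    """Granule relaxation time from the quasi-classical size effect, eq. (1)."""
    return 1.0 / (1.0 / tau_bulk + v_fermi / r0)

def eps_drude(omega: np.ndarray, omega_p: float, tau: float) -> np.ndarray:
    """Generic Drude intraband term: eps(w) = 1 - wp^2 / (w^2 + i*w/tau)."""
    return 1.0 - omega_p**2 / (omega**2 + 1j * omega / tau)

# Illustrative parameters (SI units): plasma frequency, bulk relaxation time,
# Fermi velocity and a 2 nm granule radius.
omega_p, tau_bulk, v_f, r0 = 1.2e16, 1.0e-14, 1.5e6, 2e-9
omega = np.linspace(0.5, 3.0, 6) * 1e15   # near-IR angular frequencies

eps_bulk = eps_drude(omega, omega_p, tau_bulk)
eps_size = eps_drude(omega, omega_p, tau_part(tau_bulk, v_f, r0))

for w, eb, es in zip(omega, eps_bulk, eps_size):
    print(f"w={w:.2e} rad/s: Re(eps) bulk={eb.real:8.1f} -> granule={es.real:8.1f}")
```

The comparison of the two curves shows the qualitative point made in the text: for small granules the intraband contribution, and hence the near-IR part of the spectrum, is strongly modified.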
The size effect also has an impact on the coefficient of the extraordinary Hall effect and on the resistivity. The latter is given by the expression ρ_gr = ρ_bulk(1 + l/r_0), and the influence of the size effect on the coefficient of the extraordinary Hall effect of the granules can be written in terms of R_s, the value of the coefficient of the extraordinary Hall effect for the surface material of the granules [7]. The characteristic dependences of the optical and magneto-optical parameters on granule size are given in [1,2,4]. In the above formulas (1) to (3) a possible distribution of granules by size is not taken into account. At the same time, it is important for the description of the optical and magneto-optical spectra of the nanocomposites, since the diagonal components of the dielectric permittivity tensor are responsible for the optical properties and the non-diagonal components for the magneto-optical properties. This is especially true in the near infrared region of the spectrum, which is governed by intraband transitions [2,4]. Accounting for the size distribution of the granules allows the spectra of the transverse Kerr effect (TKE) to be described more accurately; the TKE for the p-component is calculated from the components in equations (2)-(3), where the angle of incidence of the light and the TKE parameter δ_p enter the formula [8]. We considered a magnetic nanocomposite for which the optical and magneto-optical parameters were measured experimentally [6]. The calculated theoretical TKE spectra without the size effect are shown in Fig. 1.

Fig. 1. The calculated spectra of the transversal Kerr effect of the (Co41Fe39B20)x(SiO2)100-x nanocomposite: angle of incidence 70° (solid line), 40° (dash line) and 10° (dot line) (x = 34%, without size effect).

It is clearly seen that the magnitude of this effect strongly depends on the angle of incidence of light (Fig. 1); the magnitude of the TKE varies by more than an order of magnitude.

Fig. 2. The calculated spectra of the transversal Kerr effect with the size-effect influence compared to experiment: squares - experimental spectrum; dash-dot line - calculated spectrum at 70°; dash line - calculated spectrum at 77°; dot line - calculated spectrum at 83° (x = 34%, L = 0.33, r_0 = 2 nm, R_s/R_bulk = -2.25); solid line - calculated spectrum taking into account the size effect and the particle size distribution at 77°.

Fig. 2 shows the calculated TKE spectra with the size-effect influence compared to the experiment. The size effect can change the amplitude, form and sign of the optical and magnetooptical spectra and allows the experimental data to be described well. For a more accurate description we must take into account the distribution of granules by size. Schematically, such a nanocomposite is shown in Fig. 3.
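The following sections introduce the granule size distribution explicitly. As a simple illustration of how such a distribution can be folded into the calculation, the sketch below averages the size-modified Drude permittivity over a uniform spread of radii. The uniform distribution, the direct averaging of ε itself and all parameter values are assumptions made for illustration only, not the function F(r) used in the paper; the helper functions are repeated from the previous sketch so the block is self-contained.

```python
# Sketch: fold a granule size distribution into the size-effect calculation
# by averaging the size-modified permittivity over a uniform distribution of
# radii between r_min and r_max (illustrative choice only).

import numpy as np

def tau_part(tau_bulk, v_fermi, r):
    return 1.0 / (1.0 / tau_bulk + v_fermi / r)

def eps_drude(omega, omega_p, tau):
    return 1.0 - omega_p**2 / (omega**2 + 1j * omega / tau)

def eps_size_averaged(omega, omega_p, tau_bulk, v_fermi, r_min, r_max, n=200):
    """Average the size-modified Drude permittivity over a uniform
    distribution of granule radii between r_min and r_max."""
    radii = np.linspace(r_min, r_max, n)      # uniform density 1/(r_max - r_min)
    eps_r = np.array([eps_drude(omega, omega_p, tau_part(tau_bulk, v_fermi, r))
                      for r in radii])
    return eps_r.mean(axis=0)

omega = np.array([1.0e15, 2.0e15])            # illustrative frequencies
avg = eps_size_averaged(omega, 1.2e16, 1.0e-14, 1.5e6, 1e-9, 2e-9)
single = eps_drude(omega, 1.2e16, tau_part(1.0e-14, 1.5e6, 2e-9))
print("averaged Re(eps):   ", np.round(avg.real, 1))
print("single-size Re(eps):", np.round(single.real, 1))
```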
Figure 3 shows one of the variants of heterogeneity in layer-by-layer sputtered composites. The ferromagnetic granules that lie closer to the previously deposited composite layers are smaller than those in the bulk of a layer. This is because the films cool after the deposition of each layer. When sputtering of a new layer begins, the ferromagnetic and dielectric phases fall onto the still cold surface of the previous layer and cool immediately, which leads to the formation of small granules. As the layer thickness grows, the sample heats up, which leads to the formation of larger ferromagnetic grains within the layer. Thus, layer-by-layer sputtered composites are not simply bulk composites with randomly distributed granule sizes: they exhibit a periodic layered structure, the characteristics of which strongly influence the magneto-optical properties of the films.

Fig. 3. The particle size distribution in the layers of the deposited nanocomposite (dash line - the boundary between layers of the layer-by-layer sputtered composite).

In the case of a uniform distribution the probability density has the form f(r) = 1/(b - a) for a ≤ r ≤ b and zero otherwise, where, in our case, a is the smallest possible granule size and b is the largest. Denoting by r_0 the smallest size, by r_1 the largest size and by r the current size, the granule size function R(r) can be written in the form R(r) = r_0(1 + F(r)). If the granule size varies by up to a factor of two relative to the original r_0, the function takes the corresponding explicit form. It is important to note that the uniform distribution (6-7) is certainly important, but it is not the general case. Therefore, when considering nanocomposites of various compositions, it is important to find the form of F(r). In Fig. 2 the solid line presents the TKE spectrum calculated with the granule size distribution taken into account, using the function (9). This particle size distribution allows the corresponding experimental data to be described well (Fig. 2).

Conclusion

Spectra of the transversal Kerr effect in the (Co41Fe39B20)x(SiO2)100-x nanocomposite have been calculated. Both the experimental and the theoretical spectra have a local minimum at E ~ 1.5 eV. This is due to the fact that the size effect in the IR spectral region is connected with intraband transitions. A good interpretation of the experimental data is possible only when the size-effect influence is taken into account. Furthermore, for more accurate calculations a distribution function of the granules over size should be used. Thus, the obtained results concerning the size effect and the size distribution of the granules improve the description and explain the amplitude, form and sign of the optical and magnetooptical spectra of nanocomposites, mainly in the near infrared region.
Health impact of monoclonal gammopathy of undetermined significance (MGUS) and monoclonal B-cell lymphocytosis (MBL): findings from a UK population-based cohort

Objective To examine mortality and morbidity patterns before and after premalignancy diagnosis in individuals with monoclonal gammopathy of undetermined significance (MGUS) and monoclonal B-cell lymphocytosis (MBL) and compare their secondary healthcare activity to that of the general population.

Design Population-based patient cohort, within which each patient is matched at diagnosis to 10 age-matched and sex-matched individuals from the general population. Both cohorts are linked to nationwide information on deaths, cancer registrations and Hospital Episode Statistics.

Setting The UK's Haematological Malignancy Research Network, which has a catchment population of around 4 million served by 14 hospitals and a central diagnostic laboratory.

Participants All patients newly diagnosed during 2009-2015 with MGUS (n=2193) or MBL (n=561) and their age-matched and sex-matched comparators (n=27 538).

Main outcome measures Mortality and hospital inpatient and outpatient activity in the 5 years before and 3 years after diagnosis.

Results Individuals with MGUS experienced excess morbidity in the 5 years before diagnosis and excess mortality and morbidity in the 3 years after diagnosis. Increased rate ratios (RRs) were evident for nearly all clinical specialties, the largest, both before and after diagnosis, being for nephrology (before RR=4.29, 95% CI 3.90 to 4.71; after RR=13.8, 95% CI 12.8 to 15.0) and rheumatology (before RR=3.40, 95% CI 3.18 to 3.63; after RR=5.44, 95% CI 5.08 to 5.83). Strong effects were also evident for endocrinology, neurology, dermatology and respiratory medicine. Conversely, only marginal increases in mortality and morbidity were evident for MBL.

Conclusions MGUS and MBL are generally considered to be relatively benign, since most individuals with monoclonal immunoglobulins never develop a B-cell malignancy or any other monoclonal protein-related organ/tissue disorder. Nonetheless, our findings offer strong support for the view that in some individuals, monoclonal gammopathy has the potential to cause systemic disease resulting in wide-ranging organ/tissue damage and excess mortality.

BACKGROUND Monoclonal gammopathy of undetermined significance (MGUS) and monoclonal B-cell lymphocytosis (MBL) are premalignant monoclonal B-cell disorders, the former progressing to myeloma at a rate of around 1% per year 1 2 and the latter to chronic lymphocytic leukaemia (CLL) at around 2% per year. 3 4 MGUS and MBL are diagnosed more frequently in men than in women and in people over 60 years of age, 5 6 and overt symptoms of haematological malignancy are, by definition, absent in both. 7 Accordingly, although some premalignant disorders are found coincidentally during routine health checks, others are identified during diagnostic work-up investigations: MGUS during the course of tests applied to detect a range of potential conditions and illnesses 8 9 and MBL during episodes of unexplained lymphocytosis. 4 10

Strengths and limitations of this study

► Data are from an established population-based cohort within which all haematological malignancies and related clonal disorders are diagnosed, monitored and coded at a single laboratory.
► Providing nationally generalisable data, all diagnoses are included, and complete follow-up is achieved via linkage to nationwide administrative datasets.
► The age-matched and sex-matched general population cohort enables baseline activity and rate ratios to be calculated, both before and after premalignancy detection.
► Analyses are constrained by the fact that Hospital Episode Statistics are primarily collected for administrative and clinical purposes and not for research.

In addition to the association with haematological malignancy, individuals with MGUS or MBL sometimes experience higher than expected levels of mortality and morbidity that are independent of cancer. 4 8 11-16 Indeed, although most individuals with these disorders suffer no obvious ill effects, interest in their relationship with other comorbidities has increased markedly in recent years, MBL largely in relation to its potential to impact on the immune response 17 and MGUS due to the systemic organ and tissue damage that can be caused by monoclonal immunoglobulins secreted by the abnormal B-cell clone. 18 Hitherto, however, most information about these associations has been derived either from case-control studies established to look at risk factors for disease development (eg, family history of disease) and additional tests applied to specific patient groups (eg, patients with kidney disease) or from cohort studies that track individuals with either MGUS or MBL forwards in time from their diagnosis. 5 13 18 However, despite the undoubted interest in the sequence of health events, as far as we are aware, no systematic population-based investigations of the comorbidity patterns that precede and succeed a diagnosis of either MGUS or MBL have been undertaken in the same cohort.

With a view to shedding light on the health events occurring before and after the diagnosis of MGUS and MBL, the present report uses data from an established UK population-based patient cohort of haematological malignancies and related disorders to examine the comorbidity patterns of individuals with these premalignant clonal disorders (MGUS=2193, MBL=561). To enable effect size quantification, these patterns are compared with the baseline activity of an individually age-matched and sex-matched (10 per patient) general population comparison cohort.

METHODS

Cases are from the Haematological Malignancy Research Network (HMRN; www.HMRN.org), a specialist UK registry established in 2004 to provide robust generalisable data to inform contemporary clinical practice and research across the country as a whole. 19 20 HMRN operates under a legal basis that permits data to be collected directly from health records without explicit consent. Set within a catchment population of around four million that is served by 14 hospitals and has a socioeconomic profile broadly representative of the UK as a whole, all haematological cancers and related conditions are diagnosed and coded by clinical specialists at a single integrated haematopathology laboratory, the Haematological Malignancy Diagnostic Service (www.HMDS.info), using standardised diagnostic criteria and the latest WHO International Classification of Diseases for Oncology, third edition (ICD-O-3).
7 Specifically, in relation to the present report, which covers diagnoses made during 2009-2015, MBL was defined by a peripheral blood monoclonal B-cell count <5×10 9 /L in individuals with no other features of a B-cell lymphoproliferative disorder, in MGUS by a serum paraprotein less than 30 g/L and in those where a bone marrow examination was considered necessary following clinical examination, a clonal bone marrow plasma cells/lymphoplasmacytic infiltration of less than 10%. Hence, within the HMRN region, as in other diagnostic settings, 21 22 invasive bone marrow examinations in patients with MGUS are generally only carried when laboratory or clinical features are indicative of an underlying plasma cell neoplasm, lymphoproliferative disorder, monoclonal immunoglobulin deposition disease (eg, amyloidosis) or conditions like polyneuropathy, organomegaly, endocrinopathy, monoclonal gammopathy and skin changes (POEMS) syndrome. 7 To facilitate comparisons with unaffected individuals, HMRN also has a general population cohort. To create this 'control' cohort, all patients diagnosed during 2009-2015 with a haematological malignancy or related clonal disorder (n=18 127) were individually matched on sex and age at the point of diagnosis to 10 randomly selected individuals from the same catchment population. All controls were assigned a serial number that linked them to their matched case and a 'pseudodiagnosis' date that corresponded to their matched case's diagnosis date. Individuals in the patient cohort and the comparison cohort are linked to the same nationwide information on deaths, cancer registrations and Hospital Episode Statistics (HES). At the point of selection and matching, all controls were resident in the HMRN region and none had a previous cancer registration for a haematological malignancy. 23 24 Hence, for the control cohort (in contrast to the patient cohort) no additional health information outside that contained within national administrative datasets was available. Using similar methods to those previously described, 23-25 associations with hospital inpatient activity (HES admitted patient care) and outpatient activity (HES outpatient (HES-OP)) in the 5 years prior to diagnosis/pseudodiagnosis through to the 3 years after diagnosis were investigated. HES inpatient data contain ICD-10 codes derived from discharge summaries 26 and associations with these were examined in relation to the 17 specific conditions in the Charlson Comorbidity Index. [27][28][29] By contrast, HES-OP data contain details about the type of outpatient attendance, the majority being linked to consultant specialty codes (eg, ophthalmology, rheumatology), with the remainder largely comprising routine follow-up/ monitoring, nurse-led clinic attendances (eg, anticoagulant clinics) and consultations with allied health professionals (eg, podiatry). This report includes all patients (cases) who were newly diagnosed with either MGUS (n=2193) or MBL (n=561) between 1 January 2009 and 31 December 2015 and their matched controls (n=27 538); individuals diagnosed with a haematological cancer within 6 months of their MGUS/ Open access MBL diagnosis were considered ineligible. All cases and controls were followed up for cancer registration and death until March 2017 and hospital activity (inpatient and outpatient) until March 2016. Additionally, progressions and/or transformations among cases were identified through HMDS up to March 2017. Data were summarised using standard methods. 
Overall survival, hospital activity and rate ratios (RRs) were calculated using time-to-event analyses. The Stata program 'strel' was used to estimate relative survival (RS), using age-specific and sex-specific background mortality rates from national life tables. 30 31 All analyses were conducted using Stata V.16.0.

Patient and public involvement (PPI)
PPI is integral to HMRN and takes place via a dedicated patient partnership, overseen by a lay committee. Patients from the partnership are involved in identifying key research questions and participate in all our funding applications. Furthermore, patients and their relatives routinely take part in the dissemination of HMRN's findings.

With respect to comorbidity, in the years leading up to diagnosis, patients with MGUS were significantly more likely than their corresponding controls to have a record of at least 1 of the 17 comorbidities specified in the Charlson Comorbidity Index 27-29 recorded in their discharge summaries, but no differences between MBL cases and their controls were evident (table 1). More information about the hospital activity patterns of cases with MGUS/MBL and their general population controls is shown in figure 1, which shows inpatient and outpatient activity (excluding haematology) during the 5 years before and the 3 years after diagnosis of MGUS (figure 1A) and MBL (figure 1B). In the period before diagnosis, patients with MGUS (figure 1A) had consistently higher outpatient activity rates than their controls, a disparity that increased markedly during the 18 months leading up to the formal diagnosis of MGUS by haematopathology, remained high for about 12 months, and then gradually fell, levelling out at a higher level than before diagnosis. Although less pronounced, a similar pattern is evident in the inpatient data. With smaller numbers and more scatter, variations in outpatient and inpatient activity in MBL are less evident (figure 1B).

Figure 2 shows outpatient attendance frequencies (at least two specialty-specific visits) in the 3 years before and in the 3 years after MGUS diagnosis for the top 25 clinical specialties; visits within 1 month (±) of diagnosis/pseudodiagnosis are excluded. As is evident from the plot, the increased outpatient activity seen among cases (figure 1) occurs across a range of clinical specialties, the highest frequencies occurring in ophthalmology, haematology, general surgery, orthopaedics, general (internal) medicine and rheumatology. However, excluding haematology where, as expected, attendances increased markedly just before and after MGUS diagnosis, the largest RRs both before (figure 3A) and after (figure 3B) diagnosis were for nephrology (before diagnosis RR=4.29, 95% CI 3.90 to 4.71; after diagnosis RR=13.8, 95% CI 12.8 to 15.0) and rheumatology (before diagnosis RR=3.40, 95% CI 3.18 to 3.63; after diagnosis RR=5.44, 95% CI 5.08 to 5.83). Other significant associations (p<0.05) with RR point estimates above 2.0 were evident for endocrinology, neurology and respiratory medicine, as well as for the nurse-led monitoring activities which form part of ongoing clinical care across a range of specialties.

MGUS data are stratified by subtype in table 2. Accounting for around two-thirds of the total (n=1471; 67.0%), the IgG subtype dominates, followed by IgM (n=350; 16.0%) and IgA (n=266; 12.1%). The remaining 106 (4.8%) in the 'other' category comprise a mix of subtypes: light chain only (n=60), IgG+IgM (n=17), IgG+IgA (n=6), IgA+IgM (n=1), IgE (n=2) and not recorded (n=20).
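The specialty-specific rate ratios quoted above can be illustrated with a minimal, unadjusted calculation. The sketch below uses hypothetical counts and a simple log-scale confidence interval; the published estimates come from matched, person-time based analyses and are not reproduced by this snippet.

```python
import math

def rate_ratio(events_cases, pyears_cases, events_controls, pyears_controls, z=1.96):
    """Crude rate ratio with a large-sample 95% CI on the log scale."""
    rr = (events_cases / pyears_cases) / (events_controls / pyears_controls)
    se_log_rr = math.sqrt(1.0 / events_cases + 1.0 / events_controls)
    return rr, (rr * math.exp(-z * se_log_rr), rr * math.exp(z * se_log_rr))

# Hypothetical example: 300 specialty visits over 6,000 person-years among cases
# versus 650 visits over 60,000 person-years among matched controls.
print(rate_ratio(300, 6000, 650, 60000))
```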
As expected, progression to myeloma in the 3 years following MGUS diagnosis was largely restricted to the IgG and IgA subtypes. The age distributions, 5-year survival estimates (overall and relative) and non-haematological malignancy frequencies of the main subtypes were broadly similar, although patients in the combined 'other' category tended to be slightly older and to fare less well (RS=77.6%, 95% CI 62.6% to 87.2%). The numbers of patients in the individual groups were, however, too sparse to examine the data in greater depth. Finally, during the study period (2009-2015), in addition to the detection of a paraprotein in peripheral blood, around 70% (1527/2193) of patients with MGUS in our cohort had a confirmatory bone marrow examination taken to exclude an underlying neoplasm. The patient characteristics and secondary care activity patterns of those who had bone marrow examinations were, however, broadly similar to those who did not (data not shown).

DISCUSSION
Including data on nearly 3000 cases with premalignant clonal disorders and 10 times as many age-matched and sex-matched general population controls, this large UK record-linkage study found that individuals with MGUS not only experienced excess mortality and morbidity after diagnosis, but also excess morbidity in the 5 years before diagnosis. By contrast, only marginal increases in mortality and morbidity were evident for MBL, none of which was consistent or differed significantly from the general population. Interestingly, progression patterns were in the opposite direction: in the years following detection of a premalignant clonal disorder, 3.4% (n=75/2193) of those with MGUS developed a haematological malignancy (48 of which were myelomas) before April 2017, compared with 25.0% (n=140/561) of those with MBL (137 of which were CLLs).

The elevated mortality and morbidity following a diagnosis of MGUS, which seem largely independent of progression to cancer, are consistent with reports relating to the potential clinical significance of this disorder. 11-18 Corresponding to the period of diagnostic work-up, our data also demonstrate the pronounced increase in hospital activity in the months surrounding MGUS diagnosis, the highest activity being observed in the 6 months before and 6 months after diagnosis. Of more importance, perhaps, the analyses clearly show that hospital activity in people subsequently diagnosed with MGUS is often elevated many years before diagnosis: excesses being observed in specialties covering most organ and tissue systems including nephrology, endocrinology, neurology, rheumatology, gastroenterology, dermatology and respiratory medicine. By contrast, although hospital activity increased in the months around the time of MBL diagnosis, no consistent differences or patterns either before or after the diagnosis were detected. Furthermore, in agreement with findings reported in other studies that used age-matched and sex-matched controls, no associations with mortality were detected. 3 32 However, given the fact that MBL has been associated with increased susceptibility to infection and non-CLL related mortality, 4 15 33 it is possible that the findings relating to subsequent morbidity and mortality could change as our data mature, length of follow-up increases and linkage to primary care data becomes possible.
The age and sex distributions of our population-based cohorts are broadly similar to those of other published MBL 3 34 35 and MGUS 1 2 11 series, as is the dominance of the IgG MGUS subtype. 1 2 Providing nationally generalisable data, additional strengths of our study include its large well-defined population, within which all haematological malignancies and related clonal disorders are diagnosed, monitored and coded using up-to-date standardised procedures at a central haematopathology laboratory. 19 In this context, it is important to bear in mind that most people with premalignant clonal disorders remain asymptomatic and that our cohorts contain a relatively large proportion of people who came to clinical attention in primary and/or secondary care and were referred to haematology for further investigation. More specifically, around 98% of patients in our MBL cohort had high-count MBL, and within the HMRN region, patients with MBL are monitored routinely using flow cytometry, so laboratory progressions (B-cell count >5×10⁹/L) may be detected with higher sensitivity than in cases monitored clinically. This is supported by the fact that over the follow-up period (median 3.8 years), around a quarter of patients with MBL progressed to CLL and 3.6% required treatment. Prevalence comparisons with population screening studies also confirm that the majority of those over 50 years of age with monoclonal immunoglobulin in their blood/urine would not be included in our MGUS cohort. 5 20 35-37 In this context, it is important to remember that some members of the control cohort would, if screened, have had a premalignant clonal disorder detected in their peripheral blood. 13 32 33 38 39 Unfortunately, however, information on diagnostic work-up tests and monitoring procedures is not routinely included in nationally compiled HES. Furthermore, the anonymised nature of the control cohort means that individuals cannot be linked to other data sources. Hence, although we know that members of the control cohort had no prior record of an MGUS or MBL diagnosis within the study region in the years leading up to their corresponding case's diagnosis, we do not know how many people developed these conditions after their case was diagnosed.

The diversity of morbidity effects seen among individuals with MGUS is consistent with the expanding body of evidence relating to the potential adverse impact that even low levels of circulating monoclonal protein (M-protein) can have. Thus far, the complex underpinning mechanisms identified include deposition of M-protein aggregations of varying immunoglobulin subtypes in different organs, as well as the induction of autoantibodies and cytokines that can have an impact on organs and tissues in a variety of deleterious ways. 13 18 40 41 Indeed, the recognised number of M-protein-mediated entities is increasing, with several affecting multiple organs; well-known examples include deposition syndromes such as primary amyloidosis and paraneoplastic conditions such as POEMS syndrome. 7 42 As evidenced in our analysis, kidney involvement is frequent, both in the years before (fourfold excess) and after (14-fold excess) MGUS diagnosis. Indeed, the umbrella term monoclonal gammopathy of renal significance has recently been suggested to cover all M-protein-mediated kidney disorders that fail to meet the diagnostic criteria for multiple myeloma or any other B-cell malignancy.
13 18 43 Other organ-specific terms continue to emerge, and with a view to improving recognition of these complex disorders, which clearly pose significant diagnostic and treatment challenges, the overarching term monoclonal gammopathy of clinical significance has also been suggested. 18

From a haematological malignancy perspective, MGUS and MBL are generally considered to be relatively benign conditions. However, both can have other deleterious health consequences, the effect of monoclonal gammopathy being particularly striking. The adverse outcomes associated with the M-proteins produced by the abnormal B-cell clone can be severe and extend over many years: they significantly impact survival and have the potential to cause systemic disease and wide-ranging damage to most organs and tissues. Even though most people with monoclonal immunoglobulins never develop a B-cell malignancy or suffer from any other form of M-protein-related organ/tissue disorder, the consequences for those that do can be extremely serious. In this regard, early targeting of pathogenic B-cell clones could mitigate both cancer and non-cancer effects, but currently, although knowledge is increasing, there is no known way to reliably identify such clones in the absence of other signs/symptoms. Hence, population screening cannot be recommended and diagnosis remains reliant on clinical suspicion. However, the long-standing nature of the comorbidity associations seen prior to MGUS diagnosis in our data suggests that there may be room for improvement and that the implementation of strategies to improve awareness and earlier detection, as well as monitoring of high-risk patient groups, could prove beneficial.
Observation of Odderon Effects at LHC energies -- A Real Extended Bialas-Bzdak Model Study

The unitarily extended Bialas-Bzdak model of elastic proton-proton scattering is applied, without modifications, to describe the differential cross-section of elastic proton-antiproton collisions in the TeV energy range, and to extrapolate these differential cross-sections to LHC energies. In this model-dependent study we find that the differential cross-sections of elastic proton-proton collision data at 2.76 and 7 TeV energies differ significantly from the differential cross-section of elastic proton-antiproton collisions extrapolated to these energies. The elastic proton-proton differential cross-sections, extrapolated to 1.96 TeV energy with the help of this extended Bialas-Bzdak model, do not differ significantly from that of elastic proton-antiproton collisions, within the theoretical errors of the extrapolation. Taken together, these results provide model-dependent, but statistically significant, evidence for a crossing-odd component of the elastic scattering amplitude at the level of at least 7.08 sigma. From the reconstructed Odderon and Pomeron amplitudes, we determine the $\sqrt{s}$ dependence of the corresponding total and differential cross-sections.

Introduction
Recently the TOTEM experiment measured differential cross-sections of elastic proton-proton collisions in the TeV energy range, from √s = 2.76 through 7 and 8 to 13 TeV, together with the total, elastic and inelastic cross-sections and the real to imaginary ratio of the scattering amplitude at vanishing four-momentum transfer. These measurements provided surprises and unexpected results. First of all, the shape of the differential cross-section of elastic scattering at √s = 7 TeV was different from all the predictions. The total cross-section increases with increasing √s according to theoretical expectations based on Pomeron exchange, corresponding experimentally to the production of large rapidity gaps in high energy proton-proton and proton-antiproton collisions. These events correspond to large angular regions where no particle is produced. Their fraction, in particular the ratio of the elastic to the total proton-proton cross-section, increases above 25 % at LHC energies.

In the language of quantum chromodynamics (QCD), the field theory of strong interactions, Pomeron exchange corresponds to the exchange of an even number of gluons with vacuum quantum numbers. In 1973, a crossing-odd counterpart to the Pomeron was proposed by L. Lukaszuk and B. Nicolescu, the so-called Odderon [1]. In QCD, Odderon exchange corresponds to the t-channel exchange of a color-neutral gluonic compound state consisting of an odd number of gluons, as noted by Bartels, Vacca and Lipatov in 1999 [2]. The Odderon effects remained elusive for a long time, due to the lack of definitive and statistically significant experimental evidence. A direct way to probe the Odderon in elastic scattering is by comparing the differential cross-sections of particle-particle and particle-antiparticle scattering at sufficiently high energies [3,4]. Such a search was published at the ISR energy of √s = 53 GeV in 1985 [5], which resulted in an indication of the Odderon, corresponding to a 3.35σ significance level obtained from a simple χ² calculation, based on 5 pp and 5 p̄p data points in the 1.1 ≤ |t| ≤ 1.5 GeV² range (around the diffractive minimum).
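As a reminder of how such a χ²-based comparison is turned into a "number of sigma", a generic conversion is sketched below. The χ² value and the number of degrees of freedom in the example are made up; the 3.35σ quoted above comes from the original ISR analysis, not from this snippet.

```python
from scipy import stats

def chi2_to_sigma(chi2_value, ndf):
    """Convert a chi-squared statistic into an equivalent one-sided Gaussian significance."""
    p_value = stats.chi2.sf(chi2_value, ndf)   # survival function = 1 - CDF
    return stats.norm.isf(p_value)             # one-sided z such that P(Z > z) = p_value

# Made-up example: chi2 = 25 with 5 degrees of freedom gives roughly 3.6 sigma.
print(chi2_to_sigma(25.0, 5))
```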
This significance is smaller than the 5σ threshold, traditionally accepted as the threshold for a discovery-level observation in high energy physics. Furthermore, the colliding energy of √s = 53 GeV was not sufficiently large, so the possible Reggeon exchange effects were difficult to evaluate and control. These difficulties rendered the Odderon search at the ISR energies rather inconclusive, but nevertheless inspiring and indicative, motivating further studies.

However, at larger four-momentum transfers, in the interference (diffractive dip and bump, or minimum-maximum) region, the Odderon signals are significant at LHC energies. Let us mention here only two of them: the four-momentum transfer dependent nuclear slope parameter B(t) and the scaling properties of elastic scattering in the TeV energy region. Two independent, but nearly simultaneous phenomenological papers suggested that the four-momentum transfer dependence of the nuclear slope parameter B(t) is qualitatively different in elastic proton-proton and proton-antiproton collisions [12,22]. The TOTEM experiment has demonstrated in ref. [9] that indeed in elastic pp collisions at √s = 2.76 TeV, the nuclear slope B(t) is increasing (swings) before it decreases and changes sign in the interference (diffractive dip and bump, or minimum-maximum) region. After the diffractive maximum, the nuclear slope becomes positive again. In contrast, elastic p̄p collisions measured by the D0 collaboration at the Tevatron energy of √s = 1.96 TeV did not show such a pronounced diffractive minimum-maximum structure; instead, an exponentially decreasing cone region at low −t with a constant B(t) is followed by a shoulder structure, without a pronounced diffractive minimum and maximum structure. The TOTEM collaboration presented its results on the elastic pp differential cross-section at √s = 2.76 TeV and concluded in ref. [9] that "under the condition that the effects due to the energy difference between TOTEM and D0 can be neglected, these results provide evidence for a colourless 3-gluon bound state exchange in the t-channel of the proton-proton elastic scattering".

This energy gap has been closed recently, in a model-independent way, based on a reanalysis of already published data using the scaling properties of elastic scattering in both pp and p̄p collisions at TeV energies: refs. [30-32] reported a statistically significant Odderon signal in the comparison of the H(x, s) scaling functions of elastic pp collisions at √s = 7.0 TeV to that of p̄p collisions at √s = 1.96 TeV. The difference between these scaling functions carries an at least 6.26σ Odderon signal, if all the vertical and horizontal, point-to-point fluctuating and point-to-point correlated errors are taken into account. If the interpolation between the datapoints at 7 TeV is considered as a theoretical curve, the significance of the Odderon signal goes up to 6.55σ. Instead of comparing the cross sections directly, this method removes the dominant s-dependent quantities by scaling out the s-dependencies of σ_tot(s), σ_el(s), B(s) and ρ_0(s), as well as the normalization of the H(x, s) scaling function, which also cancels the point-to-point correlated and t-independent normalization errors. The model-independence of the results of refs. [12,30-32] is an advantage when a significant and model-independent Odderon signal is searched for. The domain of the signal region can also be determined with model-independent methods.
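For reference, the scaling-function comparison mentioned above is based on a dimensionless function of the form recalled below; the exact normalization convention is our assumption here and should be checked against refs. [30-32].

```latex
% H(x,s) scaling function (normalization convention assumed, cf. refs. [30-32]):
H(x,s) = \frac{1}{B(s)\,\sigma_{\mathrm{el}}(s)}\,
         \left.\frac{d\sigma}{dt}\right|_{t=-x/B(s)},
\qquad x = -t\,B(s),
\qquad \int_0^\infty H(x,s)\,dx = 1 .
```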
Both the signal and its domain can be directly determined from the comparison of D0 and TOTEM data. However, a physical interpretation or a theoretical context is also desired, not only to gain a better understanding of the results and a more physical picture, but also to gain predictive power and to be able to extrapolate the results to domains where experimental data are lacking, or to regions where the scaling relations are violated. To provide such a picture is one of the goals of our present manuscript.

In this work, we continue a recent series of theoretical papers [33-36]. These studies investigated the differential cross-section of elastic pp collisions, but did not study the same effects in elastic p̄p collisions. The framework of these studies is the real extended and unitarized Bialas-Bzdak model, based on refs. [37-40]. This model considers protons as weakly bound states of constituent quarks and diquarks, or p = (q, d) for short (for a more detailed summary of the model see Appendix A). In a variation on this theme, the diquark in the proton may also be considered to be a weakly bound state of two constituent quarks, leading to the p = (q, (q, q)) variant of the Bialas-Bzdak model [37,38]. The model is based on Glauber's multiple scattering theory of elastic collisions [41-43], assuming additionally that all elementary distributions follow a random Gaussian elementary process and can be characterized by the corresponding s-dependent Gaussian radii. These distributions include the parton distribution inside the quark, characterized by a Gaussian radius R_q(s), the distribution of the partons inside the diquarks, characterized by the Gaussian radius R_d(s), and the typical separation between the quarks and the diquarks, characterized by the Gaussian radius R_qd(s).

In refs. [33,34,36] it was shown that the p = (q, (q, q)) variant of the Bialas-Bzdak model gives too many diffractive minima, while experimentally only a single diffractive minimum is observed in pp collisions. This result is consistent with the earlier detailed studies of elastic nucleus-nucleus collisions in ref. [44], which observed that a single diffractive minimum occurs only in elastic deuteron-deuteron or (p, n) + (p, n) collisions, so the number of diffractive minima increases as either of the elastically colliding composite objects develops a more complex internal structure.

In the original version of the Bialas-Bzdak model, the scattering amplitude was assumed to be completely imaginary [37]. This structure resulted in a completely vanishing differential cross-section at the diffractive minima. This model was supplemented by a real part, first perturbatively [33-35], subsequently in a non-perturbative and unitary manner [36]. This way a new parameter called α(s) was introduced, which controls the value of the differential cross-section at the diffractive minimum (it is not to be confused with the strong coupling constant of QCD, which we denote in this work as α_s^QCD). Our α(s) is a kind of opacity parameter that measures the strength of the real part of the scattering amplitude, so it is responsible both for filling up the dip region of the differential cross-sections and for the description of the real to imaginary ratio ρ at vanishing four-momentum transfer.
The structure of this unitary, Real Extended Bialas-Bzdak model (abbreviated as ReBB model) is thus very interesting, as there are only four s-dependent physical parameters: R_q, R_d, R_qd and α. However, three out of these four parameters are geometrical parameters, characterizing the s-dependence of parton distributions inside the protons. Hence, it is natural to assume that these distributions are the same inside protons and anti-protons, while the opacity parameter α may be different in elastic pp and p̄p collisions. So it is natural to expect that this α(s) parameter may carry an Odderon signal, as its excitation function might be very different in elastic pp collisions, which feature a pronounced dip at every measured energy even in the TeV energy range [9], while in elastic p̄p collisions a significant dip is lacking even in measurements in the TeV energy range [45].

In this manuscript, we thus extend the applications of the ReBB model from elastic pp to elastic p̄p collisions, using the model exactly in the same form as it was described in ref. [36]. We fit exactly the same four physical parameters to describe the differential cross-section of elastic proton-antiproton (p̄p) scattering. Later we shall see that at the same energy, the geometrical parameters in pp and p̄p collisions are apparently consistent with one another: within the systematic errors of the analysis we obtain the same R_q(s), R_d(s) and R_qd(s) functions for pp and p̄p reactions. In this manuscript, we thus can also investigate the following independent questions:
- Is the Real Extended Bialas-Bzdak model of ref. [36] able to describe not only elastic pp but also p̄p collisions?
- Is it possible to characterize the Odderon with only one physical parameter: the difference of the opacity parameter α(s) in pp and in p̄p collisions, α_pp(s) ≠ α_p̄p(s)?
We shall see that the answer to both of these questions is a definitive yes.

The structure of the manuscript is as follows. In Section 2 we recapitulate the definition of the key physical quantities in elastic scattering and mention their main relations. In Section 3 we present the various error definitions and the evaluated χ² formulae of both pp and p̄p datasets. Subsequently, in Section 4 we detail the optimization method and summarize the fit results in terms of four physical parameters determined at four different energies as listed in Table 1, which form the basis of the determination of the energy dependencies of the model parameters in Section 5. The energy dependencies of both proton-proton and proton-antiproton elastic scattering in the TeV energy range are determined by a set of 10 physical parameters only, as listed in Table 2. As a next step for establishing the reliability of this s-dependence of the model parameters, we have also performed the so-called validation or sanity tests in Section 6: we have cross-checked that the obtained trends reproduce in a statistically acceptable manner each of the measured data sets, also those that were not utilized so far to establish the s-dependencies of the ReBB model parameters. After establishing that the excitation function of the ReBB model reproduces the measured data, we predict the experimentally not yet available large-t differential cross-sections of pp collisions at √s = 0.9, 4, 5 and 8 TeV and we present the extrapolations of the pp differential cross-sections measured at the LHC energies of 2.76 and 7.0 TeV to the Tevatron energy of 1.96 TeV.
Vice versa, we also extrapolate the p̄p differential cross-sections from the SPS and Tevatron energies of 0.546 and 1.96 TeV to the LHC energies of 2.76 and 7.0 TeV in Section 7. These results are discussed in detail and put into context in Section 8. We summarize the results and conclude in Section 9.

This work is closed with four Appendices. For the sake of completeness, the unitary, real part extended Bialas-Bzdak model of ref. [36] is summarized in Appendix A. In Appendix B we derive and detail the relations between the opacity parameter α of the ReBB model and the real-to-imaginary ratio ρ_0. The main properties of Odderon and Pomeron exchange, including the corresponding differential and total cross-sections in the TeV energy range, are summarized in Appendix C. Two small theorems are also given there: Theorem I indicates that if the differential cross-sections of elastic pp and p̄p collisions are not the same in the TeV energy range, then the crossing-odd component of the elastic amplitude (Odderon) cannot vanish, while Theorem II proves that in the framework of the ReBB model this is indeed due to the difference between the opacity parameters α(s) for pp and p̄p collisions, linking also mathematically the difference in the dip-filling property of the differential cross-sections of elastic scattering to the measurement of ρ at t = 0 within the ReBB model. The non-linear corrections to the linear-in-ln(s) excitation functions are also determined with the help of ISR pp data at √s = 23.5 GeV. These results are discussed in Appendix D, and are found to have negligible effects on our results presented in the main body of the manuscript, corresponding to the TeV energy range.

Formalism
The elastic amplitude T(s,t) (where s is the squared centre of mass energy and t is the squared four-momentum transfer) is defined in ref. [36] by Eq. (6), Eq. (9) and Eq. (29), and is also summarized in Appendix A. The experimentally measurable physical quantities, i.e. the elastic differential cross section, the total, elastic and inelastic cross sections and the ratio ρ_0, are defined, correspondingly, in the usual way in terms of T(s,t): the differential cross section dσ/dt is proportional to |T(s,t)|², the total cross section σ_tot is proportional to Im T(s, t=0) via the optical theorem, σ_el = ∫ dt (dσ/dt), σ_in = σ_tot − σ_el, and ρ_0 = Re T(s, t=0) / Im T(s, t=0).

The earlier results show that the ReBB model gives statistically acceptable, good quality fits with CL ≥ 0.1 % to the pp differential cross section data at the ISR energies of 23.5 and 62.5 GeV as well as at the LHC energy of 7 TeV, in the −t ≥ 0.377 GeV² kinematic region [36]. Continuing that study, in this work we apply exactly the same formalism, without any change, to the description of the differential cross-sections of proton-antiproton (p̄p) scattering. This allows us to search for Odderon effects by comparing the pp and p̄p differential cross sections at the same energies and squared momentum transfers. Any significant difference between the pp and p̄p processes at the same energy at the TeV scale provides evidence for Odderon exchange. In order to make this manuscript as self-contained and complete as reasonably possible, we have provided a derivation of this well-known property, in the form of Theorem I of Appendix C.

Fitting method
Compared to the earlier ReBB study [36], in order to more precisely estimate the significance of a possible Odderon effect, here we use a more advanced form of the χ² definition, which relies on a method developed by the PHENIX Collaboration and described in detail in Appendix A of Ref. [46].
This method is based on the diagonalization of the covariance matrix, if the experimental errors can be separated into the following types of uncertainties:
- Type A errors, which are point-to-point fluctuating (uncorrelated) systematic and statistical errors;
- Type B errors, which are point-to-point varying but correlated systematic uncertainties, for which the point-to-point correlation is 100 %;
- Type C systematic errors, which are point-independent, overall systematic uncertainties that scale all the data points up and down by exactly the same, point-to-point independent factor.
In what follows we index these errors with the index of the data point as well as with subscripts a, b and c, respectively. In the course of the minimization of the ReBB model we use a χ² function of the following form:

χ² = Σ_{j=1}^{M} Σ_{i=1}^{n_j} (d_ij + ε_b σ_b,ij + ε_c σ_c d_ij − th_ij)² / σ̃²_ij + ε_b² + ε_c² + (d_σtot − σ_tot,th)² / (δσ_tot)² + (d_ρ0 − ρ_0,th)² / (δρ_0)² .   (6)

This definition includes type A, point-to-point uncorrelated errors, type B, point-to-point dependent but correlated errors, and type C, point-independent correlated errors. Furthermore, not only vertical, but also the frequently neglected horizontal errors are included. Let us detail below the notation of this χ² definition, step by step:
- M is the number of sub-datasets, corresponding to several separately measured ranges of t, indexed with subscript j, at a given energy √s. Thus Σ_{j=1}^{M} n_j gives the number of fitted data points at a given centre of mass energy √s; d_ij is the ith measured differential cross section data point in sub-dataset j and th_ij is the corresponding theoretical value calculated from the ReBB model;
- σ̃_ij is the type A, point-to-point fluctuating uncertainty of data point i in sub-dataset j, scaled by a multiplicative factor such that the fractional uncertainty is unchanged under multiplication by a point-to-point varying factor, σ̃_ij = σ_a,ij (d_ij + ε_b σ_b,ij + ε_c σ_c d_ij) / d_ij, where the terms include also the A and B type horizontal errors on t, following the propagation of the horizontal error to the χ² as utilized by the so-called effective variance method of the CERN data analysis programme ROOT; d′_ij denotes the numerical derivative of the data at the point t_ij, with the horizontal errors of type k ∈ {a, b} denoted as δ_k t_ij, and the numerical derivative is calculated as a finite difference of neighbouring data points;
- The correlation coefficients for type B and C errors are denoted by ε_b and ε_c, respectively. These numbers are free parameters to be fitted to the data, and their best values are typically in the interval (−1, 1);
- The last two terms in Eq. (6) fit also the measured total cross-section and ratio ρ_0 values along with the differential cross section data points; d_σtot and d_ρ0 denote the measured total cross section and ratio ρ_0 values, δσ_tot and δρ_0 are their full errors, and σ_tot,th and ρ_0,th are their theoretical values calculated from the ReBB model.

This scheme has been validated by evaluating the χ² from a full covariance matrix fit and from the PHENIX method of diagonalizing the covariance matrix of the differential cross-section of elastic pp scattering measured by TOTEM at √s = 13 TeV [6], using the Lévy expansion method of Ref. [12]. The fit with the full covariance matrix results in the same minimum within one standard deviation of the fit parameters [32], hence in the same significance, as the fit with the PHENIX method. Based on this validation, we apply the PHENIX method in the data analysis described in this manuscript. Let us note also that in the case of the √s = 7 TeV TOTEM data set, analysed below, the type B systematic errors, which shift all the data points together up or down with a t-dependent value, are measured to be asymmetric [47].
This effect is handled by using the up or down type B errors depending on the sign of the correlation coefficient ε_b: for a positive or negative sign of ε_b, we utilized the type B errors upwards or downwards, respectively. Note that the type A errors, which enter the denominator of the χ² definition of Eq. (6), are symmetric even in the case of this √s = 7 TeV pp dataset. The χ² distribution assumes symmetric type A errors that enter the denominators of the χ² definition. Thus, even in this case of asymmetric type B errors, which enter the numerators of Eq. (6) at √s = 7 TeV, the χ² distribution can be utilized to estimate the significances and confidence levels of the best fits.

Fit results
The ReBB model was fitted to the proton-proton differential cross section data measured by the TOTEM Collaboration at √s = 2.76, 7.0 and 13 TeV, based on refs. [6,9,47], as well as to the differential cross section data of elastic proton-antiproton scattering measured at √s = 0.546 and 1.96 TeV in refs. [45,48,49], respectively. Similarly to the earlier studies of refs. [34-37,40], the model parameters A_qq = 1 and λ = 1/2 were kept at constant values throughout the fitting procedure. Here A_qq corresponds to a normalization constant and λ describes the mass ratio of constituent quarks to diquarks in the p = (q, d) version of the Real Extended Bialas-Bzdak model of ref. [36]. Thus the number of free parameters of this model, for a fixed s and specific collision type, is reduced to four: R_qd, R_q, R_d and α.

It is natural to expect that R_q(s), R_d(s) and R_qd(s) are the same functions of s, both for pp and p̄p collisions, as the distribution of partons inside protons at a given energy is expected to be the same as that of anti-partons inside anti-protons. In this section, this is however not assumed but tested, and the parameters of the ReBB model are determined at four different colliding energies in the TeV region, using pp data sets at √s = 2.76 and 7 TeV, and p̄p datasets at √s = 0.546 and 1.96 TeV. These fits were performed in the diffractive interference or dip and bump region, with datapoints before the diffractive minimum and after the maximum as well, in each case within the limited range 0.372 ≤ −t ≤ 1.2 GeV². In this kinematic range, the ReBB model provided a data description with a statistically acceptable fit quality, with confidence levels CL ≥ 0.1 % in each case.

In this manuscript, our aim is to extrapolate the differential cross-sections of elastic pp and p̄p collisions to exactly the same energies, in order to conclude, in a model-dependent way, about the significance of a crossing-odd or Odderon effect in these data. For this purpose, a model that can be used to study the excitation function of the pp and p̄p differential cross-sections in the 0.5 ≤ √s ≤ 7 TeV domain is sufficient. The results of such statistically acceptable quality fits are summarized in Table 1 and detailed below. Other data sets that do not have a sufficient amount of data in this interference region were utilized for cross-checks only, to test the extracted energy dependencies of the model parameters, as detailed in Sec. 6. Additionally, we also describe the current status of our fits to the differential cross-section at √s = 13 TeV at the end of this section. We thus describe three fits to pp differential cross section data sets at √s = 2.76, 7 and 13 TeV as well as two fits to p̄p differential cross section datasets at √s = 0.546 and 1.96 TeV, respectively.
Our fit results are graphically shown in Figs. 1-5. The minimization of the χ² defined by Eq. (6) was done with Minuit, and the parameter errors were estimated by using the MINOS algorithm, which takes into account both parameter correlations and non-linearities. We accept a fit as a successful representation of the fitted data under the condition that the fit status is converged, the error matrix is accurate and the confidence level of the fit, CL, is ≥ 0.1 %, as indicated on Figs. 1-4. As these criteria are not satisfied on Fig. 5, the parameters of this fit were not taken into account when determining the excitation functions or the energy dependence of the physical fit parameters in the few TeV energy range. Let us now discuss each fit in a bit more detail.

The Spp̄S differential cross section data on elastic p̄p collisions [48,49] were measured in the squared momentum transfer range of 0.03 ≤ |t| ≤ 1.53 GeV², which in the fitted range has been subdivided into two sub-ranges with different normalization uncertainties (type C errors): σ_c = 0.03 for 0.37 ≤ |t| ≤ 0.495 GeV² and σ_c = 0.1 for 0.46 ≤ |t| ≤ 1.2 GeV². In the case of this data set, the vertical type A errors σ_ai are available, but the horizontal type A errors (δ_a t_i) and the type B errors, either vertical (σ_bi) or horizontal (δ_b t_i), were not published. The measured total cross section with its total uncertainty is σ_tot = 61.26 ± 0.93 mb [50], while the value ρ_0 = 0.135 ± 0.015 was measured at the slightly different energy of √s = 0.541 TeV. The total, elastic and inelastic cross sections and the parameter ρ_0 are calculated according to Eqs. (2)-(5). The fit is summarized in Fig. 1. The fit quality is satisfactory, CL = 8.74 %. Compared to the available data in the literature [50] (σ_in = 48.39 ± 1.01 mb and σ_el = 12.87 ± 0.3 mb), the model reproduces the experimental values of the forward measurables within one σ, thus these fit parameters represent the data in a statistically acceptable manner.

The elastic p̄p differential cross section data at √s = 1.96 TeV are available in the range of 0.26 ≤ |t| ≤ 1.20 GeV², as published by the D0 Collaboration in ref. [45], with a type C normalization uncertainty of σ_c = 0.144. For this data set, the vertical type A and type B errors were not published separately. Rather, the quadratically added statistical and systematic uncertainties were published, and as the statistical errors are point-to-point fluctuating, type A errors, in our analysis the combined t-dependent D0 errors were handled as type A, combined statistical and systematic errors. Horizontal type A and type B errors were not published in ref. [45]. At this energy, we did not find published experimental σ_tot and ρ_0 values. The values of the total cross section and parameter ρ_0 at this energy that we utilized in the fitting procedure are the predicted values from the COMPETE Collaboration [51]: σ_tot = 78.27 ± 1.93 mb and ρ_0 = 0.145 ± 0.006. The quality of the corresponding fit, shown in Fig. 2, is satisfactory, CL = 51.12 %, and the COMPETE values of the forward measurables are reproduced within one standard deviation. We conclude that the corresponding ReBB model parameters represent the data in a statistically acceptable manner. Based on the successful description of these two p̄p datasets at √s = 0.546 and 1.96 TeV, we find that the form of the ReBB model as specified for pp collisions in ref.
[36] is able, without any modifications, to describe the differential cross-section of elastic p̄p collisions in the TeV energy range.

Let us now discuss the new fits of the same model to elastic pp collisions in the TeV energy range. At √s = 2.76 TeV, the differential cross section data of elastic pp collisions were measured in the t range of 0.072 ≤ −t ≤ 0.74 GeV² by the TOTEM Collaboration [9]. Actually, this measurement was performed in two subranges: 0.072 ≤ |t| ≤ 0.462 GeV² and 0.372 ≤ |t| ≤ 0.74 GeV². Both ranges had the same normalization uncertainty of σ_c = 0.06. During the fit, the t-dependent vertical statistical (type A) and systematic (type B) errors, the normalization (type C) errors and the experimental value of the total cross section with its total uncertainty (σ_tot = 84.7 ± 3.3 mb [6]) were taken into account. Horizontal type A and type B errors are not published at this energy. The fit quality of the ReBB model is demonstrated on Fig. 3: the fit is satisfactory, with CL = 36.52 %. The experimental values of the forward measurables (σ_in = 62.8 ± 2.9 mb, σ_el = 21.8 ± 1.4 mb [6,52]) are reproduced within one standard deviation. Experimental data are not yet available for the parameter ρ_0; however, the value of ρ_0 calculated from the fitted ReBB model is within the total error band of the COMPETE prediction [51]. We thus conclude that the corresponding ReBB model parameters represent the pp data at √s = 2.76 TeV in a statistically acceptable manner.

At √s = 7 TeV, the pp differential cross section data were published by the TOTEM Collaboration [47], measured in the range of 0.005 ≤ |t| ≤ 2.443 GeV². The measurement was performed in two subranges: 0.005 ≤ |t| ≤ 0.371 GeV² and 0.377 ≤ |t| ≤ 2.443 GeV². Both ranges had the same normalization uncertainty of σ_c = 0.042. The fit includes only the second subrange, with the t-dependent (both vertical and horizontal) statistical (type A) and systematic (type B) errors, the normalization (type C) error and the experimental values of the total cross section and the parameter ρ_0 with their total uncertainties (σ_tot = 98.0 ± 2.5 mb and ρ_0 = 0.145 ± 0.091 [53]). The quality of the corresponding fit, shown in Fig. 4, is statistically acceptable with CL = 0.71 %. The experimental values of the forward measurables (σ_in = 72.9 ± 1.5 mb, σ_el = 25.1 ± 1.1 mb [53]) are reproduced by the fitted ReBB model within one sigma (the experimental and calculated values overlap within their errors). We thus conclude that the corresponding ReBB model parameters represent these pp data at √s = 7.0 TeV in a statistically acceptable manner, in the fitted range of 0.377 ≤ |t| ≤ 1.205 GeV², before and after the diffractive minimum.

At √s = 8 TeV, the TOTEM collaboration has not yet published the final differential cross-section results in the range of the diffractive minimum and maximum. However, preliminary results were presented at conferences [54], and the differential cross-section in the low −t region was published in ref. [55]. We thus use this dataset for a cross-check only, but the lack of data in the diffractive minimum prevents us from performing a full ReBB model fit. Additional data at very low −t, in the Coulomb-Nuclear Interference region, are also available from TOTEM at this particular energy [56]; however, in the present study we do not discuss the kinematic range where Coulomb effects may play any role.
At √ s = 13 TeV, the differential cross section data was measured by the TOTEM collaboration in the range of 0.03 ≤ |t| ≤ 3.8 GeV 2 [8] with a normalization (type C) uncertainty of σ c = 0.055. As far as we know, the only statistically acceptable quality fit with CL ≥ 0.1 % to this dataset so far was obtained by some of us with the help of the model-independent Lévy series in ref. [12]. We also note that several new features show up in the soft observ-ables of elastic scattering, with a threshold behaviour around √ s = 5 − 7 TeV, certainly below 13 TeV [57]. We have cross-checked, if the ReBB model, that works reasonably well from √ s = 23.5 GeV to 7 TeV, is capable to describe this data set at √ s = 13 TeV in statistically acceptable manner, or not? The result was negative, as indicated in Fig. 5. This fit includes the t-dependent statistical (type A) and systematic (type B) errors, the normalization (type C) error and the experimental values of the total cross section and the parameter ρ 0 with their total uncertainties (σ tot = 110.5 ± 2.4 mb and ρ 0 = 0.09 ± 0.01 [7]). The quality of the obtained fit (Fig. 5) is not satisfactory, CL = 3.17×10 −11 % and neither the experimental values of the cross sections (σ in = 79.5±1.8 mb, σ el = 31.0±1.7 mb [6] ) are reproduced by the fitted ReBB model within one sigma at 13 TeV. However, the value of ρ 0 was described surprisingly well. This TOTEM dataset is very detailed and precise and changes of certain trends in B(s) and the ratio σ el (s)/σ tot (s) are seen experimentally [57]. Theoretically, a new domain of QCD may emerge at high energies, possibly characterised by hollowness or toroidal structure, corresponding to a black ring-like distribution of inelastic scatterings [58][59][60][61]. A statistically significant, more than 5 σ hollowness effect was found at √ s = 13 TeV within a model-independent analysis of the shadow profile at these energies, using the technique of Lévy series [12]. We conclude that the ReBB model needs to be generalized to have a stronger non-exponential feature at low −t to accommodate the new features of the differential cross-section data at √ s = 13 TeV or larger energies. This work is currently in progress, but goes well beyond the scope of the current manuscript. Most importantly, such a generalization is not necessary for a comparision of the differential cross-sections of elastic pp and pp collisions in the few TeV range, as we have to bridge only a logarithmically small energy difference between the top D0 energy of √ s = 1.96 TeV and the lowest TOTEM energy of √ s = 2.76 TeV. We thus find, that the Real Extended Bialas -Bzdak model describes effectively and in a statistically acceptable manner the differential cross-sections of elastic pp and pp collisions in the few TeV range of 0.546 ≤ √ s ≤ 7 TeV and in the squared four-momentum transfer range of 0.37 ≤ −t ≤ 1.2 GeV 2 . Its physical fit parameters represent the data and their energy dependence thus can be utilized to determine the excitation function of these model parameters, as detailed in Section 5. The values of the physical fit parameters and their errors obtained from the above discussed physically and statistically acceptable fits are summarized in Table 1, where four datasets are analyzed and four different physical parameters are extracted at four different energies. These sixteen physical parameters form the basis of the determination of the energy dependencies, that are determined to be consistent with affine linear functions of ln(s). 
Three scale parameters are within errors the same in elastic pp and p̄p collisions, while the opacity parameters are different for pp and p̄p collisions. Thus the excitation functions, i.e. the energy dependence of the differential cross-sections for both pp and p̄p elastic scattering, are determined by 5×2 = 10 physical parameters in this framework of calculations. These 10 parameters are summarized in Table 2. We thus conclude that this Real Extended Bialas-Bzdak model is good enough to extrapolate the differential cross-section of elastic pp collisions down to √s = 0.546 and 1.96 TeV, and to extrapolate that of elastic p̄p collisions up to √s = 2.76 and 7 TeV. We duly note that, in order to evaluate similar observables at √s = 13 TeV or at even higher energies in a realistic manner, this model needs to be generalized and further developed.

Fig. 1 The fit of the ReBB model to the p̄p Spp̄S √s = 0.546 TeV data [48,49] in the range of 0.37 ≤ −t ≤ 1.2 GeV². The fit includes the published errors, which are statistical (type A) and normalization (type C) uncertainties, as well as the experimental value of the total cross section with its full error, according to Eq. (6).
Fig. 2 The fit of the ReBB model to the p̄p √s = 1.96 TeV data [45] in the range of 0.37 ≤ −t ≤ 1.2 GeV². The fit includes the t-dependent statistical and systematic uncertainties added together quadratically and treated as type A errors, as well as the normalization (type C) uncertainty, according to Eq. (6). The values of the total cross section and parameter ρ_0 used in the fit are the predicted values from the COMPETE Collaboration [51]. Otherwise, same as Fig. 1.
Fig. 3 The fit of the ReBB model to the pp √s = 2.76 TeV data [9]. The fit includes the t-dependent statistical (type A) and systematic (type B) uncertainties, the normalization (type C) uncertainty and the experimental value of the total cross section with its full error, according to Eq. (6). Otherwise, same as Fig. 1.
Fig. 5 The fit of the ReBB model to the pp √s = 13 TeV data [8]. The fit includes the t-dependent statistical (type A) and systematic (type B) uncertainties, the normalization (type C) uncertainty and the experimental values of the total cross section and parameter ρ_0 with their full errors, according to Eq. (6). The fit parameters do not represent the data in a statistically acceptable manner, given that CL < 0.1 %. Otherwise, same as Fig. 1.

Excitation functions of the fit parameters
The values of the physical fit parameters and their errors obtained from the above discussed physically and statistically acceptable fits are summarized in Table 1. This table contains a list of five different physical parameters. Out of these, the three scale parameters called R_q, R_d and R_qd can be determined at four different energies, providing 12 numbers, while the opacity parameters α_pp and α_p̄p describing pp and p̄p collisions can each be determined at two different energies only, providing an additional 4 numbers, all together 16 physical input parameters. These 16 physical parameters form the basis of the determination of the energy dependencies, which are determined to be consistent with affine linear functions of ln(s). Namely, we fitted the s-dependence of the model parameters one by one, using the affine linear logarithmic function P(s) = p_0 + p_1 ln(s/s_0) (Eq. (10)), where p_0 and p_1 are free parameters and s_0 is fixed at 1 GeV². We obtain good quality fits, with methods and results similar to those of ref. [36], with confidence levels CL ≥ 0.1 %, as detailed in Table 2.
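A minimal sketch of fitting such an affine linear-in-ln(s) trend, Eq. (10), to values determined at a few energies is given below; the input numbers are placeholders for illustration only and are not the fitted parameters of Table 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def affine_log(s, p0, p1):
    """P(s) = p0 + p1 * ln(s / s0) with s0 fixed to 1 GeV^2, i.e. Eq. (10); s in GeV^2."""
    return p0 + p1 * np.log(s)

# Placeholder inputs (NOT the values of Table 1): a scale parameter at four energies.
sqrt_s_TeV = np.array([0.546, 1.96, 2.76, 7.0])
s_GeV2 = (1000.0 * sqrt_s_TeV) ** 2
values = np.array([0.42, 0.44, 0.45, 0.47])     # hypothetical radii in fm
errors = np.array([0.01, 0.01, 0.01, 0.01])

popt, pcov = curve_fit(affine_log, s_GeV2, values, sigma=errors, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
print("p0 = %.3f +/- %.3f fm, p1 = %.4f +/- %.4f fm" % (popt[0], perr[0], popt[1], perr[1]))
```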
Three scale parameters are within errors the same in elastic pp and p̄p collisions, while the opacity parameters are different for pp and p̄p collisions. Thus the excitation functions, the energy dependence of the differential cross-sections for both pp and p̄p elastic scattering, are determined by 5×2 = 10 physical parameters in the framework of the ReBB model. The energy dependencies of the scale parameters R_q, R_d and R_qd are graphically shown in Figs. 6a-6c. These figures clearly indicate that the energy dependence of the geometrical scale parameters is consistent with the same evolution, namely the same linear rise in ln(s) for both pp and p̄p scattering: when we fitted these parameters together with a linear logarithmic function, we obtained a statistically acceptable fit in each of these three cases. This result extends and improves the earlier results published in ref. [36] for elastic pp scattering to the case of both pp and p̄p collisions in a natural manner. For comparison, these earlier results are also shown with a dotted red line on the panels of Fig. 6, indicating the improved precision of the current analysis, due to more data points being included in the TeV energy range.

For the opacity parameter α, seen on panel (d) of Fig. 6, the situation is different: the pp and p̄p points are not on the same trend, because the α parameters, which characterize the dip in the ReBB model, are obtained with great precision both in the pp and in the p̄p cases. The difference between the excitation functions of α_pp(s) and α_p̄p(s) corresponds to the qualitative difference between the differential cross-sections of elastic pp and p̄p collisions in the few TeV energy range: the presence of a persistent dip and bump structure in the differential cross-section of elastic pp collisions, and the lack of a similar feature in elastic p̄p collisions. Thus, in the case of the parameter α, we have to consider that there are only two, rather precisely determined data points in both pp and p̄p collisions from the ReBB model studies presented so far. We can already conclude that they cannot be described by a single line, as an affine linear fit with Eq. (10) would fail. Without additional information, however, we cannot determine the trends and their uncertainties, as two points can always be connected with a straight line, so an affine linear description of both the two pp and the two p̄p data points would have a vanishing χ² and an indeterminable confidence level.

This problem, however, is solved by utilizing the results of Appendix B on the proportionality between the model parameter α and the experimentally measurable real-to-imaginary ratio ρ_0. This proportionality is shown graphically in Fig. 7. The constant of proportionality in the few TeV region is an almost energy independent value, ρ_0/α = 0.85 ± 0.01, well within the errors of the ρ_0 measurements, in agreement with a theoretically obtained function, shown with a red solid line on Fig. 7 and derived in Appendix B. This proportionality allows one to add new datapoints to the trends of α(s), both for the pp and for the p̄p cases, by simply rescaling the measured ρ_0 values. We found three additional published experimental ρ_0 data points for p̄p collisions: ρ_0 = 0.135 ± 0.015 at √s = 0.541 TeV by the UA4/2 Collaboration in ref. [62], and values at 1.8 TeV by the E-710 and the E811 collaborations in refs. [63,64], respectively. At √s = 1.8 TeV, we have utilized the combined value of these E-710 and E811 measurements [64], corresponding to ρ_0(p̄p) = 0.135 ± 0.044.
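The ρ_0-to-α rescaling just described can be illustrated with a small error-propagation sketch. The ratio 0.85 ± 0.01 is the value quoted above; the propagation below is naive (uncorrelated errors, constant ratio), whereas the full analysis uses the energy-dependent curve of Fig. 7.

```python
import math

def alpha_from_rho0(rho0, rho0_err, ratio=0.85, ratio_err=0.01):
    """Opacity estimate alpha ~ rho_0 / 0.85 with naive error propagation."""
    alpha = rho0 / ratio
    alpha_err = alpha * math.sqrt((rho0_err / rho0) ** 2 + (ratio_err / ratio) ** 2)
    return alpha, alpha_err

# Combined E-710/E811 value quoted above: rho_0(pbar-p) = 0.135 +/- 0.044
print(alpha_from_rho0(0.135, 0.044))   # roughly (0.16, 0.05)
```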
The constancy of these ρ_0(s) values in the few TeV energy range, when converted with the help of Fig. 7 to the opacity parameter α(p̄p) of the Bialas-Bzdak model, corresponds to the lack of diffractive minima and hence to an Odderon signal in elastic p̄p collisions: it leads to α(p̄p) ≈ 0.16 ± 0.06, which is, within its large errors, the same as the α = 0.163 ± 0.005 value obtained from the ReBB model fit to the D0 data at √s = 1.96 TeV, summarized on Fig. 2. Similarly, the α parameter extracted from ρ_0 at √s = 0.541 TeV is α ≈ 0.16 ± 0.02, which is, within twice the relatively large errors of the ρ_0 analysis, the same as the value of α(p̄p) = 0.117 ± 0.002 obtained from the analysis of the differential cross-section, shown on Fig. 1. These values indicate a slowly rising α(p̄p) or, correspondingly, ρ_0(p̄p) in the TeV energy range. The final values of these datapoints, together with the corresponding errors, are connected with a long-dashed line in panel (d) of Fig. 6. Table 2 indicates that for α(p̄p) the coefficient p_1(p̄p) = 0.018 ± 0.002 is a significantly positive number.

For the opacity coefficient in elastic pp collisions, α(pp), on the other hand, an opposite effect is seen when the ρ_0 measurements at √s = 7 and 8 TeV are also taken into account, based on the data of the TOTEM Collaboration published in refs. [56,65]. As is by now very well known, these values indicate a nearly constant, actually decreasing trend, and based on the fits of the extracted four data points of α(pp) we find that in the few TeV energy range this trend is nearly constant, as indicated by the solid red line of panel (d) of Fig. 6. Table 2 indicates that for α(pp) the coefficient of increase with ln(s) is consistent with zero in this energy range, p_1(pp) = −0.003 ± 0.003, which is significantly less than the above quoted positive number p_1(p̄p) = 0.018 ± 0.002. Thus it is easy to see that the Odderon signal in this analysis can be an estimated 6−7σ effect, as a consequence of the inequality p_1(pp) ≠ p_1(p̄p) alone.

In the subsequent sections we first test whether the excitation functions, determined with the help of the p_0 and p_1 parameters of Table 2, indeed reproduce the data at all the measured energies in the relevant kinematic range, and then we proceed carefully to determine the significance of a model dependent Odderon signal. We perform these cross-checks against all kinds of available data, including those data that were not utilized in the determination of the trends, for example because their acceptance was too limited to determine all the fit parameters of the ReBB model.

Table 2 Summary of the parameter values which determine the energy dependence, obtained by fitting the linear logarithmic model of Eq. (10) to the values of Table 1, one parameter at a time. The values of the parameters are rounded to three significant decimal digits. For R_q, R_d and R_qd, the values of the parameters p_0 and p_1 are given in units of femtometers (fm). For the parameters α(pp) and α(p̄p), the parameters p_0 and p_1 are dimensionless.

Sanity tests
In this section we show that the determined energy dependence trends are reliable in the kinematic range of 0.546 ≤ √s ≤ 8 TeV and 0.37 ≤ −t ≤ 1.2 GeV². For this purpose we performed so-called sanity tests: we have cross-checked whether the trends summarized in Table 2 indeed represent all the available differential cross-section data on both pp and p̄p elastic scattering in the mentioned kinematic range.
We used both data that were and data that were not utilized in the determination of the energy dependence trends, the latter for example because their acceptance was too limited to determine all the fit parameters of the ReBB model. To perform these cross-checks, the differential cross sections are fitted with all four physical parameters of the ReBB model, α(s), R_q(s), R_d(s) and R_qd(s), fixed to the extrapolated values obtained with the help of the results summarized in Table 2, while the correlation coefficients of the type B and type C errors, i.e., the ε parameters in the χ² definition of Eq. (6), are fitted to the data as free parameters. The results for the data at √s = 0.546, 0.63, 1.8, 1.96, 2.76 and 7 TeV are shown in Figs. 8-13. All of these sanity tests resulted in a description of the data with a statistically acceptable confidence level of CL ≥ 0.1%. As an additional sanity test, we also cross-checked whether this ReBB model describes the pp and pp̄ total cross section σ_tot(s) and real-to-imaginary ratio ρ₀(s) data in a statistically acceptable manner. These results are presented in Fig. 14 and Fig. 15, respectively. As the calculated confidence levels are higher than 0.1% in all of these cases, we conclude that the energy dependent trends of the ReBB model are indeed reasonable and reliable in the investigated energy range of 0.541 ≤ √s ≤ 8 TeV and squared four-momentum transfer range of 0.377 ≤ −t ≤ 1.2 GeV². Thus this model can be used reliably to extrapolate both the pp and the pp̄ differential cross-sections in this limited kinematic range of (s, t), based on only 10 physical model parameters, summarized in Table 2.

Fig. 8 Result of the sanity test for the 0.546 TeV pp̄ elastic differential cross section data [48,49] in the range 0.37 ≤ −t ≤ 1.2 GeV². This sanity test was performed as a fit during which the model parameters R_q, R_d, R_qd and α were fixed to their s-dependent values based on Table 2, while the correlation coefficients ε in the χ² definition, Eq. (6), were fitted as free parameters. Thus the physical parameters R_q, R_d, R_qd and α are printed on the plot without error bars, while the fitted correlation coefficients are given with their errors. The best parameter values are rounded to three significant digits.

Fig. 11 Result of a sanity test, same as Fig. 8, but for the √s = 1.96 TeV pp̄ elastic differential cross section data [45] in the range 0.37 ≤ −t ≤ 1.2 GeV².

Fig. 12 Result of a sanity test, same as Fig. 8, but for the √s = 2.76 TeV pp elastic differential cross section data [9] in the range 0.37 ≤ −t ≤ 0.7 GeV².

Fig. 13 Result of a sanity test, same as Fig. 8, but for the pp elastic differential cross section data at √s = 7 TeV from ref. [47], in the fitted range 0.37 ≤ −t ≤ 1.2 GeV².

Fig. 15 The pp [7,56,65] and pp̄ [50] ρ₀ data, compared to the values calculated from the model when the parameters R_q, R_d, R_qd and α are taken from Eq. (10) and Table 2, corresponding to the linear curves shown on panels (a)-(d) of Fig. 6. On this plot, a model-dependent Odderon effect is clearly identified: it corresponds to ρ₀^pp(s) ≠ ρ₀^pp̄(s), the non-vanishing difference between the excitation functions of ρ₀ for pp and for pp̄ collisions, as detailed in Appendix C.

Extrapolations

According to our findings in Section 5, the energy dependencies of the scale parameters R_q, R_d and R_qd are identical for pp and pp̄ scattering; only the energy dependence of the opacity parameter α differs.
The statistically acceptable quality of the fits shown in Fig. 6 and the success of the sanity tests performed in the previous section allow for a reliable extrapolation of the differential cross-sections of elastic pp and pp̄ collisions with the help of the ReBB model [36], limited to the investigated 0.541 ≤ √s ≤ 8 TeV center of mass energy and 0.377 ≤ −t ≤ 1.2 GeV² four-momentum transfer range. We extrapolate, in the TeV energy range, the pp differential cross sections to energies where measured pp̄ data exist and, the other way around, the pp̄ differential cross sections to energies where measured pp data exist. Three such extrapolations were performed: a pp extrapolation to √s = 1.96 TeV, to compare it with the 1.96 TeV D0 pp̄ dσ/dt data, and pp̄ extrapolations to √s = 2.76 and 7 TeV, to compare them with the pp dσ/dt data measured by TOTEM at these energies. Since the energy dependencies of the scale parameters R_q, R_d and R_qd are identical for pp and pp̄ scattering, as discussed in Sec. 5, their values are fixed in the course of the extrapolations at the fitted values given in Table 1; furthermore, since the energy dependence of the α parameter differs for pp and pp̄ scattering, the α(pp) and α(pp̄) values are fixed from their energy dependence trends seen in Fig. 6d. In addition, during the extrapolations the ε parameters in the χ² definition, Eq. (6), were optimized, while the last two terms in Eq. (6), i.e., the total cross section and ρ₀-parameter terms, were not included. In this way we handled the type B and type C errors of the published pp differential cross-sections so as to match these data as closely as possible to the differential cross-section of elastic pp̄ collisions within the allowed systematic errors, and vice versa. The results of the extrapolations are shown in Fig. 16, Fig. 17 and Fig. 18. The error band around these extrapolations is also evaluated, based on the envelope of the one standard deviation errors of the R_q(s), R_d(s), R_qd(s) model parameters and of the p₀ and p₁ parameters of α(s). As an example, the resulting ten curves, obtained when the values of the scale parameters are taken from the original fit while the value of α is taken from the trend, are explicitly shown for 1.96 TeV in Fig. 16. While at √s = 1.96 TeV no statistically significant difference is observed between the extrapolated pp and the measured pp̄ differential cross sections, at √s = 2.76 and 7 TeV remarkable and statistically significant differences are observed. In Figs. 17 and 18, even an untrained eye can see that the dip is filled in elastic pp̄ scattering, while it is not filled in elastic pp scattering. Thus we confirm the prediction of ref. [69], which, based on a three-gluon exchange picture that dominates at larger values of −t, predicted that the dip would be filled in high energy pp̄ elastic collisions. In this work, the differences between elastic pp and pp̄ collisions are quantified by the confidence levels obtained from the comparison of the extrapolated curves to the measured data: at 2.76 TeV, the hypothesis that the extrapolation agrees with the data is characterized by CL = 1.092 × 10⁻¹⁰ %, while at 7 TeV, CL = 0%. Theoretically, the observed difference can be attributed only to the effect of a C-odd exchange, as detailed recently in refs. [30][31][32]. At the TeV energy scale, the secondary Reggeon exchanges are generally known to be negligible. This has also been specifically cross-checked and confirmed recently in ref. [70].
Thus in the few TeV energy range of the LHC, the only possible source of a difference between the differential cross-sections of elastic pp and pp̄ collisions is a t-channel Odderon exchange. In the modern language of QCD, Odderon exchange corresponds to the exchange of C-odd colorless bound states consisting of an odd number of gluons [2,69,71]. Thus the CL calculated for the extrapolation to √s = 2.76 TeV corresponds to an Odderon observation with a probability of P = 1 − CL = 1 − 1.092 × 10⁻¹². This corresponds to χ²/NDF = 100.35/20 and to a 7.12σ model-dependent significance for the observation of a t-channel Odderon exchange, and hence for the existence of colorless bound states containing an odd number of gluons. When extrapolating the pp differential cross-sections from 2.76 down to 1.96 TeV, however, significance is lost: the comparison corresponds to χ²/NDF = 24.28/13 and to a 2.19σ effect, i.e., less than a 3σ effect. These two significances at 1.96 and 2.76 TeV can, however, be combined, providing a combined χ²/NDF = 124.63/33, which corresponds to a statistically significant 7.08σ effect. This combined 7.08σ significance increases to an even larger significance of an Odderon observation when we extrapolate the differential cross-section of elastic proton-antiproton collisions to √s = 7.0 TeV, where the probability of the Odderon observation becomes practically unity. Given that a 7.08σ effect is already well above the usual 5σ discovery level of statistical significance, we quote this as the lowest possible level of the significance of our model-dependent Odderon observation. As already mentioned in the introduction, we have also recently been involved in a truly model-independent search for Odderon effects in the comparison of the scaling properties of the differential cross-sections of elastic pp and pp̄ collisions at similar s but in the complete available t range. As compared to the model-dependent studies summarized in this manuscript, the advantage of the model-independent scaling studies of refs. [30][31][32] is that they scale out all effects arising from differences between pp and pp̄ elastic collisions due to possible differences in their σ_el(s), B(s) and their product, the σ_el(s)B(s) = σ_tot²(s)(1 + ρ₀²(s)) functions. As part of the Odderon signal in the ReBB model lies in the difference between the ρ₀(s) excitation functions for pp and pp̄ collisions, the significance of the Odderon signal is reduced in this model-independent analysis. When the interpolations are considered as theoretical curves, the significance is reduced to a 6.55σ effect [30], while when it is taken into account that the interpolations between experimental data also carry horizontal and vertical, type A and type B errors, the significance of the Odderon signal is further reduced to a 6.26σ effect [31,32]. Thus we conclude that the Odderon is now discovered, both in a model-dependent and in a model-independent manner, with a statistical significance that is well above the 5σ discovery limit of high energy particle physics. Finally, we close this section with predictions for the experimentally not yet available large-t differential cross-sections of pp collisions at √s = 0.9, 4, 5 and 8 TeV, shown in Fig. 19.
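The conversion between the χ²/NDF values, confidence levels and σ-equivalents quoted above can be reproduced with standard statistics tools. The sketch below uses scipy and assumes the usual two-sided Gaussian convention for translating a p-value into a significance, which appears consistent with the quoted numbers up to rounding.

```python
from scipy.stats import chi2, norm

def significance(chi2_value, ndf):
    """Convert chi^2 and NDF into a confidence level and a two-sided Gaussian sigma."""
    cl = chi2.sf(chi2_value, ndf)      # p-value of the comparison
    sigma = norm.isf(cl / 2.0)         # two-sided Gaussian equivalent
    return cl, sigma

comparisons = {
    "2.76 TeV": (100.35, 20),
    "1.96 TeV": (24.28, 13),
    "combined": (100.35 + 24.28, 20 + 13),   # chi^2 and NDF simply add
}
for label, (c2, ndf) in comparisons.items():
    cl, sig = significance(c2, ndf)
    print(f"{label:9s} chi2/NDF = {c2:6.2f}/{ndf:2d}   CL = {cl:.3g}   ~{sig:.2f} sigma")
```

Since the individual comparisons are independent, their χ² values and degrees of freedom are simply added before the conversion, which is how the combined 7.08σ figure is obtained.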
Fig. 17 The ReBB model extrapolation of the pp̄ dσ/dt to √s = 2.76 TeV, compared to the pp TOTEM dσ/dt data [9] measured at the same energy. The yellow band is the uncertainty of the extrapolation. The calculated CL value between the extrapolated model and the measured data indicates a significant difference between the pp and pp̄ differential cross sections, corresponding to a 7.1σ significance for the t-channel Odderon exchange.

Fig. 18 The ReBB model extrapolation of the pp̄ dσ/dt to √s = 7 TeV, compared to the pp TOTEM dσ/dt data [47] measured at the same energy. The yellow band is the uncertainty of the extrapolation. The calculated CL value between the extrapolated model and the measured data indicates a significant difference between the pp and pp̄ differential cross sections, hence a significant Odderon effect, which is dominant around the dip region.

Discussion

In the previous sections we have investigated what happens if we interpret the data in terms of a particular model, the Real Extended Bialas-Bzdak model. This also allows us to consider the Odderon signal in the excitation function of the model parameter α. We have shown in Appendix B that this model parameter is proportional to the experimentally measured parameter ρ₀, the ratio of the real to the imaginary part of the scattering amplitude at the optical point, and we have related the coefficient of proportionality to the value of the imaginary part of the scattering amplitude at vanishing impact parameter, λ(s) = Im t_el(s, b = 0), for elastic proton-proton collisions at √s ≤ 8 TeV. We have also shown that, within the framework of this ReBB model, the very different trends of ρ₀(s) in proton-proton and in proton-antiproton collisions enhance the model-independent Odderon signal from a 6.26σ and 6.55σ effect to a combined, at least 7.08σ effect. Recently, the TOTEM Collaboration concluded that only one condition remained to be satisfied in order to see a statistically significant Odderon signal: the logarithmically small energy gap between the lowest TOTEM energy of √s = 2.76 TeV at the LHC and the highest D0 energy of 1.96 TeV at the Tevatron needed to be closed. This energy gap has been closed in a model-independent way in refs. [30][31][32], using the scaling properties of elastic scattering and comparing the H(x) = (1/(Bσ_el)) dσ/dt scaling functions of elastic proton-proton and proton-antiproton collisions, as a function of x = −tB, at √s = 1.96, 2.76 and 7.0 TeV. The advantages of that method, with respect to comparing the cross sections directly, include the scaling out of the s-dependencies of σ_el(s), B(s) and their product, σ_el(s)B(s) = σ_tot²(s)(1 + ρ₀²(s)), as well as the normalization of the H(x) scaling function, which cancels the point-to-point correlated and t-independent normalization errors. The validity of the H(x) scaling for pp collisions and its violation by pp̄ collisions in the few TeV energy range resulted in a discovery-level statistical significance of an Odderon signal, characterized in refs. [30][31][32] to be at least 6.26σ, model independently, based on a careful interpolation of the experimental data points and of their point-to-point fluctuating errors, their point-to-point correlated and data-point-dependent errors, as well as their point-to-point correlated and data-point-independent errors. If these errors are considered as errors on a theory curve, the significance goes up to at least 6.55σ [30]. In high energy particle physics, the standard accepted discovery threshold corresponds to a 5σ effect.
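For completeness, the H(x) scaling function used in that model-independent comparison can be constructed directly from a measured differential cross-section table. The sketch below is generic; the input arrays and the values of B and σ_el are placeholders, not actual TOTEM or D0 numbers.

```python
import numpy as np

def h_scaling(minus_t, dsigma_dt, B, sigma_el):
    """Return (x, H(x)) with x = -t*B and H = dsigma/dt / (B*sigma_el).

    Units must be consistent, e.g. -t in GeV^2, B in GeV^-2,
    sigma_el in mb and dsigma/dt in mb/GeV^2.
    """
    x = minus_t * B
    H = dsigma_dt / (B * sigma_el)
    return x, H

# Placeholder inputs, for illustration only
minus_t  = np.array([0.4, 0.5, 0.6, 0.8])       # GeV^2
dsig_dt  = np.array([2e-2, 6e-3, 8e-3, 3e-3])   # mb / GeV^2
x, H = h_scaling(minus_t, dsig_dt, B=17.0, sigma_el=25.0)
print(np.c_[x, H])
```

Because H(x) is normalized by Bσ_el, overall normalization uncertainties and most of the s-dependence of σ_el and B cancel when H(x) curves at different energies, or for pp versus pp̄, are compared; this is precisely why the residual difference can be attributed to a crossing-odd contribution.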
In the previous section we have shown that the statistical significance of an Odderon observation in the limited range of 0.541 ≤ √s ≤ 8 TeV center of mass energy and 0.377 ≤ −t ≤ 1.2 GeV² four-momentum transfer is at least a combined 7.08σ effect, corresponding to a statistically significant, model-dependent Odderon observation. The √s = 7 TeV pp differential cross-sections are measured with asymmetric type B errors. In order to make sure that our results are reliable and reproducible, we have performed several cross-checks to test the reliability of our fit at √s = 7 TeV. One of these tests concerned the handling of the asymmetric, t-dependent type B systematic errors. We performed cross-checks taking at every point either the smaller or the larger of the up and down type B errors, to obtain a lower or an upper limit on their effects. We found that the parameters of the ReBB model remained stable under such a symmetrization of the type B systematic errors, as the resulting modification of the fit parameters was within the quoted errors on those parameters. Our final fits, presented above, were done with asymmetric type B errors, as detailed in Section 4. We therefore conclude that our fit at √s = 7 TeV is stable even with respect to the symmetrization of the type B systematic errors. We have also investigated the stability of our results when the energy range is extended towards lower values of √s, into the ISR energy range, as detailed in Appendix D. When the √s = 23.5 GeV data are included together with those summarized in Table 1, the energy dependence of the model parameters becomes quadratic in ln(s). This provides 3 × 5 = 15 model parameters for this broader energy range, as summarized in Table 3 and detailed in Appendix D. In this way, the non-linear terms are confirmed to be negligibly small in the TeV energy range, where we find the significant Odderon effects with as few as 10 model parameters. These 10 parameters are given in Table 2. It turns out in Sec. 4 that the ReBB model as presented in ref. [36] does not yet provide a statistically acceptable fit quality to the differential cross-section of √s = 13 TeV elastic pp scattering. This might be due to the emergence of the black-ring limit of elastic proton-proton scattering instead of the expected black-disc limit. In what follows we shortly discuss the earlier and more recent results on the black-ring-shaped interaction region of the colliding protons. A complementary way of studying high-energy scattering processes is to pass from the momentum transfer t to the impact parameter b. In 1963, van Hove introduced the inelasticity profile, or overlap function [72,73], which corresponds to the impact parameter distribution of the inelastic cross section and characterizes the shape of the interaction region of the two colliding particles. The natural expectation is that the most inelastic collisions are central, i.e., that the inelasticity profile has a maximum at b = 0, consistently with the black-disc terminology. The possibility of a minimum at b = 0, i.e., a peripheral form of the inelasticity profile, was first considered in Ref. [74]; it implies the shape of a black ring rather than that of a black disc. In Ref.
[58], it was shown that the inelasticity profile of protons is governed by the ratio of the slope of the diffraction cone to the total cross section through the variable Z = 4πB/σ tot and the evolution to values of Z < 1 at LHC energies implies a transition from the black disk picture of the interaction region to a black ring (or torus-like) shape. These results were reviewed in Ref. [59] using the unitarity relation in combination with experimental data on elastic scattering in the diffraction cone. Ref. [58] concludes that the shape of the interaction region of colliding protons could be reliably determined if the behavior of the elastic scattering amplitude at all transferred momenta was known. The black ring shape of the interaction region can be interpreted as the presence of a hollow at small impact parameter values. In Refs. [75][76][77][78] the authors study the hollowness phenomenon within an inverse scattering approach based on empirical parameterizations. Ref. [76] concludes that the very existence of the hollowness phenomenon is quantum-mechanical in nature. Hollowness has also been reported to emerge from a gluonic hot-spot picture of the pp collision at the LHC energies [60]. It is shown in Ref. [78] that the emergence of such a hollow strongly depends on the phase of the scattering amplitude. In Ref. [79] the authors demonstrated the occurrence of the hollowness phenomenon in a Regge model above √ s ∼ 3 TeV. Ref. [61] discusses the absorptive (saturation of the black disk limit) and reflective (saturation of the unitarity limit) scattering modes of proton-proton collisions concluding that a distinctive feature of the transition to the reflective scattering mode is the developing peripheral form of the inelastic overlap function. Reflective scattering is detailed also in Refs. [80][81][82]. The authors of Ref. [83] argue that the presence of nonzero real part of the elastic scattering amplitude in the unitarity condition enables to conserve the traditional black disk picture refuting the existence of the hollowness effect. However, as noted in Ref. [79], the criticism that has been raised in Ref. [83] is based on an incorrect perception of the approximations involved and does not address the arbitrariness of the t-dependence of the ratio ρ which is crucial for hollowness. In Refs. [84,85] the hollowness effect is interpreted as a consequence of fundamental thermodynamic processes. Ref. [57] notes that the onset of the hollowness effect is possibly connected to the opening of a new channel between √ s = 2.76 and 7 TeV as indicated by the measured σ el /σ tot ratio and the slope parameter B 0 data. In Ref. [86] the model independent Lévy imaging method is employed to reconstruct the proton inelasticity profile function and its error band at different energies. This method established a statistically significant proton hollowness effect, well beyond the 5σ discovery limit. This conclusion is based on a model independent description of the TOTEM protonproton differential cross-section data at √ s = 13 TeV with the help of the Lévy imaging method, that represents the TOTEM data in a statistically acceptable manner, corresponding to a confidence level of CL = 2 %. Summary Currently, the statistically significant observation of the elusive Odderon is a hot research topic, with several interesting and important results and contributions. 
In the context of this manuscript, Odderon exchange corresponds to a crossing-odd component of the scattering amplitude of elastic proton-proton and proton-antiproton collisions that does not vanish at asymptotically high energies, as probed experimentally by the D0 Collaboration for proton-antiproton and by the TOTEM Collaboration for proton-proton elastic collisions in the TeV energy range. Theoretically, the observed differences can be attributed only to the effect of a C-odd exchange, as detailed recently in refs. [30][31][32]. Those model-independent studies resulted in an at least 6.26σ statistical significance of the Odderon exchange [30][31][32]. The goal of the research summarized in this manuscript was to cross-check, in a model-dependent way, the persistence of these Odderon effects, and to provide a physical picture with which to interpret these results. Using the ReBB model of ref. [36], developed originally to describe precisely the differential cross-section of elastic proton-proton collisions, we were able to describe also the proton-antiproton differential cross section at √s = 0.546 and 1.96 TeV without any modification of the formalism. We have also shown that this model describes the proton-proton differential cross section at √s = 2.76 and 7 TeV in a statistically acceptable manner, with CL > 0.1%. Using our good quality, statistically acceptable fits in the 0.5 ≤ √s ≤ 8 TeV energy region, we have determined the energy dependence of the model parameters to be an affine linear function of ln(s/s₀). We have verified this energy dependence by demonstrating that the excitation functions of the physical parameters of the Real Extended Bialas-Bzdak model satisfy the so-called sanity tests: they describe in a statistically acceptable manner not only the four datasets that formed the basis of the determination of the excitation functions, but all other published datasets in the √s = 0.541-8.0 TeV energy domain as well. We have also demonstrated that the resulting excitation functions for the total cross-sections and for the ρ₀ ratios correspond to the experimentally established trends. Remarkably, we have observed that the energy dependence of the geometrical scale parameters is identical in elastic proton-proton and proton-antiproton collisions: only the energy dependence of the shape or opacity parameter α(s) differs significantly between pp and pp̄ collisions. After determining the energy dependence of the model parameters, we made extrapolations in order to compare the pp and pp̄ differential cross sections in the few TeV energy range, corresponding to the energy of the D0 measurement at √s = 1.96 TeV in ref. [45] and of the TOTEM measurements at √s = 2.76 and 7.0 TeV. Doing so, we found evidence for Odderon exchange with a high statistical significance. We have cross-checked that this evidence withstands several reasonable variations of the analysis, for example the possible presence of small quadratic terms of ln(s/s₀) in the excitation functions of the parameters of this model. Subsequently, we have also predicted the details of the diffractive interference (dip and bump) region at √s = 0.9, 4, 5 and 8 TeV (currently, preliminary TOTEM experimental data from an on-going analysis at √s = 8 TeV are publicly presented; see ref. [54] for further details). We have shown that, within the framework of this ReBB model, the very different trends of ρ₀(s) in proton-proton and in proton-antiproton collisions enhance the model-independent Odderon signal, from the 6.26σ and 6.55σ effects of refs. [30][31][32] to an at least 7.08σ effect.
This gain in significance is due to the possibility of extrapolating the differential cross-section of elastic pp̄ scattering from √s = 1.96 TeV to 2.76 TeV. It is important to note that in the evaluation of the 7.08σ Odderon effect only pp̄ data at √s = 1.96 TeV and pp data at √s = 2.76 TeV were utilized, amounting to a model-dependent but successful closing of the energy gap between the D0 and TOTEM measurements. Let us also emphasize that our Odderon observation is valid in the limited kinematic range of 0.541 ≤ √s ≤ 8 TeV center of mass energy and 0.377 ≤ −t ≤ 1.2 GeV² four-momentum transfer. When extrapolating the pp differential cross-sections from 2.76 down to 1.96 TeV, significance is lost, corresponding to χ²/NDF = 24.28/13 and to a 2.19σ effect, which is less than a 3σ effect at 1.96 TeV. However, these two significances at 1.96 and 2.76 TeV can be combined, providing χ²/NDF = 124.63/33, which corresponds to a statistically significant, combined 7.08σ effect. This combined 7.08σ significance increases to an even larger significance of an Odderon observation when we extrapolate the differential cross-section of elastic proton-antiproton collisions to √s = 7.0 TeV. Given that a 7.08σ effect is already well above the usual 5σ discovery level of statistical significance, we quote this as the possibly lowest level of the significance of our model-dependent Odderon observation. Concerning the direction of future research: the Odderon has now been discovered both in a model-independent way, described in refs. [30][31][32], and in a model-dependent way, described in this manuscript; the obvious next step is therefore to extract its detailed properties, both in a model-independent and in a model-dependent manner. The main properties of the Odderon, as well as of the Pomeron, based on the ReBB model, are already summarized in Appendix C. Let us also note that the ReBB model as presented in ref. [36] does not yet provide a statistically acceptable fit quality to the differential cross-section of √s = 13 TeV elastic pp scattering. This might be due to the emergence of the black-ring limit of elastic proton-proton scattering instead of the expected black-disc limit, as detailed in Sec. 8, or to the very strong non-exponential features of the differential cross-sections of these collisions at low −t, as shown in refs. [6,7]. We therefore conclude that the Real Extended Bialas-Bzdak model needs to be further generalized for the top LHC energies and above. This work is in progress, but it goes clearly well beyond the scope of the current, already rather detailed manuscript. Importantly, any possible outcome of these follow-up studies is not expected to modify the model behavior in the presently investigated energy range; hence, from the point of view of the task solved in this manuscript, our work is complete and further refinements are not necessary. In short, we determined the model-dependent statistical significance of the Odderon observation to be an at least 7.08σ effect in the 0.5 ≤ √s ≤ 8 TeV center of mass energy and 0.377 ≤ −t ≤ 1.2 GeV² four-momentum transfer range. Our analysis is based on the published D0 and TOTEM data of refs. [6,9,45] and uses as a tool the Real Extended Bialas-Bzdak model of ref. [36].
We have cross-checked that this unitary model works in a statistically acceptable, carefully tested and verified manner in this particular kinematic range. Our main results are illustrated on Figs. 17 and 18. The elastic scattering amplitude in the b impact parameter space can be written in the so called eikonal form as: where Ω (s, b) is the opacity or eikonal function and b = | b|. In general this opacity is a complex valued function [42,87]. The shadow profile function is given as and this is the reason why the shadow profile function is also frequently called as the inelastic profile function, as it describes the probability distribution of inelastic collisions in the impact parameter space. This way the inelastic pp scattering may be characterized by a probability distribution. However, let us stress that elastic scattering is an inherently quantum process, as evidenced by a diffractive interference that results in diffractive minima and maxima of the differential cross-sections. Probabilistic interpretation can be given only to the inelastic scattering, or to the sum of elastic scattering plus propagating without interactions. If the real part of the scattering amplitude can be neglected, then the Ω (s, b) has only a real part given as The inelastic profile function was evaluated with the help of Glauber's multiple diffraction theory [42] for the colliding protons consisting a constituent quark and diquark or p = (q, d) picture in Section 2.2 of ref. [36] and the results were visualized in Figs. 5 and 9 of that paper. The imaginary part of the opacity function in Ref. [36], which generates the real part of the scattering amplitude, is defined to be proportional to the inelastic scattering probability, were α, mentioned earlier, is a free parameter and proportional to ρ 0 (see Appendix B). This ansatz assumes that the inelastic collisions at low four-momentum transfers correspond to the cases when the parts of proton suffer elastic scattering but these parts are scattered to different directions, not parallel to one another. Other models were also tested on TOTEM data in ref. [36], but this physically motivated assumption worked well and was shown to be consistent with the experimental data at √ s = 7 TeV in ref. [36]. The inelastic scattering probability in the BB model [39] for a fixed impact parameter b as a probability distribution, given as where s q , s q , s d and s d are the transverse positions of the quarks and diquarks in the two colliding protons (see Fig. 20). D( s q , s d ) denotes the distribution of quark and diquark inside the proton which is considered to be Gaussian: where λ = m q /m d is the ratio of the quark and diquark masses, furthermore R qd is the standard deviation of the quark and diquark distance emerging as a free parameter. The The term σ ( s q , s d ; s q , s d ; b) is the probability of inelastic interactions at a fixed impact parameter and transverse positions of all constituents and given by a Glauber expansion as follows: where a, b ∈ {q, d}. The terms σ ab ( s) are the inelastic differential cross-sections of the binary collisions of the constituents having Gaussian shapes: where R q , R d and A ab are free parameters. The physical meaning of the R q and R d parameters as well as the impact parameter b and the coordinates s q and s d is illustrated on Fig. 20 and detailed in ref. [36]. The inelastic quark-quark, quark-diquark and diquark-diquark cross sections are obtained by integrating Eq. 
(A.9): The number of the free parameters of the model can be reduced demanding that the ratios of the cross sections are σ qq,inel : σ qd,inel : σ dd,inel = 1 : 2 : 4 , (A.11) expressing the idea that the constituent diquark contains twice as many partons than the constituent quark and also that the colliding constituents do not "shadow" each other. Using Eq. (A.10) and the assumptions given by Eq. (A.11), the A qd and A dd parameters can be expressed through A qq : Counting the number of free parameters one finds that the model now contains six of them: R q , R d , R qd , α, λ and A qq . However, it was shown in Ref. [36] that the latter two parameters can be fixed. λ = 0.5 if the diquark is very weakly bound, so that its mass is twice as large as that of the valence quark. The Real Extended Bialas Bzdak model describes the experimental data in the √ s ≤ 8 TeV region with A qq = 1 fixed, assuming that head-on qq collisions are inelastic with a probability of 1, corresponding to Eq. (A.9). Substituting Eq. (A.3) and Eq. (A.4) to Eq. (A.1) one obtains for the scattering amplitude: This equation is, in fact, a special solution of the unitarity relation, obtained from the optical theorem. The integral forσ in (s, b) defined by Eq. (A.5) can be calculated analytically with the methods described in Refs. [36,39]. In order to compare the theoretical model to the experimental data, the amplitude in impact parameter space, given by Eq. (A.13), has to be transformed into momentum space by a Fourier-Bessel transformation: (A.14) In the above formula ∆ = | ∆ | is the transverse momentum and J 0 is the zeroth order Besselfunction of the first kind. Here the high energy limit is considered, i.e., √ s → ∞ and then ∆ (t) √ −t. Substituting the expression for the elastic scattering amplitude given by Eq. (A.14) into Eqs. (1)-(5) the model for fitting the scattering data is complete. The Fourier-Bessel integral in the amplitude can be calculated numerically during the fitting procedure. Appendix B: On the proportionality between ρ 0 (s) and α(s) in the ReBB model Let us first of all note that the detailed description of the Real Extended Bialas-Bzdak (ReBB) model is given in Section 2.2 of ref. [36] and also summarized in Appendix A. We have utilized this formalism throughout the fits described in the body of the manuscript, however in this Appendix, we need to develop this formalism a bit further, as in the earlier publications the details of the relations between the ρ 0 parameter (the ratio of the real to the imaginary part of the scattering amplitude at t = 0) and the parameter α of the ReBB model (that is responsible for filling up the singular dip of the original Bialas-Bzdak model of refs. [37-40]) has not yet been detailed before. Let us stress, that ReBB model is unitary, by definition. Thus the elastic scattering amplitude in the ReBB model too has unitary form given by Eq. A.1, where the opacity function Ω (s, b) is, in general, a complex valued function. In the ReBB model, the impact parameter dependent scattering amplitude is given by Eq. A.13. Now we develop two small set of approximations that are based on the physical domain of the ReBB model parameters. From the fits performed so far, we always find α 0.165, corresponding to Table 1 of ref. [36] and Table 1 of the current manuscript. 
In the physical case, when ασ in (s, b) 1 is small, one obtains for the real and imaginary parts of the scattering amplitude, respectively, and Given that the real part of the scattering amplitude is thus proportional to α while the imaginary part is independent of α, we indeed find that Based on Figs. 5 and 9. of ref. [36] and the model-independent results of the Levy series method detailed in ref. [86], if the colliding energy is in the √ s ≤ 8 TeV domain, corresponding to the domain of our extrapolations, the shadow profile function is nearly Gaussian. Such a behaviour can be obtained easily as follows. Let us approximate the imaginary part of the scattering amplitude with a Gaussian, i.e., where λ (s) Imt el (s, b = 0). Then the inelastic profile or shadow profile function takes the form ofσ This expression, up to second order terms, starts as a Gaussian, but it actually corresponds to the subtraction of a broader and smaller Gaussian from a narrower and larger Gaussian in the physical domain of λ (s) ≤ 1. As P 0 ≡ P(s, 0) =σ inel (s, b = 0) is the value of the profile or inelastic profile function at b = 0, we find the following relation between P 0 and λ (s): When performing the transformation from the impact parameter space to momentum space, the result for the real to imaginary part ratio of the forward scattering amplitude, defined by Eq. (5), is In the above equation, we may consider that λ ≡ λ (s) is a function of P 0 (s) based on Eq. (B.6). Based on the formalism of Section 2.2 of ref. [36], P 0 ≡ P 0 (s) is a function of R q (s), R d (s) and R qd (s) only, but otherwise it is independent of the fourth physical parameter of the Real Extended Bialas-Bzdak model, α(s). Hence the excitation function of P 0 (s) is determined completely by the parameters p 1 and p 0 of the excitation functions of the scale parameters (R q , R d , R qd ), as summarized in Table 2. This way, the P 0 = P 0 R q (s), R d (s), R qd (s) function is uniquely given by with the help of eq. (A.13), corresponding to eq. (29) of ref. [36]. We have cross-checked the result of these analytic considerations compared to the fit results on α(s) and the measured values of ρ 0 (s) at the ISR energies and we find an excellent agreement between the analytic approximations and the numerical results at ISR, corresponding to the λ (s) range of 0.73 -0.78, as illustrated in Fig. 31. The linear relationship between ρ 0 and the ReBB model parameter α is also indicated at the ISR energy range, in Fig. 32. Similarly, we find an excellent agreement between the analytic calculations of eq. (B.7) and the numerical and experimental results at the energy scale of 0.5 ≤ √ s ≤ 8 TeV, as demonstrated on Fig. 7, presented in the body of this manuscript. Appendix C: Pomeron and Odderon from the Real Extended Bialas-Bzdak model In this Appendix we summarize, for the sake of clarity, how we can determine the crossingeven and crossing-odd components of the scattering amplitude, based on the ReBB model. In the TeV energy range, we indentify these components with the Pomeron and the Odderon amplitude, given that the Reggeon contributions in this energy range are generally expected to be negligibly small, as confirmed also by explicit calculations for example in ref. [79]. In this energy range, the proton-proton (pp) as well as the proton-antiproton (pp) elastic scattering amplitudes can be written as where we have suppressed the dependence of these amplitudes on the Mandelstam variables: T pp el ≡ T pp el (s,t) etc. 
If the pp and the pp scattering amplitudes are known, then the crossing even and the crossing odd components of the elastic scattering amplitude can be reconstructed as In this manuscript, we have utilized the Real Extended Bialas-Bzdak or ReBB model of ref. [36], to determine the elastic scattering amplitude for elastic pp and pp scattering. This model is based on R. J. Glauber's theory of multiple diffractive scattering [41][42][43], and assumes that the elastic proton-proton scattering is based on multiple diffractive scattering of the constituents of the protons. Hence this ReBB model has two main variants: the case when the proton is assumed to have a constituent quark and a diquark component is referred to as p = (q, d) model, while the case when the diquark is assumed to be further resolved as a weakly bound state of two constituent quarks is the p = (q, (q, q)) model. It was shown before that this p = (q, (q, q)) variant predicts too many diffractive minima for the differential cross-section, hence in this paper we utilize the p = (q, d) variant as formulated in ref. [36], without any change. With the help of the ReBB model of ref. [36], we have described in a statistically acceptable manner the pp and pp differential cross-sections. In this ReBB model the pp elastic scattering amplitude depends on s only through four energy dependent parameters, that we denote here, for the sake of clarity, as R pp q (s), R pp d (s), R pp qd (s) and α pp (s): T pp el (s,t) = F(R pp q (s), R pp d (s), R pp qd (s), α pp (s);t). (C.5) Similarly, we described the amplitude of the elastic pp scattering with 4 energy dependent parameters, that we denote here for the sake of clarity as R pp q (s), R pp d (s), R pp qd (s) and α pp (s): Here F stands for a symbolic short-hand notation for a function, that indicates how the left hand side of the pp and pp scattering amplitude depend on s through their s-dependent parameters. The scale parameters R q , R d , and R qd correspond to the Gaussian sizes of the constituent quarks, diquarks and their separation in the scattering (anti)protons. Each of these parameters is s-dependent. Since the trends of R q (s), R d (s) and R qd (s) follow, within errors, the same excitation functions in both pp and pp collisions, as indicated on panels a, b and c of Fig. 6, we have denoted these in principle different scale parameters with the same symbols in the body of the manuscript: On the other hand, the opacity or dip parameters α(s) are different in elastic pp and pp reactions: if they too were the same, then the scattering amplitude for pp and pp reactions were the same, correspodingly the differential cross-sections were the same in these reactions, while experimental results indicate that they are qualitatively different. Hence corresponding to panel d of Fig. 6 and to Table 2. In this form, the ReBB model of ref. [36] provides a statistically acceptable description of the elastic scattering amplitude, both for pp and pp elastic scattering, in the kinematic range that extends to at least 0.372 ≤ −t ≤ 1. Here G and H are just sympolic short-hand notations that summarize how the left hand side of the above equations depend on s through their s-dependent parameters. The differential cross section, Eq. (1), the total, elastic and inelastic cross sections, Eqs.(2)-(4), as well as the real to imaginary ratio, Eq. (5), and the nuclear slope parameter, characterize experimentally the (s,t) dependent elastic scattering amplitudes, T el (s,t) discussed above. 
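For reference, the crossing decomposition invoked at the beginning of this appendix can be written compactly as follows (this is the standard convention, which the internal consistency of the formulas quoted later also supports):

```latex
T^{pp}_{\mathrm{el}}(s,t) = T^{P}_{\mathrm{el}}(s,t) + T^{O}_{\mathrm{el}}(s,t), \qquad
T^{p\bar p}_{\mathrm{el}}(s,t) = T^{P}_{\mathrm{el}}(s,t) - T^{O}_{\mathrm{el}}(s,t),
```

so that

```latex
T^{P}_{\mathrm{el}}(s,t) = \tfrac{1}{2}\left[ T^{pp}_{\mathrm{el}}(s,t) + T^{p\bar p}_{\mathrm{el}}(s,t) \right], \qquad
T^{O}_{\mathrm{el}}(s,t) = \tfrac{1}{2}\left[ T^{pp}_{\mathrm{el}}(s,t) - T^{p\bar p}_{\mathrm{el}}(s,t) \right].
```

With this sign convention the Odderon amplitude vanishes identically whenever the pp and pp̄ amplitudes coincide, which is the property exploited in the theorems below.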
These quantities can be evaluated for a specific process like the elastic pp or pp scattering. Given that we evaluate the elastic scattering amplitude for both of them in the TeV energy range, that yields also the (s,t) dependent elastic scattering amplitude also for the Pomeron and the Odderon exchange, we have the possibility to evaluate these quantities for the crossing-even Pomeron (P) and for the crossing-odd Odderon (O) exchange. The momentum space dependent scattering amplitude T el (s,t), for spin independent processes, is related to a Fourier-Bessel transform of the impact parameter dependent elastic scattering amplitudes t el (s, b) as given by Eq. A.14. This impact parameter dependent amplitude is constrained by the unitarity of the scattering matrix S, SS † = I (C.14) where I is the identity matrix. Its decomposition is S = I + iT , where the matrix T is the transition matrix. In terms of T , unitarity leads to the relation which can be rewritten in terms of the impact parameter or b dependent amplitude t el (s, b) is the impact parameter dependent probability of inelastic scattering. It can be equivalently expressed from the above unitarity relation as It follows thatσ as a consequence of unitarity. Given that the ReBB model of ref. [36] is unitary, those dispersion relations that are consequences of the unitarity of the scattering amplitude are automatically satisfied. For example, the dispersion relations discussed in refs. [88] and [89] are automatically satisfied by the unitary ReBB model. The impact parameter dependent elastic scattering amplitudes for elastic pp and pp scatterings are given in terms of the complex opacity or eikonal functions Ω (s, b). The defining relations are As another consequence of the unitary relations, we have In ref. [36], three different possibilities were considered for the solution of the unitarity relation, using various functions to model the imaginary part of the complex opacity function Ω , that corresponds to the real part of the scattering amplitude. Out of the considered three possible choices, the assumption that was found to be consisent with the experimental data on pp elastic scattering at the ISR and LHC energies is defined by Eq. (A.13). At that time it was not yet clear that a similar relation works also for pp collisions. A very important advantage of this particular solution to the unitarity equation is that the multiple diffractive scattering theory of R. J. Glauber predictsσ inel (s, b) to depend only on the s-dependent geometrical scales (R q (s), R d (s), R qd (s)). Given that the R q (s), R d (s), R qd (s) scales are found in panels a, b, and c of Fig. 6 to be independent of the type of the elastic collisions i.e. to be the same in elastic pp and pp collisions in the body of this paper, the imaginary part of the complex opacity function in elastic pp and pp collisions has the same b-dependent factor, but has an s-dependent prefactor that is in principle a different function in the cases of elastic pp and pp collisions: (C.23) These relations yield the following simple expressions for the impact parameter dependent elastic pp and pp scattering amplitudes It then clearly follows that in the ReBB model t pp As detailed in Appendix B, α(s) ∼ ρ 0 (s) both for pp and pp elastic collisions. At the same time α(s) controls the value of the differential cross section in the region of the dip in these collisions. Thus, within the ReBB model there is a deep connection between the t = 0 and the dip region. 
This supports the findings that the recently observed decrease of ρ₀(s) around √s = 13 TeV, as well as the dip-bump structure in pp scattering and its absence in pp̄ scattering, are both consequences of the Odderon contribution. In the ReBB model, this Odderon contribution is encoded in the difference between α_pp(s) and α_pp̄(s). This conclusion is supported also by detailed calculations of the ratios of the modulus squared Odderon and Pomeron scattering amplitudes. Thus if ρ₀^pp(s) ≠ ρ₀^pp̄(s), within the ReBB model it follows that α_pp(s) ≠ α_pp̄(s), or equivalently that t^O_el(s, b) ≠ 0 in the TeV region. Within the framework of the ReBB model we can thus significantly sharpen an Odderon theorem noted in ref. [30]. The weaker, original form of this theorem was formulated in ref. [30] as follows: Theorem 1 - If the pp differential cross sections differ from those of pp̄ scattering at the same value of s in the TeV energy domain, then the Odderon contribution to the scattering amplitude cannot be equal to zero. This theorem is model-independently true, as it depends only on the general structure of the theory of elastic scattering. The outline of the proof is that the differential cross-section, Eq. (1), is proportional to the modulus squared elastic scattering amplitude, both for pp and for pp̄ scattering. If the modulus squares of two complex functions are different, then the two complex functions, corresponding to the elastic scattering amplitudes of pp and pp̄ collisions, cannot be identical. Hence their difference, proportional to the Odderon amplitude in the TeV energy range, cannot be zero. Within the ReBB model, this theorem can be significantly sharpened. The sharpened, stronger version of the above theorem reads as follows: Theorem 2 - In the framework of the unitary Real Extended Bialas-Bzdak (ReBB) model, the elastic pp differential cross sections differ from those of elastic pp̄ scattering at the same value of s in the TeV energy domain if and only if the Odderon contribution to the scattering amplitude is not equal to zero. This happens if and only if α_pp(s) ≠ α_pp̄(s) and, as a consequence, if and only if ρ₀^pp ≠ ρ₀^pp̄. This theorem is proven by the explicit expressions for the impact parameter dependent elastic scattering amplitudes of the C-even Pomeron and the C-odd Odderon exchange in the ReBB model, as detailed below. These relations are consequences of the unitarity of the ReBB model. It is straightforward to note that the Pomeron amplitude given above is crossing-even, while the Odderon amplitude is crossing-odd. These relations can be equivalently rewritten for the Pomeron amplitude, using the shorthand notation σ̃_in ≡ σ̃_in(s, b) and suppressing the s dependencies of α_pp(s) and α_pp̄(s). The resulting form of the Pomeron amplitude is explicitly C-even, corresponding to the Pomeron amplitude in the unitary Real Extended Bialas-Bzdak model. Thus, if the difference between the opacity parameters α of pp and pp̄ elastic collisions is small, the Pomeron is predominantly imaginary, with a small real part that is proportional to sin[(α_pp + α_pp̄) σ̃_in / 2]. Similarly, for the Odderon the ReBB model yields an amplitude that is explicitly C-odd and satisfies unitarity, corresponding to the Real Extended Bialas-Bzdak model.
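The statements about the real and imaginary parts of the Pomeron and Odderon amplitudes can be checked numerically. The sketch below assumes that the ReBB impact-parameter amplitude has the form t_el(s, b) = i[1 - exp(i α σ̃_in(s, b)) sqrt(1 - σ̃_in(s, b))] discussed in Appendix A, uses an arbitrary illustrative shadow profile and order-of-magnitude opacity values rather than fitted ones, and then forms the crossing-even and crossing-odd combinations.

```python
import numpy as np

def t_el(sigma_in, alpha):
    """ReBB-type impact-parameter amplitude: i*(1 - exp(i*alpha*sigma_in)*sqrt(1 - sigma_in))."""
    return 1j * (1.0 - np.exp(1j * alpha * sigma_in) * np.sqrt(1.0 - sigma_in))

# Illustrative inputs only (not fitted values)
b = np.linspace(0.0, 3.0, 301)                 # impact parameter, arbitrary units
sigma_in = 0.95 * np.exp(-(b / 1.2) ** 2)      # toy inelastic (shadow) profile, 0 <= sigma_in < 1
a_pp, a_pbp = 0.10, 0.16                       # opacity parameters, order of magnitude only

t_pp, t_pbp = t_el(sigma_in, a_pp), t_el(sigma_in, a_pbp)
t_P = 0.5 * (t_pp + t_pbp)                     # crossing-even (Pomeron) combination
t_O = 0.5 * (t_pp - t_pbp)                     # crossing-odd (Odderon) combination

# Re t_P = sqrt(1-sig)*sin((a_pp+a_pbp)*sig/2)*cos((a_pp-a_pbp)*sig/2), with sig = sigma_in
ratio_P = t_P.real / (np.sqrt(1 - sigma_in) * np.sin(0.5 * (a_pp + a_pbp) * sigma_in))
# |t_O| = sqrt(1-sig)*|sin((a_pp-a_pbp)*sig/2)| exactly
ratio_O = np.abs(t_O) / (np.sqrt(1 - sigma_in) * np.abs(np.sin(0.5 * (a_pp - a_pbp) * sigma_in)))

print("Re t_P ratio (should be close to 1):", ratio_P.min(), ratio_P.max())
print("|t_O|  ratio (should be exactly 1) :", ratio_O.min(), ratio_O.max())
print("t_O vanishes for equal opacities   :",
      np.allclose(0.5 * (t_el(sigma_in, 0.12) - t_el(sigma_in, 0.12)), 0.0))
```

Within numerical precision the first ratio equals cos[(α_pp − α_pp̄) σ̃_in / 2] ≈ 1 and the second is exactly unity, in line with the proportionalities stated in this appendix; the toy profile and opacity values are purely illustrative.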
If the difference between the opacity parameters α for pp and pp elastic collisions becomes vanishingly small, both the real and the imaginary part of the Odderon amplitude vanishes, as they are both proportional to sin α pp −α pp 2σ in . If this term is non-vanishing, but (α pp + α pp )σ in remains small, the above Odderon amplitude remains predominantly real, with a small, leading order linear in (α pp + α pp )σ in imaginary part. Given that α(s) ∝ ρ 0 (s) in the ReBB model, as detailed in Appendix B, and experimentally ρ 0 (s) ≤ 0.15 at LHC energies, the ReBB model Odderon amplitude is predominantly real at small values of t. Eqs. (C.27,C.28) complete the proof, that the Odderon amplitude in the ReBB model vanishes if and only if the opacity parameters α(s) for elastic pp and pp scattering are equal, corresponding to α pp (s) = α pp (s) . Note that these proofs are independent of the detailed calculations of the inelastic scattering probabilityσ in = σ inel (s, b), hence they are valid both in the p = (q, d) and in the p = (q, (q, q)) variant of the ReBB model. In fact they are valid for possible further generalized ReBB models as well, where for example the distribution of the scattering by quarks or diquarks is not assumed to be a Gaussian anymore, or if further parton contributions get resolved in a future paper. In the following plots, we have evaluated the differential and total cross-sections of the Pomeron and the Odderon exchange, as well as the ratios of these differential cross-sections, to determine the main properties of these processes with the help of the ReBB model of ref. [36]. Fig. 21 indicates the calculated differential cross-section for Pomeron exchange based on the fits presented in the body of this manuscript, utilizing the ReBB model of ref. [36]. This result is based on Table 2, that summarizes the parameters of the excitation functions for the opacity parameters α pp (s), α pp (s) and the scale parameters, R q (s), R d (s), R qd (s), corresponding to Figs. 6a-6d in the body of this manuscript. The top panel indicates the central values for the differential cross-section of Pomeron exchange at various colliding energies √ s, while the lower panel includes our first estimates on the systematic errors of this reconstruction. These first error band estimates were obtained by neglecting the possibly strong correlations between the parameters p 0 and p 1 . These figures also indicate that Pomeron exchange does not lead to a pronounced diffractive minimum structure, in contrast to the experimental results for the diffractive minimum in elastic pp collisions. This differential cross-section is more similar to the neck and shoulder type of structure, experimentally observed in elastic pp collisions, as discussed in the body of this manuscript. Fig. 22 is the same as Fig. 21, but for the C-odd Odderon exchange as evaluated from the ReBB model of ref. [36]. The top panel indicates the central values for the differential cross-section of Odderon exchange at various colliding energies √ s in the TeV domain, while the lower panel includes our first estimates on the systematic errors of this reconstruction, obtained by neglecting the possibly strong correlations between the parameters p 0 and p 1 . These figures also indicate that Odderon exchange may lead even to two pronounced diffractive minima, in contrast to the experimental results for the diffractive minimum in elastic pp collisions. 
However, the interference between the Pomeron and the Odderon exchange leads to a single well defined and experimentally resolvable diffractive minimum in elastic pp collisions at the TeV scale. Fig. 23 indicates the ratio of the differential cross-sections for Odderon to Pomeron exchange. This figure indicates that the Odderon contribution is important and relatively large in three kinematic regions: near to the t = 0 optical point, near to the position of the diffractive minimum of elastic pp collisions, t dip ≈ −0.5 GeV 2 , and then at higher squared Table 2 in the body of this manuscript. The top panel indicates the central values, while the lower panel includes our first estimates on the systematic errors of this reconstruction. The presented (over)estimates of the systematic error bands were obtained by neglecting the possible correlations between the parameters p 0 and p 1 for each of the excitation functions given in Table 2. momentum transfer values, −t 1 GeV 2 . This figure also highlights with an explicit calculation, that the Odderon contribution to the dip region is correlated with the Odderon contribution at −t = 0, thus the Odderon signals at the dip region appear simultaneously with the Odderon signals at −t = 0 . The last three figures characterize the modulus square of the amplitude for Pomeron and for Odderon exchanges in the ReBB model. A very important information, however, is included to the phase of these amplitudes, that are shown on the subsequent two figures. The phase of Pomeron exchange is indicated on Fig. 24. This indicates that at low −t, the Pomeron contribution is predominantly imaginary, with a real component of the Pomeron exchange starting to be important near the diffractive minimum of elastic pp collisions. On this plot, the principal value of the phase of the Pomeron (C-even) amplitude is indicated with a thin line, while the continuously varying phase evaluated from the multi-valued inverse tangent function is shown with the thick line. The phase of Odderon exchange is indicated on Fig. 25. This indicates that at low −t, the Odderon contribution is predominantly real, with an imaginary component of the Odderon exchange starting to be important already at low −t near to 0.1 GeV 2 . This phase starts to change quickly and the Odderon becomes predominantly real again near the diffractive minimum of elastic pp collisions. On this plot, the principal value of the phase of the Odderon (C-odd) amplitude is indicated with a thin line, while the continuously varying phase evaluated from the multi-valued inverse tangent function is shown with the thick line. Fig. 26 indicates the value of the real to imaginary ratio of the scattering amplitude ρ 0 (s) for elastic proton-proton, proton-antiproton scattering and for Pomeron exchange. Near to the optical point, all of these amplitudes are predominantly imaginary, with a small real part and with an even smaller C-odd contribution, that makes the ρ 0 (s) different for elastic pp and pp collisions, due to the contribution of the C-odd Odderon exchange. Fig. 27 indicates the total cross-sections, as evaluated with the help of eq. (2), for the elastic pp and pp scattering as well as for the Pomeron exchange. The difference between the excitation functions for the total cross-sections of pp and pp scattering seems to be less than the currently very small, of the order of 2 % relative experimental error on the total cross-section measurements at LHC energies. Finally, Fig. 
Finally, Fig. 28 indicates the total cross-section corresponding to the Odderon component of the scattering amplitude, as evaluated with the help of eq. (2). This plot indicates that the Odderon cross-section starts to increase in the √s ≥ 1 TeV energy domain, but the total cross-section of Odderon exchange is at least two orders of magnitude smaller than the total cross-section for elastic pp scattering at the TeV energy scale. Actually, we find σ^O_tot ≤ 0.7 mb for √s ≤ 20 TeV. Thus, effectively and within the framework of the ReBB model, we conclude that the Odderon occupies an at least an order of magnitude smaller radius than the effective size of the Pomeron exchange. Thus we support the observations of ref. [90] and refs. [28,91], suggesting that the contribution of the Odderon exchange to the total pp cross-section is rather small, of the order of 1 mb or less, even at the currently available largest LHC energies. Nevertheless, we also find that this currently rather small effect is statistically significant, with a significance that is larger than the discovery threshold of 5σ, as detailed in the body of this manuscript.

Appendix D: ISR energies and quadratic corrections to the excitation functions

In this Appendix we investigate the stability of the obtained linear logarithmic energy dependencies of the ReBB model parameters, discussed in Sec. 5, for the case when the energy range is extended towards lower values of √s. In order to do this, we refitted the ISR data [92] at all five available collision energies (√s = 23.5, 30.7, 44.7, 52.8 and 62.5 GeV) in the squared momentum transfer range 0.8 ≤ −t ≤ 2.5 GeV², using the χ² definition given by Eq. 6. The fits included the t-dependent (both vertical and horizontal) statistical (type A) and systematic (type B) errors, the normalization (type C) error, and the experimental values of the total cross section and of the parameter ρ0 with their total uncertainties [93]. We have also tested the stability of the fit results for small variations of the fit range and of the fitting method. The only data set where our results remained stable under variations of the fit range around the selected range and under small variations of the fitting procedure, and where the obtained results were both statistically and physically acceptable fit results describing not only the differential cross-section but also the measured value of the total cross-section σ_tot and the value of the real-to-imaginary ratio ρ0, was the ISR dataset measured at √s = 23.5 GeV. The result of this satisfactory fit is shown in Fig. 29. Our other results were similar to the results presented in ref. [36] and, in particular, resulted in a rather fluctuating description of the excitation function of α(s) at those ISR energies higher than 23.5 GeV. In the present study such fluctuating fits could not be used to establish the trends and the excitation functions. Given these restrictions, we utilized the only reasonable ISR-energy fit result, i.e., the result at 23.5 GeV, to cross-check the compatibility of the linear logarithmic trends obtained in Sec. 5 with the lower energy region.

Fig. 27 Excitation function of the total cross-section for elastic pp and p̄p collisions and for the amplitude of Pomeron exchange, as evaluated from the log-linear excitation functions of the opacity parameters α_pp(s) and α_p̄p(s) as well as those of the scale parameters R_q(s), R_d(s), R_qd(s), corresponding to Table 2. The yellow band indicates our conservative estimates of the systematic errors of the total cross-section of the Pomeron exchange.

Fig. 28 Excitation function of the total cross-section obtained from the optical theorem using the ReBB model amplitude of Odderon exchange, as evaluated from the log-linear excitation functions of the opacity parameters α_pp(s) and α_p̄p(s) as well as those of the scale parameters R_q(s), R_d(s), R_qd(s), corresponding to Table 2. The yellow band indicates our conservative estimate of the systematic errors of the total cross-section of this Odderon exchange. The result indicates that the total cross-section of the Odderon exchange is sharply increasing in the few-TeV energy range, but it is two orders of magnitude smaller than the contribution of the Pomeron exchange, which is dominant at the same energy scale.

When the √s = 23.5 GeV data are added to those summarized in Table 1, the energy dependence of the model parameters can be determined satisfactorily if the model parameters are fitted one by one by applying a quadratic polynomial as a function of ln(s/s0),

P(s) = p0 + p1 ln(s/s0) + p2 ln²(s/s0), P ∈ {R_q, R_d, R_qd, α}, (D.1)

where p0, p1, p2 are free parameters and s0 is fixed at 1 GeV². The obtained results are summarized in Fig. 30. The parameters of the excitation functions are indicated on the subplots of Fig. 30 and are also summarized in Table 3. To fit the α parameter we used the same procedure described in Sec. 5, i.e., also utilizing the measured and rescaled ρ0 values. As seen in Figs. 32 and 31, the linear dependence of the ratio ρ0 on the parameter α is satisfied at ISR energies as well.

Fig. 29 The fit of the ReBB model to the pp ISR √s = 23.5 GeV data in the range 0.8 ≤ −t ≤ 2.5 GeV² [93]. The fit includes the t-dependent statistical (type A) and systematic (type B) uncertainties, the normalization (type C) uncertainty and the experimental values of the total cross section and of the parameter ρ0 with their full errors, according to Eq. (6). The fitted parameters are shown in the bottom left corner, with values rounded to three decimal digits.

Fig. 30 The ReBB model parameters of Table 1, determined by fitting a second order logarithmic polynomial, Eq. (D.1), to each of them one by one in the energy range 23.5 ≤ √s ≤ 8000 GeV. As a comparison, these panels also show the result of the fit in the energy range 546 ≤ √s ≤ 8000 GeV with the linear logarithmic model determined by the parameters collected in Table 2.

It is clear that allowing for quadratic corrections does not change significantly the linear trends in the kinematic range 0.5 ≤ √s ≤ 8 TeV.

Table 3 Summary of the parameter values which determine the energy dependence according to the quadratic dependence on ln(s) of Eq. (D.1). The values of the parameters are rounded to three significant decimal digits, except for p2, which is rounded to four significant decimal digits. These parameters are also shown on the panels of Fig. 30. For R_q, R_d and R_qd, the values of the parameters p0, p1 and p2 are given in units of femtometers (fm). For the parameters α(pp) and α(p̄p), the parameters p0, p1 and p2 are dimensionless.
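As a concrete illustration of Eq. (D.1), the sketch below performs the one-by-one quadratic fit in ln(s/s0) with numpy. The collision energies and parameter values are invented placeholders, not the fitted ReBB parameters of Table 1, and the full analysis additionally propagates the type A/B/C uncertainties described above.

```python
import numpy as np

# Hypothetical (sqrt(s), parameter) values; in the paper these would be the
# fitted ReBB parameters of Table 1 at each collision energy.
sqrt_s = np.array([23.5, 546.0, 1800.0, 2760.0, 7000.0, 8000.0])   # GeV
R_q    = np.array([0.28, 0.40, 0.45, 0.47, 0.52, 0.53])            # fm, illustrative

s0 = 1.0                               # GeV^2, fixed scale
x = np.log((sqrt_s ** 2) / s0)         # ln(s / s0)

# Least-squares fit of P(s) = p0 + p1*ln(s/s0) + p2*ln^2(s/s0), Eq. (D.1);
# np.polyfit returns the highest-order coefficient first.
p2, p1, p0 = np.polyfit(x, R_q, deg=2)
print(f"p0 = {p0:.3f} fm, p1 = {p1:.3f} fm, p2 = {p2:.4f} fm")
```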
25,902.8
2020-05-28T00:00:00.000
[ "Physics", "Materials Science" ]
Heterogeneous Optical Fiber Sensor System for Temperature and Turbidity Assessment in Wide Range This paper presents the development of an optical fiber sensor system for multiparametric assessment of temperature and turbidity in liquid samples. The sensors are based on the combination between fiber Bragg gratings (FBGs), intensity variation and surface plasmon resonance (SPR) sensors. In this case, the intensity variation sensors are capable of detecting turbidity with a resolution of about 0.5 NTU in a limited range between 0.02 NTU and 100 NTU. As the turbidity increases, a saturation trend in the sensor is observed. In contrast, the SPR-based sensor is capable of detecting refractive index (RI) variation. However, RI measurements in the turbidity calibrated samples indicate a significant variation on the RI only when the turbidity is higher than 100 NTU. Thus, the SPR-based sensor is used as a complementary approach for the dynamic range increase of the turbidity assessment, where a linearity and sensitivity of 98.6% and 313.5 nm/RIU, respectively, are obtained. Finally, the FBG sensor is used in the temperature assessment, an assessment which is not only used for water quality assessment, but also in temperature cross-sensitivity mitigation of the SPR sensor. Furthermore, this approach also leads to the possibility of indirect assessment of turbidity through the differences in the heat transfer rates due to the turbidity increase. Introduction Optical fiber sensors are electrically passive and have electromagnetic immunity [1]. As a result, they are superior to electrical-based sensors (piezoelectric, piezoresistive, and capacitive sensors) for procedures that require high magnetic fields (such as magnetic resonance imaging) [2], electric motors assessment [3] and applications in classified areas [4]. Additionally, some sterilization methods rely on high heat, pressure, and humidity, which can damage the electronic circuits of sensors; optical sensors, however, are more resistant to these effects [5]. They are also compact, lightweight, chemically stable, and capable of multiplexing, making them suitable for a variety of applications [6]. In comparison to conventional electrical transducers, optical fiber sensors offer many well-known and desirable characteristics for label-free methods [7]. There are many advantages to this technology, including its size, immunity to electromagnetic interference, cost, light path control, remote sensor deployment, high transmission rates, ability to hold multiple sensors on a single fiber, and the use of biocompatible materials that are intrinsically safe and inert, thereby reducing their environmental impact [8]. These advantages have led to the use of optical fiber sensors in a variety of fields including medicine [9], environmental monitoring [10], and antibody detection [11]. Optical fiber sensors are selected for such applications due to their operational safety in aqueous environments, as well as the ease with which they can be introduced into the tanks, avoiding the need to collect samples for testing on external instruments [12]. Moreover, sensing devices may be either hand-held probes or a set of remote-controlled devices attached to an optical fiber cable. Every ecosystem relies on water for daily survival, as it is a vital component of life on earth. 
In spite of the fact that water is an abundant resource on earth, pollution and waste lead to a substantial reduction in the amount of water available for human consumption, causing a water shortage in some regions [19]. A variety of sources contribute to water pollution, including chemical disposal of pharmaceuticals, industrial processes, agriculture, and household waste [20]. In light of the importance of water and the increasing pollution in rivers, oceans, and lakes, the promotion and assessment of water quality have become key priorities in environmental policy [19]. Moreover, water pollution also impacts aquatic species, which are exposed to environmental pollution, resulting in an increase in mortality [20]. In addition to polluting aquatic species, the toxins in fish and other aquatic species can also be transmitted to other aquatic and non-aquatic species, including humans, via the food chain [21]. Water turbidity is defined as the degree to which particles in the water disrupt the passage of light [22]. As a result, turbidity is a measure of water clarity since it reflects the extent to which suspended particles in the water impair the ability to see clearly. These particles can be considered sediment, which includes a wide variety of matter including soil particles, algae, plankton, and microorganisms [22]. Water color can be changed by these particles, which are extremely small [23]. Due to the particles absorbing sunlight, high turbidity increases the water temperature. Water at higher temperatures contains less oxygen, resulting in hypoxic conditions [24]. Due to these higher temperatures, fish use more oxygen due to increased metabolic rates, thereby further limiting the oxygen supply. Considering the environmental impact, light is scattered by suspended particles, preventing it from reaching plants and algae, thus further reducing oxygen levels. In general, different optical fiber sensing technologies have been proposed for the assessment of temperature [25,26] and turbidity [27]. For turbidity assessment, most of the technologies are based on the intensity variation of the transmitted optical signal in the medium [28]. However, complementary methods based on the thermal and refractive index assessment for turbidity measurement have not been thoroughly explored using optical fiber sensing approaches. Aiming at the necessity and significance of simultaneous monitoring of turbidity and temperature, this paper presents the development of a novel heterogeneous optical fiber probe for simultaneous assessment of turbidity and temperature using the wavelength and reflected optical power data. The sensor can effectively measure the turbidity and temperature in a large range through the combination of different optical fiber sensing techniques, namely intensity variation, FBGs and SPR. Materials and Methods The sensor system is based on a heterogeneous optical fiber sensor structure to obtain a sensor system that can effectively measure temperature and turbidity for a wide range of samples, from 0 to 4000 NTU. The sensor structure is based on three different operation principles, namely intensity variation, FBG and SPR, where each operation principle presents superior performance in a range of turbidity. In addition, there is the possibility of sensor fusion between the data for the achievement of a sensor with higher accuracy, dynamic range and resolution. For the first approach on turbidity estimation, an intensity variation sensor is proposed. 
Such sensors are based on a well-known principle of optical power attenuation between two fibers separated by a medium [29]. The turbidity of the liquid samples between the optical fibers can be estimated from the transmitted optical power variation for each turbidity sample. In this case, one of the fibers is connected to the light source, whereas the other is connected to a photodetector (or spectrometer), see Figure 1a. Thus, the turbidity increase between both optical fibers leads to a reduction of the transmitted optical power between illuminated and non-illuminated fibers. For the FBG-based sensor, the uniform grating was inscribed in photosensitive single mode silica fiber. The uniform FBGs were produced using a pulsed Q-switched Nd:YAG laser system (LOTIS TII LS-2137U), emitting the fourth harmonic (266 nm) [30], with an emission power lamp energy of 26 J and measured pulse energy of 120 J with a repetition rate of 1 Hz. The laser beam profile was circular, with a diameter around 8 mm and divergence less than 1.0 mrad. An effective focal length of 320 mm was used to focus the laser beam onto the fiber core. On the fiber surface, the beam produced a spot size of 8 mm wide and 30 µm high. The phase mask employed was 10 mm long with a pitch of 1064 nm, designed for 266 nm irradiation, resulting in an FBG with a Bragg wavelength of 1544 nm. As FBGs are well-known for their temperature sensitivity, the temperature sensor was based on the direct application of the optical fiber (with inscribed FBGs) in the liquid sample. Thus, the temperature was directly evaluated through the heat transfer from the liquid to the FBG [31]. Furthermore, since the turbidity directly affects the liquid thermal properties, it is possible to estimate the turbidity from the thermal dynamics in the liquid. For this reason, the FBG sensor was used not only as a temperature sensor, but also for the turbidity estimation through the thermal dynamics of the fluid obtained from the transient analysis of the FBG response (as presented in [32]). Figure 1b shows the experimental setup for the evaluation of the FBG responses at different fluid turbidity conditions.
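The quoted Bragg wavelength is consistent with the standard phase-mask relations; the sketch below reproduces the estimate, where the effective index is an assumed typical value for silica single-mode fiber near 1550 nm rather than a measured quantity.

```python
# Minimal estimate of the Bragg wavelength from the phase-mask pitch.
# Uses the usual relations Lambda_grating = Lambda_mask / 2 and
# lambda_B = 2 * n_eff * Lambda_grating; n_eff is an assumed value.
phase_mask_pitch_nm = 1064.0
n_eff = 1.451                                     # assumed effective index of the fiber core
grating_period_nm = phase_mask_pitch_nm / 2.0
bragg_wavelength_nm = 2.0 * n_eff * grating_period_nm
print(f"Estimated Bragg wavelength: {bragg_wavelength_nm:.0f} nm")  # ~1544 nm
```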
The D-shape in the optical fiber was produced to expose the optical fiber core prior to the gold deposition for the SPR signal. The D-shape was fabricated with an in-house method. In this method, a segment of a polymer optical fiber (POF), made of polymethyl methacrylate (PMMA) with a total diameter (core and cladding) of 1 mm, was cut to a length of approximately 30 cm and clamped on two sides, one at each end of the fiber, so that it remained stretched and steady throughout the entire polishing procedure. Using pliers and a blade, a portion of the black plastic jacket covering the PMMA was removed, and the fiber was placed on the V-groove to be polished. The section of removed jacket must be longer than the polishing blade, which has a length of one centimeter, so usually five centimeters of jacket are removed. In the final step before polishing began, the POF was connected to a light source on one end and a power meter on the other end, so that power loss could be monitored during the polishing process. The POF was placed in a custom V-groove. This groove has a depth of approximately 700 µm, leaving approximately 300 µm of the PMMA exposed to be removed by the polishing machine. The polishing machine was held by a claw attached to a 3D platform above the V-groove in which the POF was placed. During the polishing process, both the claw and the V-groove were moved in three directions to ensure that the polishing machine was parallel to the fiber, the D-shape was centered, and the sides of the fiber were not polished. Between each step of the SPR sensor fabrication, i.e., the D-shaping and the gold deposition, the fiber samples were cleaned with deionized water. A thin Au film, 50 nm thick, was sputtered onto the D-shaped POF samples. For the coating process, first, the fiber was cleaned with isopropanol and then placed in the sputtering chamber (SEM coating unit E5000 fitted with a sputter target composed of 99.99% Au). An Au layer was deposited on one side of the fiber (in the D-shaped area). By controlling the deposition time, the thickness of the Au layer was estimated. Furthermore, previous tests on the nanolayer thickness for this target were conducted using a scanning electron microscope, where the film thickness was assessed for different deposition times.
Subsequently, to enhance the adhesion of the Au on the surface of the POFs, the Au-coated POFs were annealed for two hours at 50 °C. The sensor characterizations were performed as a function of the turbidity (for the intensity variation sensor), temperature (for the FBG sensor) and refractive index (for the SPR sensor). In the case of the intensity variation sensor, the transmitted optical power decreases as the turbidity increases. However, the sensor signal can reach a saturation point at which changes in the turbidity do not lead to further variations in the transmitted optical power. For this reason, the intensity variation sensor was tested on calibrated samples with turbidities of 0.02 NTU, 20 NTU, 100 NTU and 800 NTU, with the sensor positioned inside a Teflon container for mechanical protection of the sensor probe. As shown in Figure 1a, the controlled samples were positioned inside the container and a laser centered at 662 nm was connected to fiber 1 (illuminated fiber), whereas fiber 2 (non-illuminated fiber) was connected to a spectrometer with a detection range from 180 nm to 890 nm (FLAME-T-UV-vis, manufactured by Ocean Optics, Orlando, FL, USA). For the temperature characterization of the FBG, the sensor was positioned inside a climatic chamber with controlled temperature. The test was performed in a range of 25 °C to 45 °C in steps of 5 °C. Thereafter, the thermal dynamic tests were performed on 3 different samples, 0.02 NTU, 20 NTU and 60 NTU, under temperature variation conditions. In this case, 100 mL of each sample was subjected to a temperature step variation from 25 °C to 50 °C and their transient temperature variations were measured by the FBG sensor, as presented in Figure 1b. All the data from the FBG were acquired by the optical interrogator sm125 (Micron Optics, Atlanta, GA, USA). For the RI characterization, the D-shaped, gold-coated POF was positioned in a container similar to that of the intensity variation-based sensor, in which the samples with different RI were placed. Figure 1c shows the experimental setup for this test. The characterization of the SPR sensor covered the RI variation from 1.3398 to 1.3681, achieved by filling the container with different liquid samples. In the tests, one end of the fiber was connected to a halogen lamp and the other end to the spectrometer. Thus, all three principles can detect the turbidity from direct or indirect measurements (related to RI and thermal dynamics differences). For this reason, the principles can be combined for a simultaneous measurement of turbidity and temperature in a larger range, where the intensity variation sensor can be used for smaller ranges of turbidity variation (from 0.02 NTU to 800 NTU), whereas the SPR sensor can be used in higher ranges (from 800 NTU to 4000 NTU) in which there is a higher variation of the RI. Concurrently, the FBG sensor not only provides real time and continuous monitoring of the temperature, but is also capable of detecting the turbidity from the transient response of the temperature variation in the sample, since higher turbidities lead to faster heat transfer dynamics. Figure 2 presents the spectral responses of all proposed sensors, where it is possible to observe in Figure 2a that the transmitted spectrum of the intensity variation sensor is only the narrow peak centered on the laser center wavelength.
Figure 2b shows the reflected spectrum of the FBG used for the thermal assessment of the liquids, whereas Figure 2c shows the transmitted spectrum of the SPR sensor, which presented an SPR signature at around 680 nm. As shown in Figure 2, the sensors operate at different wavelength regions and are individually characterized using the materials discussed in Section 2. It is also worth mentioning that the FBG sensor was inscribed in silica optical fiber, whereas the SPR and intensity variation sensors were used in PMMA POFs.

Results and Discussions

Following the characterizations discussed in Section 2, Figure 3 presents the optical power variation as a function of the turbidity for samples in the range of 0.02 NTU to 800 NTU; the optical power is estimated from the integral of the transmitted spectrum for each turbidity condition. The results in Figure 3 indicate a saturation trend of the sensor response when the turbidity reaches 800 NTU, which leads to a limitation of the sensor operation in terms of turbidity range. The intensity variation sensor presented a high determination coefficient (R²) for an exponential regression (R² of 0.99).
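A minimal sketch of such an exponential regression is given below. The turbidity-power pairs are invented placeholders standing in for the calibration points of Figure 3, and the decay-plus-offset form is only one reasonable parameterization of the observed saturation behavior.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration points (NTU, normalized optical power); the real
# values come from the measurements shown in Figure 3, not reproduced here.
turbidity = np.array([0.02, 20.0, 100.0, 300.0, 600.0, 800.0])
power     = np.array([1.00, 0.72, 0.38, 0.17, 0.09, 0.07])

def attenuation(ntu, a, b, c):
    # Exponential decay with an offset, mimicking the kind of regression reported
    return a * np.exp(-b * ntu) + c

popt, _ = curve_fit(attenuation, turbidity, power, p0=(1.0, 0.01, 0.05))
print("fitted a, b, c:", popt)
```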
The optical fiber sensor behavior in the turbidity assessment indicates the necessity of complementary approaches to increase the dynamic range of the sensor, which can also impact its resolution and accuracy. As the first complementary approach, the SPR sensor is analyzed as a function of the RI. In this case, the liquid samples with different RI are placed in the container and the transmitted spectra are analyzed for each case, as shown in Figure 4a. The tests were performed at constant temperature conditions of 23 °C. The results indicate a variation in both intensity and wavelength of the optical signal; Figure 4b shows a regression of the optical power and wavelength of the SPR signal as a function of the RI, where it is possible to observe a linear trend in the response with an R² of around 0.98. The results indicate the possibility of using the SPR-based sensor for the RI assessment, which can be used for the estimation of the samples' turbidity. In this case, Figure 4c shows the refractive index measured by a benchtop refractometer for the samples of different turbidity. These results show that the RI of the turbidity samples starts to change at around 100 NTU, since there is no RI variation for turbidities below 100 NTU. The RI is around 1.330 for the turbidities of 0.02 NTU, 20 NTU, and 100 NTU. Then, the RI increases in the samples of 300 NTU, 600 NTU and 800 NTU, where the results indicate an RI of 1.3333 for 800 NTU. Thus, the use of the SPR sensor as a complementary approach for the turbidity assessment increases the dynamic range of the whole sensing system, since the SPR sensor can be used for the turbidity assessment at NTU higher than 100.
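The wavelength-versus-RI regression can be sketched as a simple linear fit. The data points below are illustrative stand-ins for the measurements of Figure 4b, chosen only so that the slope lands near the sensitivity quoted for the SPR sensor (about 313.5 nm/RIU).

```python
import numpy as np

# Hypothetical (refractive index, SPR resonance wavelength) pairs spanning the
# characterized range 1.3398-1.3681.
ri         = np.array([1.3398, 1.3470, 1.3540, 1.3610, 1.3681])
wavelength = np.array([676.0, 678.2, 680.4, 682.6, 684.9])      # nm, illustrative

slope, intercept = np.polyfit(ri, wavelength, 1)
print(f"sensitivity ~ {slope:.1f} nm/RIU")

# Turbidity samples above ~100 NTU can then be mapped to an RI estimate by
# inverting the fitted line: ri_est = (measured_wavelength - intercept) / slope
```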
The SPR sensor also has temperature sensitivity, which leads to the necessity of temperature monitoring for the mitigation of the temperature cross-sensitivity. In order to evaluate the temperature of the liquid in which the sensors are immersed, the FBG temperature sensor is characterized as a function of the temperature variation. The results of the temperature tests and their influence on the reflected spectrum of the sensing device are presented in Figure 5a, in which it is possible to observe that there are only wavelength shifts in the reflected optical spectrum. Figure 5b shows the wavelength shift as a function of the temperature for the characterization tests, where there is a well-known high linearity (R² = 0.999) and a sensitivity of around 10 pm/°C, as commonly obtained in FBG-based temperature sensors.
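Converting a measured Bragg-wavelength shift into a temperature reading with this sensitivity is a one-line calculation; the reference wavelength and temperature below are illustrative values, not the calibration constants of the actual instrument.

```python
# Convert an FBG Bragg-wavelength shift into a temperature reading using the
# ~10 pm/°C sensitivity obtained in the characterization.
SENSITIVITY_PM_PER_C = 10.0          # from the linear fit (R^2 = 0.999)
reference_wavelength_nm = 1544.000   # assumed Bragg wavelength at the reference temperature
reference_temperature_c = 25.0       # assumed reference temperature

def fbg_temperature(measured_wavelength_nm: float) -> float:
    shift_pm = (measured_wavelength_nm - reference_wavelength_nm) * 1000.0
    return reference_temperature_c + shift_pm / SENSITIVITY_PM_PER_C

print(fbg_temperature(1544.150))     # +150 pm shift -> ~40 °C
```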
The continuous temperature assessment through the FBG sensor also leads to the possibility of estimating the temperature dynamics of the liquid samples. As liquids with higher turbidity have larger temperature variations due to the particles absorbing sunlight, the transient thermal responses of the samples can indicate their turbidity. In order to verify this, three different samples (turbidities of 0.02 NTU, 100 NTU and 200 NTU) were subjected to temperature variations, and the temperature response of each sample was acquired with the FBG temperature sensor. It is important to mention that we used three different samples, since varying the turbidity of a single sample would require stirring the sample for a time to obtain a homogeneous solution, which can affect the transient thermal behavior needed for the turbidity estimation from the temperature response and would lead to results different from those expected in a practical application, in which there is no forced convection in the sample. Figure 6 presents the results of the transient thermal response of each sample, where the temperature increases from 25 °C to around 40 °C. The slope of the transient temperature curve as a function of time indicates the rate of heat transfer to the samples. For this reason, higher slopes indicate higher rates of heat transfer, which is related to the sample turbidity, as shown in Figure 6. Therefore, it is possible to estimate the turbidity of each sample from the heat transfer rate. As it is an indirect estimation of the turbidity, the transient thermal analysis can be used as another complementary approach for turbidity assessment in conjunction with the intensity variation-based and SPR sensing approaches. The results indicate different (and complementary) possibilities of measuring turbidity in a large range. However, it is also possible to integrate such sensors, especially the intensity variation sensor and the FBG, for the simultaneous assessment of temperature and turbidity using a single optical fiber probe. It is also worth mentioning that the SPR sensor can be integrated in the probe. In this case, the FBG should be inscribed in the visible wavelength region and the transmitted spectrum also needs to be acquired. Moreover, a D-shape and gold coating of the fiber are needed, where the sensor has the same operating principle as discussed in Section 2, i.e., the SPR signature is analyzed as a function of the RI (related to the sample turbidity). Thus, this approach can result in the integration of all proposed sensors in a single heterogeneous optical fiber probe.

Conclusions

This paper presented the development, characterization and analysis of a heterogeneous optical fiber sensing structure for simultaneous assessment of turbidity and temperature.
The sensor system is composed of three optical fiber sensing approaches: (i) an intensity variation sensor for the direct monitoring of turbidity through the transmitted optical power variation; (ii) an FBG temperature sensor for continuous and real time monitoring of thermal parameters, which can also be used for indirect estimation of the turbidity; and (iii) an SPR-based sensor for turbidity assessment through RI variations. The developed sensors presented complementary behavior, where the FBG temperature sensor can be used for temperature cross-sensitivity mitigation in SPR sensors as well as assessment of fluid thermal properties. The SPR sensor can operate in a larger range of turbidities (from values higher than 100) through the refractive index variations of the samples, whereas the intensity variation sensor presented a high resolution (about 0.5 NTU) in the turbidity assessment in the range of 0.02 NTU to 100 NTU. Therefore, the proposed sensors can be combined for turbidity assessment with higher accuracy and dynamic range, where signal processing techniques can be employed in the data fusion. In addition, the sensors can be integrated into a single probe for compact and multiplexed analysis. Future works will include the use of the proposed device in real application scenarios of environmental monitoring.
7,284.2
2022-11-01T00:00:00.000
[ "Physics" ]
Experimental investigation on the characteristics of supersonic fuel spray and configurations of induced shock waves The macro characteristics and the configurations of induced shock waves of supersonic sprays are investigated by experimental methods. A visualization study of the spray shape is carried out with a high-speed camera. The macro characteristics, including spray tip penetration, spray tip velocity and spray angle, are analyzed. The configurations of the shock waves are investigated by the Schlieren technique. For supersonic sprays, the concept of the spray front angle is presented. The effects of the spray Mach number on the spray front angle are investigated. The results show that the shape of the spray tip is similar to a blunt body when the fuel spray is in the transonic region. When the spray enters the supersonic region, oblique shock waves are induced instead of a normal shock wave. With increasing spray velocity, the spray front angle and the shock wave angle increase. The tip region of the supersonic fuel spray commonly forms a cone. The mean droplet diameter of the fuel spray is measured using Malvern's Spraytec. The mean droplet diameter results are then compared with three popular empirical models (Hiroyasu's, Varde's and Merrigton's models). It is found that Merrigton's model shows a relatively good correlation with the experimental results. Finally, the exponent of the injection velocity in Merrigton's model is fitted to the experimental results. For diesel engines, fuel spray atomization and fuel-air mixing are the key factors that affect engine performance. It is well known that several techniques can be used to improve the fuel atomization and mixing performance, such as high-pressure fuel injection 1 , high-pressure compressed intake 2 , and intake manifold design 3,4 . High-pressure fuel injection is one of the most effective methods to improve fuel atomization 5,6 . However, new phenomena may occur during the fuel atomization process as the injection pressure increases 7,8 . Among these phenomena, the supersonic fuel spray, which breaks the speed of sound, is an attractive one. Existing research shows that a fuel jet can easily exceed the speed of sound (Mach 1) with modern high-pressure injection systems. It is expected that, with a further increase in the injection pressure, the Mach number of the fuel jet increases. Based on the interaction between spray and shock waves, supersonic fuel atomization can be divided into two types: the active and the passive case. The passive cases refer to a passive effect of supersonic flow on a low-speed fuel spray or droplets [9][10][11] . Generally, the passive case can occur in scramjet engines, pulse detonation engines, and shock tubes [12][13][14][15][16] . For example, the fuel spray in a supersonic air cross-flow in scramjet engines is the passive case discussed above 17,18 . The active cases, in contrast, occur when the supersonic spray or droplets themselves generate induced shock waves 19,20 . It is obvious that the active cases may occur in high/ultra-high pressure fuel sprays in DI engines 21,22 . However, the active cases have been studied much less than the passive cases. For fuel spray atomization in vehicle engines, the effects of high/ultra-high injection pressure on the characteristics of the fuel spray field have always been a research hotspot 23 . Little research has been carried out on the interactions between the spray and the induced shock waves.
But the differences between supersonic and subsonic sprays may have a significant influence on combustion system designs, system control strategies, post-processing, etc. Therefore, knowing the mechanisms of the supersonic fuel spray will aid the development of more accurate spray models and the design of advanced internal combustion engines. In this study, high-speed photography and Schlieren techniques are applied to the study of the supersonic fuel spray atomization process, to quantitatively analyze the macroscopic characteristics of the fuel spray and the configurations of the induced shock waves. The interaction mechanisms between the shock waves and the fuel spray field are also discussed. The droplet size distributions of the supersonic spray are measured using a Malvern Spraytec (Malvern laser particle analyzer). Consequently, comparisons between popular Sauter Mean Diameter (SMD) models and the experimental results are performed. Experimental Method In this study, the investigation of the characteristics of the supersonic fuel spray and the configuration of the induced shock waves is carried out with experimental methods. Figure 1 shows the test platform for supersonic fuel spray designed and built by our research group. The device consists of a high-pressure accumulator device, a filter and fuel supply device, a fuel injection control valve, a high-pressure oil tube, a pressure gauge, a motor, a controller, a nozzle, and fixation supports. The pressure accumulator device is designed and produced based on the hydraulic principle to achieve the ultra-high injection pressure. The operation of the accumulator device is driven by a direct current (DC) motor. The switching control of the fuel injection is realized by a specially designed rapid-response component. The measuring equipment for the supersonic fuel spray includes a high-speed camera and a Schlieren apparatus. The shock waves induced by the supersonic fuel spray can be captured by the combination of the Schlieren technique and the high-speed camera (the high-speed photography frame rate is set at 19,200 fps, and the frame interval is 52.1 μs). The configuration of the induced shock waves in the supersonic spray field is captured by the Schlieren technique. Then the structural characteristics of the shock waves, including the leading edge shock wave and the attached shock waves, are analyzed. A measurement of droplet diameter is achieved with the Malvern laser particle analyzer, and the measurement point is positioned 30 mm away from the orifice exit along the axis of the jet. It is found that the droplet size distribution cannot be obtained if the distance between the measurement point and the nozzle is too short, because the dense spray prevents the laser from passing through the spray field. Results and Discussion In this study, the ambient pressure is 1 atm, and the ambient temperature is room temperature (the local sound velocity is around 340 m/s). To generate the supersonic fuel spray, a sufficiently high injection pressure must be reached. The liquid injection is performed at pressures ranging from 200 MPa to 400 MPa. The fuel spray is expected to penetrate at a maximum speed of about Mach 1.7 when the injection pressure reaches 400 MPa. The nozzle is a single-hole type, whose diameter is 0.5 mm and whose length is 3 mm. The fuel used in the test is diesel, with a kinematic viscosity of 5.952 × 10 −6 m²/s, a surface tension coefficient of 0.0261 kg/s², and a density of 840 kg/m³.
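As a rough cross-check of the attainable spray velocities, the ideal (lossless) Bernoulli orifice-exit velocity gives an upper bound; real tip velocities are lower because of the nozzle discharge coefficient, air drag and breakup, which is consistent with the measured peak Mach numbers (up to about 1.7) lying well below this bound.

```python
import math

# Ideal orifice-exit velocity from the Bernoulli relation, v = sqrt(2 * dP / rho);
# an upper bound only, since nozzle losses and drag are neglected.
rho_fuel = 840.0            # kg/m^3, diesel density used in the tests
c_sound  = 340.0            # m/s, approximate local sound speed

for dp_mpa in (200.0, 300.0, 400.0):
    v_ideal = math.sqrt(2.0 * dp_mpa * 1e6 / rho_fuel)
    print(f"{dp_mpa:.0f} MPa: v_ideal ~ {v_ideal:.0f} m/s, Ma ~ {v_ideal / c_sound:.1f}")
```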
Supersonic fuel spray and shock wave evolution process. Figure 2 presents a set of Schlieren photographs of the supersonic fuel spray under injection pressures from 200 MPa to 400 MPa. The start-of-injection time of the fuel spray is determined based on the extrapolation method applied to the initial stage of spray penetration. It can be found that the fuel sprays under 200 MPa-400 MPa reach or exceed the local sound speed, because numerous shock waves occur at the spray periphery. Under 200 MPa, the morphology of the spray front is close to a blunt body. There is a detached bow shock wave in front of the spray tip because the air ahead of it starts to compress. Attached shock waves occur along the spray body. According to the figures, the leading edge shock wave is wider than the other shock waves due to the stronger interaction between the spray tip and the air. The spray tip tends to be a blunt body due to the relatively weak aerodynamic effect when the injection pressure is 200 MPa. When the injection pressure is set to 300 MPa and 400 MPa, the leading oblique shock wave is formed at the initial stage of liquid injection into the gas. We can also see massive attached shock waves along the spray. Attached and detached shock waves induced by a supersonic jet were experimentally observed by Nakahira, T. 24 . From the figures, we can find that the intervals between the attached shock waves alongside the body of the spray are almost the same even if the spray penetrates soon after injection. This implies that the above phenomenon may be related to the initial flow characteristics of the fuel spray or to the turbulent vortexes, which have a similar coherent structure, due to the gas-liquid mixing effect. The formation mechanisms of the equally spaced attached shock waves remain to be studied further in depth. Macroscopic characteristics of supersonic fuel spray. Figure 3 shows the influence of injection pressure on the spray tip penetration. Due to the diameter limitation of the test optic windows, the upper limit of spray tip penetration in this study is 10 cm. According to the penetration characteristics, the spray penetration increases approximately linearly with time due to the small effect of the air resistance relative to the inertial force of the spray. With increasing injection pressure, the spray penetration gets longer. This result is related to both the higher quantity and the higher velocity of the spray. Figure 4 shows the temporal profiles of the Mach number of the spray tip for the different injection pressure cases. It can be seen that the spray tip velocity at 200 MPa is near the local sonic speed. The spray exceeds the local sound speed after about 0.1 ms ASOI (after start of injection). However, a weak highlighted area, which indicates a density gradient change, exists in front of the fuel spray at 52.1 μs after the injection starting time according to the results of the high-speed photography (Fig. 2), since the measuring principle of the Schlieren technique is based on the gradient of the light refractive index in the flow field. When the injection pressure is 200 MPa, the spray tip initial velocity (t < 0.1 ms ASOI) has not reached the local sound speed (Fig. 4), which shows that the highlighted leading edge wave in front of the spray tip in Fig. 2 at 52.1 μs is a compression wave rather than a shock wave. It is also known from the comparison of the velocity results that the spray Mach number increases from about 1.0 (at 118 μs) to 1.17 (at 156.3 μs). When the injection pressure is set to 300 MPa and 400 MPa, the peak Mach number of the spray tip is 1.4 and 1.7, respectively, during the injection.
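One way to obtain such a tip Mach-number profile is to differentiate the penetration-versus-time record numerically; the sketch below illustrates this with invented penetration values in place of the measured ones.

```python
import numpy as np

# Spray-tip Mach number from a penetration-versus-time record by finite
# differences; the sample values are illustrative, not the measured data.
t_ms = np.array([0.000, 0.052, 0.104, 0.156, 0.208])    # frame times, ms
s_mm = np.array([0.0,   25.0,  52.0,  78.0,  98.0])     # spray tip penetration, mm

v_m_per_s = np.gradient(s_mm * 1e-3, t_ms * 1e-3)        # ds/dt
mach = v_m_per_s / 340.0                                  # local sound speed ~340 m/s
print(np.round(mach, 2))
```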
Due to the influence of the expansion of the jet and the effects of the associated shock waves in the expansion process, rapid attenuation of the spray tip velocity occurs after the time of peak spray tip velocity, and this attenuation starts earlier as the injection pressure increases. The attenuation time is defined as the moment when the spray tip reaches its maximum velocity. In addition, the duration for which the spray tip maintains a high velocity is gradually reduced with increasing injection pressure or spray velocity. The attenuation time of the spray kinetic energy for the injection pressures of 300 MPa and 400 MPa is 90 μs and 70 μs, respectively, which is clearly earlier than that at 200 MPa. The results above show that the attenuation of the spray kinetic energy intensifies rapidly with increasing spray velocity. This implies that there is a correlation between the attenuation of the spray kinetic energy and the wave drag of the shock waves induced by the supersonic fuel spray. When the spray moves across the sonic line due to the expansion of the jet, the strong shock waves quickly reduce its velocity. Figure 5(a) presents the variation of the spray cone angle at injection pressures from 200 MPa to 400 MPa. A large cone angle appears at the moment of spray ejection from the nozzle orifice, driven by the liquid expansion in the nozzle, the instantaneous pulse of the internal flow and the strong aerodynamic force. The spray cone angle then remains stable. The experimental results show that a relatively stable spray angle (15°~18°) is reached at around 0.1 ms, after the large angle of the initial period. As can be seen from Fig. 5(b), the spray cone angle in the stable stage slightly decreases with increasing injection pressure. It can be concluded from the results that one of the effects of the leading shock wave on the spray body is the shape change of the spray tip. The existence of the shock wave hinders the penetration of the fuel spray, slows down the spray tip velocity and causes the shape change of the spray tip. The conventional definition of the spray cone angle is determined by the upper and middle area of the spray body; obviously, this conventional definition cannot meet the demands of the characteristic analysis of the supersonic fuel spray. Therefore, the concept of the "spray front angle" is put forward, aiming at the morphological characteristics of the spray tip. An equivalent cone angle of the spray tip, defined as the angle between the two tangent lines to the spray tip, is used as the spray front angle, as shown in Fig. 6. It is expected that the spray front angle is a feasible parameter that can reveal the correlation or interaction between the spray and the leading shock wave. Based on the images of the spray development in Fig. 2, the spray leading edge generally moves forward in a cone shape, which is consistent with the phenomenon reported by K. Pianthong 25 . Figure 7 shows the variation of the spray front angle under different injection pressures. It is found that the spray front angle sharply decreases from 180° to 40°~70° when the injection pressure is more than 300 MPa. The leading edge of the fuel spray then moves forward as a cone-shaped body with a stable angle. It can also be seen that the leading edge of the supersonic fuel spray tends to form a cone with increasing spray velocity, which appears conical in the Schlieren plane.
According to the experimental results, when the injection pressure is high enough (up to 300 MPa and 400 MPa), there is a sudden decrease of the spray front angle after a short time ASOI. From Fig. 2, we can find that the spray tip resembles a blunt body at the initial stage of injection. With the formation of strong oblique shock waves, the spray tip changes into a sharp body in a short time. The reason for this phenomenon may be that the strong leading edge shock wave forces the spray tip, which is a deformable fluid rather than a solid, to deform to reach a new force balance. Characteristics of shock wave induced by supersonic fuel spray. The phenomenon of the induced shock wave is the critical characteristic of the supersonic fuel spray that distinguishes it from the subsonic spray. Due to the high intensity of the leading edge shock wave, its structure in the Schlieren images is clearer than that of the attached shock waves. Additionally, the leading edge shock wave has a significant effect on the penetration behavior of the fuel spray and on the morphology of the spray tip. A quantitative analysis of the structure of the induced shock waves is conducted, as shown in Fig. 8. When the injection pressure is 200 MPa, a normal shock wave appears in front of the spray tip. However, when the injection pressure is set to 300 MPa and 400 MPa, oblique shock waves occur at the initial stage. The shock wave angles are 112° and 85°, respectively, for the above two injection pressures. The results show that the leading edge shock wave angle decreases with the increase of injection pressure or spray velocity. Distribution of droplets size in supersonic spray field. It is well known that the injection pressure has a great effect on droplet breakup by increasing the relative velocity of the gas-liquid phases. Figure 9 shows the results of the droplet size measurements conducted in this study under the injection pressures of 100 MPa, 200 MPa, 300 MPa and 400 MPa. The figure presents the volume fraction of different droplet sizes as measured by the laser particle analyzer; a bimodal distribution of the volume fraction of droplet size appears in all four injection pressure cases. As can be seen from Fig. 9, the first peak of the distribution is located in the large droplet size range (100 μm~500 μm), and the second one is located in the small droplet range (1 μm~20 μm). This kind of bimodal volume fraction distribution is a typical characteristic of the high pressure fuel spray. As the injection pressure increases, the droplet sizes corresponding to the two peaks of the volume fraction distribution gradually decrease, with decreasing amplitude. The droplet sizes of the right peak at the four injection pressures are 464 μm, 293 μm, 185 μm and 180 μm, respectively. The droplet sizes of the left peak at the four injection pressures are 14 μm, 9 μm, 7 μm and 6 μm, respectively. The bimodal distribution may result from a process involving the breakup of large particles, multiple sources of particles and different wave growth mechanisms in the spray field. Table 1 gives the measurement results of four average cumulative distributions under injection pressures from 100 MPa to 400 MPa. It is concluded that all these droplet sizes decrease but the rate of decrease reduces, which means that the effect of injection pressure on the reduction of droplet diameter weakens with increasing injection pressure.
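For reference, the Sauter mean diameter (D32) used in the model comparison below is the ratio of the third to the second moment of the droplet size distribution; a minimal sketch with made-up counts is given here.

```python
import numpy as np

# Sauter mean diameter (SMD, D32) from a discrete droplet-size distribution:
# D32 = sum(n_i * d_i^3) / sum(n_i * d_i^2). The counts are invented to mimic
# the bimodal distributions reported by the laser particle analyzer.
d_um = np.array([5.0, 10.0, 20.0, 100.0, 200.0, 400.0])   # droplet diameters, µm
n_i  = np.array([5e5, 3e5, 1e5, 2e2, 1e2, 2e1])           # droplet counts

d32 = np.sum(n_i * d_um**3) / np.sum(n_i * d_um**2)
print(f"SMD (D32) ~ {d32:.1f} µm")
```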
It has now been found that some empirical models based on experimental data are suitable for estimating the SMD (Sauter Mean Diameter, i.e., D32) of the atomized droplets in the spray field. Table 2 lists three popular empirical SMD models, which are compared with the experimental results. It is found that the SMD estimated by Merrigton's model is the most similar to the experimental results. However, when the injection pressure rises, the deviation between the experimental SMD and the SMD calculated by Merrigton's model increases. The worst prediction among the above three models is given by the Hiroyasu model. We therefore redetermine the exponent (1.09) of the velocity in Merrigton's model, and Figure 10 compares the modified model with the experimental data. The coefficient of determination R² (R² = 0.952) shows a good correlation between the modified model and the experimental results. Conclusion In this study, the atomization process of the supersonic fuel spray is investigated under injection pressures from 200 MPa to 400 MPa, to quantitatively analyze the macroscopic characteristics of the supersonic fuel spray and the structural features of the shock waves. The main conclusions are as follows. (1) When the Mach number of the fuel spray is located in the transonic region (Ma < 1.2), the spray tip is closer to a blunt body. Due to the influence of the expansion of the jet and the effects of the associated shock waves in the expansion process, the type of the leading edge shock wave gradually evolves from an attached to a detached shock wave. When the spray Mach number exceeds 1.2, the leading shock evolves from a bow shock wave to an oblique shock wave. (2) The attenuation of the spray tip velocity starts earlier as the injection pressure increases. Additionally, the degree of kinetic energy attenuation increases rapidly with increasing Mach number. These results show that the attenuation of the spray kinetic energy intensifies rapidly with increasing spray velocity. (3) The concept of the "spray front angle" is introduced to analyze the interaction between the spray and the leading edge shock wave. The results show that the spray front angle decreases with increasing spray tip velocity, making the spray tip tend to form a cone, affected and restricted by the induced oblique shock wave. (4) The leading shock wave angle decreases with the increase of the spray Mach number. When the injection pressure is 200 MPa, the induced leading shock is a normal shock. But when the injection pressure rises to 300 MPa or 400 MPa, an oblique leading edge shock wave appears. The shock wave angles are 112° and 85°, respectively, for the above two injection pressures.
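The re-determination of the velocity exponent discussed above is, in essence, a one-parameter power-law regression; the sketch below illustrates such a fit with invented velocity/SMD pairs and a generic SMD ∝ U^(−n) form, which stands in for (and is not) the full Merrigton correlation.

```python
import numpy as np

# Refit of the velocity exponent n in SMD ∝ U^(-n) by linear regression in
# log-log space; the data are assumed placeholders, not the measured values.
u_exit = np.array([490.0, 690.0, 845.0, 976.0])          # assumed injection velocities, m/s
smd_um = np.array([60.0, 41.0, 33.0, 28.0])              # assumed measured SMD, µm

slope, log_c = np.polyfit(np.log(u_exit), np.log(smd_um), 1)
print(f"fitted velocity exponent ~ {-slope:.2f}")         # the paper reports ~1.09
```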
4,607.4
2017-01-05T00:00:00.000
[ "Engineering", "Physics" ]
Zeeman effect in atmospheric O 2 measured by ground-based microwave radiometry Abstract. In this work we study the Zeeman effect on stratospheric O2 using ground-based microwave radiometer measurements. The interaction of the Earth's magnetic field with the oxygen magnetic dipole leads to a splitting of O2 energy states, which polarizes the emission spectra. A special campaign was carried out in order to measure this effect in the oxygen emission line centered at 53.07 GHz. Both a fixed and a rotating mirror were incorporated into TEMPERA (TEMPERature RAdiometer) in order to be able to measure under different observational angles. This new configuration allowed us to change the angle between the observational path and the direction of the Earth's magnetic field. Moreover, a high-resolution spectrometer (1 kHz) was used in order to measure for the first time the polarization state of the radiation due to the Zeeman effect in the main isotopologue of oxygen from ground-based microwave measurements. The measured spectra showed a clear polarized signature when the observational angles were changed, evidencing the Zeeman effect in the oxygen molecule. In addition, simulations carried out with the Atmospheric Radiative Transfer Simulator (ARTS) allowed us to verify the microwave measurements, showing a very good agreement between model and measurements. The results suggest some interesting new aspects for research of the upper atmosphere. Introduction The Zeeman effect is a phenomenon which occurs when an external magnetic field interacts with a molecule or an atom with total electron spin different from 0. Such an interaction will split an original energy level into several sublevels (Lenoir, 1967). In the atmosphere, oxygen is an abundant molecule which in its ground electronic state has a permanent magnetic dipole moment coming from two parallel electron spins. The interaction of the magnetic dipole moment with the Earth's magnetic field leads to a Zeeman splitting of the O 2 rotational transitions. In this state, each rotational level with quantum number N is split into three levels of total quantum number J (J = N + 1, N, N − 1), following a Hund's coupling case (Pardo et al., 1995). This effect was studied by Gautier (1967) and Lenoir (1967, 1968) in the 60 GHz band of the main isotopologue 16 O 2 . It is established, from these works, that the Earth's magnetic field splits the different Zeeman components over a range of a few megahertz around the center of each rotational line. The shape of each component is governed by a pressure broadening mechanism up to 60 km of altitude and by a Doppler mechanism above (Pardo et al., 1995). Zeeman splitting of millimeter-wavelength emissions of oxygen molecules must be taken into account for altitudes above 45 km in the terrestrial atmosphere when modeling the radiative transfer of these molecules. Temperature soundings of the atmosphere at high altitudes are not possible without including this effect (Von Engeln et al., 1998; Von Engeln and Buehler, 2002; Stähli et al., 2013; Shvetsov et al., 2010). Observation of the Zeeman effect from ground-based measurements was first performed by Waters (1973) for atmospheric O 2 at 53 GHz. Pardo et al. (1995) were able to measure the Zeeman substructure for atmospheric 16 O 18 O at 233.95 GHz. For this rare isotopic species the relative abundance is much lower than for the 16 O 2 molecule, and its emission from upper atmospheric layers can be observed and the Zeeman substructure detected from the ground (Pardo et al., 1995).
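The "few megahertz" scale quoted above can be checked with a one-line estimate: the characteristic Zeeman splitting is of order g_s·μ_B·B/h, where B ≈ 50 μT is an assumed round value for the Earth's magnetic field; the exact splitting of each component also depends on the rotational quantum numbers involved.

```python
from scipy.constants import physical_constants, h

# Order-of-magnitude scale of the Zeeman splitting, delta_nu ~ g_s * mu_B * B / h.
mu_B = physical_constants["Bohr magneton"][0]   # J/T
g_s = 2.0                                       # electron spin g-factor (approx.)
B = 50e-6                                       # T, assumed typical Earth field

delta_nu = g_s * mu_B * B / h
print(f"characteristic Zeeman splitting ~ {delta_nu / 1e6:.1f} MHz")   # ~1.4 MHz
```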
The main difficulty for observations of the Zeeman structure of the 16O2 molecule comes from its very broad tropospheric emission and the high opacity of the low layers, which eliminate any structure. The observation of this effect for 16O2 has also been possible from satellite measurements. Hartmann et al. (1996) observed the Zeeman broadening of the 9+ oxygen emission line in the 61.1509 ± 0.062 GHz frequency range using the Millimeter-Wave Atmospheric Sounder on the NASA space shuttle during the ATLAS missions. Comparisons of satellite measurements and radiative transfer models including the Zeeman effect have also been addressed (Han et al., 2007, 2010; Schwartz et al., 2006). Han et al. (2007) used spectral passband measurements from the Special Sensor Microwave Imager/Sounder (SSMIS) on board the Defense Meteorology Satellite Program F-16 satellite to measure the oxygen magnetic dipole transitions (7+, 9+, 15+, and 17+; Rosenkranz, 1993). These measurements were used to validate a fast model developed from the radiative transfer model of Rosenkranz and Staelin (1988). Moreover, the measurements were also used together with data from the Microwave Limb Sounder (MLS) on board the Aura spacecraft for assimilation in a numerical weather prediction (NWP) model (Hoppel et al., 2013). Schwartz et al. (2006) also reported a comparison of another radiative transfer model with measurements of the 118 GHz oxygen line from MLS. In this work we present an experiment in which the Zeeman broadening of the oxygen emission line at 53.0669 GHz is observed and the polarization state of the radiation due to this effect is detected for the first time using a ground-based microwave radiometer. The measurements were possible using a fast Fourier transform (FFT) spectrometer with 1 GHz of bandwidth to measure the whole oxygen emission line centered at 53.07 GHz and a narrow spectrometer (4 MHz) to measure the center of the line with a very high resolution (1 kHz). These measurements have been compared to a model which includes the Zeeman-splitting effect. The incorporation of this effect into the forward model will allow the temperature retrievals to be extended beyond 50 km. This improvement in the forward model will be very useful for the assimilation of brightness temperatures in NWP models. It is also important to note that ground-based measurements of the atmosphere with good temporal resolution complement satellite measurements, which are temporally limited by their satellite's orbital parameters. The paper is organized as follows: in Sect. 2, the instrumentation and the measurements are briefly outlined. The Zeeman effect theory and the modeling are presented in Sect. 3. Section 4 presents the results of this study: first the simulations using the model are addressed, and second the tropospheric correction applied to the radiometer measurements and the results obtained during this campaign are presented. Finally, the conclusions are given in Sect. 5. Instrumentation and measurements The TEMPERA (TEMPERature RAdiometer) radiometer is a microwave radiometer that provides temperature profiles from the ground to around 50 km (Stähli et al., 2013). This is the first microwave radiometer that measures temperature in the troposphere and stratosphere at the same time. The instrument is a heterodyne receiver operating in the frequency range 51-57 GHz.
Figure 1 shows a picture of TEMPERA, which is operated in a temperature-stabilized laboratory in the ExWi building of the University of Bern (Bern, Switzerland; 575 m above sea level; 46.95 • N, 7.44 • E). In this lab a styrofoam window allows views of the atmosphere over the zenith angle range from 30 to 70 • . The instrument mainly consists of three parts: the front end to collect and detect the microwave radiation and two back ends consisting of a filter bank and a digital FFT spectrometer for the spectral analysis. The radiation is directed into the corrugated horn antenna using an off-axis parabolic mirror. The antenna beam has a half power beam width (HPBW) of 4 • . The signal is then amplified and downconverted to an intermediate frequency for further spectral analysis. A noise diode in combination with an ambient hot load is used for calibration in each measurement cycle. The noise diode is calibrated regularly (about once a month) using a cold load (liquid nitrogen) and a hot load (ambient). The receiver noise temperature T N is in the range from 475 to 665 K. More details about the calibration with TEMPERA can be found in Stähli et al. (2013). For tropospheric measurements the instrument uses a filter bank with four channels. By switching the local oscillator frequency with a synthesizer, it is possible to measure at 12 frequencies. In this way TEMPERA covers uniformly the range from 51 to 57 GHz at positions between the emission lines. Tropospheric retrievals are not addressed in this paper and more details about this measurement mode can be found in Stähli et al. (2013) and Navas-Guzmán et al. (2014). The second back end is used for stratospheric measurements and contains a digital FFT spectrometer (Acqiris AC240) for the two emission lines centered at 52.5424 and 53.0669 GHz. The FFT spectrometer measures the two emission lines with a resolution of 30.5 kHz and a bandwidth of 960 MHz. The receiver noise temperature T N for the receiver-spectrometer combination is around 480 K. An overview of the technical specifications is given in Table 1. An example of FFT measurements is shown in Fig. 2 (upper panel). This figure shows the brightness temperature on 16 January of 2012 for the oxygen emission line centered at 53.07 GHz. The red box indicates the influence of the Zeeman effect by the broadened line shape in the center with a kind of a plateau (round line shape around the line center: ±1 MHz). A second spectrometer was installed in TEMPERA in order to measure with a higher resolution the narrow spectral region where a broadening in the oxygen emission line is produced due to the Zeeman effect. This narrow-band software defined ratio (SDR) spectrometer consists of 4096 channels which cover a bandwidth of 4 MHz with a resolution of 1 kHz. An example of a monthly mean brightness temperature spectrum centered at 53.07 GHz measured with the SDR spectrometer is shown in Fig. 2 (lower panel). Moreover, a set of two auxiliary mirrors was installed on the roof of the ExWi building in the University of Bern (Fig. 3). A rotating mirror allows one to observe the atmosphere under different azimuth angles and with a fixed elevation angle, while the fixed mirror directs the radiation from the rotating mirror into TEMPERA radiometer. The main goal of using these auxiliary mirrors is to measure the Zeeman-broadened oxygen line under different angles relative to the Earth's magnetic field. A special campaign was carried out in autumn of 2013 in order to detect the Zeeman effect with TEMPERA. 
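The hot-load/noise-diode calibration described above can be illustrated with a generic Y-factor sketch. The snippet below is not TEMPERA's calibration software; the detector readings, load temperatures, and function names are invented for this example only.

```python
import numpy as np

def y_factor_calibration(v_hot, v_cold, t_hot, t_cold):
    """Generic Y-factor calibration of a total-power radiometer.

    v_hot, v_cold : detector readings (arbitrary linear units) on the ambient
                    hot load and the liquid-nitrogen cold load.
    t_hot, t_cold : physical temperatures of the two loads in kelvin.
    Returns the receiver noise temperature T_N and the detector gain (units/K).
    """
    y = v_hot / v_cold
    t_rec = (t_hot - y * t_cold) / (y - 1.0)     # receiver noise temperature
    gain = (v_hot - v_cold) / (t_hot - t_cold)   # linear detector gain
    return t_rec, gain

def to_brightness_temperature(v_sky, v_hot, t_hot, gain):
    """Convert a sky reading to brightness temperature, using the hot load as
    the calibrated reference point."""
    return t_hot + (v_sky - v_hot) / gain

# Illustrative numbers only (not actual TEMPERA data)
t_rec, gain = y_factor_calibration(v_hot=1.378, v_cold=1.0, t_hot=295.0, t_cold=77.0)
print(f"T_N ~ {t_rec:.0f} K")
```

With these invented readings the sketch returns a receiver noise temperature of roughly 500 K, of the same order as the values quoted for TEMPERA.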
Three months of measurements (September-November 2013) were performed using these auxiliary mirrors and the SDR spectrometer. A special measurement cycle was designed for TEMPERA during this period. Periodic cycles of almost 5 min were performed. This whole cycle consisted of 13 subcycles, each one starting with a hot load calibration in combination with a noise diode for 10 s followed by other 10 s of atmospheric measurements in one azimuth direction. A total of 13 azimuth angles were scanned ranging from 71.5 to 191 • in steps of 10 • during the whole cycle. The elevation angle was fixed at 60 • during all the measurements since it was found as the angle at which the intensity of the emission lines was highest (Stähli et al., 2013). Theory The Zeeman effect (Zeeman, 1897) occurs because the spin of unpaired electrons couples to the external magnetic field, changing the internal energy of the molecule. A transition between two of these altered energy levels can change the frequency dependence of the absorption spectrum. The Zee-man energy change is calculated by where g depends on the line and molecule (see, e.g., Christensen and Veseth (1978) for molecular oxygen), µ 0 is the Bohr magneton, M(J M ) is the projection of J on the magnetic field, and H is the magnetic field vector. The quantum numbers necessary can be found in most databases, e.g., HI-TRAN (Rothman et al., 2013). There are 2J + 1 possible M for a state level (these are M = −J, −J + 1, · · ·, J − 1, J ), and M can only change by 0 or ±1. A transition without changing the value of M is called a π transition, and a transition with changing M is called a σ ± transition. The total line strength is not altered by the effect but will be distributed among the new lines. Each line "produced" by this procedure then undergoes the same broadening mechanisms (thermal and pressure) to create the absorption spectrum. In addition to splitting the line, the change in energy level depends on the direction of the magnetic field and the propagation path of the radiation, which means that the absorption also depends on the polarization of the radiation. The main polarization occurs along the magnetic field in the plane perpendicular to the propagating radiation. If H is entirely in this plane, then the radiation will be linearly polarized along H for σ ± transitions and linearly polarized perpendicular to H for π transitions. If H is parallel/anti-parallel to the path of the propagating radiation, then the σ + and σ − transitions will circularly polarize the radiation in opposite ways, and π transitions do not affect the radiation at all. The polarizing effect will generally scale between the two cases above as a function of the angle that H forms with the direction of the propagating radiation. Modeling The first official release of the Atmospheric Radiative Transfer Simulator (ARTS) was by Buehler et al. (2005) as a flexible/modular code base for radiative transfer simulations. Since then, ARTS has been under continual development. One key release is version 2.0 by Eriksson et al. (2011), which describes the ARTS scripting potential and a few of the modules. Presently, ARTS is at version 2.2; the latest version includes, among other new features, a module that calculates the Zeeman effect presented by Larsson et al. (2014). In short, ARTS calculates each of the three polarization components individually before adding their absorption contributions to a Stokes vector propagation matrix. 
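Before describing the handling of polarization further, the level-splitting arithmetic of the previous section can be sketched numerically. The quantum numbers and g-factors below are placeholders rather than the actual values for the 53.07 GHz transition (those must be taken from the literature, e.g., Christensen and Veseth, 1978), so the component count and shifts are illustrative only. Note that the Bohr magneton over Planck's constant is about 14 kHz µT−1, so the ∼28 kHz µT−1 maximum splitting quoted for the measured line corresponds to an effective (g′M′ − gM) of about 2.

```python
import numpy as np

MU_B = 9.274009994e-24      # Bohr magneton, J/T
H_PLANCK = 6.62607015e-34   # Planck constant, J*s

def zeeman_components(j_up, j_lo, g_up, g_lo, b_field_tesla):
    """Enumerate the Zeeman components of a line between two levels.

    Each level J splits into 2J+1 sublevels M = -J..J with energy shift
    dE = g * mu_B * M * |B|.  Allowed transitions have dM = 0 (pi) or
    dM = +/-1 (sigma); the sign convention for sigma+/sigma- differs
    between references.  Returns a list of (label, frequency shift in Hz).
    """
    comps = []
    for m_up in range(-j_up, j_up + 1):
        for m_lo in range(-j_lo, j_lo + 1):
            dm = m_up - m_lo
            if dm not in (-1, 0, 1):
                continue
            label = {0: "pi", 1: "sigma+", -1: "sigma-"}[dm]
            de = (g_up * m_up - g_lo * m_lo) * MU_B * b_field_tesla
            comps.append((label, de / H_PLANCK))
    return comps

# Hypothetical quantum numbers and g-factors; ~46 500 nT is the field over Bern.
comps = zeeman_components(j_up=7, j_lo=6, g_up=-0.2, g_lo=-0.2, b_field_tesla=46.5e-6)
max_shift_khz = max(abs(f) for _, f in comps) / 1e3
print(f"{len(comps)} components, max |shift| = {max_shift_khz:.0f} kHz")
```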
The polarization of the radiation is internally kept in a universal coordinate system defined by the sensor through all of the propagation. The line shape should return both its imaginary and real part to account for dispersion-caused polarization rotation. The input magnetic field is either static or three 3-Dgridded fields, one field for each coordinate: x, y, and z. This propagation matrix is then sent to the radiative transfer calculator, which solves the vector radiative transfer equation ( in, e.g., del Toro Ininiesta, 2003) where I is the Stokes vector, r is the path vector, K is the propagation matrix, and B is the Stokes version of Planck's function for blackbody radiation. For details on the ARTS Zeeman effect module see Larsson et al. (2014). The calculated relative Zeeman pattern for the line measured by TEMPERA can be found in Fig. 4. This otherwise singular line is split into 159 Zeeman lines, 53 for each polarization component. The plot has been renormalized for readability. The strongest split line accounts for less than 1.5 % of the original strength of the line, the maximum splitting from the central line is ∼ 27.99 kHz µT −1 , and the splitting between the individual lines is about 1.08 kHz µT −1 within a component. The last number is significantly small. The thermal broadening in the stratosphere is under normal conditions larger than the magnetic line splitting above Bern, so individual Zeeman lines cannot be discerned from the overall shape. Brightness temperature simulations incorporating the Zeeman effect Brightness temperature spectra have been simulated using the ARTS model which was described in the previous section. ARTS was set with all the information about instrumental aspects and location of TEMPERA in order to simulate the same measurement conditions. The brightness temperature was calculated for 13 azimuth angles (71.5 : 10 : 191.5 • ) and a fixed elevation angle (60 • ) and to simulate the atmospheric conditions of 15 October 2013 (Figs. 5 and 6). The altitude of the platform was set at 12 km in order to avoid any tropospheric effect in the spectra. On 15 October 2013, the total intensity of the magnetic field over Bern at the altitude of 50 km was 46 547 nT with a declination of 1 • 21 44 and an inclination of 62 • 46 16 (www.ngdc.noaa.gov/geomag/ magfield.shtml). Figure 5 shows the calculated brightness temperature spectra for a linear horizontal polarization of the oxygen emission line centered at 53.07 GHz in a range of 5 MHz. From these simulations we can observe that the spectra are almost identical for most of the frequency range plotted here and differences are only observed in the central part when the observational azimuth angle is changed. In the narrow central frequency range we can observe that both the shape and the intensity of the spectra changes for the different observations. For the higher azimuth angles the brightness temperature spectra are lower and the shape is flatter, while for lower angles the spectra are higher and the shape is less flat. The maximum difference in brightness temperature between the most intensive spectrum (91.5 • ) and the least intensive (191.5 • ) is 2.5 K. Figure 6 shows linear vertical polarization. We observe a similar pattern for linear vertical polarization as for linear horizontal polarization, with the peak strength of the signal changing mostly in the center of the line as a function of the azimuthal angle. 
However, the change is much smaller for linear vertical polarization, which only has a maximum difference of 1 K between the most and least intense spectra. Also, the change with azimuthal angle is inverted compared to linear horizontal polarization. For the linear vertical polarization the most intensive spectrum corresponds to the observational angle of 181.5 • while the least intensive corresponds to 91.5 • . This behavior is clearly associated with the polarized nature of the Zeeman effect, since the polarized state of the observed radiation changes when the angle between the propagation path and the direction of the Earth magnetic field is varied. It is also interesting to note from Figs. 5 and 6 that the differences between horizontal and vertical polarization are very small when close to the 181 • azimuth angle. This is in good agreement with theory, as this direction corresponds to measurements of radiation which has been propagated along the magnetic field towards TEMPERA. This parallel propagation results in minimal differences among linear polarizations. The brightness temperature has also been simulated without considering the Zeeman effect in the ARTS model. These simulations correspond to the dashed lines shown in Figs. 5 and 6. We found that when the Zeeman module is not active there is no difference in the spectra for difference observational angles. Moreover, the spectrum presents higher brightness temperature values and it does not show any broadening in the center of the oxygen emission line. In order to compare the simulated spectra from ARTS with the measurements, the effects of the different optical components of TEMPERA on the polarization state of the radiation, as well as the vertically polarized observing antenna, have to be considered. A full characterization of the polarization state of the radiation can be done by means of the Stokes vector, s, which is defined as where ε and µ are the electric and magnetic constants, respectively, < · > indicates time average, and E v and E h are the complex amplitudes for vertical and horizontal polarization. The first Stokes component (I ) is the total intensity, the second component (Q) is the difference between vertical and horizontal polarization, and the last two components, U and V , correspond to linear ±45 • and circular polarization, respectively. The Stokes components are converted to brightness temperature by inverting the Planck function (Eriksson et al., 2011); this new Stokes vector of brightness temperatures is denoted as s . The calculus of the measured brightness temperature (T p b ) considering the sensor polarization response can be expressed as (cf. Eriksson et al. (2011), Eq. 19) where p is a row vector of length 4 which describes the sensor polarization response. In the case of TEMPERA, whose antenna is vertically polarized, the vector p is [1100]. The rotation of the Stokes reference frame due to the reflection in the different mirrors and the rotation of the external mirror is considered using the transformation matrix L(χ), which allows one to obtain a consistent definition between the polarization directions for atmospheric radiation and sensor re- sponse. This matrix is defined as (Liou, 2002) The rotational angle (χ) has been calculated using the GRASP software package (www.ticra.com/products/ software/grasp). This software package allows design and analysis of complex reflector elements using physical optics, physical theory of diffraction and the method of moments. 
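How the sensor response and the reference-frame rotation enter can be sketched numerically. The rotation matrix below is the standard Mueller rotation of the Stokes frame (e.g., Liou, 2002); the response vector p = [1, 1, 0, 0] follows the description given above for the vertically polarized antenna, while the Stokes vector values and the way any normalization factor is absorbed are assumptions made for this illustration.

```python
import numpy as np

def mueller_rotation(chi_rad):
    """Standard rotation of the Stokes reference frame by an angle chi.
    Only Q and U mix; I and V are unchanged."""
    c, s = np.cos(2 * chi_rad), np.sin(2 * chi_rad)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1.0]])

def measured_tb(stokes_tb, chi_rad, p=(1.0, 1.0, 0.0, 0.0)):
    """Apply the frame rotation and the sensor polarization response.

    stokes_tb : Stokes vector [I, Q, U, V] already converted to brightness
                temperature.
    p         : sensor response row vector; [1, 1, 0, 0] corresponds to the
                vertically polarized antenna described in the text (whether a
                factor 1/2 is absorbed elsewhere depends on the convention).
    """
    return float(np.asarray(p) @ mueller_rotation(chi_rad) @ np.asarray(stokes_tb))

# Invented Stokes vector (K) and the mirror geometry chi = phi + 141.5 deg
stokes = np.array([55.0, 1.0, 0.3, 0.0])
for phi in (71.5, 131.5, 191.5):
    chi = np.deg2rad(phi + 141.5)
    print(f"phi = {phi:6.1f} deg -> T_b^p = {measured_tb(stokes, chi):.2f} K")
```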
Figure 7 shows the setup of this simulation, where we can see the different TEMPERA components (horn antenna, parabolic mirror and the two auxiliary mirrors, the fixed and the rotating mirror) and the ray tracing of an electric field which is propagated from the antenna to the atmosphere. The calculated angle χ can be expressed as χ = ϕ +ϕ offset , where ϕ is the observational azimuth angle defined in our experiments (ϕ = 71.5 : 10 : 191.5 • ) and ϕ offset is 141.5 • . Once the sensor polarization response and the rotation of the polarized radiation have been characterized, the effective brightness temperature can be calculated as (Eriksson et al., 2006) Figure 8 shows the effective brightness temperature spectra calculated for the case simulated in ARTS (15 October 2013) in Figs. 5 and 6. For these spectra we can appreciate again the same pattern as in the ARTS simulations, with almost the same intensity on the wings of the oxygen emission line and some differences in the central frequencies when the azimuth angle is changed. The highest brightness temperature is found at 71.5 • , while the lowest is found at 191.5 • . The latter position corresponds to the maximum broadening found when the direction of observation is almost antiparallel to the direction south-north. The maximum difference between the most and the least intense spectra is 2.5 K. In order to study the difference in the broadening of each azimuth observational spectrum due to the Zeeman effect, we have calculated the ratio among each spectrum to the averaged spectrum from all the observational angles. Figure 9 shows these ratios for the different azimuth angles. Azimuthal behavior with a ratio below unity in the center of the line also has a ratio above unity in the wings, which means that the line experienced more than average broadening. The opposite is also true: azimuthal behavior with values above unity in the center means less than average broadening. The different ratios show a clear azimuth dependence, indicating that the highest broadening is found when the azimuth angle is 191.5 • while the smaller broadening is found at 71.5 • . Tropospheric correction of SDR spectrometer A ground-based microwave radiometer measures a superposition of emission and absorption of radiation at different altitudes. The received intensity at ground level can be expressed in the Rayleigh-Jeans limit (hυ kT ) as a function of the brightness temperature. In these conditions the radiative transfer equation is given by where T b is the brightness temperature at frequency υ, T 0 is the brightness temperature of the cosmic background radiation, T (z) is the physical temperature at height z, z 0 is the Earth surface, z 1 is the upper boundary in the atmosphere, α is the absorption coefficient, and τ is the opacity. The opacity is defined as The contribution of the troposphere to the brightness temperature measured with a microwave radiometer at ground level is very important and it could be very different depending on the observational direction or on the period of measurements. After oxygen, water vapor and liquid water (clouds) are the most important components in the atmosphere, the emissions of which have relevance in the microwave spectrum. It is very important to correct our measurements for any tropospheric effect in order to ensure that the changes observed in our measurements for different observational directions come from the stratosphere (Zeeman effect) and not from the troposphere. 
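On a discrete altitude grid, the Rayleigh-Jeans radiative transfer relation introduced above reduces to a cumulative-opacity weighting of the temperature profile. The following sketch evaluates it with invented temperature and absorption profiles; a real calculation would use a proper atmospheric state and an absorption model.

```python
import numpy as np

def downwelling_tb(z, temperature, alpha, t_bg=2.7):
    """Numerically evaluate
        T_b = T_bg * exp(-tau(z_1)) + integral T(z) * alpha(z) * exp(-tau(z)) dz
    with the opacity tau(z) = integral from z_0 to z of alpha(z') dz'.

    z           : altitude grid in m (ascending, starting at the surface)
    temperature : physical temperature profile T(z) in K
    alpha       : absorption coefficient profile in 1/m at the frequency of interest
    """
    dz = np.diff(z)
    # opacity accumulated from the ground up to each grid point
    tau = np.concatenate(([0.0], np.cumsum(alpha[:-1] * dz)))
    integrand = temperature * alpha * np.exp(-tau)
    t_atm = np.trapz(integrand, z)
    return t_bg * np.exp(-tau[-1]) + t_atm

# Toy profiles: simple lapse rate plus an exponentially decaying absorber
z = np.linspace(0.0, 80e3, 2001)
temperature = 288.0 - 6.5e-3 * np.clip(z, 0, 11e3)   # K
alpha = 2e-5 * np.exp(-z / 8e3)                      # made-up values, 1/m
print(f"T_b ~ {downwelling_tb(z, temperature, alpha):.1f} K")
```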
Since the tropospheric portion of the pathlength provides a relatively spectrally flat signal the microwave radiative transfer equation can be rewritten as where T b (z trop ) is the brightness temperature as observed from the tropopause, τ is the tropospheric zenith opacity, and T trop is the effective temperature of the troposphere. From this equation the opacity can be calculated as Since the atmospheric opacity is dominated by the contribution from the troposphere, the stratospheric contribution is considered negligible and the cosmic background radiation, T bg , is in practice used instead of T b (z trop ) in Eq. (10). This means that the calculated τ actually is approximately the total atmospheric opacity and hence includes the minor contribution from altitudes above the troposphere (e.g., absorption by stratospheric O 2 and H 2 O) (Forkman et al., 2012). T trop has been estimated using a linear model between the weighted tropospheric temperature and the ground temperature (Ingold et al., 1998). The weighted tropospheric temperature was calculated using radiosonde measurements. Radiosondes are launched twice a day at the aerological station of MeteoSwiss in Payerne (40 km west of Bern). One year of radiosonde data was used and the linear fit found between T trop at 53 GHz and the ground temperature T z 0 was T trop = 0.8159T z 0 + 47.21 K. The constant term T bg is independent of frequency and has a value of 2.7 K (Gush et al., 1990). The term T b (z 0 ) is measured using the wings of the oxygen emission line centered at 53.07 GHz for every azimuth angle. The simultaneous measurements performed with the FFT spectrometer allow us to measure in the wings of the oxygen emission line, where most of the contribution to the brightness temperature comes from the troposphere. In the frequency range of interest, the tropospheric attenuation increases with increasing frequency. In order to account for this, we determine the correction factor at each frequency using a linear fit between the frequency ranges highlighted in red in Fig. 10. Once all the terms are calculated, the brightness temperature corrected for tropospheric effects can be obtained as It is interesting to note that for the correction presented in this section we have used the scalar radiative transfer equation, since this tropospheric correction is independent of polarization state. This assumption is valid if scattering can be neglected, which should hold in the absence of strong precipitation. Stratospheric brightness temperature measurements As already described in Sect. 2, a special campaign of microwave radiometer measurements has been performed for 3 months in autumn 2013 in the ExWi building of University of Bern. During this campaign, TEMPERA was set with a special configuration in order to be able to observe the Zeeman effect from ground-based measurements. Radiometer measurements in different azimuth angles (13 angles) were carried out in order to scan the atmosphere under different angles between the propagation path and the local Earth magnetic field. Figure 11 shows mean monthly brightness temperature spectra obtained for different azimuth angles in October 2013. All the measurements were corrected for tropospheric effects following the procedure described in the previous section. Figure 11a shows the whole range (4 MHz) measured with the SDR spectrometer. From this plot we observe that the mean spectra for the different azimuth angles show almost identical values outside of the narrow central region. 
However, differences in the intensity and in the shape are observed in the very narrow range centered on 53.067 GHz. Figure 11b shows a zoom of the spectra in the central frequencies for some selected azimuth angles. We can observe that for higher azimuth angles the spectra show lower values of brightness temperature and flatter shapes in the central frequency range (±0.5 MHz), while higher brightness temperatures and less flat shapes are observed at lower azimuth angles. These results are in good agreement with the simulations performed including the Zeeman effect with ARTS (Sect. 4.1). However, we notice that there is an offset between the brightness temperature spectra from the model (highest peak ∼ 64 K) and from the measurements (highest peak ∼ 54 K). The offset could be due to an inappropriate consideration of the continuum absorption from secondary species (water vapor, ozone, etc.) in the forward model and the fact that the contribution from line mixing to the oxygen spectra is not modeled. Other reasons that could explain some differences are the uncertainties of the tropospheric correction in the measurements and the fact that we are comparing different periods of measurements: 1 day for the simulations (15 October 2013) and 1 month of integrated measurements (October 2013). In any case, while the baseline offset affects the absolute difference between model and data, the shape of the center line (±2 MHz) is not altered. Thus the main conclusions of the Zeeman polarization measurements and the ARTS module validation are solid. Figure 12 shows a direct comparison of the brightness temperature spectra from the SDR measurements (solid lines) and from the ARTS model (dashed lines) for two observational azimuth angles (91.5 and 181.5°). An offset correction has been applied to the simulated spectra in order to compare with the measurements. Although the absolute values are not exactly the same for the modeled and measured spectra in the center of the oxygen emission line, we can clearly observe that the behavior of the spectra for the two azimuth angles is the same. The spectra show a higher broadening for the highest azimuth angle for both measurements and simulations. In order to compare the measurements with the model in a more quantitative way, we have compared the ratio of the maximum mean brightness temperature of each spectrum to the mean value for all the spectra at the central frequencies (range of ±0.25 MHz). Equation (12) gives the expression of these calculations explicitly, where υ1 and υ2 indicate the frequency range which corresponds to an interval of 0.5 MHz centered at 53.067 GHz, ψi is the observational azimuth angle for a specific position, and nt is the total number of positions scanned by TEMPERA (13 positions).
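This bookkeeping can be reproduced with a short sketch. The averaging convention in the snippet (per-azimuth maximum divided by the mean over all angles and channels in the ±0.25 MHz window) follows the verbal description above and may differ in detail from Eq. (12); the random toy spectra stand in for the calibrated, tropospherically corrected SDR data.

```python
import numpy as np

def line_center_ratios(freq_hz, spectra, f0_hz=53.067e9, half_width_hz=0.25e6):
    """Ratio of the maximum brightness temperature of each azimuth spectrum to
    the mean over all spectra, evaluated in the +/-0.25 MHz window around the
    line centre.

    freq_hz : frequency grid of the spectrometer
    spectra : array of shape (n_azimuth, n_channels) of brightness temperatures
    """
    window = np.abs(freq_hz - f0_hz) <= half_width_hz
    peak_per_azimuth = spectra[:, window].max(axis=1)
    reference = spectra[:, window].mean()   # mean over all angles and channels
    return peak_per_azimuth / reference

# Toy data: 13 azimuth positions, 4 MHz band at 1 kHz resolution
rng = np.random.default_rng(0)
freq = 53.067e9 + np.linspace(-2e6, 2e6, 4001)
base = 50.0 + 4.0 / (1.0 + ((freq - 53.067e9) / 0.5e6) ** 2)   # Lorentzian-like bump
spectra = base + 0.2 * rng.standard_normal((13, freq.size))
print(np.round(line_center_ratios(freq, spectra), 4))
```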
Figure 13 shows these ratios calculated with the ARTS model simulating the conditions of 15 October 2013 and the ones obtained from the mean monthly spectra (October 2013) measured by TEMPERA (ratios of the maximum brightness temperature of each spectrum to the mean value for all the spectra at the central frequencies, for the TEMPERA radiometer and for ARTS). We can observe that, in general, there is a very good agreement between the measurements and the model. Both simulations and measurements show higher ratio values (> 1), indicating a smaller broadening than the averaged spectrum, for the smallest azimuth angles, and lower ratio values (< 1), indicating a larger broadening, for the largest angles. The relative differences between both ratios are lower than 1 % for all the azimuth angles. We observe that the ratios for some azimuth angles are almost identical, while some discrepancies are observed for others. The errors for the TEMPERA measurements have been estimated by evaluating the uncertainties associated with the different terms of the tropospheric correction (Eq. 11). The error bars shown in Fig. 13 have been calculated using error propagation theory and they presented very similar values (∼ 0.01) for all the observational angles. The errors associated with the simulations were obtained by evaluating the ratio of the simulated spectra plus Gaussian white noise. The calculated uncertainties presented values much smaller than the ones found for the measurements (maximum value of 4.6 × 10−4). It is important to note that the differences found between measurements and simulations are within the measurement uncertainties. From this comparison we can conclude that the agreement between measurements and model is clear. These results show the polarized state of the radiation due to the Zeeman effect, which is revealed by a different broadening of the spectra when the angle between the Earth's magnetic field and the observational path is changed. Conclusions This work presents an experiment in which the Zeeman broadening of the oxygen emission line at 53.0669 GHz is observed and the polarization state of the radiation due to this effect is detected for the first time using a ground-based microwave radiometer. A special campaign was carried out in order to detect this effect with the TEMPERA radiometer. The installation of a fixed and a rotating mirror in front of TEMPERA allowed us to measure under different angles between the observational path and the Earth's magnetic field direction. A total of 13 azimuth angles were scanned, ranging from 71.5 to 191.5°. In addition, the use of a narrow spectrometer (4 MHz) allowed us to measure the center of the oxygen emission line with a very high resolution of 1 kHz. The brightness temperature spectra for the different azimuth angles were simulated using the ARTS model. This forward model applies a vector radiative transfer code which includes the Zeeman effect. ARTS was set up with all the information about instrumental aspects and the location of TEMPERA in order to simulate the same measurement conditions. These simulations showed almost identical spectra for most of the frequency range (4 MHz), and differences were only observed in the central part when the observational azimuth angle was changed. The spectra considering linear horizontal polarization showed lower values of brightness temperature and flatter shapes for the highest azimuth angles, while for lower angles the spectra showed higher values and the shapes were less flat.
The maximum difference in brightness temperature between the most intense spectrum (91.5°) and the least intense one (191.5°) was 2.5 K. For the linear vertical polarization the effect in the central frequencies was smaller, with a maximum brightness temperature difference of 1 K between the most and the least intense spectra; the azimuthal order was the inverse. These results are evidence of the polarized nature of the Zeeman effect, which shows changes in the polarization state of the observed radiation when the angle between the propagation path and the direction of the Earth's magnetic field is varied. In order to compare the ARTS simulations with the measurements, the effects on the polarization state of the radiation due to the different optical components of the TEMPERA radiometer were taken into account using the GRASP software package. The effective brightness temperature calculated after this correction showed that the most intense and least broadened spectrum was found at 71.5° and the least intense and most broadened spectrum was found at 131.5°. The maximum difference between both spectra was 2.3 K. Similar behavior to the simulations was observed for the measured spectra from the TEMPERA radiometer. A direct comparison of the ratios of the maximum brightness temperature of each spectrum to the mean value for all the spectra at the central frequencies showed a very good agreement between the model and the measurements. Both simulations and measurements showed a smaller broadening for the smallest azimuth angles and a larger broadening for the largest angles. The small discrepancies found for some azimuth angles were always within the measurement uncertainties.
8,331.4
2015-04-23T00:00:00.000
[ "Physics", "Environmental Science" ]
A Cartoon-Texture Decomposition Based Multiplicative Noise Removal Method We propose a new frame for multiplicative noise removal. To improve the multiplicative denoising performance, we add the regularization of texture component in the denoising model, designing a multiscale multiplicative noise removal model. The proposed model is jointly convex and can be easily solved by optimization algorithms. We introduce Douglas-Rachford splitting method to solve the proposed model. In the algorithm, we make full use of some important proximity operators, which have closed expression or can be executed in one time iteration. In particular, the proximity of norm is deduced, which is just the Fourier domain filtering. In the process of simulation experiments, we first analyze and select the needed parameters and then test the experiments on several images using the designed algorithm and the given parameters. Finally, we compare the denoising performance of the proposed model with the existing models, in which the signal to noise ratio (SNR) and the peak signal to noise ratios (PSNRs) are applied to evaluate the noise suppressing effects. Experimental results demonstrate that the designed algorithms can solve the model perfectly and the recovery images of the proposed model have higher SNRs/PSNRs and better visual quality. Introduction Image denoising is a basic and important task in image processing.The relatively mature developed denoising model is the additive noise model in which case the noise is assumed to obey a Gaussian distribution, that is, = + , ∼ (0, 2 ) . (1) However, the noises involved in many applications do not conform to the characteristic of the additive one; they may corrupt an image in other forms.In this paper, we are concerned with the denoising problem under the assumption that the original image has been corrupted by some multiplicative noise.The corresponding examples include the uneven phenomena during the magnetic resonance imaging (MRI) and the speckle noise in the ultrasonic and in synthetic aperture radar (SAR) image.The degradation model of the multiplicative noise can be expressed as where ⋅ refers to the componentwise multiplication.To model the actual problems, we assume that the multiplicative noise obeys some random distribution; for example, the noise in SAR images is assumed to obey the Gamma distribution and the one in ultrasonic image obeys the Rayleigh distribution. Multiplicative Noise Removal where the mean of noise is assumed to be 1 and the variance is assumed to be 2 .Since (3) is a nonconvex problem, it is very difficult to be solved.The second classic model is called AA model [2] which is given by Aubert and Aujol under the hypothesis of Gamma distribution: The objective function in ( 4) is also nonconvex.However, Aubert and Aujol showed the existence of minimizers of the objective function and employed a gradient method to solve (4) numerically.There are several improved works based on (4) developed in recent years. Recently, Shi and Osher [3] proposed considering a logarithm transformation on the noisy observation, log = log+log , and derived the TV minimization model for multiplicative noise removal problems.Huang et al. 
also proposed the log-domain denoising model using the transformation = exp() in (4).Consider which is known as EXP model [4].The objective function in ( 5) is strictly convex in and the authors promoted an alternating minimization algorithm to solve the model and showed the convergence of the method simultaneously.Nevertheless, model ( 5) is convex in rather than in in the original image domain.At the same time, Durand et al. [5] proposed a method composed of several stages.They also used the log-image data and applied reasonable suboptimal hard thresholding on its curvelet transform; then they applied a variational method by minimizing a specialized hybrid criterion composed of an 1 data-fidelity term of the thresholded curvelet coefficients and a TV regularization term in the logimage domain.The restored image can be obtained by using an exponential of the minimizer, weighted in such a way that the mean of the original image is preserved.Their restored images combine the advantages of shrinkage and variational methods.Besides the above approaches, dictionary learning methods and nonlocal mean methods have also been proposed and developed for multiplicative denoising [6][7][8][9].In [10], Zhao et al. developed a convex optimization model for multiplicative noise removal.The main idea is to rewrite a multiplicative noise equation such that both the image variable and the noise variable are decoupled.That is to say, rewrite problem (2) as where = diag() is a diagonal matrix of which the main diagonal entries are given by [] .According to (2), when there is no noise in the observed image, we obtain that = , a vector of all ones.When there is a multiplicative noise in the observed image, we expect that {[] 0 ̸ = 0, ∀}, and moreover they are greater than zeros.So we can say that is invertible, and ( 6) is equivalent to where = diag() is the diagonalizable matrix of vector and In (8), the first term is to measure the variance of , the second term is the data term, and the third term is the TV regularization term.If the first term is absent and can be arbitrarily assigned, then minimizing min In [11], it was pointed out that minimizing (10) can extract the large scale component and leave out the small scale contents of image .In other words, we can extract cartoon component with different scales based on the selection of parameter = 2 / 1 .We verify the above conclusion through the following experiment.The example is shown in Figure 1(a), an image with four squares with size of 3 × 3, 5 × 5, 20 × 20, and 80 × 80 pixels, respectively.The experiment results are the following: when = 0.01, all the four squares can be extracted, as shown in Figure 1(b); when = 0.05, three squares can be extracted while the one of 3×3 is lost, as shown in Figure 1(c); when = 0.2, the two big squares appear, and another two are lost, as shown in Figure 1(d); when = 2, only the biggest square 80×80 remains, while all other ones are lost, as shown in Figure 1(e).The above results are consistent with the theoretical analysis in [11]; the bigger is, the bigger the scale of extraction is.As a result, we have sound reasons to believe that the minimizer of model (10) is almost the cartoon component of the whole restored image we expect. 
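The scale-selection behaviour just described can be reproduced qualitatively with a standard ROF/TV denoiser. The sketch below uses scikit-image's denoise_tv_chambolle as a stand-in for model (10) (it omits the data and variance terms of the full model), and its weight parameter plays the role of the scale parameter discussed above; the synthetic squares image and the chosen weights are illustrative only.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def make_squares_image(size=128):
    """Synthetic test image with squares of several sizes on a flat background,
    loosely mimicking the multi-square example discussed above."""
    img = np.zeros((size, size))
    for (r, c, s) in [(10, 10, 3), (10, 40, 5), (40, 10, 20), (60, 60, 60)]:
        img[r:r + s, c:c + s] = 1.0
    return img

f = make_squares_image()

# Larger weight -> only larger-scale structures survive in the cartoon part,
# qualitatively reproducing the scale-selection behaviour described above.
for weight in (0.01, 0.05, 0.2, 2.0):
    cartoon = denoise_tv_chambolle(f, weight=weight)
    texture = f - cartoon
    print(f"weight={weight:<5} cartoon energy={cartoon.sum():8.1f} "
          f"residual energy={np.abs(texture).sum():8.1f}")
```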
The above analysis shows that, if we want to recover more contents from an image , we should choose a small , which bring much more small scales and details.However, for a noisy image, the small scale component may contain much more noise, which may hinder the denoising quality.For this reason, we will improve the denoising model through adding the prior information of the texture component and design a cartoon-texture decomposition based denoising model.Then, we design the numerical algorithm based on the Douglas-Rachford splitting algorithm and three useful proximity operators.Before conducting the experiment, we analyze the selection of the corresponding parameters.Finally, we verify the performance of our proposed model through comparing the SNRs/PSNRs indexes with some existing models. The rest of the paper is organized as follows.We propose our new model in next section.In Section 3, the numerical method is presented, in which the Douglas-Rachford (DR) splitting algorithm and some proximal operators used in this paper are firstly reviewed, and the algorithm is then designed and described in detail.In Section 4, the denoising experiments on two types of multiplicative noise are implemented, and the experiment results and performance analyses are unfolded.Finally, we conclude our work in Section 5. The Proposed Model Based on the analysis in Section 1.2, the recovery image obtained by solving (8) mainly contains the cartoon component of what we expect, while most of the significant texture components are lost.To optimize the denoising performance, we improve model ( 8) by adding the texture information and alter the regularization term ‖‖ TV to be ‖‖ TV , where denotes the cartoon component of the restored image . Taking into consideration the prior information of texture V = − , we select ‖V‖ 2 −1 as the regularization term about V, for that minimizing ‖⋅‖ 2 −1 can extract the texture component very well.In a word, the new model is and the restored image is * = * +V * .In (11), 1 , 2 , and are the positive regularization parameters to control the balance among the terms in the objective function. The proposed model possesses the following superiorities.Firstly, we add the term ‖V‖ 2 −1 in the regularization part, which can help to recover more image information and improve the quality of the restored image.Secondly, 2 / 1 and can be adjusted according to the noise level, which makes the model have multiscale property.Thirdly, when → +∞, then V → 0, and (11) degrades to model (8); that is, (11) is the modified and improved version of (8).Finally, the proposed model keeps the structure of (8), is jointly convex for (, , V), and can be easily solved by optimal methods.In the following section, we use alternative iteration method and Douglas-Rachford method to deal with it, which combine the application of several proximity operators. The Numerical Method 3.1.Douglas-Rachford Algorithm and Some Proximity Operators.Before proposing the numerical scheme of ( 11), we will review some basic algorithms and operators.The first one is the Douglas-Rachford splitting algorithm.Definition 1.Let Φ() and Ψ() be the proper, l.s.c., and convex functions such that and Ψ() + Φ() → +∞ as ‖‖ → +∞; then the problem admits at least one solution and, for any ∈ (0, +∞), its solutions are characterized by Algorithm 2. Secondly, we review the definition of proximity operator and present several special examples that will be used in the numerical scheme.Definition 3. 
Let Φ be a proper, l.s.c., and convex function; for every ∈ , the minimization problem admits a unique solution, which is denoted by prox Φ (), and the operator prox Φ () : → thus defined is called the proximity operator of Φ. In the following, we list three examples of proximity operator which will be used in this paper: (1) Φ() = ‖ ‖ 1 , = ( 1 , 2 , . . ., ) ∈ .Set = prox Φ (); then can be componentwise expressed in the close form: which is known as the soft thresholding operator, and we denote it in short by = ST (). ( There are many strategies to solve it.One of the most classical methods is the semi-implicit gradient descent algorithm proposed by Chambolle in [12]; another is the Forward-Backward method developed in [5].Both of the above methods need inner loop in the processing of numerical scheme.In this paper, we adopt the variable splitting method and express the problem in discrete form: where ∇ is the gradient operator given by ∇ = (∇ (1) , ∇ (2) ) , [∇ ()] = ([∇ (1) ()] , [∇ (2) ()] ) , = 1, 2, . . ., , We introduce an auxiliary variable ∈ ×2 to transfer ∇: with a sufficiently large penalty parameter .We solve (19) by an alternating minimization scheme given in Algorithm 4. Algorithm 4 (proximity of TV).Consider the following: (1) Fixing , compute componentwise It is worth mentioning that the above alternating minimization scheme is embedded into the whole algorithm with just one time iteration, which makes the algorithm have no inner iteration. (3) Consider Φ() = ‖‖ 2 −1 .We will deduce prox Φ () in the continuous situation, with the purpose of describing the derivation process clearly; this does not affect the processing of the numerical implementation.The proximity of Φ() is expressed as Denote the objective function in (20) by while minimizing (V) is equivalent to minimizing Ĥ(V) in Fourier domain: minimizing Ĥ(V) in V yields the unique solution V = Lx, where Taking inverse Fourier transform, we have It is indicated from (24) that the proximity of −1 norm is just the Fourier domain filtering. Solving the Proposed Model Numerically.We solve (11) by alternatively updating , , V in turn.The framework is given as in Algorithm 1. In Algorithm 1, updating is to compute the proximity of ‖ ⋅ ‖ 1 , which is just the soft thresholding operator and can be realized by (15).Updating is to solve a TV-L1 problem, and there are many methods to deal with it; to improve the algorithm efficiency, we realize the updating using Douglas-Rachford splitting algorithm, combining the computing of prox ‖⋅‖ 1 and prox ‖⋅‖ TV .Updating V is to solve an L1- −1 problem, which is also dealt with by Douglas-Rachford splitting algorithm, combining the soft thresholding and Fourier domain filtering.A detailed description of the whole process is shown as in Algorithm 2. There are some issues that need to be illustrated. Implementation Details and Experimental Results This section is mainly devoted to numerical simulation of image restoration in the presence of multiplicative noise.We test the performance of our model under the corruption of two types of multiplicative noise: Gamma and Rayleigh. 
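Before turning to the experiments, the main building blocks of the numerical scheme reviewed above, namely the soft-thresholding operator, the Fourier-domain filtering that realizes the proximity of the squared H−1 norm, and a generic Douglas-Rachford iteration, can be sketched as follows. The discrete frequency weighting, the step sizes, and the tiny one-dimensional demo are schematic choices for this illustration, not the exact discretization or parameters used in the paper.

```python
import numpy as np

def prox_l1(x, t):
    """Soft-thresholding: proximity operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_hminus1_sq(x, gamma):
    """Proximity operator of gamma * ||v||_{H^-1}^2, evaluated as a Fourier-domain
    filter.  The frequency weighting below (|xi|^2 from fftfreq, DC set to zero)
    is a schematic stand-in for the paper's exact discretization."""
    fx = np.fft.fft2(x)
    k1 = np.fft.fftfreq(x.shape[0]).reshape(-1, 1)
    k2 = np.fft.fftfreq(x.shape[1]).reshape(1, -1)
    ksq = k1 ** 2 + k2 ** 2
    filt = np.where(ksq > 0, ksq / (ksq + 2.0 * gamma), 0.0)
    return np.real(np.fft.ifft2(filt * fx))

def douglas_rachford(prox_phi, prox_psi, x0, gamma=1.0, relax=0.5, n_iter=100):
    """Generic Douglas-Rachford splitting for min Phi(x) + Psi(x).
    Returns the Psi-feasible iterate y."""
    x = x0.copy()
    for _ in range(n_iter):
        y = prox_psi(x, gamma)
        z = prox_phi(2.0 * y - x, gamma)
        x = x + relax * (z - y)
    return y

# Tiny 1-D demo: Phi = ||.||_1 (sparsity), Psi = (1/(2*mu)) * ||. - f||^2,
# whose proximity operator is a simple weighted average with f.
rng = np.random.default_rng(1)
f = np.zeros(200)
f[40:60] = 3.0
f += 0.3 * rng.standard_normal(200)
mu = 0.5
prox_data = lambda x, g: (mu * x + g * f) / (mu + g)
u = douglas_rachford(lambda x, g: prox_l1(x, g), prox_data, np.zeros_like(f))
print("residual norm:", np.round(np.linalg.norm(u - f), 2))
```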
Gamma Noise.The probability density function of Gamma noise is given by where > 0, > 1.The mean and the variance of are As multiplicative noise of an image, the mean of is set to be 1, so = 1/, and the variance of is 2 = 1/.In model (10), = {1/()} =1 , and the value of its mean is estimated as [10] Rayleigh Noise.The probability density function of Rayleigh noise is given by where is a positive parameter.The mean value of is equal to √/2, the variance of is equal to (2 − /2)/ 2 , and the We test experiments on three images: Cameraman (size of 256 × 256), Lena (size of 512 × 512), and Aerial image of some city (size of 512 × 512); see Figure 2. In this paper, we use the signal to noise ratio (SNR) and the peak signal to noise ratio (PSNR) as the evaluation indicators.These indicators are measured between clean and restored images.Let new represent the image restored from the noisy image noisy , and let 2 0 represent the average variance of the clean image 0 .Then, we define the SNR indicator by the formula as follows: ) , and the PSNR is defined as ) , where denotes the length of the images. 4.1.Parameters Selection.Some necessary parameters 1 , 2 , , 1 , 1 , 1 , and 2 are needed to be given to start up the new algorithm; 1 , 2 , and are regularization parameters to keep balance among the terms in model (11).In particular, = 2 / 1 is an important indicator to trade off between the data term and the regularization term. 1 and 1 are two relaxation parameters of the Douglas-Rachford splitting algorithms in Algorithm 2 and are empirically set to be 0.5. 1 and 2 are the parameters of the proximity operator of ‖ ⋅ ‖ TV and ‖ ⋅ ‖ 2 −1 , respectively, and are empirically set to be 10. According to the experimental analysis in [10], the ratio = 2 / 1 in ( 8) is almost a constant depending on the type of noise and was empirically estimated to be 5/6, 5/8 for Gamma noise and Rayleigh noise, respectively.From a number of experiments, we find that the best value of 2 / 1 can be chosen from the range [0.6, 0.85], and the exact selection depends on the different noise levels and images.We give the values of 2 / 1 in Table 1. For , it is a key parameter to promote the performance of model (11).Since a proper value of is critical to the recovery of a satisfying image, it is necessary for us to emphatically discuss its impact on the denoising results.Here, we show its influences on image decomposition by using the image Cameraman.Fixing other parameters, we decompose the image into cartoon component and texture component with different and put the experimental results in Figure 3.It is clear from Figure 3 that the different selections of can bring completely different results.If is selected too large ( ≥ 100), there are very few contents in the texture V, and many details are found in .With the decrease of , more and more features get involved in the texture component V. However, when is selected too little (e.g., ≤ 0.1), the block effect in cartoon component is very serious, and some major features are included in the texture component V.Moreover, in the process of denoising, the too little value of may bring much more details or small scale parts appearing in V, including some noise we do not expect.For this reason, we experimentally select in the interval [20, 80] according to the different noise level, and the exact selection can be found in Table 1. Experimental Results in the Assumption of Gamma Noise. 
In the assumption of Gamma noise corruption, we test denoising performances of AA model, convex model, and the proposed model on the images of Figure 2 which is solved by the gradient descent method where Δ is step size and is set as 0.01 for all experiments and is the regularization parameter and is adjusted according to the images and noise level; the exact value can be found in Table 1.The convex model is solved by the ADMM method; see [10]. In the test, the original images are corrupted by Gamma noise with = 4, = 10, and = 15, respectively.In Table 1, we list values of related parameters and obtained SNRs/ PSNRs by taking average of ten noise cases under the same experimental setting.It is clear that the proposed model performs quite better in terms of PSNR values and SNR values than AA model and convex model. We further show the denoised results obtained by three methods in Figures 4-6.It is observed that when = 15 and = 10, as shown in Figures 4 and 5, the restored images from the new model possess more clear contents and much better visual effects than the AA model and the convex model.However, when = 4, that is, the noise level is higher, we can observe from Figure 6 that the new model possesses little advantage over the convex model in the visual effect.We also observe from Table 1 that, when = 4, the SNRs/PSNRs of our model and the convex model are neck to neck.The main reason for this phenomenon is that the larger the variance of the noise is, the more the texture components are seriously destroyed and regarded as noise part, and the bigger value should be adjusted.As described in the analysis (c) of the new model, there will be a little content in the V parts, and denoising results of the new model will approximate that of the convex model. Experimental Results in the Assumption of Rayleigh Noise. The Rayleigh noise is generated by √−2 log(1 − ), where is a uniformly distributed random variable generated by "rand" in MATLAB.The mean of Rayleigh noise is set to be 1, and the variance is 0.2732.Since AA model is deduced under the assumption of Gamma noise, we just test the denoising performance of the convex model and the proposed model.Table 2 shows the SNRs/PSNRs results of the two models.we notice that the proposed model gets higher SNRs and PSNRs than the convex model.The experiment results are shown as in Figure 7.It can be observed that the restored images from the new model possess more clear contents and much better visual effects. Further Analysis of the Restored Images. In this section, we further analyze the cartoon part and the texture part V of the restored image of model (11).Take the case under Gamma noise as an example; we exhibit the experiment results when = 10. It can be observed from Figure 8 that there are rich contents in the texture parts V, which enhance the image quality in both the SNRs/PSNRs index and the visual effects.So, the proposed model can also be regarded as a cartoontexture decomposition method under the multiplicative noise corruption, on which we need further research. 
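The evaluation indicators and noise generators used in the experiments above can be summarized in a short utility sketch. The SNR below uses the variance of the clean image as in the definition given earlier, the PSNR assumes a peak value of 255, and the Gamma shape parameter is named "looks" here purely for readability; the synthetic ramp image is an invented stand-in for the test images. Dividing the raw Rayleigh samples by sqrt(pi/2) gives mean 1 and variance 4/pi - 1 ≈ 0.2732, matching the value quoted in the experiments.

```python
import numpy as np

def snr_db(clean, restored):
    """SNR in dB: ratio of the clean image's variance to the mean squared error."""
    mse = np.mean((clean - restored) ** 2)
    return 10.0 * np.log10(np.var(clean) / mse)

def psnr_db(clean, restored, peak=255.0):
    """PSNR in dB, assuming 8-bit style images with peak value 255."""
    mse = np.mean((clean - restored) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def rayleigh_noise(shape, rng=None):
    """Multiplicative Rayleigh noise eta = sqrt(-2 log(1 - U)), rescaled to mean 1."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(shape)
    eta = np.sqrt(-2.0 * np.log(1.0 - u))
    return eta / np.sqrt(np.pi / 2.0)

def gamma_noise(shape, looks=10, rng=None):
    """Multiplicative Gamma noise with mean 1 and variance 1/looks."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.gamma(shape=looks, scale=1.0 / looks, size=shape)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(20, 235, 256), (256, 1))     # synthetic ramp image
noisy = clean * gamma_noise(clean.shape, looks=10, rng=rng)
print(f"noisy: SNR = {snr_db(clean, noisy):.2f} dB, PSNR = {psnr_db(clean, noisy):.2f} dB")
```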
Conclusions A novel cartoon-texture decomposition based multiplicative noise removal model has been proposed in this paper. The main advantage of our work is embodied in the following two aspects: first, we incorporate the a priori information of the texture component into the denoising model, which gives the model more flexibility and efficiency; second, we solve the new convex model using optimal splitting algorithms and a Fourier domain filtering method. The experimental results demonstrate that the proposed model removes the noise better than the two other models, and the recovered images have higher SNRs/PSNRs and better visual effects. Figure 5: Experiment results under the corruption of Gamma noise (parameter = 10). Figure 6: Experiment results under the corruption of Gamma noise (parameter = 4). Figure 7: Experiment results under the corruption of Rayleigh noise with mean 1 and variance 0.2732. Figure 8: Experiment results of the proposed model. From (a-l), (a, e, i) are the noisy images corrupted by Gamma noise (parameter = 10), (b, f, j) are the restored cartoon parts, (c, g, k) are the restored texture parts, and (d, h, l) are the whole recovery images. Table 1: PSNR and SNR results for the Gamma noise removal. Table 2: PSNR and SNR results for the Rayleigh noise removal.
5,038.6
2016-08-29T00:00:00.000
[ "Computer Science", "Mathematics" ]
Magnetically brightened dark electron-phonon bound states in a van der Waals antiferromagnet In van der Waals (vdW) materials, strong coupling between different degrees of freedom can hybridize elementary excitations into bound states with mixed character1–3. Correctly identifying the nature and composition of these bound states is key to understanding their ground state properties and excitation spectra4,5. Here, we use ultrafast spectroscopy to reveal bound states of d-orbitals and phonons in 2D vdW antiferromagnet NiPS3. These bound states manifest themselves through equally spaced phonon replicas in frequency domain. These states are optically dark above the Néel temperature and become accessible with magnetic order. By launching this phonon and spectrally tracking its amplitude, we establish the electronic origin of bound states as localized d–d excitations. Our data directly yield electron-phonon coupling strength which exceeds the highest known value in 2D systems6. These results demonstrate NiPS3 as a platform to study strong interactions between spins, orbitals and lattice, and open pathways to coherent control of 2D magnets. Report for "Magnetically brightened dark electron-phonon bound states in a van der Waals antiferromagnet". In this manuscript, Ergecen and Ilyas et al revealed a unique electron-phonon bound state below the Néel transition in NiPS3, and more importantly, identified the electronic origin as localized d-d excitations and the phonon source as A1g phonon at ~ 7THz, using transient absorption spectroscopy with a dynamic range. Van der Waals (vdW) magnets emerge as a flourish platform that hosts strong coupling between multiple degrees of freedom and realizes novel bound states down to the two-dimension (2D) limit. Yet, it is highly challenging to detect the presence of these bound states and more so to resolve the composition of them. This manuscript has shown, for the first time in vdW magnets, the unambiguous identification of electronic and phononic origins of an electron-phonon bound state with the strongest electron-phonon coupling known so far. This manuscript is of high interest and high quality. I would recommend it for publication at Nature Communications, after addressing the following questions. 1. Could the authors please provide an equilibrium absorption spectrum? How does this equilibrium spectrum compare with the transient absorption spectra? Are the fine oscillations for the electronphonon coupled states resolvable in the equilibrium spectrum? 2. What is the role of demagnetization with the high intensity pump in detecting the phonon replicas in the transient absorption spectroscopy? Why is the transient reflectivity probed 2 picoseconds after the pump? What is the temporal dynamics of the phonon replicas? 3. When fitting the spectrum with a sum of Gaussians weighted by a Poisson distribution to extract the Huang-Rhys factor g, there are a few questions that I am interested in: a. Why do the authors choose Gaussian, instead of Lorentzian? b. How do the authors determine where the zeroth order of the Poisson distribution is? c. Is there a broad background in the absorption spectrum? What happens if fitting with a broad background plus a sum of Gaussians weighted by a Poisson distribution? d. The linewidth of individual Gaussians should be the convolution between the phonon linewidth and the d-d excitation linewidth. However, the d-d excitation linewidth, reported in PRL 120, 136402 (2018), seems to be much larger than the Gaussian linewidth here. 
Any reasons? Response to Reviewers The manuscript by Ergecen et al. reported the observation of phonon replicas through an ultrafast optical spectroscopic study of the antiferromagnetic vdW insulator NiPS3. These phonon replicas only appeared below the Néel temperature, so the authors called these states magnetically brightened dark electron-phonon bound states. Furthermore, the authors employed energy-resolved coherent phonon spectroscopy to differentiate the origin of these electron-phonon bound states. They found that the coupling between localized d-d transitions and the A1g phonon mode is more relevant, in comparison to the coupling between spin-orbit-entangled excitons and the A1g phonon mode. The results are interesting and may be considered for publication in Nature Communications. Our response: We thank the reviewer for carefully reading our work, providing insightful remarks, and recommending the work for publication. Here we try to provide comprehensive answers to the questions. Here I have some technical questions about the work. 1. There are many phonon modes observed in Raman spectroscopy, as seen in ref. 11. And the A1g phonon mode at ~7 THz or ~225 cm-1 is not so pronounced in Raman spectra. Why is this specific phonon mode coupled to the d-d transitions? Our response: First, we would like to clarify the phonon mode that strongly couples to the d-d transitions and gives rise to replica formation. As shown in the Supplementary Information, Fig. S5, the Fourier transform of the replica signal is broad and spans an energy interval between 25 meV and 35 meV. This range includes both the A1g phonon mode with 253 cm-1 wavenumber (31 meV) and the Eg phonon mode with 225 cm-1 wavenumber (28 meV). Since our transient absorption measurements do not have enough energy resolution to distinguish between these phonons, we perform energy-resolved coherent phonon spectroscopy, which has a better frequency resolution than our transient absorption measurements, to pinpoint the phonons that couple to the d-d transitions. In NiPS3, as observed by our coherent phonon spectroscopy measurements, d-d transitions couple to three distinct phonon modes: 1) a 5.2 THz phonon oscillation, corresponding to an Eg phonon mode with 173 cm-1 wavenumber. This phonon mode has been reported in Raman measurements (Scientific Reports 6, 20904 (2016)). 2) a 7.5 THz phonon oscillation, corresponding to an A1g phonon mode with 253 cm-1 wavenumber. This phonon mode has been denoted as P5 in Ref. 11, and carries a strong spectral weight as evidenced in Raman measurements. In the manuscript, this phonon mode was referred to as the "~7 THz A1g mode". 3) an 11.5 THz phonon oscillation, corresponding to an A1g phonon mode with 384 cm-1 wavenumber. This phonon mode has been reported in Raman measurements (Scientific Reports 6, 20904 (2016)). The Eg phonon mode at 225 cm-1, which has negligible spectral weight in Raman measurements, does not appear in our coherent phonon spectroscopy, as shown in our Supp. Fig. S7, and therefore has negligible coupling to the d-d transitions. In light of our coherent spectroscopy measurements, the energy of the A1g phonon mode with 253 cm-1 wavenumber (31 meV) matches the spectral distance between the replicas, as shown in the Fourier transform of the phonon replicas (Supp. Fig. S5). The energy splitting between the d-d levels is dictated by the overlap between the ligand p-orbitals and d-orbitals.
Any modulation that alters this overlap (such as strain, pressure and phonon excitation) will cause a shift in d-d energies. As the reviewer pointed out, in our measurements we only observe phonon replicas formed by the d-d transition and A1g phonon mode with 253 cm-1 wavenumber (31 meV). As shown in Figure 4 of Scientific Reports 6, 20904 (2016), this A1g phonon mode corresponds to the out-of-plane motion of sulfur atoms. This distortion modulates the distance between local nickel sites and sulfur ligands, and therefore leads to replica formation by modulating the d-d transition energy. On the other hand, the other A1g phonon mode with 384 cm-1 wavenumber (11.5 THz) corresponds to the collective out-of-plane motion of nickel and sulfur sites with a slight in-plane component for the sulfur sites. Since this phonon mode does not significantly alter the ligand-transition metal distance, it does not contribute to the replica formation. We thank the reviewer for insightful comments, and we have made the following changes in the text to clarify the phonon modes that couple to the d-d transition and their signatures in Raman and coherent phonon spectroscopy: A) [They manifest themselves… -Line 38] -We clarified the frequency of the phonon mode that participates in the replica formation. E) [We observe two … -Line 94] -We added a new sentence that summarizes the phonon modes observed in our coherent phonon spectroscopy measurements. F) [The frequency of the dominant … -Line 95] -We clarified the frequency of the phonon mode that participates in the replica formation. G) [These spectral features start … -Line 130] -We clarified the frequency of the phonon mode that participates in the replica formation. 2. The stripy phase of antiferromagnetic order in NiPS3 is globally inversion symmetric. To understand the correlation between phonon replica and magnetic order, the authors proposed a picture of local inversion symmetry breaking. I feel uneasy about this picture. There may be other possibilities such as magnetic point defects and stacking faults in bulk crystals. I suggest the authors add their comments in the manuscript. Our response: Because of dipole transition rules, transitions between d-levels are not allowed if the inversion symmetry of the transition metal atom is not broken. In the case of NiPS3, the d-d transitions are silent above the magnetic ordering temperature, as the nickel atom is an inversion center. The appearance of d-d transitions at the onset of magnetic order requires a mechanism that links the long range magnetic order and the on-site inversion symmetry breaking. The stacking faults would not give rise to temperature dependent d-d transitions and replica peaks. In addition, magnetic point defects and dislocations can cause loss of inversion symmetry adjacent to the defect sites and can couple to the magnetic order. However, even though defects can influence the optical properties in their vicinity, we think that they cannot give rise to an optical signal that is independent of sample position. In addition, existence of sharp magnetic excitons in our samples is also indicative of high sample quality and sparse defect distribution. Following the reviewer's comments, we have added the following comments in the revised text, stating that magnetic point defects and stacking faults are unlikely to result in temperature dependent phonon replicas of d-d transitions: 1) [Although stacking faults and lattice defects... 
-Line 138] -We added a sentence to discuss the stacking faults and lattice defects that can give rise to inversion symmetry breaking. 3. A bulk crystal of NiPS3 cannot be called a 2D magnet. Can the authors show the result in few-layer NiPS3? Our response: We agree with the reviewer on the fact that a bulk crystal of NiPS3 cannot be called a 2D magnet. However, the phenomenon reported here for bulk crystals should remain valid for few-layer flakes as long as the magnetic order is preserved. This is due to the fact that d-d transitions are localized transitions, and their properties do not depend on the interlayer coupling. As the magnetic order gets suppressed in the monolayer limit (Ref. 11), the phonon replicas are not expected in the monolayer limit. In addition, phonon replicas have been shown to persist down to the bilayer limit in another publication (see Nature Nanotechnology 16, 655-660, 2021) published during the preparation of this manuscript. Reviewer #2 (Remarks to the Author): The manuscript by Ergecen et al. reports measurements of broadband transient absorption in NiPS3, revealing a phonon replica state. Further coherent phonon spectroscopy shows a phonon oscillation of the same energy as the phonon replica, which only exists within the spectral energy range that corresponds to d-d excitations. The authors conclude that the d-d excitations are the origin of the phonon replica, and deduce the electron-phonon coupling strength. The experimental results, like the phonon replica and the oscillations in the coherent phonon spectrum, are clear and convincing. My biggest scientific concern is the lack of temperature dependence in the coherent phonon spectrum, which seems to contradict the behavior of the phonon replica and raises more questions about the authors' data interpretation (explained in my detailed comments). Before this main issue is addressed, I cannot yet recommend its publication in Nature Communications. Below are my detailed comments: 1. Main concern: Temperature dependence The phonon replica states in the transient absorption spectrum show the expected temperature dependence, disappearing above TN because of the symmetry change across TN proposed by the authors. But in the coherent phonon spectroscopy (Fig. S6), the oscillations, which are attributed to the same phonon mode (~7 THz) as in the phonon replica state, are almost identical as a function of temperature. If they correspond to the same phonon mode, what is the reason for this distinctly different temperature dependence? The authors also mention that this 7 THz mode agrees with previously assigned Raman modes in Ref. 11. In Ref. 11, the Ag Raman modes of similar energy all show an obvious temperature dependence, which would agree with the observation in the transient absorption spectrum but not the coherent phonon spectrum. Our response: We thank the referee for examining our work thoroughly and providing insightful comments and inputs. First, we would like to point out that the Fourier transform of the replica signal is broad and spans an energy interval between 25 meV and 35 meV (Supp. Fig. S5). This range includes both the A1g phonon mode with 253 cm-1 wavenumber (31 meV) and the Eg phonon mode with 225 cm-1 wavenumber (28 meV). To pinpoint the phonon mode that is responsible for replica formation, we perform energy-resolved coherent phonon spectroscopy. The most pronounced phonon mode observed in our coherent phonon spectroscopy is the A1g phonon mode with 253 cm-1 wavenumber.
In the Raman spectrum, this corresponds to the mode denoted as P5 in Ref. 11. The energy of the A1g phonon mode with 253 cm-1 wavenumber (31 meV) matches the spectral distance between the replicas. On the other hand, the phonon mode at 225 cm-1 (denoted as P4 in Ref. 11), which has negligible spectral weight in Raman measurements, does not appear in our coherent phonon spectroscopy, as shown in Supp. Fig. S7. Therefore, the phonon mode at 225 cm-1 has negligible coupling to the d-d transitions and cannot be responsible for replica formation. We would like to point out that the temperature dependence of both P4 and P5 phonon amplitudes in Ref. 11, is not correlated with the magnetic ordering temperature. Furthermore, both of them exist above and below the magnetic ordering temperature. This observation indicates that the Raman amplitude for 253 cm-1 phonon mode is not directly influenced by the magnetic order. As we have mentioned in the main text, the coupling between the d-d transitions and 253 cm-1 phonon mode exists above the magnetic ordering temperature, but they are not optically active because of local inversion symmetry. At low temperatures, d-d transitions become optically active because of local inversion symmetry breaking arising from the magnetic order. As the referee alluded to, the Raman amplitudes of both P4 (225 cm-1) and P5 (253 cm-1) phonon modes show temperature dependence, and this dependence is not observed in our coherent phonon spectroscopy data. Even though both Raman and coherent phonon spectroscopy are sensitive to phonon modes, the excitation and detection of the phonon modes are completely different for these spectroscopy modalities. In Raman, a single photon inelastically scatters and spontaneously creates a single phonon excitation through a virtual transition, highly detuned from equilibrium excited states. Thus, the Raman amplitude depends on the structure of the equilibrium excited state (detuning etc.) and the thermal occupation of phonon modes, which can change as a function of temperature. On the other hand, in coherent phonon spectroscopy, a probe pulse samples the real time phase coherent phonon oscillations following a pump excitation. Unlike Raman scattering, these coherent phonon oscillations are launched by impulsive Raman scattering and displacive excitation of coherent phonons (DECP). In the case of DECP, the amplitude of the phonon mode is proportional to the phonon displacement induced by the pump pulse, which is determined by the difference between equilibrium and nonequilibrium lattice positions. Our coherent phonon spectroscopy data for NiPS3 implies that the excitation amplitude of the P5 phonon mode does not change with temperature. We think that the discrepancy in temperature dependences of the Raman and coherent phonon spectroscopy is not unexpected, as they can launch and probe phonon modes differently. To further clarify the differences between Raman and coherent phonon spectroscopy, we would like to compare and contrast two studies (one Raman and one coherent phonon spectroscopy) performed on isostructural & isoelectronic van der Waals magnets CrSiTe3 & CrGeTe3. For these compounds, the A1g phonon mode at 136 cm-1 does not show any temperature dependence in Raman (Fig. 4 -https://arxiv.org/pdf/1604.08745.pdf), whereas coherent phonon spectroscopy (https://arxiv.org/pdf/1910.06376.pdf) shows a dramatic change in amplitude because of a spin dependent phonon excitation mechanism. 
We hope that this example clears up the differences between coherent phonon spectroscopy and Raman measurements. We have added a supplementary note (Supp. Note 10) that describes the differences between Raman and coherent phonon spectroscopy more clearly, and made the following changes in the main text: 1) [They manifest themselves… -Line 38] -We clarified the frequency of the phonon mode that participates in the replica formation. Minor comments/concern: 2. Page 3, 2nd line '… with strong vibronic coupling [10]'. I'm not sure if this ref 10 is the best for this purpose. Our response: This was a mistake. We have corrected it by replacing the reference with the right one. We thank the referee for pointing this out. 3. Compared to Fig. 2, it's hard to resolve the phono replicas in Fig.S4. Is it just because of the color scaling difference? Our response: We agree with the referee. In Fig. S4, it is harder to resolve the phonon replicas. The data in Fig. 2, was taken by fixing the delay time (at 2 ps), and averaging for a longer time. On the other hand, the data in Fig. S4 had been averaged less for the sake of time, since we are examining the time dependence as well. 4. When extracting the phonon replica energy and oscillation frequency, please also add the error bar the number. It seems to have quite some energy distribution. Our response: We thank the referee for bringing this to our attention. Following this comment, we made the following changes: 1) [Supp. Note 9] -we have added the error bars to the extracted values of replica energy and oscillation frequency for two different undressed d-d transition lineshapes, Gaussian and Lorentzian. 2) [The energy spacing between -Line 66] -We rephrase this sentence to better describe the energy distribution of the replicas seen in our transient absorption measurements. 5. In Fig. 2b, the arrows indicating energy spacing can be misleading. It should have corresponded to the length start/end at the center of the replica. The size of the arrow (compared to x scale) is also smaller than the extracted 28.5meV. Our response: We have corrected this in our revised version of the manuscript. The arrows were for demonstration purposes only. We removed them in our revised manuscript. Thanks for pointing this out. Reviewer #3 (Remarks to the Author): Report for "Magnetically brightened dark electron-phonon bound states in a van der Waals antiferromagnet". In this manuscript, Ergecen and Ilyas et al revealed a unique electron-phonon bound state below the Néel transition in NiPS3, and more importantly, identified the electronic origin as localized d-d excitations and the phonon source as A1g phonon at ~ 7THz, using transient absorption spectroscopy with a dynamic range. Van der Waals (vdW) magnets emerge as a flourish platform that hosts strong coupling between multiple degrees of freedom and realizes novel bound states down to the two-dimension (2D) limit. Yet, it is highly challenging to detect the presence of these bound states and more so to resolve the composition of them. This manuscript has shown, for the first time in vdW magnets, the unambiguous identification of electronic and phononic origins of an electron-phonon bound state with the strongest electron-phonon coupling known so far. This manuscript is of high interest and high quality. I would recommend it for publication at Nature Communications, after addressing the following questions. 1. Could the authors please provide an equilibrium absorption spectrum? 
How does this equilibrium spectrum compare with the transient absorption spectra? Are the fine oscillations for the electron phonon coupled states resolvable in the equilibrium spectrum? Our response: We are thankful to the referee for carefully reading our work, and appreciating the importance of our findings. The equilibrium absorption spectrum of NiPS3 has been reported in a recent publication (Phys. Rev. Lett. 120, 136402 (2018) -Fig. S6). The equilibrium absorption spectrum cannot resolve the fine spectral oscillations. This fact highlights the power of transient absorption measurements, which have a high dynamic range and allow us to observe faint spectral details. In addition, equilibrium linear dichroism (birefringence) spectroscopy measurements have also observed the phonon replicas (see Nature Nanotechnology 16, 655-660, 2021) published during the preparation of this manuscript. Because NiPS3 exhibits magnetically induced birefringence, equilibrium linear dichroism (birefringence) spectroscopy is sensitive to the magnetic order induced spectral features with high dynamic range. 2. What is the role of demagnetization with the high intensity pump in detecting the phonon replicas in the transient absorption spectroscopy? Why is the transient reflectivity probed 2 picoseconds after the pump? What is the temporal dynamics of the phonon replicas? Our response: The high intensity pump pulse heats up the electronic system by efficiently generating electron-hole pairs. This in turn melts the magnetic order and "washes out" any spectral features pertinent to magnetic order. We measure the difference between this nonequilibrium absorption spectrum (at 2 ps time delay) and equilibrium one (at negative time delay). This is what allows us to measure small spectral signals, which cannot be detected with conventional equilibrium absorption methods. The 2 ps time delay is not a special point. We actually have examined the temporal dynamics of phonon replicas up to 100 ps delay times (see Fig. S4) and have observed that phonon replicas survive up to this delay times with no visible change in oscillation amplitudes. This indicates that the spectral oscillations are indeed equilibrium phenomena. 3. When fitting the spectrum with a sum of Gaussians weighted by a Poisson distribution to extract the Huang-Rhys factor g, there are a few questions that I am interested in: a. Why do the authors choose Gaussian, instead of Lorentzian? Our response: The model we use to obtain the Huang-Rhys factor g takes the undressed d-d transition energy as a free parameter. In our fitting procedures, we both used Gaussian and Lorentzian distributions for the undressed d-d transition lineshapes. The selection of the lineshape function does not significantly change the extracted Huang-Rhys factor and the phonon frequency. Therefore, none of the conclusions are affected by the selection of the lineshape function. We have added a subsection into Supp. Note 9 that shows the fitting results obtained using a Lorentzian lineshape. b. How do the authors determine where the zeroth order of the Poisson distribution is? Our response: In our fits, the model we use takes the bare d-d transition energy as a free parameter. The fit outputs the zeroth order of the Poisson distribution. c. Is there a broad background in the absorption spectrum? What happens if fitting with a broad background plus a sum of Gaussians weighted by a Poisson distribution? 
Our response: The model used in this paper fits the d-d transition region very well without any need for a background. d. The linewidth of individual Gaussians should be the convolution between the phonon linewidth and the d-d excitation linewidth. However, the d-d excitation linewidth, reported in PRL 120, 136402 (2018), seems to be much larger than the Gaussian linewidth here. Any reasons?
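To make the fitting procedure described in responses 3a-3c concrete, the following is a minimal sketch of a Poisson-weighted sum-of-Gaussians (Franck-Condon) fit. It is illustrative only: the data are synthetic rather than the manuscript's spectra, and the function and parameter names (huang_rhys_model, E0, g, omega, sigma) are placeholders, not taken from the analysis code used for the manuscript. The zeroth-order (bare d-d) energy E0 is left as a free parameter, as stated in response 3b, and swapping the Gaussian for a Lorentzian line shape would only change the per-replica profile.

```python
import numpy as np
from math import factorial
from scipy.optimize import curve_fit

def huang_rhys_model(E, E0, g, omega, sigma, A):
    """Sum of Gaussian phonon replicas weighted by a Poisson distribution
    P_n = exp(-g) * g**n / n!  (Huang-Rhys factor g, phonon energy omega)."""
    spectrum = np.zeros_like(E)
    for n in range(12):                      # truncate the Poisson progression
        weight = np.exp(-g) * g**n / factorial(n)
        spectrum += weight * np.exp(-(E - E0 - n * omega)**2 / (2.0 * sigma**2))
    return A * spectrum

# Synthetic "spectrum" for illustration only (not the NiPS3 data)
rng = np.random.default_rng(0)
E = np.linspace(1.40, 1.75, 600)                   # photon energy (eV)
truth = (1.47, 1.9, 0.031, 0.010, 1.0)             # E0, g, omega, sigma, A
data = huang_rhys_model(E, *truth) + 0.01 * rng.normal(size=E.size)

p0 = (1.46, 1.5, 0.030, 0.012, 1.0)                # initial guess
popt, pcov = curve_fit(huang_rhys_model, E, data, p0=p0)
perr = np.sqrt(np.diag(pcov))                      # 1-sigma error bars on the fit parameters
for name, val, err in zip(("E0", "g", "omega", "sigma", "A"), popt, perr):
    print(f"{name:5s} = {val:.4f} +/- {err:.4f}")
```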
5,405.4
2022-01-10T00:00:00.000
[ "Physics" ]
Linear adjoint restriction estimates for paraboloid We prove a class of modified paraboloid restriction estimates with a loss of angular derivatives for the full set of paraboloid restriction conjecture indices. This result generalizes the paraboloid restriction estimate in the radial case from [Shao, Rev. Mat. Iberoam. 25(2009), 1127-1168], as well as the result from [Miao et al. Proc. AMS 140(2012), 2091-2102]. As an application, we show a local smoothing estimate for a solution of the linear Schr\"odinger equation under the assumption that the initial datum has additional angular regularity. Introduction Let S be a non-empty smooth compact subset of the paraboloid {(τ, ξ) ∈ R × R^n : τ = |ξ|^2}, where n ≥ 1. We denote by dσ the pull-back of the n-dimensional Lebesgue measure dξ under the projection map (τ, ξ) → ξ. Let f be a Schwartz function and let \widehat{f dσ} denote the inverse space-time Fourier transform of the measure f dσ (1.1). The adjoint restriction estimate then takes the form ‖\widehat{f dσ}‖_{L^q(R×R^n)} ≤ C_{p,q,n,S} ‖f‖_{L^p(S; dσ)} (1.2), where 1 ≤ p, q ≤ ∞. The famous restriction problem is to find the optimal range of p and q such that the estimate (1.2) holds. It is known that the condition (1.3): q > 2(n+1)/n and (n+2)/q ≤ n/p′, is necessary for (1.2), see [24,29]. Here p′ denotes the conjugate exponent of p. The adjoint restriction estimate conjecture on the paraboloid reads as follows. There is a large amount of literature on this problem. For n = 1, Conjecture 1.1 was proved by Fefferman-Stein [11] for the non-endpoint case and by Zygmund [36] for the endpoint case. Conjecture 1.1 in the higher-dimensional case becomes much more difficult. For n ≥ 2, Tomas [33] showed (1.2) for q > 2(n+2)/n, and Stein [25] fixed the limit case q = 2(n+2)/n. Bourgain [1] further proved estimate (1.2) for q > 2(n+2)/n − ǫ_n with some ǫ_n > 0; in particular, ǫ_n = 2/15 when n = 2. Further improvements were made by Moyua-Vargas-Vega [16] and Wolff [34]. Tao [31] used the bilinear argument to show that estimate (1.2) holds true for q > 2(n+3)/(n+1) with n ≥ 2. This result was improved by Bourgain-Guth [2] when n ≥ 4. This conjecture is so difficult that it remains open up to now. For more details, we refer the reader to [2, 29-32, 34]. On the other hand, the restriction conjecture becomes simpler (but not trivial) when a test function has some angular regularity. For example, Conjecture 1.1 is proved by Shao [22] when test functions are cylindrically symmetric and are supported on a dyadic subset of the paraboloid of the form {(τ, ξ) ∈ R × R^n : M ≤ |ξ| ≤ 2M, τ = |ξ|^2}, M ∈ 2^Z. Indeed, many famous conjectures in harmonic analysis (such as Fourier restriction estimates, the Bochner-Riesz estimate, etc.) have easier counterparts when the corresponding operators act on radial functions. Let S^{n−1} denote the unit sphere in R^n and set L^q_{sph} := L^q_θ(S^{n−1}); the intermediate situation is to replace L^q(R^n) by L^q_{r^{n−1}dr} L^2_{sph} in (1.2). This intermediate case has been settled for adjoint restriction estimates for a cone by the authors of [17]. More precisely, if S is a non-empty smooth compact subset of the cone S = {(τ, ξ) ∈ R × R^n : τ = |ξ|}, then for q > 2n/(n−1) and (n+1)/q ≤ (n−1)/p′ the corresponding mixed-norm estimate ‖\widehat{f dσ}‖ ≤ C_{p,q,n,S} ‖f‖_{L^p(S; dσ)} holds, with L^q(R^n) replaced by L^q_{r^{n−1}dr} L^2_{sph}. The L^2_{sph}-norm allows us to use spherical harmonic expansions, so the problem is converted to L^q(ℓ^2)-bounds for sequences of operators {H_k}, where each H_k is an operator acting on radial functions.
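To make the reduction just described explicit, one can record the mixed norm and the effect of the spherical harmonic expansion as follows (stated here in our own notation as a sketch, not as the paper's numbered definitions):

```latex
% Mixed radial--angular norm used in place of L^q(R^n):
\[
  \|F\|_{L^q_{r^{n-1}dr}L^2_{\mathrm{sph}}}
  := \left( \int_0^\infty \left( \int_{\mathbb{S}^{n-1}} |F(r\omega)|^2 \, d\omega \right)^{q/2}
     r^{n-1}\,dr \right)^{1/q}.
\]
% Expanding F in spherical harmonics, F(r\omega) = \sum_{k,\ell} a_{k,\ell}(r)\, Y_{k,\ell}(\omega),
% orthonormality of the Y_{k,\ell} on the sphere gives
\[
  \int_{\mathbb{S}^{n-1}} |F(r\omega)|^2 \, d\omega \;=\; \sum_{k,\ell} |a_{k,\ell}(r)|^2 ,
\]
% so bounding the mixed norm of \widehat{f\,d\sigma} amounts to an L^q(\ell^2) bound for the
% family of radial operators \{H_k\} acting on the coefficients a_{k,\ell}.
```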
The pioneering paper using such intermediate space is the Mockenhaupt Diploma in which he proved weighted L p inequalities and then sharp L p rad (L 2 sph ) → L p rad (L 2 sph ) estimates for the disc multiplier operator, see either Mockenhaupt [14] or Córdoba [5]. Sharp endpoint bounds for the disk multiplier were obtained by Carbery-Romera-Soria [4]. Müller-Seeger [15] established some sharp mixed spacetime L p rad (L 2 sph ) estimates in order to study a local smoothing of solutions for the linear wave equation. Córdoba-Latorre [9] revisited some classical conjecture including restriction estimate in harmonic analysis in this kind of mixed space-time. Gigante-Soria [12] studied a related mixed norm problem for Schrödinger maximal operators. Concerning the sphere restriction conjecture, Carli-Grafakos [7] also treated the same problem for spherically-symmetric functions and Cho-Guo-Lee [8] showed a restriction estimate for q > 2(n + 1)/n and s (n + 2)/q − n/2 where dσ is the induced Lebesgue measure on S n and H s (S n ) denote the L 2 -Sobolev space of order s on the sphere. An advantage of the proof consists in a fact that inequality (1.5) is based on L 2 -spaces. The advantage of using the L 2 -based Hilbert space also allows us to use effective the T T * arguments to obtain Strichartz estimate with a wider range of admissible indexes by compensating with extra regularity in angular direction; see Sterbenz [21] for wave equation, Cho-Lee [9] for general dispersive equations and the authors [18] for wave equation with an inverse-square potential. Concerning other results in this direction, Cho-Hwang-Kwon-Lee [10] studied profile decompositions of fractional Schrödinger equations under the angular regularity assumption. In this paper, we prove that estimate (1.2) holds for all p, q in (1.3) by compensating with some loss of angular derivatives. Our strategy is to use a spherical harmonic expanding as well as localized restriction estimates. In contrast to the radial case, e.g. [7,22], the main difficulty comes from the asymptotic behavior of the Bessel function J ν (r) when ν ≫ 1. It is worth to point out that the method of treating cone restriction [17] is not valid since it can not be used to exploit the curvature property of paraboloid multiplier e it|ξ| 2 . We note that the bilinear argument used in [22], which is in spirit of Carleson-Sjölin argument or equivalently the T T * argument, can be used to deal with the oscillation of the paraboloid multiplier. To use this argument, one needs to write the Bessel function J ν (r) ∼ c ν r −1/2 e ir when r ≫ 1. This expression works well for small ν (corresponding to the radial case) but it seems complicate to write the Bessel function in that form when ν ≫ 1. Indeed, as in [37], one can do this when ν 2 ≪ r, but it will cause more loss of derivative for the case ν r ν 2 , since it is difficult to capture simultaneously the oscillation and decay behavior of J ν (r). Our new idea here is to establish a L 4 t,x -localized restriction estimate by directly analyzing the kernel associated with the Bessel function. The key ingredient is to explore the decay and oscillation property of J ν (r) for r ≫ ν, and resonant property of paraboloid multiplier. We also have to overcome low decay shortage of J ν (r) (when ν ∼ r ≫ 1) by compensating a loss of angular regularity. Before stating the main theorem, we introduce some notation. 
Incorporating the angular regularity, we set the infinitesimal generators of the rotations on Euclidean space: Hence ∆ θ is the Laplace-Beltrami operator on S n−1 . Define the Sobolev norm · H s,p sph (R n ) by setting Given a constant A, we briefly write A + ǫ as A + or A − ǫ as A − for 0 < ǫ ≪ 1. As an application of the modified restriction estimate, we show a result on the local smoothing estimate for the Schödinger equation for initial data with additional conditions angular regularity by Rogers's argument in [20]. Our result here extend [20, Theorem 1] from q > 2(n + 3)/(n + 1) to q > 2(n + 1)/n under the assumption that initial data has additional angular regularity. More precisely, we have the following local smoothing result. This paper is organized as follows: In Section 2, we introduce notation and present some basic facts about spherical harmonics and Bessel functions. Furthermore, we use the stationary phase argument to prove some properties of Bessel functions. Section 3 is devoted to the proof of Theorem 1.1. In Section 4, we prove the key Proposition 3.1. We prove Corollary 1.1 in the final section. Acknowledgements: The authors would like to express their great gratitude to S. Shao for his helpful discussions. The authors were supported by the NSFC under grants 11771041, 11831004 and H2020-MSCA-IF-2017(790623). Preliminaries 2.1. Notation. We use A B to denote the statement that A CB for some large constant C which may vary from line to line and depend on various parameters, and similarly employ A ∼ B to denote the statement that A B A. We also use A ≪ B to denote the statement A C −1 B. If a constant C depends on a special parameter other than the above, we shall write it explicitly by subscripts. For instance, C ǫ should be understood as a positive constant not only depending on p, q, n and S, but also on ǫ. Throughout this paper, pairs of conjugate indices are written as p, p ′ , where 1 p + 1 p ′ = 1 with 1 p ∞. Let R > 0 be a dyadic number, we define the dyadic annulus in R n by For each M ∈ 2 Z , we define L M to be the class of Schwartz functions supported on a dyadic subset of the paraboloid in the form of Spherical harmonics expansions and Bessel function. We recall an expansion formula with respect to the spherical harmonics. Let For every g ∈ L 2 (R n ), we have the expansion formula is the orthogonal basis of the spherical harmonics space of degree k on S n−1 . This space is recorded by H k and it has the dimension It is clear that we have the orthogonal decomposition of L 2 (S n−1 ) It follows that Using the spherical harmonic expansion, as well as [19,28], we define the action of (1 − ∆ ω ) s/2 on g as follows Given s, s ′ 0 and p, q 1, define For our purpose, we need the inverse Fourier transform of a k,ℓ (ρ)Y k,ℓ (ω). We recall the Bochner-Hecke formula, see [13] and [26, Theorem 3.10] Here ν(k) = k + n−2 2 and the Bessel function J ν (r) of order ν is defined by where ν > −1/2 and r > 0. It is easy to verify that there exists a constant C independent of ν such that To investigate a behavior of asymptotic bound on ν and r, we recall the Schläfli integral representation [35] of the Bessel function: for r ∈ R + and ν > − 1 Clearly, E ν (r) = 0 when ν ∈ Z + . An easy computation shows that There is a number of references for the asymptotic behavior of a Bessel function, see e.g. [9,23,25,35]. We recall some properties of a Bessel function for a convenience. Lemma 2.1 (Asymptotics of Bessel functions). 
Let ν ≫ 1 and let J ν (r) be the Bessel function of order ν defined as above. Then there exists a large constant C and small constant c independent of ν and r such that: • When r ν 2 , we have • When r 2ν, we have where |a ± (ν, r)| C and |E(ν, r)| Cr −1 . Proof of Theorem 1.1 In this section, we prove Theorem 1.1 by using some localized linear estimates whose proof are postpone to the next section. Since inequality (1.7) is a special case of (1.8), we aim to prove (1.8). Since (1.8) is a direct consequence of the Stein-Tomas inequality [25] for the case p 2, it suffices to prove (1.8) for the case p 2. More precisely, we will only establish the estimate for q > 2(n + 1)/n, (n + 2)/q = n/p ′ with p 2 Recall the notation L M and A R in the subsection 2.1. We decompose f into a sum of dyadic supported functions To prove (3.1), we need localized linear restriction estimates. Proposition 3.1. Assume f ∈ L 1 and R > 0 is a dyadic number. Then the following linear restriction estimates hold true. Localized restriction estimate In this section we prove Proposition 3.1. We start our proof by recalling where g(ξ) = f (|ξ| 2 , ξ) ∈ S(R n ) with supp g ⊂ {ξ : |ξ| ∈ [1, 2]}. We apply the spherical harmonic expansion to g to obtain Recalling ν(k) = k + (n − 2)/2, we have by (2.5) Here we insert a harmless smooth bump function ϕ supported on the interval (1/2, 4) into the above integral, since a k,ℓ (ρ) is supported on [1,2]. Now we estimate the quantity To this end, we first prove the following lemma. Lemma 4.1. Let µ(r) = r n−1 dr and ω(k) be a weight specified below. For q 2, we have . (4.3) Proof. Since q 2, the Minkowski inequality and the Fubini theorem show that the left hand side of (4.3) is bounded by . We rewrite this by making the variable change ρ 2 ρ . (4.4) We use the Hausdorff-Young inequality with respect to t and we change variables back to obtain LHS of (4.3) . Now we prove that the inequalities (3.3) and (3.4) with R 1. For doing this, we need Lemma 4.2. Let q 2 and R 1, we have the following estimate where ω(k) = (1 + k) 2(n−1)(1/2−1/q) . We postpone the proof of this lemma for a moment. Note that for q ′ 2 p, we use (4.5), (2.4), the Minkowski inequality and the Hölder inequality to obtain where m = (n − 1)( 1 2 − 1 q ). In particular, for q = 2 and 4 q 6, this proves (3.3) and (3.4) when R 1. Hence it suffices to consider the case R ≫ 1 once we prove Lemma 4.2. Proof of Lemma 4.2. By scaling argument in variables t, x and (4.2), we obtain . . When R ≫ 1, inequality (3.4) is a consequence of the interpolation theorem and the following proposition. • For q = 6, we have Remark 4.1. It seems to be possible to remove the ǫ-loss in (4.8), but we do not purchase this option here because we do not need it in this paper. Proof. By the scaling argument and (4.2), it suffices to estimate the quantity . (4.10) In the following, we consider the three cases. For the first two cases, we establish the estimates for general q 4 so that we can use them directly for q = 6 later. (4.20) For the first purpose, we consider the operator where |h ν (r)| C/r. By a similar argument as in the proof of Lemma 4.1, it is easy to see Hence we have (1 + k) (n−1)/2 a k,ℓ (ρ) which implies (4.19). We next prove (4.8) in Proposition 4.1. We need to prove the following lemma. Lemma 4.6. Let R ≫ 1 and f ∈ L 1 , we have the following estimate for every 0 < ǫ ≪ 1 Proof. It suffices to estimate, by a scaling argument, the following quantity . (4.32) We divide the above integral into three cases. 
(4.34) On the other hand, by (2.11), one has |I ν (r)| r −1/2 when k ∈ Ω 3 . Consider the operator On the one hand, it is easy to see On the other hand, we have the claim that for any ǫ > 0 We postpone the proof of this claim to the end of this section. Hence, by the interpolation of the above two estimates, for any ǫ > 0, we obtain that H ν (a)(t, r) L 6 t,r (R×R n ) This shows (1 + k) 2(n−1)/3 a k,ℓ (ρ)ϕ(ρ) 2 . The proof of claim (4.35). The same argument in the proof the (4.20) shows the claim (4.35). Recall the kernel (4.23), it is enough to estimate the integral H ν (a)(t, r) 4 where we use the kernel estimate (4.24) and (4.26) in the first inequality. Local smoothing estimate K. M. Rogers [20] developed an argument showing that a restriction estimate implies a local smoothing estimate under some suitable conditions. For the sake of convenience, we closely follow this argument to prove Corollary 1.1. In fact, by making use of the standard Littlewood-Paley argument, it can be reduced to prove the claim Here we denote by F the Fourier transform. We also use the notationĥ to express the Fourier transform of h. Let h = (1 − ∆ θ ) −s/2 u 0 . Denote by P N the Littlewood-Paley projector, i.e. P N h = F −1 χ |ξ| N ĥ , χ ∈ C ∞ c ([1/2, 1]). By the Littlewood-Paley theory and the claim (5.1), one has for α > 2n(1/2−1/q)−2/q e it∆ h 2 Here we use Hölder's inequality for the third inequality, Sobolev imbedding for the fourth one. Hence we have e it∆ u 0 L q t,x ([0,1]×R n ) (1 − ∆ θ ) s/2 u 0 W α,q x (R n ) . Now we are left to prove claim (5.1). Assume suppf ⊂ [0, 1]. Note that e it∆ f = 1 (it) n/2 R n e i|x−y| 2 /t f (y)dy, ∀ t ∈ R\{0}. On the other hand, we have for t = 0 . By making use of Theorem 1.1, we obtain for q > 2(n + 1)/n and n+2 q = n p ′ e it∆f L q t,x (|t|∼N −2 ;|x| 1) f L p µ(r) (R + ;H s,p θ (S n−1 )) . where R ≫ 1, and f is frequency supported in unite ball B n . Then for all ǫ > 0 e it∆ f L q x (R n ;L r t (I)) Since q > p when q > 2(n + 1)/n, for any 0 < ǫ ≪ 1, we have by this lemma
4,345.6
2015-07-22T00:00:00.000
[ "Mathematics" ]
Field assessment of dog as sentinel animal for plague in endemic foci of Madagascar. BACKGROUND The epidemiology of Yersinia pestis, the causative agent of plague, involves vectors and reservoirs in its transmission cycle. The passive plague surveillance in Madagascar targets mainly rodent and fleas. However, carnivores are routinely surveyed as sentinels of local plague activity in some countries. PURPOSE The aim of this study is to assess the use of domestic dog (Canis familiaris) as sentinel animal for field surveillance of plague in a highly endemic area in Madagascar. PROCEDURES Cross-sectional surveys of plague antibody prevalence in C. familiaris were conducted in endemic areas with contrasting histories of plague cases in humans, as well as a plague free area. Rodent capture was done in parallel to evaluate evidence for Y. pestis circulation in the primary reservoirs. In two sites, dogs were later re-sampled to examine evidence of seroconversion and antibody persistence. Biological samplings were performed between March 2008 and February 2009. Plague antibody detection was assessed using anti-F1 ELISA. FINDINGS Our study showed a significant difference in dog prevalence rates between plague-endemic and plague-free areas, with no seropositive dogs detected in the plague free area. No correlation was found between rodents and dogs prevalence rates, with an absence of seropositive rodents in some area where plague circulation was indicated by seropositive dogs. This is consistent with high mortality rates in rodents following infection. Re-sampling dogs identified individuals seropositive on both occasions, indicating high rates of re-exposure and/or persistence of plague antibodies for at least 9 months. Seroconversion or seropositive juvenile dogs indicated recent local plague circulation. CONCLUSIONS In Madagascar, dog surveillance for plague antibody could be useful to identify plague circulation in new areas or quiescent areas within endemic zones. Within active endemic areas, monitoring of dog populations for seroconversion (negative to positive) or seropositive juvenile dogs could be useful for identifying areas at greatest risk of human outbreaks. This article is protected by copyright. All rights reserved. INTRODUCTION Plague, caused by Yersinia pestis, is a flea-borne zoonotic disease. It induces severe disease in rodent hosts which is characterized by epizootic periods that cause widespread rodents die-offs followed by quiescent period with little or no evidence of disease in rodent (Gage et al. 1995). Although most commonly associated with rodents, nearly all mammals can become infected with Y. pestis (Pollitzer 1954;Gage & Kosoy 2005). In 1898, rat-infested steamships from India brought plague to the seaport of Toamasina in Madagascar (Brygoo 1966). In the 1920s, plague reached the central highlands where it became endemic at altitudes above 800 m where the black rat (Rattus rattus), the most abundant small mammal, is the main plague reservoir. In the highlands, high human plague season extends from October to April (hot and rainy) when rat population are low due to low reproduction and plague epizootics while low human plague season occurs from May to September (cold and dry season) when rat reproduction is high. Maximum abundance of rodents in the field is observed in July and August, followed by the maximum abundance of fleas from September to November. 
Conversely in the west coastal plague focus of Mahajanga, which experienced 4 successive plague outbreaks from 1995 to 1998, outbreaks of human plague occurred during the dry and cold season (Andrianaivoarimanana et al. 2013). In Madagascar, plague surveillance (in humans and rodents) is a key priority of the Plague National Control Program (PNCP) established since 1993 . Among the objectives of the PNCP are the determination of plague activity in rodent populations in endemic areas and implementation of control measures to reduce human plague (Chanteau 2006). Rodent and human serology has become an important component in plague surveillance but is not necessarily representative of Y. pestis transmission since it is performed on surviving individuals. After an epizootic event, rodent populations will only be composed of resistant rodents and newly born susceptible rodents, while plague is a fatal disease in humans without prompt and appropriate treatment (Andrianaivoarimanana et al. 2019). Therefore, neither rodent serology nor human serology can represent the true extent of plague transmission. In such cases; identifying a suitable alternative surveillance approach is necessary. Animal sentinels may be used to detect pathogens or disease outbreaks in a new area, monitor changes in prevalence or incidence, or track expansion of a pathogen over time and space. The ideal sentinel would be susceptible to but also survive infection, and would develop a detectable and measurable response, whether clinical or immunological (Schmidt 2009). Dogs have been proposed as excellent sentinels for certain infectious-disease pathogens in Canada and are recommended for California serogroup viruses, other viruses, bacteria, and parasitic diseases surveillance (Bowser & Anderson 2018). Most studies of carnivores as potential sentinel animal for the detection of plague are done opportunistically (Salkeld & Stapp 2006). Carnivores are able to acquire Y. pestis infection by multiple routes of infection. They may become infected by bites from infected fleas or by ingesting infected prey (Barnes 1982;Thomas et al. 1989), but tend to develop asymptomatic or low severity disease. Dogs seem to have low susceptibility to plague and develop antibodies against Y. pestis, which may persist for several months (Rust et al. 1971). Thus, the serologic study of dogs can give an indication of plague circulation in the surveyed area. In Madagascar, people in remote areas usually have pet dogs but the animals are free-roaming and mainly follow the owner in the field. In these ways, dogs may be exposed to infection by exploring the surroundings, by feeding on garbage which may contain infected rodent carcasses and ectoparasites, and hunting domestic, peri-domestic, and/or wild small mammals. In this study, we assessed the use of domestic dogs (Canis familiaris) in plague surveillance in the context of Malagasy foci. We conducted a cross-sectional survey to assess the Y. pestis seroprevalence among C. familiaris and a serology follow-up (resampling) to assess the seroconversion among surveyed C. familiaris. We compared plague circulation among rodents and dogs populations. Study design and setting This study was conducted from March 2008 to February 2009. A cross-sectional survey of seropositivity of antibody against plague in dogs was conducted in 4 sites with a follow-up serology survey of dogs in 2 of these sites. 
For the sites with serology follow-up, initial sampling (session-1) occurred in April to May 2008 during the quiescent period of plague transmission to humans, with the follow-up sampling (session-2), conducted in September 2008 to March 2009, during the high human plague transmission period. Our surveys included 3 sites in plague endemic areas which differed in their plague endemicity levels. is a plague-free area located outside the limit of plague foci and included as the negative control site (Fig. 1). In Madagascar, the administrative breakdown is divided into 3 sub-units starting with the Fokontany (basic administrative sub-units), Commune, and District. Animal sampling Blood samples were collected from the saphenous vein on the hind leg of each dog with verbal consent and assistance from the owner. In the serology follow-up site, the same dogs were re-sampled during session-2. In parallel, rodents were captured according to our standard protocol (Rahelinirina et al. 2010). All captured rodents were euthanized and morphologically identified to the species level. Rodent handling was done in accordance with the directive 2010/63/EU of the European Parliament and of the Council (Official Journal of the European Union 2010) and the American Society of Mammalogists for the use of wild mammals guidelines (Sikes et al. 2016). Blood samples were collected either on sterile Eppendorf tube or on dried blood spot filters, and rodent spleen samples were stored in Cary-Blair transport medium for Y. pestis isolation. Laboratory analysis Y. pestis expresses a specific capsule-like surface antigen, the fraction 1 protein or F1 antigen which is highly immunogenic. Anti-F1 Ig G antibodies have been used for serological diagnosis of plague infection in animals (Rajerison et al. 2009;Tollenaere et al. 2010; Andrianaivoarimanana et al. 2012). An enzyme linked immunosorbent assay (ELISA) for anti-F1 IgG detection was performed on dog sera as previously described (Andrianaivoarimanana et al. 2012) with modifications. Briefly, anti-F1 IgG detection was assessed on a plate previously coated with F1 antigen diluted in carbonate buffer and in parallel with a plate coated with carbonate buffer alone (for background identification). Dog sera were diluted 1/100, 1 negative, 2 positive control sera (high and low titers), and 2 well control (without sera) were included in each series of experiments. An anti-dog IgG peroxidase conjugate (Byosis, 1:4000) was used for the revelation step. The mean optical density (OD) obtained against the coating buffer alone was subtracted from the OD against F1 antigen (delta OD). The threshold of positivity was set at 0.450 and samples were considered positive when the mean OD was above the defined threshold. Detection of anti-F1 IgG antibodies in rodent was conducted using modified (Dromigny et al. 1998) and previously described protocols (Andrianaivoarimanana et al. 2012). Rodent's spleen samples were tested using the rapid diagnostic test (RDT) for F1 antigen detection based on lateral flow immunochromatography (Chanteau et al. 2003) and those yielded positive result for RDT were subsequently assessed on bacteriology for Y. pestis isolation (Rasoamanana et al. 1996). Statistical analysis Prevalence rate, defined as the number of animals positive for anti-F1 IgG antibodies divided by the total number tested, was determined for dog and rodent from each site. Other indicators were studied, such as Y. pestis infection rate which is the ratio of captured rodents in which Y. 
pestis was isolated among the total tested. Correlation between rodent serology and dog serology was evaluated by Spearman's rank correlation. For Sites 1 and 2, changes in the anti-F1 seropositive rate for dogs between the 2 sampling periods were evaluated using a chi-square test. Significance was set at P < 0.05. RESULTS AND DISCUSSION A total of 107 dogs with a median age of 3 years (range: 1 month-15 years) and 414 rodents were sampled in the plague endemic study sites during the study period. Twenty-five (25) dogs with a median age of 0.67 years (range: 2 months-10 years) and 87 rodents were sampled in the control site (Table 1). As expected, no seropositive dog was observed in the plague-free control site (Table 1). Within the plague endemic area, prevalence rates ranged from 6% (95% CI = 10-27) in the coastal focus, to 48% (95% CI = 32-65) and 95% (95% CI = 76-99) in the central highland sites (Table 1). The difference in dog prevalence rate between endemic areas and the plague-free area was strongly significant (chi-square test, χ2 = 14.01, P < 0.001). As expected, no seropositive rodents were detected in the plague-free control site. There were also no seropositive rodents observed in the inactive coastal focus site (Table 1). Rodent prevalence rates ranged from 0% (95% CI = 0-2) to 28% (95% CI = 18-41) across the sampling events in the central highlands. No correlation was found between rodent and dog plague prevalence rates (Spearman's rank correlation r = 0.07, P = 0.84, Fig. 2). [Figure 2. Comparison between rodent and dog prevalence rates.] Our results highlight the value of using dogs as sentinels for detecting plague circulation in an area, compared to surveillance of the reservoir rodent population directly. As infected rodents usually die of infection during epizootics, surveillance of rodents can yield low prevalence rates, even in areas of recent active plague circulation (e.g. Site 2b), because sampled rodents will mostly be rats that escaped infection or newly born individuals. In contrast, as dogs typically survive plague infection and develop antibodies, prevalence rates tend to be higher than for rodents, and seropositive dogs can therefore provide evidence of plague circulation even when rodent sampling yields no seropositive individuals. This is highlighted by our data from the inactive coastal focus of Mahajanga (Site 3), where no seropositive rodents were detected but a seropositive dog, aged 4 months, indicated recent circulation of Y. pestis. Indeed, since sampling for this study was conducted, further sampling in this focus has isolated Y. pestis from a rodent (Rahelinirina et al. 2017), confirming that Y. pestis continues to circulate despite a lack of confirmed human plague cases since 2000. For the serology follow-up, among the 14 dogs seropositive at the session-1 sampling, 5 became seronegative at session-2, and 4 of the 10 dogs seronegative at session-1 became seropositive at session-2 (Table 2). The dogs that seroconverted from negative to positive are likely to indicate recent local transmission of plague in the rodent population during the time between the 2 sampling sessions. For Site 1, an increase in the prevalence rate of antibodies against Y. pestis among captured rodents was identified between session-1 and session-2, although it was not significant (Table 1). For Site 2, Y. pestis was isolated from captured rodents (one Y. pestis strain isolated from spleen culture among 2 RDT-positive rodents).
In this case, surveillance of dogs for seroconversion negative- positive would be a valuable predictive marker to estimate the risk of plague in humans. The finding that some dogs are seropositive in both sampling sessions is consistent with previous studies of other carnivores (Hopkins & Gresbrink 1982;Brinkerhoff et al. 2009). For seroconverted positive to negative dogs, 3 dogs from Site 1 were highly seropositive during the first sampling and became negative 9 months later. The naturally infected dogs in our study may be exhibiting persistence of anti-F1 IgG, on a time-scale consistent with previous studies of experimentally infected dogs, where antibodies persisted for at least 300 days after infection (Rust et al. 1971). Indeed, Site 1 is an isolated focus with human plague case observed on February 2008 after 20 years of silence (Rajerison 2008, unpublished data) suggesting that dogs' seroconversion from positive to negative status might be explained by the absence of antibody boosting production due to a low exposure of animals to plague. Alternatively, as some dogs in our study go from seropositive to seronegative between the 2 sampling occasions, some of the dogs with persistent antibodies may have been re-exposed to plague infection. CONCLUSION Dogs are useful as sentinel animals for plague surveillance as they typically survive Y. pestis infection, produce detectable levels of anti-F1 antibodies, and are longer lived than rodents. In contrast, as rodents often die following infection, surveillance based on rodents may yield "false negatives," where plague circulation goes undetected. Moreover, as effective surveillance could be achieved sampling fewer individuals, dog blood-sampling could be more cost-effective. In Madagascar, dog surveillance for anti-F1 antibodies could be useful in 2 ways. First, dogs could be used as sentinels in plague-free areas thought to be at risk from plague emergence or areas that are quiescent in terms of human plague occurrence. In such areas, plague may be present but at relatively low levels of transmission in the reservoir community, so that dog surveillance may be useful for picking up increased plague circulation before human cases occur. Secondly, to detect increased plague activity within active plague endemic areas, surveillance during the quiescent period of human plague could target juvenile dogs (<4 months or those born during the period between human plague seasons). Seropositivity among this population could indicate plague circulation among the rodent population a few months before sampling, and highlight areas at increased risk. Such surveillance could be focused on areas which experienced relatively low numbers of plague cases in the preceding plague season, but were close to areas with outbreaks of human plague and could therefore be at higher risk in the following season. This would provide an early warning of risk and allow in-time implementation of appropriate control measures. Further study using antibody titration in combination with ELISA anti-F1 IgG detection as well as an evaluation of maternal antibody persistence would help us to better understand persistence of antibodies in dogs and further optimize the use of older and younger dogs as sentinel for plague. ACKNOWLEDGEMENT Sincere thanks to Mrs. L Angeltine Ralafiarisoa for technical assistance and the staff of the Plague Unit for their assistance during sample collections. 
This work was funded by an internal research grant (Ref: PA 14.25) from the Institut Pasteur de Madagascar. This research was also funded in part by the Wellcome Trust [095171/Z/10/Z]. For the purpose of Open Access, the authors have applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission.
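As a concrete illustration of the Statistical analysis section above, the sketch below computes prevalence rates with binomial confidence intervals, a chi-square comparison between groups, and a Spearman rank correlation. The counts and prevalence values are placeholders rather than the study data, the exact (Clopper-Pearson) interval is an assumption (the paper does not state which CI method it uses), and the helper names are illustrative.

```python
import numpy as np
from scipy import stats

def prevalence_ci(positives, total, alpha=0.05):
    """Prevalence with an exact (Clopper-Pearson) two-sided confidence interval."""
    p = positives / total
    lo = stats.beta.ppf(alpha / 2, positives, total - positives + 1) if positives > 0 else 0.0
    hi = stats.beta.ppf(1 - alpha / 2, positives + 1, total - positives) if positives < total else 1.0
    return p, lo, hi

# Placeholder counts (NOT the study data): seropositive dogs / dogs sampled
endemic_pos, endemic_n = 52, 107
control_pos, control_n = 0, 25
print("endemic area :", prevalence_ci(endemic_pos, endemic_n))
print("control area :", prevalence_ci(control_pos, control_n))

# Chi-square test of dog seropositivity, endemic vs plague-free area
table = np.array([[endemic_pos, endemic_n - endemic_pos],
                  [control_pos, control_n - control_pos]])
chi2, pval, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {pval:.4f}")

# Spearman rank correlation between site-level rodent and dog prevalence rates
rodent_prev = [0.00, 0.05, 0.28, 0.10]   # placeholder site values
dog_prev    = [0.06, 0.48, 0.95, 0.30]   # placeholder site values
rho, p_rho = stats.spearmanr(rodent_prev, dog_prev)
print(f"Spearman r = {rho:.2f}, P = {p_rho:.2f}")
```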
3,795.4
2021-03-17T00:00:00.000
[ "Medicine", "Biology" ]
Different critical points of chiral and deconfinement phase transitions in (2+1)-dimensional fermion-gauge interacting model Based on the truncated Dyson-Schwinger equations for fermion and massive boson propagators in QED$_3$, the fermion chiral condensate and the mass singularities of the fermion propagator via the Schwinger function are investigated. It is shown that the critical point of the chiral phase transition is apparently different from that of the deconfinement phase transition, and that in the Nambu phase the fermion is confined only for small gauge boson mass. I. INTRODUCTION The chiral and deconfinement phase transitions of nonperturbative systems are important issues of continuing interest, both theoretically and experimentally. Although the mechanism is unknown, an originally chirally symmetric system may undergo a chiral phase transition (CPT) into a phase with dynamical chiral symmetry breaking (DCSB), which explains the origin of constituent-quark masses in QCD and underlies the success of chiral effective field theory [1,2]. In the chiral limit, the order parameter of the CPT is the fermion chiral condensate, defined via the trace of the fermion propagator (Eq. (1)). The two functions A(p 2 ) and B(p 2 ) entering that equation are related to the inverse fermion propagator S −1 (p) = iγ · p A(p 2 ) + B(p 2 ) (Eq. (2)). The deconfinement phase transition is then related to the observation of the free particle and also to the corresponding propagator. If the full fermion propagator has no mass singularity in the timelike region, it can never be on the mass shell and the free particle can never be observed, which is the situation in which confinement occurs [3]. Accordingly, the appearance of a mass singularity in the system directly implies deconfinement. In this way we can learn about the deconfinement phase transition from the analytic structure of the fermion propagator. To study DCSB and confinement, it is instructive to consider a model that reveals the general nonperturbative features while being simpler. Three-dimensional quantum electrodynamics (QED 3 ) is just such a model, which has many features similar to quantum chromodynamics (QCD), such as DCSB and confinement [2][3][4][5][6][7][8]. Moreover, its superrenormalizability obviates the ultraviolet divergence which is present in QED 4 . For these reasons, it can serve as a toy model of QCD. In parallel with its relevance as a tool through which to develop insight into aspects of QCD, QED 3 is also found to be equivalent to the low-energy effective theories of strongly correlated electronic systems. Recently, QED 3 has been widely studied in graphene [9][10][11] and high-T c cuprate superconductors [12][13][14][15]. The study of DCSB in QED 3 has been an active subject for nearly 30 years, since Appelquist et al. found that DCSB vanishes when the number of massless fermion flavors reaches a critical value N c ≈ 3.24 [16]. They reached this conclusion by solving the truncated Dyson-Schwinger equation (DSE) for the fermion propagator in the chiral limit. Later, extensive analytical and numerical investigations showed that the existence of DCSB in QED 3 remains the same after including higher order corrections to the DSE [17,18]. On the other hand, progress on the mass singularity and confinement in QED 3 stems from a paper by P. Maris, who found, using the truncated DSEs for the full fermion and boson propagators, that the fermion is confined at N < N c [3], where chiral symmetry is broken.
This result might imply that confinement and DCSB depend on the same conditions. Moreover, the authors of Refs. [2,19] pointed out that the restoration of chiral symmetry and deconfinement are coincident, owing to an abrupt change in the analytic properties of the fermion propagator when a nonzero scalar self-energy becomes insupportable. Nevertheless, the above picture is altered when the gauge boson acquires a finite mass ζ through the Higgs mechanism [20,21]. For fixed N (< N_c) and increasing boson mass, the fermion chiral condensate falls and vanishes at a critical value ζ_c (which, of course, depends on N), and chiral symmetry is then restored. Since DCSB and confinement are nonperturbative phenomena, both occur in the low-energy region and might disappear as the boson mass rises. Therefore, it is very interesting to investigate whether or not both phase transitions occur at the same critical point in this case. In this paper, we adopt the truncated DSEs for the full propagators to study the behavior of the mass singularity and of the fermion chiral condensate over a range of gauge boson masses and try to answer this question.

II. SCHWINGER FUNCTION

The Lagrangian for massless QED_3 in a general covariant gauge in Euclidean space can be written as

L = ψ̄(γ·∂ + ieγ·A)ψ + ¼ F_µν F_µν + (1/2ξ)(∂_µ A_µ)²,

where the 4-component spinor ψ is the massless fermion field and ξ is the gauge parameter. This system has chiral symmetry, with symmetry group U(2). The original U(2) symmetry reduces to U(1) × U(1) when the massless fermion acquires a nonzero mass due to nonperturbative effects. As mentioned in Sec. I, chiral symmetry is broken by the dynamical generation of the fermion mass (here N = 1).

If one adopts the full boson propagator, the Euclidean-time Schwinger function reveals that the fermion propagator has a complex mass singularity and thus corresponds to a state that is not physically observable [3], which signals confinement. On the contrary, if the Schwinger function exhibits a real mass singularity of the propagator, the fermion is observable and is not confined [24,25]. Therefore, we also adopt this method to analyze these nonperturbative phenomena. The Schwinger function can be written as

Ω(t) = ∫ d²x ∫ d³p/(2π)³ e^{i(p₃t + p⃗·x⃗)} B(p²)/[p²A²(p²) + B²(p²)],  (4)

with M(p²) = B(p²)/A(p²). If there are two complex-conjugate mass singularities m* = a ± ib associated with the fermion propagator, the function shows an oscillating behavior for large (Euclidean) t. If, however, the system possesses a stable observable asymptotic state with mass m, then Ω(t) ∼ e^(−mt), i.e. ln[Ω(t)] ∼ −mt, for large t. In this way, the analysis of the mass singularity can be used to determine whether or not the fermion is confined. Since the Schwinger function is determined by the fermion propagator, and the DSEs provide a powerful tool to study it, we shall use the coupled gap equations to calculate this function.

III. TRUNCATED DSE

Now let us turn to the calculation of A(p²) and B(p²). These functions can be obtained by solving the DSE for the fermion propagator,

S⁻¹(p) = iγ·p + e² ∫ d³k/(2π)³ γ_σ S(k) Γ_ν(p,k) D_σν(q),  (7)

where Γ_ν(p,k) is the full fermion-photon vertex and q = p − k. The coupling constant α = e² has dimension one and provides us with a mass scale. For simplicity, in this paper temperature, mass and momentum are all measured in units of α; that is, we choose natural units in which α = 1. From Eq. (2) and Eq.
(7), we obtain the coupled equations satisfied by A(p²) and B(p²). The other function involved, D_σν(q), is the full gauge boson propagator [20,21], in which Π(q²) is the vacuum polarization of the gauge boson, obtained from the polarization tensor, and ζ is the gauge boson mass acquired through the Higgs mechanism, which operates when the gauge field interacts with a scalar field in the phase with spontaneous gauge symmetry breaking. (Here, we adopt the massive boson propagator to investigate the oscillating behavior of the Schwinger function in the DCSB phase; more details about the Higgs mechanism in QED_3 can be found in Refs. [20,22].) Using the relation between the vacuum polarization Π(q²) and Π_σν(q²), we can obtain an equation for Π(q²) which has an ultraviolet divergence. Fortunately, the divergence is present only in the longitudinal part and is proportional to δ_σν. It can be removed by a projection operator, after which we obtain a finite vacuum polarization [5,23]. Finally, we choose to work in the Landau gauge, since it is the most convenient and commonly used one. Once the fermion-boson vertex is specified, we immediately obtain the truncated DSEs for the fermion propagator and can then analyze the deconfinement and chiral phase transitions in this Higgs model.

A. Rainbow approximation

The simplest and most commonly used truncation scheme for the DSEs is the rainbow approximation, since it gives rainbow diagrams in the fermion DSE and ladder diagrams in the Bethe-Salpeter equation for the fermion-antifermion bound-state amplitude. In the framework of this approximation, the coupled equations for the massless fermion and massive boson propagators reduce to three coupled equations for A(p²), B(p²) and Π(q²), with G(k²) = A²(k²)k² + B²(k²). By applying iterative methods, we can obtain A, B and Π.

B. Improved scheme for the DSE

To improve the truncation scheme for the DSE, there have been several attempts to determine the functional form of the full fermion-gauge-boson vertex [26][27][28][29], but none of them completely resolves the problem. However, the Ward-Takahashi identity provides an effective tool for obtaining a reasonable ansatz for the full vertex [30]. The portion of the dressed vertex that is free of kinematic singularities, i.e. the BC vertex, can be written in terms of A and B. Since the numerical results obtained using the first part of this vertex coincide very well with earlier investigations [18], we choose it as a suitable ansatz for our calculation. Following the procedure used in the rainbow approximation, we again obtain three coupled equations for A(p²), B(p²) and Π(q²) in the improved truncation scheme for the DSEs.

IV. NUMERICAL RESULTS

After solving the above coupled DSEs in the rainbow approximation by means of the iteration method, we obtain the three functions A, B and Π of the propagators and plot them in Fig. 1. From Fig. 1 it can be seen that A(p²) increases with increasing momentum and is almost equal to one at large p². In the range of small momenta it decreases, but it does not vanish as p² → 0. The other two functions, B(p²) and Π(q²), both decrease at large momenta, but their rates of decrease are different: B(p²) falls off as ∼ 1/p², while Π(q²) falls off as ∼ 1/√(q²). In addition, all three functions are constant in the infrared region.
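As a purely schematic illustration of the procedure just described (and not the calculation actually performed in this paper), the following Python sketch iterates a drastically simplified rainbow-level gap equation for B(p²) with A(p²) ≈ 1 and a fixed, one-loop-like screened massive-boson kernel, and then forms the Euclidean-time Schwinger function of Eq. (4) to check whether it decays monotonically (real mass singularity) or oscillates (complex-conjugate singularities). The toy kernel, the A ≈ 1 truncation, the grids and all numerical parameters are simplifying assumptions made only for illustration.

import numpy as np

e2 = 1.0                      # coupling alpha = e^2 = 1 sets the mass scale
zeta = 0.05                   # gauge-boson (Higgs) mass, in units of alpha
k = np.logspace(-4, 2, 400)   # radial momentum grid
cth, wth = np.polynomial.legendre.leggauss(32)   # nodes/weights for cos(theta)

def trapz(y, x):              # small trapezoidal rule, keeps the sketch self-contained
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def boson(q2):                # toy massive, screened boson propagator (an assumption)
    return 1.0 / (q2 + e2 * np.sqrt(q2) / 8.0 + zeta**2)

B = np.full_like(k, 0.1)      # initial guess for the fermion mass function
for _ in range(300):          # fixed-point iteration of the simplified gap equation
    Bk = B / (k**2 + B**2)    # scalar part of the propagator with A ~ 1
    newB = np.empty_like(B)
    for i, p in enumerate(k):
        q2 = p**2 + k[None, :]**2 - 2.0 * p * k[None, :] * cth[:, None]
        ang = (wth[:, None] * boson(np.maximum(q2, 1e-12))).sum(axis=0)
        newB[i] = e2 / (2.0 * np.pi**2) * trapz(k**2 * Bk * ang, k)
    if np.max(np.abs(newB - B)) < 1e-10:
        B = newB
        break
    B = newB

# Euclidean-time Schwinger function Omega(t) from the scalar part of the propagator.
p3 = np.linspace(1e-6, 2.0, 4000)
sigma = np.interp(p3, k, B / (k**2 + B**2))
t = np.linspace(5.0, 80.0, 200)
omega = np.array([trapz(np.cos(tt * p3) * sigma, p3) for tt in t]) / np.pi

if np.sum(np.diff(np.sign(omega)) != 0) > 0:
    print("Omega(t) oscillates: complex-conjugate mass singularities (confinement).")
else:                          # ln Omega(t) ~ -m t, so the tail slope estimates the real mass
    tail = slice(len(t) // 2, None)
    m = -np.polyfit(t[tail], np.log(np.abs(omega[tail])), 1)[0]
    print("Omega(t) decays exponentially: real mass singularity, m ~ %.4f" % m)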
From these solutions, the values of the functions A, B and Π at zero momentum can also be obtained; as functions of the gauge boson mass ζ, they are likewise shown in Fig. 1. Then, substituting the obtained A and B into Eq. (4), we immediately obtain the behavior of the Schwinger function at nonzero boson mass, which is shown in Fig. 2. At small ζ, the Schwinger function displays its typical oscillating behavior, which indicates complex-conjugate mass singularities of the form m* = a ± ib,

m* ∼ 0.043 ± 0.063i at ζ = 0.01,  (24)
m* ∼ 0.023 ± 0.025i at ζ = 0.06,  (25)

associated with the fermion propagator; the free particle can then never be observed and the fermion is confined. As ζ rises, the oscillating behavior persists until it vanishes at another critical value ζ_dc^R ≈ 0.068, around which neither propagator exhibits any singularity. Beyond ζ_dc^R, the function behaves as ln[Ω(t)] ∼ −mt, so that a stable asymptotic state of the fermion is observable,

m ≈ 0.021 at ζ = 0.07,  (26)
m ≈ 0.0041 at ζ = 0.09,  (27)

and hence the deconfinement phase transition occurs, while DCSB remains. With a further increase of ζ, the absolute slope of ln[Ω(t)] decreases, and m disappears at ζ_c^R. To confirm the difference between ζ_c and ζ_dc, we also show the behavior of the Schwinger function beyond the rainbow approximation in Fig. 3. In the BC1 truncation scheme for the DSE, the oscillation of the Schwinger function appears only at small ζ, indicating confinement, and it disappears at ζ_dc^BC1 ≈ 0.038, showing that the deconfinement phase transition occurs while the condensate ⟨ψ̄ψ⟩ is still nonzero. As ζ rises further, the Schwinger function shows a real mass singularity of the propagator, and chiral symmetry is restored when the boson mass reaches ζ_c^BC1 ≈ 0.071.

V. CONCLUSIONS

The primary goal of this paper has been to investigate the chiral and deconfinement phase transitions in an Abelian Higgs model through a continuum study of the Schwinger function. Based on the rainbow approximation of the truncated DSEs for the fermion propagator and on numerical model calculations, we have studied the behavior of the Schwinger function and of the fermion chiral condensate. It is found that, as the gauge boson mass rises, the vanishing point (ζ_dc) of the oscillating behavior of the Schwinger function is clearly smaller than the vanishing point of the fermion chiral condensate, and neither propagator shows any singularity near ζ_dc. To confirm the difference between the two critical points, we have also worked in an improved truncation scheme for the DSEs and shown that the above conclusion persists, although the two critical values change numerically. The result indicates that, with increasing gauge boson mass in this chiral model, the deconfinement phase transition occurs distinctly earlier than the chiral phase transition.

VI. ACKNOWLEDGEMENTS

We would like to thank Prof. Wei-min Sun and Guozhu Liu for their helpful discussions. This work was supported by the National Natural Science Foundation
3,173.8
2014-02-12T00:00:00.000
[ "Physics" ]
Analysis of Distribution Chain of Arabica Coffee in Semarang Regency in 2015

Article History: Received June 2016; Approved July 2016; Published August 2016.

INTRODUCTION

The development of the agricultural sector in Indonesia has proved very beneficial, as shown by the development results achieved so far. This is undeniable considering that Indonesia possesses very great natural resource wealth, which provides the potential and opportunities for developing agricultural businesses, including plantation crops. Plantation crop production is one source of foreign exchange in the agricultural sector (Sairdama, 2013: 45).

The important role of the plantation sector is indicated by GDP at constant prices in 2013, in which plantations ranked just after food crops; however, the plantation sector fluctuated, with an increase of 6.22% in 2012 that was not sustained in the following year. More information on the growth rate of GDP at constant prices from 2010 to 2013 can be seen in Figure 1 below. One plantation commodity with an important role in the national economy is coffee, which is widely cultivated by farmers and is also a leading export commodity. The livelihoods of 100 million people depend on coffee (Pendergrast, 1999, in Bunn et al., 2015). Among Indonesia's 34 provinces, one area whose land conditions are suitable for coffee plantations is Central Java Province. The popularity and worldwide appeal of coffee, which stems from its unique flavour, make it currently one of the most desirable and frequently consumed beverages (Ayelign & Sabally, 2013). Central Java has 31 regions that cultivate coffee, and the largest Arabica coffee areas are dominated by three regencies, namely Temanggung with 10,768 hectares, Semarang with 3,668 hectares, and Wonosobo with 3,263 hectares. Semarang Regency is thus among the three regencies with the largest coffee areas in Central Java, yet its coffee production has declined: in 2012 Semarang Regency produced 60.00 tons, but in 2013 production fell to 57.28 tons. The same applies to prices, where the margin between farmers and consumers is quite large, as shown in Table 1 below.
The importance of the Arabica coffee commodity for the farmers, indicated in Table 1, calls for a clear overview of the marketing process of Arabica coffee from the peasant producers to the final consumer. Sairdama (2013: 45), in a study entitled "Analysis of Arabica Coffee Farmers' Income and Marketing Margins in Kamu District, Dogiyai Regency", explains that the distribution chain of Arabica coffee involves several actors, namely the farmers, traders at the regional level, traders at the provincial level, retailers, and finally the consumers. The distribution of Arabica coffee from the production centers to the final consumer involves marketing agencies, and each of these agencies seeks to make a profit. The local coffee traders only sell at the central auction markets (Petit, 2007, in Gelaw et al., 2016). The profit taken by each marketing agency involved affects the marketing margin of Arabica coffee. Because of the parties involved in the distribution of Arabica coffee from the farmers to the consumers, the price received by farmers is relatively lower than the price paid by consumers; it is therefore necessary to know the pattern of flow, the actors, and the marketing margin of Arabica coffee in Semarang Regency. Accordingly, a study entitled "Analysis of Distribution Chain of Arabica Coffee in Semarang Regency" was conducted, which aims to determine the pattern of flow, the marketing actors, and the size of the margin in the distribution chain of Arabica coffee in Semarang Regency.

RESEARCH METHODS

To answer the problem formulation, the researcher uses a mixed method (Sarwono, 2011: 2). A mixed method is one that uses two or more methods drawn from two different approaches, namely the quantitative and the qualitative approaches. In this research, the loci are Getasan District and Banyubiru District, which have the largest Arabica coffee production value in Semarang Regency. The focus of this research is: (1) the distribution pattern of Arabica coffee in Semarang Regency; (2) the distribution value chain of Arabica coffee in Semarang Regency. The variable in this research is the marketing margin of Arabica coffee in Semarang Regency.

The data sources used in this research are primary and secondary data. The primary data were obtained through interviews and observations, using questionnaires, with 44 coffee growers, 5 coffee fruit gatherers, 2 coffee fruit wholesalers, 3 Arabica coffee traders, and Arabica coffee consumers. The secondary data were obtained from institutions and agencies. The data collection technique uses interviews and snowball sampling.

The data analysis methods used in this research are: (1) interactive qualitative analysis using the supply chain system, to determine the flow pattern of the supply system in the Arabica coffee agro-industry. A supply chain is a system that connects the suppliers of raw materials, the agro-industry, the traders, and the consumers; through these relationships the agro-industrial activities are expected to run smoothly and efficiently so that the Arabica coffee can reach the consumers. (2) Quantitative descriptive analysis using marketing margin analysis, which is used to measure the profits of each actor involved in the distribution process of Arabica coffee in Semarang Regency, as sketched below.
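A minimal sketch of this marketing-margin bookkeeping follows. The actor names match the channel patterns described in this study, but the prices are hypothetical placeholders (in Rp/kg), not the survey data; each actor's gross margin is its selling price minus its buying price, the total margin is the consumer price minus the farm-gate price, and the actors' profits would further subtract their marketing costs from these gross margins.

def channel_margins(prices):
    """prices: list of (actor, selling price in Rp/kg) ordered along the channel,
    from the farmer's selling price to the retailer's (= consumer) price."""
    actors = [a for a, _ in prices]
    p = [v for _, v in prices]
    margins = {actors[i + 1]: p[i + 1] - p[i] for i in range(len(p) - 1)}
    total = p[-1] - p[0]                        # consumer price minus farm-gate price
    shares = {a: round(m / total, 2) for a, m in margins.items()}
    return margins, total, shares

# Pattern I: farmer -> collector -> wholesaler -> retailer -> consumer
# (placeholder prices, for illustration only)
pattern_1 = [("farmer", 20000), ("collector", 40000),
             ("wholesaler", 47000), ("retailer", 55000)]
print(channel_margins(pattern_1))
# -> ({'collector': 20000, 'wholesaler': 7000, 'retailer': 8000}, 35000,
#     {'collector': 0.57, 'wholesaler': 0.2, 'retailer': 0.23})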
Figure 2 shows three marketing channels: Pattern I, Pattern II, and Pattern III. Marketing channel pattern I is mostly applied by the Arabica coffee farmers: the farmers sell their coffee to the collectors or middlemen, and the collectors then sell it to the wholesalers. The wholesalers distribute the coffee to the retailers in the market. In this marketing channel pattern, the traders or retailers are the last actors before the coffee reaches the hands of consumers. Marketing channel pattern II is essentially the same as pattern I; the difference is that in pattern II the role of the wholesalers is not visible. In this channel, the collectors sell the coffee directly to the traders or retailers in the market. This channel is usually used by farmers whose plantations are located closer to the market and to the collectors. The role of the collectors is clearly visible here, because they distribute the coffee directly to the traders or retailers in the market, and the traders or retailers then sell it to the consumers. In marketing channel pattern III, by contrast, neither the collectors nor the wholesalers are involved, because the farmers' land is close to the market. The farmers have contacts with the traders or retailers in the market, and this channel is dominated by farmers with fairly large landholdings. The important role of the farmers is very visible in this channel: the farmers sell directly to the traders or retailers in the market, and the retailers then sell directly to the consumers.

Based on the three patterns of distribution channels of Arabica coffee in Semarang Regency, it can be seen that the collectors play an important role in moving the coffee to the hands of consumers, since they make it easier for the farmers to sell their farm products. The collectors also distribute the Arabica coffee to the wholesalers and the retailers in the market. Besides participating in the sale and purchase, the collectors perform marketing functions such as sorting, drying, and delivering the coffee to the wholesalers and retailers in Semarang.

Table 2 above presents the marketing margin for pattern I. It is evident that the marketing margin between the farmers and the collectors is quite large, namely Rp 20,000/kg of Arabica coffee. This is because of the change in the form of the coffee itself: the farmers sell it in wet form, while the collectors process it into dried and sorted Arabica coffee beans (ose), which have a higher selling value. It is also because the marketing channel in pattern I is long. As a result, the profits received by each marketing agency vary; the biggest share in pattern I is received by the collectors, amounting to Rp 7,000/kg. The wholesalers get only Rp 3,500/kg, while the traders or retailers, as the last actors in this marketing channel, get only Rp 4,500/kg.

Table 3 above presents pattern II. The margin in pattern II is not much different from that in pattern I, but in pattern II the role of the wholesalers is not visible: the collectors sell directly to the traders or retailers in the market because they are close to the market. The biggest profit goes to the collectors, amounting to Rp 13,000/kg, because the collectors play an important role in distributing the coffee and in sorting it according to the quality expected by the traders or retailers before it reaches the hands of consumers, while the traders or retailers are the last actors in marketing channel pattern II before the coffee reaches the consumers. In pattern
III, it can be seen that the farmers sell the coffee directly to the traders or retailers in the market, so that there is only a small difference in price between the farmers and the consumers. This is because there is no role for the collectors and wholesalers as there is in patterns I and II. In pattern III only the traders or retailers in the market enjoy a profit, amounting to Rp 10,000/kg, as shown in Table 4 below.

CONCLUSIONS

The existing distribution pattern of the farm products has grown naturally in accordance with the needs and development of the actors. The actors present in this pattern are the farmers, collectors, wholesalers, retailers, and consumers. The selection of the marketing channel depends on the following: the land area, the amount of production, transportation, facilities, and the capabilities of the farmers. Farmers with few facilities and limited capability, and who are far from the market, tend to choose pattern I because they do not have to deliver the goods. Farmers located closer to the market, who tend to have larger production, prefer patterns II and III because of their higher selling prices. The longer the distribution chain pattern, the higher the margin between the farmers and the consumers.

Pruning the distribution pattern that has grown naturally should be done by restoring the function of the farmers' groups in order to facilitate activities related to Arabica coffee farming in Semarang Regency. With the return of the function of the farmers' groups, the farmers can receive information on market prices and help control the market price. The farmers' groups are also expected to improve the market structure and conduct and to manage the marketing network, which is expected to increase the farmers' income. The government should create a purely competitive market system by shortening the distribution chain, improving value-added products, and improving the bargaining position of the farmers.

Figure 1. Growth Rate of Gross Domestic Product at Constant Prices, Based on Business Field (%), 2010-2013. Source: Central Bureau of Statistics.
Figure 2. Marketing channels of Arabica coffee in Semarang Regency: Pattern I, Pattern II, and Pattern III.
Table 1. Price of Coffee Plantation Crops in Semarang Regency in 2015 (Rp). Source: Department of Plantation, Semarang Regency, 2015.
Table 2. Marketing Margin, Distribution Margin, Arabica Coffee in Semarang Regency, Pattern I.
Table 4. Marketing Margin, Distribution Margin, Arabica Coffee in Semarang Regency, Pattern III.
2,654.4
2018-03-14T00:00:00.000
[ "Economics", "Agricultural and Food Sciences" ]
Socioeconomic Indicators for the Evaluation and Monitoring of Climate Change in National Parks: An Analysis of the Sierra de Guadarrama National Park (Spain) : This paper analyzes the importance of assessing and controlling the social and economic impact of climate change in national parks. To this end, a system of indicators for evaluation and monitoring is proposed for the Sierra de Guadarrama National Park, one of the most important in Spain. Based on the Driving forces-Pressure-State-Impact-Response (DPSIR) framework, the designed system uses official statistical data in combination with data to be collected through ad hoc qualitative research. The result is a system of indicators that monitors the use of natural resources, the demographic evolution, economic activities, social interactions, and policies. Adapted to different contexts, these indicators could also be used in other national parks and similar natural protected areas throughout the world. This type of indicator system is one of the first to be carried out in Spain’s national parks. The result is a system that can be useful not only in itself, but also one that can catalyze climate change planning and management of national parks. Introduction Anthropogenic climate change, which is produced by greenhouse gas emissions from human activities added to natural climate variability [1], is one of the most serious problems of global environmental change faced by contemporary societies [2]. The need to identify the current and foreseeable impacts of climate change as well as its mitigation and adaptation presents challenges in scientific, political, economic and social spheres [2]. Among these challenges is addressing the potential impacts on national parks [3]. National parks are privileged spaces for monitoring climate change impacts [4][5][6][7]. As they are protected spaces in their biophysical characteristics and limited in their socioeconomic activities, they are easier to control than other spaces that are subjected to social and economic dynamics. In addition, high mountain areas-as is the case of the Sierra de Guadarrama National Park-are a good indicator of the possible effects of climate change on other parts of the planet, as they are particularly sensitive to global environmental changes [8,9]. Consequently, the identification, evaluation and monitoring of the impact of climate change on the park values (biological, cultural, etc.) is an important task for science and for identifying appropriate management actions [3,6,10,11]. There is already experience in monitoring systems with indicators related to biophysical conservation and evaluation of conservation management [12][13][14][15][16] as well as the impact of global environmental change on national parks [4,6,7,17]. However, monitoring the social systems that are both producing climate change and being impacted by climate change in national parks is much scarcer [3,[18][19][20][21]. There are fifteen national parks in Spain, and for only two of these-Picos de Europa and Sierra de Guadarrama-has a system of indicators for the assessment and monitoring of the socioeconomic impact of climate change been developed. Given the recent creation of these monitoring systems, they have not yet collected enough time-series of data to detect trends in any socioeconomic indicators. In this paper, we present the system of indicators developed for the Sierra de Guadarrama National Park. 
We first describe the special biophysical and cultural characteristics of the Sierra de Guadarrama. Secondly, we highlight the relevance of a system of socioeconomic indicators to evaluate and monitor climate change in the Sierra de Guadarrama National Park. Then, we explain the methodology used to develop the indicator system. Finally, we present the selected indicators, the conclusions, and some lines of discussion.

Sierra de Guadarrama: Object of Desire for Kings, Nobles, Clergymen and Novelists, Since the Middle Ages

The Sierra de Guadarrama National Park occupies 33,960 hectares, and is located in the mountain range of the Central System (Figure 1), forming part of the natural division between the northern and southern plateaus that make up the center of the Iberian Peninsula (Spain and Portugal). In addition, its peripheral protection zone is 62,687.26 hectares (this has its own legal regime, designed to promote the values of the park in its surroundings and to minimize the ecological or visual impact of the exterior over the interior of the park), and its legal area of socioeconomic influence is 175,593.40 hectares (Figure 2), the total area of the municipalities where the National Park and its Peripheral Protection Zone are located [22].

The Sierra de Guadarrama has been present in Spanish literature [23]; Sánchez Ferlosio and Vicente Aleixandre are among the authors who have referred to it. This is not surprising, as the Sierra de Guadarrama offers grandiose and majestic scenery, and thoroughly enigmatic settings [23] (p. 24).

The natural riches of the environs of Sierra de Guadarrama attracted the interest of kings, nobles and clergymen, who chose this area to build their palaces, fortresses, monasteries and churches, resulting in a wealth of heritage. Many of these attractions are inside or around the park, and are an addition to the park's appeal. Highlights include the Monastery of El Paular, the Castle of Manzanares, and the Royal Site of San Ildefonso [23] (p. 25).

This park is a representative sample of the natural systems of high Mediterranean mountains (Peñalara is the highest peak at 2428 m), as are its alpine grasslands and pastures, pine and Pyrenean oak forests, and peatlands, with glacial and periglacial modeling and the presence of unique reliefs and geological elements. The main ecosystems of the park are Pinus sylvestris pine forests on siliceous soils; high mountain lakes and wetlands; formations and reliefs of mountains and high mountains; the geomorphology of granite rock that distinguishes the shape of the unique relief and landscape; gall-oak and Pyrenean oak groves; supraforestal thickets, high mountain pastures, and high, woody, gravelly steppes; and forests of pine, savin juniper and juniper [24].

Its biophysical values have been internationally recognized. The park, besides being a national park, has, totally or partially, other forms of international protection. It is a Special Protection Area for Birds (SPA), parts of the park are included in two Biosphere Reserves (BR) (Cuenca Alta del Manzanares BR; Real Sitio de San Ildefonso-El Espinar BR), it is included in the International Ramsar List, and it is designated a Site of Community Importance (SCI) with 25 habitats of interest, four of which are priority habitats.
Spain occupies second place in the European Union's ranking of habitats of interest and third in that of priority habitats. The Sierra de Guadarrama is also characterized [25] by its floristic richness and contains a large number of threatened and/or endemic species. Its special climatic conditions and its location in the transition zone between the Eurosiberian and Mediterranean regions have favored processes of endemism; for example, in relation to flora, there are 40 species of interest.

In addition, the park has cultural values, such as the remains of traditional socioeconomic activities and trades (transhumant pastoralists, cowherds, stonecutters, ox drivers, charcoal workers, carters, neighbors, etc.), remnants of pastoral pastures on the top of the sierra, and the ruins of shearing ranches or the brick chimneys of old sawmills, among others. These remains bring us closer to a world of traditions that influenced the local culture for centuries and shaped the territory. It is also worth mentioning the Roman road that crosses the Park, and several drovers' roads and cattle routes dating from the Middle Ages, used to move the transhumant herds, millions of fine-wool Merino sheep that were marketed to other parts of the world. Today, most of these activities have been lost, although cattle are still kept for meat production. Tourism, based on the landscape, the values of nature and the cultural heritage, has become one of the main economic sectors in the area.

Despite its natural values, the area was not declared a national park until 2013 [24]. In order to meet the criteria required to reach category II of the IUCN, this law was modified in 2014 [24]. The process to acquire this category is still ongoing.
The first National Park in Spain dates back to 1916 (Covadonga National Park). The Park belongs to two autonomous communities (regional governments): sixty-four percent of its area corresponds to the Autonomous Community of Madrid, and a little over 36 percent belongs to the province of Segovia, in the Autonomous Community of Castile and Leon. There are 28 municipalities included within the geographical limits of the Park.

The aforementioned natural and cultural values, as well as the park's proximity (35 km) to the Madrid metropolitan area, tend to attract large numbers of people (3.8 million visits in 2014 [24]). This mass tourism produces one of the main challenges faced by the Sierra de Guadarrama National Park: the tension between the conservation of the park and the economic interests of the municipalities within the park or in the protection area surrounding it (28 included, plus 34 in its area of socioeconomic influence). This conflict became more visible in 2013, when the Sierra de Guadarrama was declared a National Park. Some argue that an excessive touristic focus was given to the detriment of the conservation objective [26], and that there is a lack of coordination between protection efforts and the pursuit of traditional activities [27].

The valuable ecosystem of the Sierra de Guadarrama National Park is under threat: on the one hand from global warming, to which the park is particularly vulnerable, and on the other from an existing tendency to prioritize the economic interests of local communities over the conservation of the park. Both pressures have the potential to interact: for example, changed land use by humans could exacerbate the effects of climate change on the natural and cultural resources of the park. Despite this, there has been very little evaluation and monitoring of, or mitigation of and adaptation to, climate change [2] in the Sierra de Guadarrama National Park [28]. The current process of drafting the obligatory Master Plan for the Use and Management of the Park could be an opportunity to address climate change, particularly its socioeconomic dimensions, more directly.

The System of Socioeconomic Indicators for the Evaluation and Monitoring of Climate Change in the Sierra de Guadarrama National Park

The aim of designing and operating a system of socioeconomic indicators for the evaluation and monitoring of climate change in the Sierra de Guadarrama National Park responds to the need for a sufficient set of data to monitor the short-, medium- and long-term effects of climate change on the social and economic sphere of this protected natural space. In the face of climate change, such monitoring is crucial for the development and implementation of plans [29][30][31] to conserve natural resources and the living conditions of the communities dependent on these resources. Those plans need to be based on an approach that seeks to increase the resilience of natural and social systems [32][33][34][35][36][37][38] in the face of considerable uncertainty about the specific changes that might occur and their timing and magnitude [29,39]. To do this successfully requires consideration of a potentially wide range of valued assets, whose vulnerability to different aspects of climate change will vary, and a range of interacting biophysical and social processes at a range of spatial scales [40]. Hence, an appropriate set of indicators needs to be carefully chosen to be able to track changes in the most important elements of these complex systems over relevant timescales and spatial scales.
The collected data will enable park managers to efficiently respond to a complex and changing natural and social environment [39,41]. However, managers and planners still have little guidance or training on how to address the social aspects of vulnerability to climate change in their management and planning [42,43]. This deficit can jeopardize the management strategies of these areas as well as the public support for them. Thus, the objectives of the research presented here have been (1) the definition of a system of indicators for the evaluation and monitoring of the impact of climate change in the social and economic environment of the Sierra de Guadarrama National Park, specifying those that may be generalizable to other national parks of similar characteristics; (2) the design and development of an updated database. Methodology Following Land and Spilerman [44], the indicators refer to those parameters (statistics, data and all forms of evidence) that allow us to evaluate where we are and where we are going, in relation to the objectives set. The variables and indices that have the characteristic of indicators are those sensitive to changes, whether they are of social or physical nature, and trends of natural or social origin. As a whole, the system of indicators should show the relationships between the elements of the system studied and the underlying interactions [45]. To address the research objectives, we drew on a range of relevant existing sources of information relating to conservation and management of national parks and natural resources in Spain. First of all, the legal framework on which the general objectives for national parks in Spain are based; these focus primarily on the protection of their biogeophysical values [46]. However, the sustainable development of the municipalities situated within the park's area of socioeconomic influence, is also considered by law [46]. With regard to monitoring, the National Parks Network of Spain [47] proposes to develop and maintain a monitoring and evaluative system for the ecological, socioeconomic and functional aspects of each park and the Network as a whole. In addition, we have considered both the criteria and indicators for sustainable forest management in Spanish forests [48] and the evaluation of public use of national parks in Spain by the Autonomous National Parks Organization [49], the System of Indicators for the Evaluation and Monitoring of the Socio-economic Impact of the Impact of Global Change in the Picos de Europa National Park, as well as the system proposed for the Integrated Assessment of Protected Areas of the region of Madrid [16], among other sources. Then, taking all of these aspects and arguments into account, the indicators selected were based on the following criteria. Firstly, they were selected according to the socioeconomic characteristics of the park's municipalities and the area of influence and the availability of the data [50]. 
Secondly, we considered the functions that the indicators should fulfill [51]: the continuous recording of the dynamics of the socioeconomic system and the analysis of the trends of change, either by natural or social causes; the improvement of the knowledge of the system, through the compilation or generation of new information regarding the social and the economic impact of climate change on the national park; the forecast for specific and/or global changes in the system, especially alterations or damage due to unexpected events; the identification, where appropriate, of the effects of management practices on the dynamics of social systems, and detection of undesirable effects. To do this, the research team took into consideration literature analysis, existing accessible statistical information, the park management office's annual reports, and those indicators that may be more sensitive to change. Finally, the focus was on the concordance of the preceding two criteria with the overall goal of progressing towards the sustainable development of the communities that influence or are influenced by the National Park [52][53][54], according to the United Nations sustainable development goals. The indicators developed here are the result of a selection from the many possibilities resulting from the great complexity of the natural and social systems that intertwine in protected natural spaces. This selection, made using rigorous and explicit criteria, has been necessary in order to obtain a number of indicators not too large in order to maximize the information and minimize the cost. To this end, we considered the extent to which the indicators are specific and unequivocal, easy to interpret, accessible, significant and relevant, sensitive to change, valid, verifiable and reproducible, and, above all, useful tools for action. A balance has also been sought between the indicators of general use relating to protected natural areas and those developed for the particular case of the Sierra de Guadarrama National Park. The use of general-purpose indicators allows comparison between different protected areas and their integration into larger monitoring projects, and therefore the achievement of relevant time series. Different indicator systems use alternative frameworks for impact analysis and sustainability [55]. In this case, we have used the Driving forces-Pressure-State-Impact-Response (DPSIR) framework, developed by the European Environment Agency (EEA) [56], which is the one used by the Spanish Ministry of Agriculture, Fishing, Food and Environment to elaborate the Water Indicators System [57]. The EEA defines "Driving forces" as "the social, demographic and economic developments in societies and the corresponding changes in lifestyles, overall levels of consumption and production patterns" [56] (p. 8). "Pressure" indicators describe the "developments in release of substances (emissions), physical and biological agents, the use of resources and the use of land" [56] (p. 9). Pressure indicators are outside the scope of this study. Climate change is a global process that is barely affected by the activities taking place in the park, and the focus of our indicator framework is on impacts and adaptation. Therefore we do not consider it necessary to develop indicators to monitor factors (emissions of CO 2 and other greenhouse gases) that cause climate change. 
Nor do we consider it necessary to create additional indicators to monitor climate change itself (for example, changes in temperature and rainfall), as the park has weather stations with continuous meteorological instruments installed and annual reports are kept. We have focused instead on the identification of indicators for (1) the "State" category, a description of the quantity and quality of socioeconomic phenomena in the studied area; (2) the "Impacts", the changes in the social, economic and environmental dimensions, which are caused by changes in the "State" of the system; and (3) society's "Response" to change the pressures and the state of the environment so as to address the problem in question, as illustrated in Figure 3.

"Impact" indicators will provide data about change in the "State", but it will not be possible to establish, a priori, a causal relation, since the park's socio-ecological system is affected by other factors as well. As the Intergovernmental Panel on Climate Change (IPCC, 2014) concludes, "many processes and mechanisms are well understood, but others are not. Complex interactions among multiple climatic and non-climatic influences changing over time lead to persistent uncertainties, which in turn lead to the possibility of surprises" [2] (p. 151). Further, the impact of important socioeconomic factors could emerge in the medium or long term [58], depending as well on the adopted mitigation and adaptation measures. Even so, "for most economic sectors, the impacts of drivers such as changes in population, age structure, income, technology, relative prices, lifestyle, regulation, and governance are projected to be large relative to the impacts of climate change" [58] (p. 19). This emphasizes the importance of evaluation and monitoring systems for climate change, in this case for its socioeconomic impact on the national park.

We propose a system based on a basic chain of causality among the indicators and their mutual dependence. This is achieved by indicating, for each indicator, which other indicators we consider it to be related to. The starting assumption is that the object of evaluation and monitoring is a system, formed by a series of elements interrelated with each other through different processes [59,60]. Niemeijer and Groot [61] consider it important to advance the development of indicators from causal chains to causal networks, that is to say, including all systemic interrelationships between indicators. This approach enriches but also complicates the issue.
In any case, it is a question of finding the appropriate balance of indicators to identify relevant trends of change for policy-making and to explain the overall functioning of the system and its remoteness from, or approximation to, sustainability [46]. This approach also makes it feasible to inform civil society and support communication with society [62,63]. Finally, it must be taken into account that the use of the selected indicators requires continuous revision. The indicators proposed here are just the beginning of a monitoring system that will enable the model to be adjusted to better address its multi-causal dimension.

A final methodological issue regards the information used to elaborate the Sierra de Guadarrama National Park indicators. To a large extent, data have been gathered from official statistical sources. This is a limitation, as the collected data lacked, in some cases, the level of disaggregation necessary for some of the indicators. Even so, these indicators have been maintained for their role in the whole system, and it is expected that the information will be provided in the future.

Results

The indicators presented here have been elaborated to fit the socio-economic conditions of the Sierra de Guadarrama National Park. However, they could be adapted and used in other protected natural areas. The category "State" has been labelled as "Receptor Environment" (RE) in this indicator system. Taking into account the socio-economic characteristics of the Sierra de Guadarrama National Park, the following categories have been proposed, differentiating "group" and "subgroup":

1. Use of natural resources: … b. Agrarian resources use; c. Water use; d. Energy use; e. Waste treatment
2. Demography: a. Population and its characteristics; b. Activity, occupation and unemployment
3. Economy: a. Employment in productive activities; b. Tourist activity; c. Public investments; d. Income and transfers
4. Society: a. Education; b. Health; c. Quality and living conditions
The indicators of the "Impact" (SI) are those of the future "State", that is to say, they consider the changes over the period taken into consideration. Finally, the indicators of the "Response" are those covering mitigation of and adaptation (M&A) to climate change. Two levels have also been differentiated here, "group" and "subgroup": …; Social perception; c. Training, qualification and participation; d. Social research.

Table 1 has been designed for each of the indicators and includes: the name of the indicator; the frame of reference; the "group" and "subgroup"; the objectives it pursues; its justification; the measurement parameters or variables that define it; the data source; the scope and period to which they refer; and the relation with other indicators. All this is part of the necessary monitoring protocol to ensure its quality. As a result, we have developed seventy-nine indicators altogether, which are listed in the table below. It contains thirty indicators regarding the biophysical and socio-economic environment (State) that could be affected by the impact of global and climate change (RE); twenty indicators regarding the future socioeconomic "Impact" of climate change (SI); and twenty-nine indicators regarding the measures (Response) to mitigate and to adapt to climate change (M&A). All these indicators are available on the internet and can be accessed following the links provided in the Supplementary Materials at the end of this article. Table 2 shows a list of all the indicators developed and the indicators with which they are related.

As an example, the Table 1 record for the agrarian-area indicator is as follows.
Objective, definition and justification of the indicator: It comprises the agricultural and livestock exploitations within the territory, which include the strata of agricultural crops, scrub, pasture, and grassland of the National Forest Inventory. It seeks to reflect uses of the territory that do not entail an irreversible transformation of the national park.
Measurement parameters: Percentage of the agricultural and livestock area with respect to the total area of the park.
Calculation formula: Agrarian area multiplied by 100 and divided by the total area.
Unit of measurement: Percentage rate, the result of dividing hectares by hectares.
Possible disaggregations: By municipalities of the park.
Source of information: III National Forest Inventory; data for the Sierra de Guadarrama National Park.
Referred area: Territory included within the delimitation of the national park.
Data availability: Upon request to the Management Office of the national park.
Measurement periodicity: That corresponding to the updates of the National Forest Inventory.
Responsibility for the veracity of the data: Ministry of Agriculture, Food and Environment.
Values of the indicator for the different areas and periods: by year.

Discussion

The first general conclusion is that the corpus of scientific and empirical knowledge on the social and economic impact of climate change on national parks is scant. However, the study of the socioeconomic impact of climate change in national parks is relevant because climate change is one of the most important challenges faced by today's society. Moreover, climate change impacts people in and around parks, and people's responses to such impacts could also affect the natural values of the parks. Thus, it is considered necessary to evaluate and monitor these impacts with a systematic scientific approach aimed at understanding the parks' interconnected biophysical and social systems.
This requires the development of more ad hoc theoretical and methodological tools for national parks. This work is oriented in that direction, although it is limited to indicators and data that nevertheless need to be tested and adjusted in the future. For diverse reasons, the indicators elaborated for the Sierra de Guadarrama National Park vary in their level of detail, owing among other things to the lack of sufficiently disaggregated statistical information and to the primary research that qualitative indicators require. This primary research is particularly important in order to extend the system to process indicators, which remain limited in this work as in many similar efforts. Such process indicators make it possible to examine social phenomena, such as relationships between social groups or the social perception of sustainability trends in the area of study. The system of indicators we propose will have to be adjusted in the future, as the processes of interaction between the biophysical and social systems of the Sierra de Guadarrama National Park become better known. The same will also be necessary in other national parks with similar characteristics.

A system of indicators for the monitoring and evaluation of the social and economic impacts of climate change has the potential to go beyond simple reporting. It can provide information on whether the situation improves or worsens, recedes or progresses, increases or decreases. Socio-ecological systems are generally multi-causal and differ depending on the characteristics of the area. Thus, it cannot be determined a priori whether such changes have been caused by climate change or by any other factor or combination of factors. Nonetheless, continuous evaluation will allow us to deepen the understanding of the causal relationships between changes in climatic conditions, changes in the socio-ecological system, and changes in the natural and cultural values of the park. For example, climate change is a direct cause of drought, termed "meteorological drought", in addition to drought driven by human action, or "hydric drought" (water infrastructures, responsible human water use or consumption, etc.), which could also have a relevant impact on the economy (agriculture, industry, tourism, etc.), the environment (evolution of fauna and flora, territory, etc.), the population's living conditions (consumption, transport, lifestyles, etc.) and environmental values and attitudes (the response dimension). However, this needs to be tested over time.

Ideally, the information provided by a system of socio-economic indicators relating to climate change can be integrated into the management planning of national parks to improve decision-making. Moreover, a study on indicators could be the catalyst for the development of comprehensive climate change adaptation plans for individual national parks and protected area networks, which are still limited or non-existent in most national parks in Spain and many other parts of the world.

Conclusions

In conclusion, the interpretation of changes in the monitoring process of complex climate-related changes in protected areas is a major challenge. The evaluation of the interactions between climate change and the socio-ecological changes in the park and its area of influence requires a holistic approach and a sufficient time series of data. The set of socio-economic indicators we have developed provides a framework for collecting and interpreting such data, and so will help to inform adaptation planning for the Sierra de Guadarrama National Park.
The approach we have taken could also be applied in other similar national parks.
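As a minimal illustration only (not part of the published indicator system), the following sketch shows how an indicator record of the kind described in Table 1 could be represented and its value computed. The field names are a reduced paraphrase of the Table 1 fields, the 5,000 ha agrarian-area figure is a placeholder, and only the calculation formula (agrarian area multiplied by 100 and divided by the total area) and the 33,960 ha park area come from the text.

from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    category: str                              # RE (State), SI (Impact) or M&A (Response)
    group: str
    subgroup: str
    unit: str
    source: str
    related: list = field(default_factory=list)   # names of related indicators (cf. Table 2)
    values: dict = field(default_factory=dict)    # year -> value

def agrarian_share(agrarian_ha: float, total_ha: float) -> float:
    """Calculation formula from Table 1: agrarian area * 100 / total area."""
    return agrarian_ha * 100.0 / total_ha

agrarian_area = Indicator(
    name="Agricultural and livestock area",
    category="RE",
    group="Use of natural resources",
    subgroup="Agrarian resources use",
    unit="% of total park area (ha/ha)",
    source="III National Forest Inventory",
)

# Placeholder input: 5,000 ha of agrarian area; 33,960 ha is the park area cited in the text.
agrarian_area.values[2020] = round(agrarian_share(5000.0, 33960.0), 1)
print(agrarian_area)   # Indicator(name='Agricultural and livestock area', ..., values={2020: 14.7})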
7,376.6
2018-02-12T00:00:00.000
[ "Economics" ]
Design, Analysis, and Control of a Stiffness-Tunable 3-DOF Rubber-Bearing Positioning Stage

A 3-DOF Y-θx-θz rubber-bearing stage with stiffness-adjustment capability is designed, realized, and controlled in this work. The stiffness of the rubbers can be adjusted easily by adding a preload from different directions. Based on an analysis of elastomeric materials, the stiffness of the rubber, and therefore the dynamic characteristics of the stage, can be changed with such a preload. In comparison with positioning stages designed by the compliant-mechanism approach, this characteristic offers an opportunity to improve performance by replacing the compliant bearing with a rubber one. The dynamic testing results show that, with a compression of 0.8 mm on a 1.5 mm thick rubber, the stiffness and natural frequency increased by 100% and 30% in the Y-axis, by 15% and 7% in the θx-axis, and even by 510% and 140% in the θz-axis, respectively. Associated PID and integral sliding mode controllers are then implemented to characterize the position-control performance. Currently, the rubber-bearing stage achieves control bandwidths of 79 Hz and 110 Hz in Y-axis motion using the PID controller and the integral sliding mode controller (ISMC), respectively. In the future, the effect of different pre-compression displacements on the dynamic characteristics of the stage, such as stiffness, natural frequency and damping ratio, will be tested. Additional control design for multi-axis coupling will also be addressed to evaluate the performance in precision-motion-related applications.

Introduction

Positioning stages have been widely used in manufacturing systems and precision metrology (1). For example, fast steering mirrors, which are used in laser-manufacturing-related applications, comprise rotational positioning stages (2). Such stages require high precision, high bandwidth, and high resolution in order to enhance manufacturing capability. Traditionally, these stages are designed and realized based on compliant mechanisms (3), which provide stiffness through their compliant structure. Compliant stages usually utilize beam flexural deformation to obtain the required compliance, which depends on the geometry and material properties. For example, to achieve a highly compliant structure, the structural length may need to be long, and it is not adjustable once the structure has been realized. If the dynamic characteristics of the stage need to be adjusted, the shape of the compliant structure must be redesigned, which is inconvenient.

Elastomeric bearings (also known as rubber bearings) are essentially rubber-metal laminate structures. They have been used for many years in building technologies to reduce vibration (4). Recently, this kind of elastomeric material has been integrated into precision stage design. By changing the shape factor of the rubbers, the stiffness under compression can be made much higher than that in the shearing direction (5). With proper design, a strong stiffness anisotropy can be obtained, such that the motion in the shear direction is flexible while the motion in the compression direction is blocked effectively. However, elastomeric bearings are viscoelastic: their behavior depends on the mechanical properties of the material, the operating conditions and the geometry, and is also influenced by temperature. It is therefore important to perform careful characterization in order to take full advantage of this approach. For example, Kulk (6) designed a fast steering mirror for optical communication.
With elastomeric bearings, the system exhibited a stroke of 3.5 mrad of angular motion with a 10-kHz bandwidth. Teng (7) developed a one-dimensional elastomeric-bearing positioning stage that provides a 139 μm stroke. With a PID controller the stage achieves a 27-Hz bandwidth, and the bandwidth is increased to 350 Hz by using an integral sliding mode controller (ISMC). In our previous work, a 3-DOF elastomeric-bearing stage was also developed (8); its bandwidths are 80 Hz, 62 Hz, and 54 Hz in the Y-, θx-, and θz-axes. That work also showed that the shearing stiffness and natural frequency can be adjusted by adding a preload in the compression direction (8). This is an advantage of rubber-bearing stages: the dynamic model can be tuned easily. It is therefore important to design a stage with adjustable preloading so that the effect of preloading on the dynamic characteristics and control performance can be evaluated systematically for future applications. The motivation of this research is, therefore, to realize a novel 3-DOF stage that can apply a preload to the rubber bearings so that its dynamic characteristics become easily adjustable. In this work, we apply viscoelastic dynamics to develop system models of the stage. Meanwhile, two controllers have also been designed for the positioning experiments. Note that although our previous investigation (8) dealt with all possible deformation types, this work focuses on the shearing stiffness only; exploring the rubber dynamics subjected to other types of preload is left as future work. The research flow is shown in Fig. 1. The first step is to establish the rubber-bearing stage with preload adjustment, followed by dynamic characterization to obtain the system model. Furthermore, a dynamic experiment with added preload is demonstrated to show how the stage dynamics can be adjusted in the Y-, θx-, and θz-axes. Next, a PID controller and an integral sliding mode controller (ISMC) are designed and used for positioning experiments in the Y-axis. The rest of this article presents the work in detail. The conceptual design of the stage and its realization are presented in Section 2. Next, the system dynamic tests are presented in Section 3. In Section 4, the controller design is presented, followed by the positioning experiments addressed in Section 5. Finally, Section 6 concludes the paper. 3-DOF Positioning Stage A 3-DOF rubber-bearing positioning stage has been developed (8). This stage consists of one translational (Y-axis) and two rotational (θx- and θz-axis) DOF; the schematic of the stage design is shown in Fig. 2. As shown in Fig. 3a, three AVM40-20 voice coil motors (VCM) are used to actuate the stage. Three ASP-50-CTA capacitive probes are the sensors that capture the stage motion. Four rubber pads (15×15×1.5 mm³) are attached to an aluminum block (30×30×30 mm³) that forms the main body of the stage. The stiffness of the rubber-bearing stage (ky, kθx and kθz) is expressed in terms of the shearing stiffness (ks), torsional stiffness (kt), and bending stiffness (kb) (8). The measured stiffnesses of the rubber bearing are ks = 120 N/mm, kt = 3.81 Nm/rad, and kb = 11.25 Nm/rad. The equation of motion of the stage can then be obtained. The coordinates of the forces and measured displacements need to be transformed between the three voice coil motors (F1, F2 and F3) and the three capacitive sensors (Y1, Y2 and Y3). These relations are given in Chen's thesis (8) and depend on the arm lengths of the force application points (af) and the measurement points (ay).
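Before turning to the preload-adjustable design, the following minimal Python sketch illustrates why added stiffness raises the natural frequency, using the idealized single-axis relation f_n = √(k/m)/(2π). The lumped 1-DOF model and the moving mass used below are illustrative assumptions, not identified parameters of the stage; only the shear stiffness value ks = 120 N/mm is quoted in the text.

```python
import numpy as np

def natural_frequency_hz(k_N_per_m: float, m_kg: float) -> float:
    """Undamped natural frequency f_n = sqrt(k/m) / (2*pi) of a 1-DOF model."""
    return np.sqrt(k_N_per_m / m_kg) / (2.0 * np.pi)

# Illustrative values only: ks = 120 N/mm is quoted in the text; the moving
# mass below is a hypothetical placeholder, not an identified stage parameter.
k0 = 120e3          # shear stiffness before preload [N/m]
m = 0.07            # assumed moving mass [kg] (hypothetical)

f0 = natural_frequency_hz(k0, m)
f1 = natural_frequency_hz(2.0 * k0, m)   # stiffness doubled (+100%)
print(f"f_n before preload: {f0:.0f} Hz, after +100% stiffness: {f1:.0f} Hz "
      f"({100 * (f1 / f0 - 1):.0f}% increase in the ideal model)")
```

In this ideal model a 100% stiffness increase yields about a 41% rise in natural frequency; the smaller measured increase reported for the Y-axis (about 30%) plausibly reflects the viscoelastic, multi-DOF nature of the real stage.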
The rubber bearings of that positioning stage could not be preloaded, which means its dynamic characteristics were not adjustable. In contrast, a rubber-adjustment design has been installed on the present stage: as shown in Fig. 3, four mechanisms are used to compress the rubbers, which allows the dynamic characteristics of the stage to be changed. Rubber Stiffness Adjustment Design The rubber stiffness is characterized by a material testing system called HSC (8), which applies a compressive load in the compression direction of the rubber by means of a step motor. It can test the shear stiffness under different preloads. The experimental data show that the shear stiffness increases with preload; furthermore, the natural frequency and damping ratio of the rubber bearing also change when a preload is added. From this result we can see that the rubber properties can be adjusted easily under different loading conditions. A new 3-DOF rubber-bearing stage has been developed, as shown in Fig. 3. Smaller AVM30-15 VCMs have been chosen to make the mass of the stage lighter. The parameters af and ay are 62.5 mm and 50 mm. Fig. 4 shows the design by which the rubbers can be compressed and fixed by screws in four directions. Compared with the HSC system, not only the shear stiffness is relevant to the system dynamics; the bending stiffness and torsional stiffness also need to be considered. The entire control flow is shown in Fig. 5. System Dynamics Modeling The system model is established from step and sinusoidal responses. Since rubbers are viscoelastic materials, a generalized Maxwell model is used to represent a time- and frequency-varying spring K(s), which combines three linear springs (k1, k2 and k3) and two dampers (c2 and c3) (9). The stage model is shown in Fig. 6. The system model can then be expressed in transfer-function form, where the time-dependent stiffness Kij represents the creep effect of the rubber-bearing stiffness (7). Gv denotes the first-order system of the voice coil motor, whose cut-off frequency is 300 × 2π rad/s, and Gij represents the transfer function between the i-axis output and the j-axis input, with units of μm/V for the translational axis and mrad/V for the rotational axes, respectively. The system models are established by curve fitting; for example, the parameters of Gyy are shown in Table 1. A comparison of the simulation and the experimental results is shown in Fig. 7. The creep effect of the rubber makes the stiffness of the stage decrease. The coupling effects are also established by fitting step and sinusoidal responses; the transfer-function models of the coupled axes are much more complicated. Fig. 8 and Fig. 9 show the simulation and experimental results for Gxy and Gzy. Dynamic experiment with preload In this part we add a preload to compress the rubber bearings and show how the dynamic models are influenced. The rubbers are preloaded and compressed by 0.8 mm in the θx-direction as shown in Fig. 9. In this situation, compressing ks changes ky, kθx is affected by the compressed ks and kt, and kθz is influenced by kb and ks. Sinusoidal response tests show that the transfer functions Gyy, Gxx and Gzz change; the results are shown in Fig. 10. The stiffness and natural frequency increase when the preload is added. Table 2 shows the comparison of the dynamic characteristics. Gzz shows a significant change in both stiffness and natural frequency; on the other hand, the dynamic characteristics of Gxx do not change as much. Controller Design Two controllers are used for stage control: a PID controller and an integral sliding mode controller. PID control is selected as the baseline of control performance; a minimal discrete-time sketch of such a controller is given below.
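The following is a minimal, self-contained sketch of a discrete-time PID law of the ideal parallel form u = kp(e + (1/Ti)∫e dt + Td de/dt). The interpretation of Ti and Td as integral and derivative times, and the absence of derivative filtering and anti-windup, are assumptions; the gain values shown are those reported later for "PID-a" in the Control Result section, used here purely as example numbers.

```python
class PID:
    """Ideal-form PID: u = kp * (e + (1/Ti) * integral(e) + Td * de/dt).

    The interpretation of Ti as an integral time and Td as a derivative time
    is an assumption; the paper only lists the numeric gain values.
    """

    def __init__(self, kp: float, Ti: float, Td: float, dt: float):
        self.kp, self.Ti, self.Td, self.dt = kp, Ti, Td, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * (error + self.integral / self.Ti + self.Td * derivative)


# Example with the gains quoted later for "PID-a" (Fs = 10 kHz sampling).
Fs = 10_000.0
pid = PID(kp=3.0, Ti=0.7 / Fs, Td=0.5 / Fs, dt=1.0 / Fs)
u = pid.update(setpoint=15.7, measurement=0.0)   # 15.7 um step command
print(f"first control output: {u:.1f} (arbitrary drive units)")
```

For a real stage a derivative filter and output saturation with anti-windup would normally be added on top of this basic form.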
After the PID controller is established, integral sliding mode control is applied to seek a better control result. PID control The PID controller is widely used in control applications (10). With an appropriate design, the resonance peak in the Bode diagram can be suppressed effectively. The traditional PID control law can be written as $u(t) = k_p \left[ e(t) + \frac{1}{T_i}\int e(t)\,dt + T_d \frac{de(t)}{dt} \right]$, where kp is the proportional gain, Ti is the integral gain and Td is the differential gain. Integral sliding mode control On the other hand, an integral sliding mode controller (ISMC) is also designed (11). A sliding function s(x,t) is chosen, defined in terms of the tracking error x̃ between the stage position x and the position command xd. The ISMC has four tuning parameters; please refer to Deng's work (7) for the details of determining the ISMC control parameters. Control Result The NI cRIO-9014 FPGA is chosen as the real-time control platform for implementing the controllers and conducting the positioning experiments. The sampling rate (Fs) of the system is 10 kHz. A 15.7 μm step input in the Y-axis direction is commanded, and both step and sinusoidal responses are tested. Comparison with different dynamic models Three control results with the PID controller are shown in Fig. 11. First, controlling the stage with PID gains kp = 3, Ti = 0.7/Fs and Td = 0.5/Fs achieves a rise time of 6 ms and a settling time of 25.3 ms with 5.1% overshoot (PID-a). Second, the rubbers were compressed by 0.8 mm in the θx-direction without adjusting the PID gains (PID-b). The result differs because the plant has changed: the rise time and settling time are 11.8 ms and 41.4 ms in this condition. Finally, another set of PID gains (kp = 4.5, Ti = 0.73/Fs and Td = 0.02/Fs) was chosen to improve the performance (PID-c); it achieves a rise time of 5.8 ms and a settling time of 24.3 ms in the current work. Fig. 12 shows the sinusoidal test results. The -3 dB bandwidths are 79 Hz, 35 Hz and 79 Hz in these three conditions. Comparing PID-a and PID-b, the Bode plots of the closed-loop system differ because the dynamic model has changed. PID-c reaches the same bandwidth as PID-a, owing to the increase of the natural frequency from 210 Hz to 250 Hz with preload. Comparison with different controllers On the other hand, the original plant with ISMC is compared against PID-a. An ISMC with a parameter set of 0.0018, 50, 0.005 and 8 (for its four tuning parameters) reaches a rise time of 3.9 ms and a settling time of 20.9 ms, as shown in Fig. 13. The sinusoidal test result is shown in Fig. 14; the -3 dB bandwidth is 110 Hz with the ISMC. Table 3 summarizes the performance of the control results in these setups. In the current work, the ISMC achieves a better settling time and bandwidth in the Y-axis. The θx- and θz-axes have steady-state errors because a controller is applied only in the command axis (Y-axis). In future work, controllers for the θx- and θz-axes will be designed and added to the control system to decrease the steady-state errors caused by the coupling effect. Conclusions In this work, a stiffness-tunable 3-DOF rubber-bearing positioning stage has been developed, and the design and control of the stage are presented for dynamic-characteristic adjustment and positioning control. The system is actuated by three voice coil motors, and three capacitive probes are used to sense one translational and two rotational DOF. A PID controller and an ISMC are designed to perform the motion control. The stiffness-tunable design shows that the dynamic characteristics of this stage are adjustable, which is an advantage compared with compliant stages.
For example, the length of a compliant mechanism would need to be decreased by about 30% to increase its stiffness by 100%, which requires a large change in the size of the compliant stage. In contrast, the stiffness of the rubber-bearing stage can be increased by 100% in the translational direction and by 510% in the rotational direction with only a small-scale adjustment, which is more efficient than a compliant stage. The -3 dB bandwidth of the Y-axis control is 79 Hz with PID control, and with the ISMC the bandwidth can reach 110 Hz. In the current work, the stage with rubber adjustment can approach the same control result as the original stage. Future work will complete the controller design for all three axes to realize coupled control, and θx- and θz-axis control experiments with preload will be carried out, which should further improve the control performance. Further ahead, this rubber-bearing stage should be useful for various applications such as precision engineering and laser scanning.
3,319.4
2020-01-20T00:00:00.000
[ "Engineering", "Materials Science" ]
Industrial wireless sensor networks 2016 Qindong Sun 1, Shancang Li 2, Shanshan Zhao 2, Hongjian Sun 3, Li Xu 4 and Arumugam Nallanathan 5 The industrial wireless sensor network (IWSN) is the next frontier in the Industrial Internet of Things (IIoT), which is able to help industrial organizations gain competitive advantages in industrial manufacturing markets by increasing productivity, reducing costs, developing new products and services, and deploying new business models. The IWSN can bridge the gap between existing industrial systems and cyber networks, offering both new challenges and opportunities for manufacturers. In the next few years, as the edge part of the IIoT, the IWSN will play a crucial role in transforming industrial organizations, opening up a new era of economic growth and competitiveness in digital industrial 4.0. The IWSN presents great benefits to industrial organizations, such as profitability, efficiency, productivity, reliability, and safety, mainly through three aspects: (1) boosting revenues by increasing production, (2) developing new hybrid business models, and (3) exploiting intelligent technologies to fuel innovation. However, there are still many challenges for IWSNs: (1) they involve many separate technology families, and bringing them together will take time, and (2) there are still many technical barriers to merging different business functions under different technical standards/vendors. One of the goals of this Special Issue was to gather researchers from different industrial areas, such as industrial wireless sensors, machine-to-machine communications, and industrial applications, and build on the emerging digital industrial 4.0. In the paper ''Connectivity node set generation algorithm of mine WSN based on the maximum distance'' authored by Ke Wang and Donghong Xu, the deployment strategy of IWSNs in a coal mine scenario is investigated with respect to energy consumption, survival time, and quality of service. A connectivity node set generation algorithm for mine IWSNs based on the maximum distance is proposed, and the proposed strategy is tested in a coal mine monitoring system. Location-based services (LBS) are one of the most important topics in industrial applications; the second paper, ''Indoor localization based on subarea division with fuzzy C-means,'' authored by Junhuai Li, Jubo Tian, Rong Fei, Zhixiao Wang, and Huaijun Wang, presents a fingerprint localization model that divides the target area into multiple sub-areas with the fuzzy C-means algorithm. In this solution, the noise and non-linear attenuation of the wireless signals are considered to improve the accuracy.
Energy consumption and routing algorithms remain two active topics for resource-constrained nodes in IWSNs. In the paper ''Relay participated-new-type building energy management system: an energy-efficient routing scheme for wireless sensor network-based building energy management systems'' by Kewang Zhang, Qizhao Wu, and Xin Li, a novel energy-efficient routing scheme is proposed based on a new strategy: the relay participated-new-type building energy management system (RP-NTBEMS). The new scheme can reduce the energy consumption and extend the lifetime of IWSN nodes, and the simulation results show that the proposed RP-NTBEMS obtains better performance.
737.8
2017-06-01T00:00:00.000
[ "Engineering", "Computer Science", "Environmental Science" ]
Analysis and modelling of road traffic using SUMO to optimize the arrival time of emergency vehicles : Traffic simulation tools have been used by city planners and traffic professionals over the years for modelling and analysis of existing and future infrastructural or policy implementations. There are numerous studies on emergency vehicle (EV) prioritization in cities all over the world, but every area is unique and requires data collection and simulation to be done separately. In this case, the focus area is the Mörfelder Landstraße in Frankfurt am Main, Germany, one of the busiest streets in the city. The study illustrates demand modelling, simulation and evaluation of a traffic improvement strategy for EVs. Vehicular traffic such as passenger cars and trams is simulated microscopically. To perform accurate traffic simulation, input data quality assurance and cleansing of the master data are required. Therefore, the data are adapted to reproduce the real-world scenario and transformed into a format readable by the simulation model. Vehicular demand is calibrated with traffic count data provided by the Frankfurt Traffic Department. To model the road traffic and road network, origin-destination matrices are generated using the gravity mathematical model, and the network is imported from OpenStreetMap. This process is time-consuming and requires effort; however, it is critical for obtaining realistic results. In the next step, the road traffic is simulated using SUMO (Simulation of Urban MObility). Finally, EV-relevant key performance indicators (KPIs), total trip time and total delay time, are derived from the simulations. The real-world scenario is compared with five alternative scenarios. The comparison of the KPIs revealed that the real-world scenario results in longer travel times compared to the EV-prioritization scenarios. At the least, the overall travel time for EVs decreased significantly and, as we know, in the case of EVs even a few seconds saved could prove crucial for a person in need. Introduction In the 21st century, the high rate of urbanisation and the advancement of the transport sector have led to an increase in urban vehicular mobility. This has allowed people to opt for a comfortable and luxurious life, but on the other hand it has also negatively impacted the quality of life by increasing the potential for traffic problems such as traffic congestion, accidents, and environmental issues, for example increases in greenhouse gases, carbon emissions, particulate matter, etc. To combat these problems, traffic improvement strategies such as carpool lanes, public transport bus lanes, and dedicated space for cyclists and pedestrians, to name some, are adopted. Testing and implementation of such strategies require prior investigation and analysis. Without these studies, the implemented strategies or policies could be unreliable and might end up costing even more in terms of infrastructure, time, and in some cases even human life. To evaluate and predict the outcome of these strategies theoretically, traffic simulation plays a vital role. For traffic simulation to be implemented properly, numerous elements are needed, but the following are the most important ones [1]: • network data such as roads, footpaths, tram routes • additional traffic infrastructure such as traffic lights, induction loops • traffic demand • traffic constraints, e.g. speed limits, construction sites, bus lanes. Preparing a traffic simulation model using these elements is time-consuming and requires effort; a minimal example of how such inputs are tied together in a SUMO run is sketched below.
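As an illustration of how those elements feed into a simulation, the following Python sketch launches a SUMO run through the TraCI interface. The file names are placeholders and the options shown are standard SUMO inputs (network, routes, additional infrastructure); the study's actual configuration files and options are not given in the text.

```python
# Minimal sketch of launching a SUMO run from Python via TraCI.
# File names are placeholders; the real study's network/route/additional
# files and options are not given in the text, so treat this as illustrative.
import traci  # requires SUMO to be installed and SUMO_HOME/tools on PYTHONPATH

sumo_cmd = [
    "sumo",                                 # or "sumo-gui" for the graphical front end
    "--net-file", "net.net.xml",            # road network (e.g. imported from OSM)
    "--route-files", "routes.rou.xml",      # traffic demand (vehicles, trams)
    "--additional-files", "infra.add.xml",  # traffic lights, induction loops, ...
]

traci.start(sumo_cmd)
while traci.simulation.getMinExpectedNumber() > 0:   # vehicles still pending
    traci.simulationStep()                            # advance one time step
traci.close()
```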
Because of this effort, many simulation tools provide ready-to-use simulation models so that users can directly test their traffic improvement strategies, saving the time and effort required to set up the simulation [2]. One of the main motives of traffic simulation is to evaluate different traffic improvement strategies, and this study presents a traffic improvement strategy centred on emergency vehicles. "An emergency vehicle is a vehicle that is used by emergency services to respond to an incident" [3]. Even a small reduction in the arrival time of EVs (fire brigade, ambulance or police) can save the lives of people who need immediate assistance. To tackle such situations, EVs have special rights, such as passing red lights when approaching a traffic light junction (TLJ) or travelling in the opposite direction, in order to reduce the arrival time. But this is not a foolproof way to optimize the arrival time: there are times when EVs are stuck in a long queue of vehicles in front of the TLJ or in a traffic jam where there is no way to overtake. The main objective of the study is to simulate the road traffic of the Mörfelder Landstraße in the Sachsenhausen area of Frankfurt am Main, Germany, and then to study and evaluate different scenarios to optimise the arrival time of emergency vehicles, which could help in combating the aforementioned situations. This paper is structured as follows: Section 2 discusses in detail the master data, demand modelling and the simulation process, elaborating on data pre-processing, network modification and traffic generation. Section 3 explains the solution methodology and the different case scenarios for EVs. Section 4 shows the results obtained from the case scenarios. Section 5 presents the conclusion and future work. Master Data, Demand Modelling and Simulation The data flow diagram, based on the Gane-Sarson methodology, is shown in Figure 1. The master data consist of the road network (supplemented with additional infrastructure and traffic constraints) and the aggregated vehicle counts for 24 hours. The vehicle counts are provided in the form of a shapefile for the geographical location of the Sachsenhausen area in Frankfurt am Main, and the road network is imported from OpenStreetMap [4]. A methodology named the Gravity Model [5] is used for calculating origin-destination matrices (ODMs). It is based on the gravitation principle of Newtonian physics. With reference to traffic planning, the Gravity Model theory states in [5] that: "the number of trips between two Traffic Assignment Zones (TAZ) will be directly proportional to the number of productions in the production zone and attractions in the attraction zone. In addition, the number of interchanges will be inversely proportional to the spatial separation between the zones." Mathematically, the Gravity Model is defined as [5] $T_{ij} = P_i \, \frac{A_j F_{ij} K_{ij}}{\sum_{j=1}^{n} A_j F_{ij} K_{ij}}$, with T_ij: number of trips from zone i to zone j, P_i: number of trips produced by zone i, A_j: number of trips attracted by zone j, F_ij: friction factor relating the spatial separation between zone i and zone j, K_ij: optional trip-distribution adjustment factor for interchanges between zone i and zone j, and n: the number of zones. The initial values of P_i and A_j are taken from the vehicle counts provided in the shapefile. The friction factor and the trip-distribution adjustment factor are not considered in this study because the only available data are traffic counts.
Therefore, the simplified equation below is used for calculating the trip distribution: $T_{ij} = P_i \, \frac{A_j}{\sum_{j=1}^{n} A_j}$ (2). Before applying this methodology, two assumptions are made regarding the road network. First, the number of cars occupying parking spaces and the number freeing parking spaces are assumed to be equal, as in reality the difference is negligible compared to the normal traffic; it is therefore not taken into consideration. The second assumption is that there is no generation or elimination of cars within the TAZ (conservative network). Additionally, the total number of cars generated at the entry points of the TAZ should be equal to the total number of cars eliminated at the destination points of the TAZ. This is known as "the closing condition at the edge" [6], shown in equation 3: $\sum_{i=1}^{n} P_i = \sum_{j=1}^{n} A_j$, with P_i: number of trips produced by zone i, A_j: number of trips attracted by zone j, and n: the number of zones [6]. If this closing condition (equation 3) is not met, a balancing process is performed using equations 4 and 5. This process is adopted from [5] and is divided into two steps. First, the balancing factor is calculated using equation 4: $Factor = \sum_i P_i / \sum_j A_j$. Second, the number of trips attracted by each zone is multiplied by this balancing factor to obtain the balanced number of trips attracted by each zone, as shown in equation 5, $A'_j = Factor \cdot A_j$, which leads to the fulfilment of equation 3. Here Factor is the balancing factor, P_i the number of trips produced by zone i, A_j the number of trips attracted by zone j, and A'_j the balanced number of trips attracted by zone j. Once the closing condition is met, the trip distribution matrix is generated using equation 2. A matrix balancing approach [6], [5] is then carried out to ensure that the expected number of trips produced equals the calculated number of trips produced for all zones and, similarly, that the expected number of trips attracted equals the calculated number of trips attracted for all zones. This is expressed in equations 6 and 7: $Factor_{A_j} = Given_{A_j} / Total_{A_j}$ and $Factor_{P_i} = Given_{P_i} / Total_{P_i}$. It is an iterative process that repeats until the calculated production and attraction equal the expected production and attraction, i.e. until $Factor_{A_j}$ and $Factor_{P_i}$ converge to 1. The process is implemented using a Python script. Here $Given_{A_j}$ is the expected number of trips attracted by zone j, $Total_{A_j}$ the calculated number of trips attracted by zone j, $Given_{P_i}$ the expected number of trips produced by zone i, $Total_{P_i}$ the calculated number of trips produced by zone i, and $D_{ij}$ the trip interchange calculated for each entry/exit zone. For numerical reasons, equations 6 and 7 do not converge exactly to 1. To solve this issue, a heuristic approach is used in which the study area is divided into 3 parts, leading to the creation of 3 constant ODMs; hence, section-based demand modelling is performed. The study area for demand modelling is the Mörfelder Landstraße. This stretch is around 3.3 km long and is highlighted in Figure 2. A total of 21 entry/exit zones are present in the study area, marked in red in Figure 2. The calculated constant ODMs consist of aggregated counts for 24 hours. The distribution of the counts over the 24-hour period is then done with the help of induction loop data. These data contain counts from June 2020 until March 2021, and each count is split into 15-minute intervals from 00:00 until 23:57. By combining the induction loop data with SUMO functionalities such as od2trips and duarouter, time-dependent ODM-based route files are created.
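As a concrete illustration of equations 2-7, the following Python sketch (not the authors' actual script, which is not published in the text) builds a trip-distribution matrix from production and attraction counts and iteratively re-balances it until the row and column totals match the expected values. The three-zone counts are toy numbers; the real study uses 21 entry/exit zones with 24-hour counts.

```python
import numpy as np

def gravity_distribution(P, A):
    """Eq. (2): T_ij = P_i * A_j / sum_j A_j (friction and K factors omitted)."""
    P = np.asarray(P, dtype=float)
    A = np.asarray(A, dtype=float)
    return np.outer(P, A / A.sum())

def balance(P_given, A_given, tol=1e-6, max_iter=100):
    """Eqs. (4)-(7): scale attractions, then iteratively re-balance rows/columns."""
    P_given = np.asarray(P_given, dtype=float)
    A_given = np.asarray(A_given, dtype=float)
    # Eqs. (4)-(5): enforce the closing condition sum(P) == sum(A).
    A_given = A_given * (P_given.sum() / A_given.sum())
    D = gravity_distribution(P_given, A_given)
    for _ in range(max_iter):
        factor_P = P_given / D.sum(axis=1)      # Eq. (7)
        D *= factor_P[:, None]
        factor_A = A_given / D.sum(axis=0)      # Eq. (6)
        D *= factor_A[None, :]
        if np.allclose(factor_P, 1, atol=tol) and np.allclose(factor_A, 1, atol=tol):
            break
    return D

# Toy example with 3 zones (illustrative counts only).
P = [500, 300, 200]   # trips produced per zone
A = [400, 350, 300]   # trips attracted per zone
odm = balance(P, A)
print(np.round(odm, 1))
```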
These route files act as the input to SUMO for simulating the road traffic. In addition to the simulation of passenger cars, trams are also modelled, with safety traffic lights at the tram stops; they are simulated using the public transport model provided by SUMO, and the tram frequency is set to every 10 minutes. Solution Methodology Many studies have been carried out to optimize the arrival time of EVs, such as optimization of the routing and dispatching of EVs, which can lead to faster routes [7], and ranking of alternatives for emergency routing [8]. However, the behaviour of pedestrians, especially children, is unpredictable, and even though SUMO can be used to model such patterns, the real world does not behave exactly as in the simulation. When re-routing an EV, the algorithm prioritizes the shortest route that is free of traffic, but the shortest route could pass through residential areas with more foot traffic than the main streets. Thus, the preferred approach in this study is an EV prioritization approach using V2X communication with the TLJ. This approach is adopted from [9], [10], [11]. The basic idea is that as soon as the EV approaches a TLJ, the traffic light is switched to green for the direction of the EV's trip, prioritizing the EV [9], [10], [11]. The following steps are performed for the EV prioritization application, also known as the WALABI approach [9]: • the EV sends CAMs (Cooperative Awareness Messages) and route information • the road side unit informs the Traffic Management Center (TMC) • the TMC sets the traffic lights on the route of the EV: green for the EV and red for all other traffic participants • after the EV has passed the intersection, normal operation continues. For this EV prioritization approach, the question arises of what the optimal distance between an EV and the traffic light should be for the light to turn green. The study [10] shows that the EV is usually within a range of 300 meters from the TLJ; when the EV enters this range the traffic light is turned green, and when the EV passes the TLJ the traffic light switches back to normal operation. Therefore, 300 meters is taken as the threshold distance value for scenario 2, which is discussed in section 3.3. A negative consequence of this predefined value concerns the other vehicles waiting in front of the red signal: if the red phase of the traffic light becomes longer, congestion on the other approaches may also increase, leading to more chaos and more time to dissolve the congestion. To address this issue, instead of using a predefined value, the threshold distance is calculated dynamically. It is computed from the speed of the EV and the number of vehicles waiting in front of the TLJ, as shown in equations (8) and (9), an approach adopted from the study in [9]: $T_{free} = N_{waiting} \cdot t_B + t_{safety}$ (8) and $d = V_{EV} \cdot T_{free}$ (9), with T_free: the time needed to let the EV pass the traffic light, N_waiting: the number of vehicles waiting in front of the TLJ, t_safety: a safety time of 3 seconds, t_B: the time required for one vehicle to pass the intersection, which is 1.8 s, d: the distance of the EV to the intersection, and V_EV: the speed of the EV; a small numerical sketch of this computation is given below. Emergency Vehicle Prioritization Study Area The highlighted path shown in Figure 3 is the route of the EVs whose behaviour is evaluated in the simulations. The route is approximately 1.5 km long and comprises 3 major and 2 minor junctions of the Mörfelder Landstraße, which are listed in Table 1.
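The following Python sketch illustrates equations (8)-(9) together with one way such a priority switch could be issued through SUMO's TraCI interface. The safety time (3 s) and per-vehicle clearance time (1.8 s) are taken from the text; the junction, ambulance and detector IDs, the green-phase state string, and the use of a lane-area detector to count queued vehicles are illustrative assumptions, not the study's exact implementation.

```python
import traci

T_SAFETY = 3.0   # safety time [s] from the text
T_B = 1.8        # time for one queued vehicle to clear the intersection [s]

def dynamic_threshold(n_waiting: int, v_ev: float) -> float:
    """Eqs. (8)-(9): distance at which the light should switch to green."""
    t_free = n_waiting * T_B + T_SAFETY          # Eq. (8)
    return v_ev * t_free                          # Eq. (9)

def prioritize(ev_id: str, tls_id: str, detector_id: str, green_state: str):
    """Switch the TLJ to the EV's green phase once the EV is close enough.

    All IDs and the green_state string are hypothetical placeholders.
    """
    v_ev = traci.vehicle.getSpeed(ev_id)                        # [m/s]
    n_waiting = traci.lanearea.getLastStepHaltingNumber(detector_id)
    d_threshold = dynamic_threshold(n_waiting, v_ev)
    d_to_tls = traci.vehicle.getDrivingDistance2D(              # driving distance to junction
        ev_id, *traci.junction.getPosition(tls_id))
    if d_to_tls <= d_threshold:
        traci.trafficlight.setRedYellowGreenState(tls_id, green_state)

# Pure calculation example: 8 waiting cars, EV travelling at 50 km/h (13.9 m/s)
print(f"threshold distance: {dynamic_threshold(8, 13.9):.0f} m")
```

With these numbers the threshold comes out at roughly 240 m, which is consistent with the observation in the Results section that the dynamically calculated distances are mostly below 300 m.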
Case Scenarios For each of the three scenarios considered for studying the behaviour of EVs, two cases are considered: one is the usual traffic condition and the other is a closed-lane condition, based on the assumption that only one lane stays available and all others are closed for construction/incident reasons or because these lanes are prioritized for non-car traffic. This makes a total of six scenarios. In the study area, around 60% of the street has more than one lane. Figure 4 shows the setup of closed lanes, where edges highlighted in red signify that lanes are closed. To generate traffic in a realistic manner, induction loop data are used. These induction loop data are cleaned, averaged and normalised over the total number of cars, which results in a traffic-flow distribution over the course of the day, shown in Figure 5. The X-axis represents the time of the timeslice [hh:mm] and the Y-axis represents the average rate normalised to the overall traffic per day. The maximum averaged measured count per 3 minutes, about 30 cars, is observed around 8 am. It can be seen in Figure 5 that congestion on the Mörfelder Landstraße is highest in the morning from 7:00 am until 10:30 am, and therefore this is the time range selected for testing the EVs. A total of 10 EVs are run within this time range and their trip times and delay times are compared. Results This section explains the simulation results obtained for the case scenarios discussed above. A total of 10 EVs (ambulances) are run. The KPIs considered are the total trip time (time required for the vehicle to finish the trip) and the total delay time (time for which the vehicle travels below its ideal speed). For EVs, the speed is set 50% above the speed limit of the edge by means of the "speedFactor" attribute, which is set to 1.5 when configuring the EV in SUMO; this is adopted from the study [12]. In some simulations, when a tram stops, the subsequent red traffic light leads to a delay since the EV is not able to overtake the tram. This is also reflected in the total delay time in Table 3, e.g. for the ambulance with ID 6 Ambulance. For scenarios 2 and 3 the average trip time is 153 and 162 seconds, respectively, and the average delay time is 82 and 91 seconds, respectively. Tables 4 and 5 show the comparison of total trip time and total delay time for each of the EVs (Closed Lane), where the "EV with No-Priority (Closed Lane)" scenario acts as the baseline reference for calculating the impact. For scenario 4, the trip time varies between 242 and 469 seconds; the average is 344 seconds and the empirical variance is 70.2, which is 20% of the average. The variances of scenarios 4, 5 and 6 are almost the same as those of scenarios 1, 2 and 3 (20±3%). The reasons for the variances are the same as in section 4.1, but the special events occurred in different time intervals. This is also reflected in the total delay time in Table 5, e.g. for the ambulance with ID 2 Ambulance. For scenarios 5 and 6 the average trip time is 183 and 191 seconds, respectively, and the average delay time is 112 and 117 seconds, respectively. Threshold Distance In scenarios 2 and 5, the threshold distance is constant, i.e. 300 meters. In contrast, for scenarios 3 and 6 the threshold distance is calculated using equations 8 and 9. Tables 6 and 7 show this distance for all major junctions.
The variation of these distances is due to the change in the number of vehicles waiting in front of the TLJs and the speed of the ambulance when entering the study area. The velocity used in these equations is derived from the initially calculated speed of the ambulances after entering the study area; it ranges between 36 and 55 km/h. Table 8 shows the average impact for EVs under the "Normal Traffic" condition, where the number in parentheses gives the average absolute impact and the percentage gives the average relative impact compared to the baseline reference. The scenario "EV with No-Priority" is the baseline reference. Table 9 shows the average impact for EVs under the "Closed Lane" condition; here the scenario "EV with No-Priority (Closed Lane)/Scenario 4" is the baseline instead of "EV with No-Priority/Scenario 1". Moreover, Table 10, "Baseline Comparison", shows the average increase in travel time and delay time when the lanes are closed. Conclusion The optimization process used in this study involved data pre-processing. This includes improving the master data quality, which required network modelling and the creation of ODMs to make the models as realistic as possible. During the process of importing networks from OSM, the imported network contained many errors due to misalignment with reality, such as errors in simple road links (lanes wrongly connected), classification of lanes, etc. Therefore, network corrections were done using SUMO's editing tool NETEDIT. ODMs were created by leveraging tools such as Python and Excel. These processes were time-consuming but important for the execution of the models. The simulation results in Tables 8 and 9 show that the implementation of EV prioritization techniques leads to a significant improvement of the KPI values. For the "Normal Traffic" condition, the average trip time dropped by 51% and 49% and the average delay time by 66% and 63% for the two prioritization scenarios, respectively. For the "Closed Lane" condition, increases in travel time and delay time were anticipated, but the impact is lower than expected; the reason may be that only 33% of the multi-lane sections were reduced to one lane. Here too, the average trip time dropped by 47% and 45% and the average delay time by 59% and 57%, respectively. The maximum impact was seen in the scenarios where the tram stops ahead of the ambulance and the subsequent traffic light is switched to green. The model in which the threshold distance is calculated dynamically is not as good as expected; the reason is that the calculated distance is mostly lower than 300 meters for all major junctions, which reduces the optimization of the ambulances' travel time. Nevertheless, in all cases the travel time was reduced by the intervention in the traffic infrastructure. Therefore, it can be concluded that through EV prioritization approaches using V2X communication, EVs can save precious seconds, which could be the difference between life and death for a person in need. Future Work In future work, the impact of the length of the closed lanes on the arrival times of the EV should be investigated. Another interesting addition to the simulation would be to include foot traffic (pedestrians), buses and cyclists. The current model is used to study only one EV at a given instant during the simulation; further studies could therefore be carried out to handle multiple EVs at the same time.
SUMO is continuously improving software, and for this model there is still scope for improvement in the lane-changing functionality, e.g. overtaking using the opposite lane. The traffic light control plans used in the study were edited to suit the demand model; further work can be carried out to incorporate real-world traffic control plans, which could lead to an even more accurate depiction of the real-world scenario. Since the "Dynamic Priority" scenario often calculates a threshold distance of less than 300 m, which explains the results in section 4, the parameters of the "Dynamic Priority" strategy need to be optimized. Finally, this simulation should be repeated with the higher, post-pandemic traffic rates.
5,086.2
2023-06-29T00:00:00.000
[ "Computer Science" ]
Spinning particle interacting with electromagnetic and antisymmetric gauge fields in anti-de Sitter space A massless spinning particle model that interacts with electromagnetic and antisymmetric gauge fields in anti-de Sitter space-time is considered as a constrained Hamiltonian system. d-dimensional anti-de Sitter space-time is realized as a real projective manifold parametrized by the homogeneous coordinates. Classical constraints that generate, in the presence of interactions, the minimal world-line supersymmetry algebra extended by the dilatations of the ambient-space homogeneous coordinates are found. Various representations of the Lagrangian of the spinning particle are obtained. Dirac quantization is shown to produce first- and second-order equations for the wave function of the spinning particle that are presented in the homogeneous, inhomogeneous and intrinsic coordinates of AdS_d. Introduction Spinning particle models [1][2][3][4] are known to provide a classical realization of the spin-1/2 field equations in Minkowski space-time as odd generators of the minimal world-line supersymmetry algebra, which is the finite-dimensional subalgebra of the infinite-dimensional super-Virasoro algebra of the superstring. Since the world-line supersymmetry is less restrictive than the world-sheet one, spinning particle models admit a wide variety of generalizations. In particular, it is possible to include interactions with background electromagnetic [1,[5][6][7][8], Yang-Mills [9,10], gravitational [5,11] and antisymmetric gauge fields [12] in a way consistent with minimal world-line supersymmetry. Such models upon quantization yield the Dirac equation for a spin-1/2 field interacting with background fields. Apart from Minkowski space-time, of special interest are maximally symmetric spaces such as anti-de Sitter space-time, where the interplay between the space-time geometry and world-line supersymmetry appears to be quite non-trivial [13,14]. Anti-de Sitter space can be described as a manifold embedded into flat space-time with extra dimension(s), and it is possible to consider respective spinning particle models [15][16][17][18]. These provide a pseudoclassical realization of the idea of formulating field dynamics in (anti-)de Sitter (as well as Minkowski) space in a way exhibiting (conformal) isometries, which dates back to the seminal works of Dirac [19,20]. It was also applied to examine conformal field theories in 4-dimensional Minkowski (Euclidean) space-time [21][22][23] and to formulate dynamical equations for the gauge fields in 4-dimensional anti-de Sitter space [24,25]. More recently, the embedding (or ambient) space description was applied to study correlation functions in d-dimensional conformal field theories taking advantage of AdS/CFT-inspired techniques [26][27][28] and to study higher-spin field equations in AdS_d and on its conformal boundary [29][30][31]. In Ref.
[32] there was considered the possibility of applying twistor methods to the Ad S/C FT duality based on the projective-space description of the bulk anti-de Sitter space parametrized by the homogeneous coordinates that naturally combines linear realization of SO(2, d − 1) isometry and the projective light-cone description of the (d − 1)-dimensional conformal boundary space-time. Shortly after that two-twistor formulation of the spinning particle in Ad S d for d = 4, 5, 7 was proposed in [33]. It is based on the generalization [34] of the two-twistor formulation of the massive bosonic particle in Ad S 5 [35]. 1 Utility of the projective-space realization of anti-de Sitter space from the viewpoint of canonical description of massless particle (tensionless string) models can be justi-fied as follows. Description of Ad S d as an embedded hyperboloid assumes imposition of the constraint y 2 + 1 ≈ 0 on the ambient-space inhomogeneous coordinates y m , where the SO(2, d − 1)-invariant scalar product y 2 = (y · y) = y m η mn y n is taken w.r.t. Minkowski metric η mn = diag(−, −, +, · · · , +) and Ad S d radius is set to unity. In the canonical approach the mass-shell constraint for the massless particle (tensionless string zero modes) in its simplest form is p 2 ≈ 0 and its Poisson bracket (PB) relations with the above constraint imply that (y · p) ≈ 0 is also a constraint forming with y 2 + 1 ≈ 0 the pair of the second-class constraints. The presence of the second-class constraints necessitates introduction of the Dirac brackets (DB) that in general essentially complicates analysis of the Hamiltonian dynamics (see, e.g. [40]) so it is convenient to treat the constraint y 2 +1 ≈ 0 as a gauge-fixing condition for the first-class constraint (y· p) ≈ 0 that generates dilatations of the embeddingspace coordinates [41]. Gauged dilatations implement the projective-space realization of Ad S d , so the set of the two first-class constraints (y · p) ≈ 0 and p 2 ≈ 0 can be taken as the starting point for description of the massless particle (tensionless string zero modes) models in such an approach. In the Lagrangian approach important feature of the parametrization of Ad S d by the homogeneous coordinates x m : y m = |x| −1 x m , |x| = √ −x 2 , is that the object that can be naturally identified with the metric tensor, taking into account the form of the line element, is degenerate det θ = 0. So one is led to consider particle (string, brane) mechanics in the space with degenerate metric [42]. Tensor θ mn and associated differential operator θ mn ∂/∂ x n also enter dynamical equations for the Ad S d higher-spin fields in the ambient-space formulation [24,25,[43][44][45]. In Ref. [46] there was proposed massless spinning particle model in Ad S d realized as the projective space parametrized by the homogeneous coordinates. Three first-class constraints of the model (one odd and two even) span minimal world-line supersymmetry algebra extended by the gauged space-time dilatations. Dirac quantization of the model yields Dirac and Klein-Gordon equations for the particle's wave function that is a homogeneous function of degree zero. In this note we continue to study the above model and examine the possibility of including interactions with background gauge fields. As the starting point we take Hamiltonian first-class constraints of the free spinning particle model. 
Then we seek for the generalizations of the odd constraint, that is the world-line supersymmetry generator, by the terms depending on the background gauge fields and calculate its DB relations with itself that define bosonic con-straint generating world-line reparametrizations. Then linear combination of these constraints and the generator of the space-time dilatations with the Lagrange multipliers is used to write down the Lagrangian of the interacting spinning particle model in terms of the phase-space variables. These Lagrange multipliers play the role of the gauge fields for local world-line supersymmetry, reparametrizations and space-time dilatations. Integrating out space-time momentum and some of the Lagrange multipliers we derive various representations of the spinning particle Lagrangian. After that we discuss Dirac quantization of the proposed models. We find Hermitian operators associated with the classical first-class constraints from the requirement that they satisfy quantum world-line supersymmetry algebra. Then the substitution of the realization of the Hermitian momentum operator as a differential operator in configuration space produces Dirac-and Klein-Gordon-type equations for the wave function of the spinning particle in homogeneous coordinates. We also write these equations in the inhomogeneous and intrinsic coordinates on Ad S d . Section 2 is devoted to the spinning particle's interaction with the background electromagnetic field. 2 In Sect. 3 we discuss gauge-invariant interaction with the rank r − 1 antisymmetric gauge field. Like in the case of electromagnetic interaction closed algebra of the constraints is obtained and various forms of Dirac-and Klein-Gordon-type equations for the particle's wave function are found. Let us remark that the spinning particle model with minimal world-line supersymmetry interacting with odd-rank antisymmetric tensor gauge fields in (2d + 1)-dimensional Minkowski space was studied in [12] from the perspective of the Kaluza-Klein dimensional reduction. In 2d dimensions it results in the particle's interactions with both rank 2r and 2r + 1 antisymmetric gauge fields as well as with the electromagnetic field. Curiously antisymmetric gauge fields appear in quantization of the spinning particle model with extended world-line supersymmetry [53]. Charged spinning particle in background electromagnetic field Consider odd constraint as the classical analogue of the Dirac equation that includes interaction with external electromagnetic field. We take it as the generator of the minimal world-line supersymmetry. In the absence of the interaction it coincides with odd constraint introduced in [46]. Observe that the minimality principle fixes the homogeneity degree of . Transverse strength of the electromagnetic field by definition is and the last equality follows by taking into account transversality (x · A(x)) = 0 and homogeneity properties of the electromagnetic potential. After introduction of the PB (DB) relations it is easy to see that odd constraint (2) has zero PB with the constraint that generates dilatations of the embedding-space coordinates. Also the DB relations of the supersymmetry generator with itself define bosonic constraint that is the generator of the world-line reparametrizations. Eq. (6) appears to be the only non-trivial relation of the world-line supersymmetry algebra extended by the spacetime dilatations. 
Having introduced the classical first-class constraints we can write down the spinning particle's Hamiltonian as their linear combination with evenẽ, a and odd χ Lagrange multipliers. Then the action is defined as the integral of the Lagrangian expressed in terms of the phase-space variables where Integrating out the momentum p m yields configuration-space form of the particle's Lagrangian The Lagrange multiplier a plays the role of the gauge field for the scale transformations of x and p. Integrating it out allows to bring the Lagrangian to the form that manifests the realization of Ad S d as the projective space R P d parametrized by the homogeneous coordinates with the degenerate metric θ mn = η mn + 1 |x| 2 x m x n . In quantum theory classical observables are replaced by the Hermitian operators and their PB (DB) relations-by the (anti)commutators. The operators associated with the phasespace variables satisfy the (anti)commutation relations 3 From the anticommutation relations of ξ m it follows that they are proportional to γ −matrices in (d + 1) dimensions: ξ m = 2 −1/2 γ m and their Hermiticity is understood in the same sense as that of γ m , i.e. (γ m ) † = (−) t Aγ m A −1 , where A = γ 0 1 γ 0 2 · · · γ 0 t and t is the number of time-like dimensions (t = 2 for the realization of Ad S d as the hyperboloid in the ambient space-time). Classical constraints become Hermitian operators that select physical subspace in the space of quantum states of the spinning particle. We choose Hermitian operator associated with the classical supersymmetry generator in the form where the second summand arises as a result of moving the momentum operator to the right in the manifestly Hermitian representation for the first summand. The square of (e)H 2 defines Hermitian operator associated with the classical con- where is the Hermitian operator for the generator of the space-time dilatations. Note the relation that makes obvious the contact with the classical constraint (7). For the case of flat configuration-space Hermitian momentum operator can be realized as the coordinate partial derivative acting on the wave function (x). Whenever configuration space is a curved manifold, Hermitian momentum operator is given by where g is the determinant of the configuration-space metric tensor. In the realization of anti-de Sitter space-time as the projective manifold, the scale-invariant measure is proportional to |x| −d−1 ε m 1 m 2 ···m d+1 x m 1 dx m 2 ∧ · · · ∧ dx m d+1 , so as the definition of the Hermitian momentum operator we take Then the constraint (14) translates into the Dirac-type equation for the particle's wave function (x) that is the 2 [ d+1 2 ]component spinor field. It has the homogeneity degree zero D H (x) = (x · ∂) = 0 and also satisfies the second-order equation 4 Discussion of the ambiguities in the definition of Hermitian operators in locally supersymmetric models can be found, e.g., in [54][55][56]. To conclude this section let us discuss how the conventional form of spin 1/2 particle's equations in Ad S d in terms of intrinsic coordinates can be derived from the equations given above. As an intermediate step let us present Eqs. (21) and (22) in the inhomogeneous coordinates y m = |x| −1 x m . For Eq. (21) we obtain where ∇ m = θ m n (y)∂/∂ y n , θ m n (y) = δ n m + y m y n , and Eq. 
(22) becomes The electromagnetic potential and field strength in the homogeneous and inhomogeneous coordinates are related as Transverse field strength in the inhomogeneous coordinates is defined by and the last equality follows by using the transversality property of the potential y · A(y) = 0. Above equations for the particle's wave function in the inhomogeneous coordinates can be transformed to intrinsic coordinates using the transition formulae [24,25,45]. In particular, we use the relation between the derivatives of the coordinate functions where g mn (z) = ∂ m y m ∂ n y m and g mn (z) = ∇ m z m ∇ m z n are the Ad S d metric and its inverse in the intrinsic coordinates. Also the spinning particle's wave functions in the inhomogeneous and intrinsic coordinates are connected by In Eqs. (29) (31) and σ ab = 1 4 (ρ a ρ b −ρ b ρ a ) that span the so(1, d −1) algebra. Useful consequences of Eqs. (29) and (30) are To where is the spinor covariant derivative extended by the external electromagnetic potential. The commutator of the covariant derivatives appears in the transformation of the second-order equation (24) to the intrinsic coordinates. To find the final form we substituted explicit expression for the Riemann tensor R klmn = g kn g lm − g km g ln , (R = R mn mn = −d(d − 1)) that provides solution of the Einstein equations in the form widely used in the literature on the Ad S/C FT correspondence: with the cosmological constant Note that [(γ · y) (e)H ] 2 differs from 2 (e)H by the linear combination of the constraints (e)H and D H . Spinning particle interactions with antisymmetric gauge fields In this section we discuss gauge-invariant coupling of the spinning particle to external (r − 1)-form gauge field A m[r −1] (x) 5 that we assume to be transverse x n A nm[r −2] (x) = 0 and homogeneous of degree −(r − 1). The definition of the transverse field strength generalizes that for the electromagnetic field (3). Since the form of the coupling is sensitive to the value of r we start with the case of odd r and then turn to even r . r odd In this case the fermionic constraint naturally generalizes that for the free spinning particle. q stands for the particle's charge, ξ m[r ] = ξ m 1 . . . ξ m r and the factor |x| r makes the last term homogeneous of degree zero like the first is, while the factor i n , n = r −1 2 − 2[ r −1 4 ] makes it real under the complex conjugation. DB relations of this constraint with itself generate classical world-line supersymmetry algebra with being the world-line reparametrization generator in the presence of the interaction. 
Similarly to the previously considered case of the interaction with the background electromagnetic field, one can write down the spinning particle's Hamiltonian and the action functional where the Lagrangian expressed in terms of the phase-space variables has the form Integrating consecutively momentum p m and dilatation gauge field a yields two representations of the configurationspace Lagrangian: and In quantum theory the Hermitian operator associated with the odd constraint (40) is where the antisymmetrized product of r γ -matrices is defined as Squaring the constraint (48) allows to obtain quantum version of the world-line supersymmetry algebra (41) 2 (q, r odd)H = T (q, r odd)H (50) and define the Hermitian operator corresponding to the generator of the world-line reparametrizations Let us note in passing that the square of γ m[r ] F m [r ] can be expanded over the basis of the antisymmetrized products of γ -matrices using the relations given, e.g. in [57] Clearly which of the (r + 1)/2 terms actually contribute to the sum depends on the values of r and the space-time dimension d. Substitute now the realization (20) of the momentum as the differential operator to impose (48) and (51) on the configuration space wave function (x). So we come to the first-order Dirac-type equation and the second-order Klein-Gordon-type equation For transformation of the above equations to the intrinsic coordinates let us first rewrite them in the inhomogeneous coordinates. Eq. (53) takes the form where the r -form field strength in the homogeneous and inhomogeneous coordinates is related in the following way generalizing (25) and (26) for the electromagnetic field. The Klein-Gordon-type equation (54) in the inhomogeneous coordinates reads Then using the relations (27)-(32) we find wave equations describing gauge-invariant interaction of the spin 1/2 field on the Ad S d background with the odd-rank antisymmetric gauge field in the intrinsic coordinates and The definition of the spinor covariant derivative coincides with (34) in the absence of electromagnetic field and the r −form field strength in the ambient-space and intrinsic coordinates is related as r even In the case of r even, the odd constraint takes the form In this subsection n = r Similarly to the previously considered models, DB relations of this constraint with itself generate the world-line supersymmetry algebra where the world-line reparametrization generator equals The constraints (61), (63) and D are the first-class constraints of the model and are used to define the spinning particle's Lagrangian and action functional S (q, r even) = dτ L (q, r even) ph : (64) Substituting explicit expressions for these constraints and integrating out the momentum allows to transfer from the phase-space to the configuration-space form of the Lagrangian Further integrating out the dilatation gauge field, one finds the Lagrangian that corresponds to the realization of the Ad S d as a projective manifold with the degenerate metric L (q, r even) R P d = 1 2ẽ|x| 2 (ẋθẋ) Now we come to the discussion of the Dirac quantization of the model. 
Let us define the Hermitian operator associated with the odd constraint (61) as As in the previous sections expression for the Hermitian operator that corresponds to the world-line reparametrization generator is obtained by requiring the closure of the world-line supersymmetry algebra 2 (q, r even)H = T (q, r even)H , where T (q, r even)H = |x|(γ · p) Realizing momentum operator as the differential operator in configuration space allows to obtain Dirac-type and Klein-Gordon-type equations for the particle's wave function in the homogeneous coordinates and Note that (x) is homogeneous of degree zero since D H (x) = 0. In terms of the inhomogeneous coordinates y m = |x| −1 x m these equations acquire the form Connection between the r -form field strength in the homogeneous and inhomogeneous coordinates is given in (56). Using the transition relations (27) The relation between the r -form field strength in the inhomogeneous and intrinsic coordinates is given in (60). Analogously the second-order equation (74) transforms into the generalization of the Klein-Gordon equation Conclusion In this note we have studied interactions with background electromagnetic or rank (r − 1) antisymmetric gauge fields of the minimally-supersymmetric massless spinning particle in anti-de Sitter space-time. d−dimensional anti-de Sitter space-time has been realized as a real projective manifold parametrized by the homogeneous coordinates. For all of the considered interactions we have found the set of three first-class constraints, one odd and two even, that generate extended world-line supersymmetry algebra. The constraints are the classical generators of 1d supersymmetry, reparametrizations and rescalings of the space-time homogeneous coordinates. Various forms of the spinning particle's Lagrangian both in terms of the phase-space and configuration-space variables have been derived. Then the quantum realization of the classical constraint algebra by the Hermitian operators has been found. The form of the Hermitian operator associated with the classical generator of the world-line reparametrizations is unambiguously fixed by the closure of the quantum algebra of the constraints. The realization of the Hermitian momentum operator as the differential operator in configuration space yields first-and secondorder equations for the particle's wave function in the presence of background electromagnetic field or antisymmetric gauge fields. These equations have been presented both in the homogeneous and inhomogeneous coordinates of the ambient space. Finally using known transition relations between the ambient and intrinsic coordinates they have been written in the conventional form of extended Dirac and Klein-Gordon equations in Ad S d . Let us note that although we treated independently interactions of the spinning particle with electromagnetic and (r − 1)-form gauge fields, along the same lines it is possible to consider simultaneous coupling to a number of gauge fields and electromagnetic field. One can also consider interactions with mixed symmetry fields that carry even number of indices in each set of the antisymmetrized indices. The part of our discussion that concerned transition of the equations for particle's wave function from the ambient-space to intrinsic coordinates assumed implicitly that Spin(1, d −1) and Spin(2, d −1) spinor representations have equal dimension that is the case for d even. 
So one possible generalization is to consider the case of odd d, which presumably requires the introduction of additional (odd) variables and constraints to impose a chirality projection on the spinor wave function in d + 1 dimensions. As a further development of the results reported here, it is possible to consider the interaction of the spinning particle with a Yang-Mills field, to look for a superfield formulation, and to describe particles with other values of spin. Data Availability Statement This manuscript has no associated data or the data will not be deposited. [Authors' comment: There are no extra data associated with this article other than the article itself.] Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Funded by SCOAP3.
5,220.6
2019-05-01T00:00:00.000
[ "Physics" ]
Combating Fake News in “Low-Resource” Languages: Amharic Fake News Detection Accompanied by Resource Crafting The need to fight the progressive negative impact of fake news is escalating, which is evident in the effort to conduct research and develop tools that can do this job. However, a lack of adequate datasets and good word embeddings has posed challenges to making detection methods sufficiently accurate. These resources are even totally missing for “low-resource” African languages, such as Amharic. Alleviating these critical problems should not be left for tomorrow. Deep learning methods and word embeddings have contributed a lot to devising automatic fake news detection mechanisms. Several contributions are presented, including an Amharic fake news detection model, a general-purpose Amharic corpus (GPAC), a novel Amharic fake news detection dataset (ETH_FAKE), and an Amharic fasttext word embedding (AMFTWE). Our Amharic fake news detection model, evaluated with the ETH_FAKE dataset and using the AMFTWE, performed very well.

Introduction Online media, specifically social media, is easily accessible, cheap, suitable for commenting and sharing, and more timely [1][2][3], which makes it favored by many, especially youngsters. However, it also has a dark side: the propagation of hate speech and inauthentic information, such as fake news. Fake news refers to news articles that are intentionally and verifiably false [4,5]. Fake news is increasingly becoming a threat to individuals, governments, freedom of speech, news systems, and society as a whole [3,6,7]. It disturbs the authenticity balance of the news system, creating real-life fears in the world's societies. Addressing the spread and harmful effect of fake news during the current pandemic, the WHO warned against fake news in the COVID-19 infodemic (https://www.who.int/dg/speeches/detail/director-general-s-remarks-at-the-media-briefing-on-2019-novel-coronavirus---8-february-2020). It said that while the virus spreads, misinformation makes the job of our heroic health workers even harder; it diverts the attention of our decision-makers, and it causes confusion and spreads fear in the general public. The list of practical examples of the impacts of fake news is becoming extensive and the danger is already imminent. To reduce the adverse effects of fake news, governments, the tech industry, and individual researchers have been trying to devise various mechanisms. Governments have tried to enact legal proclamations that they believe will suppress fake news. For example, the government of Ethiopia has enacted the Hate Speech and Disinformation Prevention and Suppression Proclamation No. 1185/2020 (https://www.accessnow.org/cms/assets/uploads/2020/05/Hate-Speech-and-Disinformation-Prevention-and-Suppression-Proclamation.pdf), though this looks less helpful as creators of fake news hide themselves, and this obscurity leaves no trace for the law. Facebook, Google, Twitter, and YouTube have tried to take technological measures, using certain tools. In the development of fake news detection tools, linguistic resources play crucial roles. However, "low-resource" languages, mostly African languages, such as Amharic, lack such resources and tools.
Amharic (አማርኛ, Amarəñña), with the only African-origin script, named Ethiopic/Fidel, is the second most spoken Semitic language in the world, next to Arabic, and it is the official working language of the Ethiopian government. As more Ethiopians live outside their home country, the number of Amharic speakers in different countries of the world is also growing. In Washington DC, Amharic has been granted status as one of the six non-English working languages [8]. Furthermore, Amharic is considered a sacred language by Rastafarians across the world. Despite this, the Amharic language is one of the "low-resource" languages in the world, lacking the tools and resources important for NLP (natural language processing) and other techno-linguistic solutions. To the best of our knowledge, for the Amharic language, there is no fake news detection dataset, and we could not find work done to detect fake news written in Amharic. Moreover, there is a lack of quality Amharic word embeddings. The available Amharic corpora are not sufficient and some of them are not freely open to the public. This leaves the language at a clear disadvantage, unable to benefit from technology solutions. In this work, we tried to narrow down those gaps. We present several contributions that include the following:
• We collected and organized a huge Amharic general-purpose corpus.
• We prepared a novel fake news detection dataset for the Amharic language.
• We introduced a deep learning-based model for Amharic fake news detection.
• We performed a series of experiments to evaluate the word embedding and the fake news detection model.
The rest of this document is organized as follows. In Section 2, we present the general-purpose Amharic corpus (GPAC). Section 3 explains the Amharic fasttext word embedding (AMFTWE). The Amharic fake news detection dataset (ETH_FAKE) is explained in Section 4. Section 5 is dedicated to the experiments, results, and discussion, while Section 6 concludes our work.

GPAC: General-Purpose Amharic Corpus One of the challenges of content-based fake news detection is the absence of sufficient corpora to train word embeddings, which are used in a multiplicity of NLP applications, including fake news detection [1,[9][10][11][12][13][14][15], either to represent the features for traditional classifiers or to initialize the deep neural network embedding layers. Similarly, the shortage of an appropriate dataset to train fake news detection models is the other bottleneck. Especially African languages labeled "under-resourced", such as Amharic, suffer from a shortage of such resources. Amharic is a highly influential language with its own ancient script. Not to mention its early existence and applications, it has been the working language of courts, the military, trade, and everyday communication since the late 12th century, and it remains the official language of the Ethiopian government today [16,17]. Most of the Ethiopian Jewish communities in Ethiopia and Israel speak Amharic. In Washington DC, Amharic became one of the six non-English languages in the Language Access Act of 2004, which allows government services and education in Amharic [8]. Furthermore, Amharic is considered a sacred language by Rastafarians. Despite Amharic being highly influential, it is still one of the "low-resource" languages in the world. A lack of sufficient corpora and linguistic tools to help use technology makes the language disadvantaged in this regard.
However, there have been a few works done to prepare Amharic corpora and linguistic tools. The Walta Information Center (WIC) corpus is a small-sized corpus with 210,000 tokens collected from 1065 Amharic news documents [18]. The corpus is manually annotated for POS tags. It is, however, too small for deep learning applications. The HaBit project corpus is another web corpus, which was developed by crawling the web [19]. The corpus is cleaned and tagged for POS using a TreeTagger trained on WIC. The Crúbadán corpus was developed under a project aimed at corpus building for a large number of under-resourced languages [20]. The Amharic corpus consists of 16,970,855 words crawled from 9999 documents. This corpus is just a list of words with their frequencies, which is inconvenient for word embedding and other deep learning applications. The Contemporary Amharic Corpus (CACO) [21] is another corpus crawled from different sources. We checked it and obtained about 21 million tokens from 25,000 documents in this corpus. As we can see in Table 1, the WIC is too small and the Crúbadán is just a list of words and thus inconvenient for training quality Amharic word embeddings; the remaining two are not sufficient for the data-hungry word embedding training. Of course, the POS-tagged corpora are not directly usable for this purpose. To fill these gaps, we created our own general-purpose Amharic corpus (GPAC (https://github.com/Fanpoliti/GPAC)), collected from a variety of sources. This version of GPAC includes about 121 million documents and more than 40 million tokens.

Data Collection We collected data from diversified sources and prepared a general-purpose Amharic corpus (GPAC). There are two objectives for preparing this corpus. First, it will be used as a general resource for future NLP research and tool development projects for the "low-resource" language Amharic. Second, added to the other corpora, it will be used to create a good-quality Amharic word embedding, which itself has two objectives. As part of this fake news detection work, it is the backbone of the embedding layer. Secondly, it is a vital resource in many NLP applications and others.

Data Processing The preprocessing of the documents involves spelling correction, normalization of punctuation marks, and sentence extraction from the documents for the purpose of randomizing them. Extracting each statement from individual documents and randomizing them helps make the corpus publicly available for researchers and tool developers without affecting the copyrights, if any. Different styles of punctuation marks have been used in the documents or articles. For quotation marks, different representations such as " ", " ", ‹‹ ››, ' ', ' ', or « » have been used. We normalized all types of double quotes by " ", and all single quotes by ' '. Other punctuation marks were normalized as follows: full stops (like :: and ፡፡) by ።, hyphens (like :- and ፡-) by ፦, and commas (like ፥ and ÷) by ፣. Table 2 summarizes the various multi-domain data sources used to build the corpus.
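As one concrete illustration of this normalization step, a minimal sketch is shown below; the exact target quotation marks, the function name, and the sample sentence are assumptions, since the authors' preprocessing script is not reproduced in the text.

```python
# Minimal sketch of the punctuation normalization described above.
# The Ethiopic targets follow the substitutions listed in the text; the
# quotation-mark targets are assumed, and single quotes would be handled analogously.
PUNCT_MAP = {
    "፡፡": "።", "::": "።",        # full-stop variants -> Ethiopic full stop
    "፡-": "፦", ":-": "፦",        # hyphen/dash variants -> Ethiopic preface colon
    "፥": "፣", "÷": "፣",          # comma variants -> Ethiopic comma
    "‹‹": "\u201C", "››": "\u201D",  # double-quote variants -> " "
    "«": "\u201C", "»": "\u201D",
}

def normalize_punct(text: str) -> str:
    """Apply the substitution table to a raw sentence."""
    for src, dst in PUNCT_MAP.items():
        text = text.replace(src, dst)
    return text

print(normalize_punct("ሰላም፡፡ «እንዴት ነህ፥» አለ::"))  # illustrative input only
```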
Amharic Fasttext Word Embedding (AMFTWE) Word embeddings have been used to represent the features for traditional classifiers, or as initializations in deep neural networks. Word embeddings are real-valued representations of text that embed both the semantic and syntactic meanings obtained from an unlabeled large corpus, and they are perhaps one of the key advances behind the remarkable performance of deep learning methods in challenging NLP (natural language processing) problems, such as content-based fake news detection [6,14,22]. They are widely used in NLP tasks, such as sentiment analysis [23], dependency parsing [24], machine translation [25], and fake news detection [1,[9][10][11][12][13][14][15]. Considering the difficulty of the fake news detection problem, fake news detection methods using deep learning can benefit from good-quality word embeddings. Publicly available models, which are pre-trained on large amounts of data, have become a standard tool for many NLP applications, but they are mostly available for the English language. Word embeddings for "low-resource" languages are absent or very limited. For the Amharic language, a fasttext-based word embedding (cc_am_300) was trained by [26], using 300 dimensions. However, the number of word vectors is limited and it also contains uncleaned English tokens. The distributional hypothesis used in [27][28][29] holds that the meaning of a word is captured by the contexts in which it appears, and it is exploited to learn word embeddings. Thus, the quality of the word vectors directly depends on the amount and quality of data they were trained on. Based on this fact, in this work we introduce a high-quality Amharic fasttext word embedding (AMFTWE (https://github.com/Fanpoliti/AMFTWE)) trained on a huge corpus (GPAC_CACO_WIC_am131516) obtained by merging and deduplicating four corpora (discussed in Section 2), namely GPAC, am131516, WIC, and CACO, using a fasttext model with sub-word information [30]. Table 3 illustrates the architecture of the word embedding. As the quality of word embeddings directly depends on the amount and quality of the data used, the AMFTWE is of high quality. This is manifested in the superior performance of our fake news detection model when it uses AMFTWE compared with cc_am_300 [26]. We chose fasttext because Amharic is a morphologically rich language, and it is possible to improve the vector representations for morphologically rich languages by using character-level information [30]. We evaluated AMFTWE using an extrinsic evaluation. In an extrinsic word embedding evaluation, we use word embeddings as the input features to a downstream task, in our case fake news detection, and measure the changes in performance metrics specific to that task [31][32][33][34]. For comparison purposes, we use the only available Amharic word embedding, presented in [26]. This fake news detection, task-oriented evaluation, as presented in Section 5, shows that AMFTWE is a quality word embedding.
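A minimal sketch of how such an embedding could be trained with the fasttext Python library is shown below; the corpus file name, the non-dimension hyperparameters, and the probe word are illustrative assumptions rather than the authors' exact training configuration.

```python
# Minimal sketch: training a skip-gram fasttext embedding with sub-word
# information on the merged corpus. File name, epochs, threads, and the n-gram
# range are assumptions; only the dimensions (50/100/200/300) come from the text.
import fasttext

model = fasttext.train_unsupervised(
    "gpac_caco_wic_am131516.txt",  # hypothetical path to the merged, deduplicated corpus
    model="skipgram",
    dim=300,                       # also trained with 50, 100, and 200
    minn=3, maxn=6,                # character n-gram range for sub-word vectors
    epoch=5,
    thread=8,
)
model.save_model("amftwe_300.bin")
# Sub-word information lets the model produce vectors even for unseen word forms:
print(model.get_nearest_neighbors("ኢትዮጵያ"))  # probe word chosen for illustration
```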
The objective of preparing AMFTWE is not only for the consumption of this paper; it is intended to be a valuable resource for future computational linguistics research. For this reason, we will make it publicly available through our GitHub link (https://github.com/Fanpoliti/AMFTWE), with various dimensions and file formats (as shown in Table 3).

ETH_FAKE: A Novel Amharic Fake News Dataset The fake news problem is a recent phenomenon and already a research issue, although still relatively less explored [35]. Even though research studies have been done, the scarcity of standard datasets is a common issue raised by many researchers. Deep learning-based fake news detection has been shown to be effective [6,22]. However, the data-hungry nature of this approach and the absence of sufficient datasets have made the research outcomes limited. Even the efforts made in the preparation of fake news detection datasets have focused on the English language. Fake news detection in "under-resourced" languages, such as Amharic, is difficult to do because of the absence of a dataset. Though Amharic is a widely used language, as we have discussed in Section 2, and the impact of fake news in the regions using the language is a big concern, both to the government and to society, there has not been any fake news detection research done for the Amharic language and there is no Amharic fake news detection dataset. This critical problem motivated us to do fake news detection research and prepare an Amharic fake news detection dataset. We created the first Amharic fake news detection dataset with fine-grained labels and named it ETH_FAKE (https://github.com/Fanpoliti/ETH_FAKE). ETH_FAKE consists of 3417 real news articles and 3417 fake news articles gathered from Amharic Facebook pages and online newspapers, amounting to a total of 6834 articles. Table 4 summarizes the architecture of the ETH_FAKE dataset. We discuss the data collection and preprocessing of the data in Sections 4.1 and 4.2.

Data Collection As there was no existing Amharic fake news detection dataset, it was compulsory to collect data from scratch. Obtaining well-balanced real and fake pieces of Amharic news is not an easy task; getting the fake articles in particular was tiresome. Both real news and fake news articles were obtained from Facebook and two well-known Ethiopian private newspapers, Reporter and Addis Admass. Reporter (Amharic: ሪፖርተር) and Addis Admass (Amharic: አዲስ አድማስ) are private newspapers published in Addis Ababa, Ethiopia. Archives of the online Amharic versions of these newspapers were scraped using a Python script. Even though these newspapers are presumed to broadcast real news, we fact-checked the news pieces by employing four senior journalists and a linguist. Articles collected from Facebook passed through the same procedure to check whether they contain factual news. Getting the fake news group was the most demanding task. As we could not find dedicated Amharic fake news sources, we were required to collect them piece by piece from scratch. We collected the fake news articles from Facebook and the aforementioned online newspapers. After identifying check-worthy Amharic Facebook pages, groups, and profiles, we scraped the pages and fact-checked them manually using a group of senior journalists and a linguist.
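For illustration, the sketch below shows one way the newspaper-archive scraping step mentioned above could be implemented with requests and BeautifulSoup; the URL, CSS selectors, and output handling are placeholders, not the authors' actual script.

```python
# Minimal sketch of archive scraping with requests + BeautifulSoup.
# ARCHIVE_URL and the CSS selectors are placeholders; a real crawler should
# respect each site's terms of use and robots.txt when collecting articles.
import requests
from bs4 import BeautifulSoup

ARCHIVE_URL = "https://example.com/amharic/archive"   # hypothetical archive page

resp = requests.get(ARCHIVE_URL, timeout=30)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

articles = []
for link in soup.select("a.article-title"):            # hypothetical selector
    page = requests.get(link["href"], timeout=30)
    body = BeautifulSoup(page.text, "html.parser").select_one("div.article-body")
    if body is not None:
        articles.append(body.get_text(" ", strip=True))

print(f"collected {len(articles)} candidate articles for fact checking")
```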
Ridiculously false statements, such as claims of the bombardment of an existing facility, bridge, or dam, or contradictions of general truth, were easily picked as FAKE, whereas other contents were analyzed thoroughly. A FAKE label is attached to an article if the content reports completely false information or most of the content is verifiably false. Both the real news and the fake news come from multiple domains such as sport, politics, arts, education, religion, economics, and history. Even though most of the fake news comes from Facebook, we tried to balance the domains of the real and fake sources.

Preprocessing The absence of well-developed tools, unlike for English and other languages, makes Amharic text preprocessing difficult. It is customary to find multiple languages intermingled in web-scraped texts. We cleaned out mixed English text from the news articles.

Evaluation Metrics We evaluated both the word embedding and the fake news detection model. For the fake news detection model, we used accuracy, precision, recall, and F1 score as the evaluation metrics. For the evaluation of the word embeddings, we used the extrinsic evaluation technique, which evaluates word embeddings using a specific task and its metrics [31][32][33][34].
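A minimal sketch of how these metrics can be computed for the validation split is given below; the label arrays are placeholders standing in for the real labels and the model's thresholded predictions.

```python
# Minimal sketch: accuracy, precision, recall, and F1 on the validation split.
# y_true / y_pred are placeholder arrays (1 = fake, 0 = real), not real results.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 1]

acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(f"accuracy={acc:.3f}  precision={prec:.3f}  recall={rec:.3f}  f1={f1:.3f}")
```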
Our word embedding was evaluated against an existing pre-trained word embedding [26] using the same task, fake news detection.

Experimental Setup We wrote both the main project code and the data-scraping script in Python 3, using the TensorFlow r1.10, NumPy, and Keras libraries. Figure 1 depicts the Amharic fake news detection model based on Convolutional Neural Networks (CNNs). Since CNNs have been proven to show superior performance in text classification tasks, specifically in content-based fake news detection [6,14,22], we used them as the model-building method. We present specific details as follows. The fake news detection dataset was preprocessed and split for training and validation in an 80/20 ratio (80% of it for the training set and the remaining 20% for the validation set).
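A minimal Keras sketch of the classifier described in this and the following paragraph is given below; it is an illustrative reconstruction from the prose, not the authors' released code, and the embedding matrix, epoch count, and variable names are assumptions.

```python
# Minimal sketch of the described classifier: frozen AMFTWE embeddings, a
# position-wise Dense(128, relu), GlobalMaxPooling1D, Dense(128, relu), and a
# sigmoid output, trained with RMSprop and binary cross-entropy.
# embedding_matrix, x_train/y_train, and the epoch count are placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE, SEQ_LEN, EMBED_DIM = 10_000, 5_000, 300   # values reported in the text

embedding_matrix = np.zeros((VOCAB_SIZE, EMBED_DIM), dtype="float32")  # fill from AMFTWE

model = keras.Sequential([
    keras.Input(shape=(SEQ_LEN,)),
    layers.Embedding(
        VOCAB_SIZE, EMBED_DIM,
        embeddings_initializer=keras.initializers.Constant(embedding_matrix),
        trainable=False,
    ),
    layers.Dense(128, activation="relu"),   # applied position-wise to the embeddings
    layers.GlobalMaxPooling1D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # 1 = fake, 0 = real
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_split=0.2, epochs=10)  # 80/20 split
```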
We used an embedding dimension of various sizes, with 10,000 unique tokens and a sequence length of 5000, post-padded with zeros. The output of the embedding layer was fed into a dense network of 128 neurons with the ReLU (Rectified Linear Unit) activation function. Then this output was passed into a one-dimensional GlobalMaxPooling layer. The output of the GlobalMaxPooling layer was again fed into a dense network of 128 neurons with the ReLU activation function, whose output was finally passed into a one-dimensional dense network with a sigmoid activation function. RMSprop was used as the optimization technique and binary cross-entropy as the loss function. We used our word embedding (AMFTWE) with the dimensions 50, 100, 200, and 300 to record the performance of the fake news detection model in different dimensions. For the comparison of the two word embeddings, AMFTWE and cc_am_300, we set up a separate experiment with 300 dimensions, because the pre-trained word embedding (cc_am_300 [26]) is available in 300 dimensions only.

Results and Discussion As the experimental results depicted in Table 5 show, the Amharic fake news detection model performed very well. The model scored a validation accuracy above 99% while using the 300- and 200-dimension embeddings. This good performance might be attributed to both the fake news detection model and the quality of the word embedding. We could not find a content-based fake news detection work for the Amharic language for comparison. We recorded a higher performance of our model when it uses the Amharic fasttext word embedding AMFTWE than when it uses the existing Amharic word embedding cc_am_300 presented in [26]. Since the evaluation of cc_am_300 and AMFTWE was made in the same experimental setup, we can say that the higher score of the model using AMFTWE is due to the relatively higher quality of AMFTWE. Hence, AMFTWE will be a valuable resource for future related research. Regarding the choice of dimensions, higher dimensions of AMFTWE, as expected, made the model perform better than the lower dimensions. As the results are close, we may opt to use either the 200-dimensional or the 300-dimensional pre-trained word embedding, based on our memory and storage availability.

Conclusions In this paper, we have studied Amharic fake news detection using deep learning and news content, accompanied by the preparation of several computational linguistic resources for this "low-resource" African language. The lack of Amharic fake news detection research, especially due to the lack of both a fake news dataset and a good Amharic word embedding, as well as limitations in the existing Amharic corpora, motivated us to contribute our share to fill these gaps. Together with the Amharic fake news detection model, we contributed several resources of paramount importance for this work and future research. We created ETH_FAKE: a novel Amharic fake news detection dataset with fine-grained labels, which was collected from various multi-domain sources. Considering the lack of a quality Amharic word embedding, we prepared AMFTWE: an Amharic fasttext word embedding with sub-word information. GPAC, a general-purpose Amharic corpus, was the other contribution of this work. We used GPAC merged with other publicly available corpora to train AMFTWE, which, in turn, was used to initialize the embedding layer of our fake news detection model. Our fake news detection model performed very well using both word embeddings, cc_am_300 and AMFTWE.
However, it exhibited higher performance when AMFTWE was used compared to cc_am_300, which could be attributed to the quality of our word embedding, trained on a relatively huge corpus. As deep learning methods require more data, this work may be further improved by increasing the size of ETH_FAKE and GPAC. On the other hand, using other word embedding algorithms, such as BERT (Bidirectional Encoder Representations from Transformers), could help train a word embedding possibly better than AMFTWE, provided the data-hungry nature of BERT is satisfied. However, crafting an Amharic fake news dataset and obtaining a large number of Amharic corpora will be challenging.
9,982
2021-01-07T00:00:00.000
[ "Computer Science" ]
Comparative genome analysis of non-toxigenic non-O1 versus toxigenic O1 Vibrio cholerae Pathogenic strains of Vibrio cholerae are responsible for endemic and pandemic outbreaks of the disease cholera. The complete toxigenic mechanisms underlying virulence in Vibrio strains are poorly understood. The hypothesis of this work was that virulent versus non-virulent strains of V. cholerae harbor distinctive genomic elements that encode virulence. The purpose of this study was to elucidate genomic differences between the O1 serotypes and non-O1 V. cholerae PS15, a non-toxigenic strain, in order to identify novel genes potentially responsible for virulence. In this study, we compared the whole genome of the non-O1 PS15 strain to the whole genomes of toxigenic serotypes at the phylogenetic level, and found that the PS15 genome was distantly related to those of toxigenic V. cholerae. Thus we focused on a detailed gene comparison between PS15 and the distantly related O1 V. cholerae N16961. Based on sequence alignment we tentatively assigned chromosome numbers 1 and 2 to elements within the genome of non-O1 V. cholerae PS15. Further, we found that PS15 and O1 V. cholerae N16961 shared 98% identity and 766 genes, but of the genes present in N16961 that were missing in the non-O1 V. cholerae PS15 genome, 56 were predicted to encode not only virulence-related functions (colonization, antimicrobial resistance, and regulation of persister cells) but also functions involved in the metabolic biosynthesis of lipids, nucleosides and sulfur compounds. Additionally, we found 113 genes unique to PS15 that were predicted to encode other properties related to virulence, disease, defense, membrane transport, and DNA metabolism. Here, we identified distinctive and novel genomic elements between O1 and non-O1 V. cholerae genomes as potential virulence factors and, thus, targets for future therapeutics. Modulation of such novel targets may eventually enhance eradication efforts of the endemic and pandemic disease cholera in afflicted nations. Authors' contributions: MM, PK, SK, EG, JTF, MI, ARD, SRT, MB, GH, IEL, AS, FDS, JM, MFV (research concept and design; collection and/or assembly of data).

Introduction Cholera is an infectious disease characterized by profuse watery diarrhea and vomiting in humans, and the causative agent is Vibrio cholerae, a Gram-negative, comma-shaped, facultative anaerobic bacterium [1]. V. cholerae includes both pathogenic and nonpathogenic strains, and the bacteria responsible for pandemic outbreaks secrete the cholera toxin [2]. Since 1817, seven pandemics of cholera have been recorded. Cholera is a major public health concern because the disease can exhibit significant mortality if left untreated [3,4]. In the past 200 years, cholera has resulted in millions of deaths due to its ability to spread rapidly within populations, and it has been capable of contaminating rivers and estuaries [5]. The most recent outbreak of V. cholerae was recorded in Southeast Asia, and it quickly spread across the globe as the seventh pandemic [6]. In 2010 alone, 604,634 cases of cholera were reported in Haiti, raising the death toll to 7,436 in the first two years [7]. The genomes of several pathogenic V. cholerae strains encode proteins that are directly or indirectly responsible for virulence. In many parts of the world, the O serogroups of V. cholerae are associated with diarrhea [8]. The most common mode of transmission for this bacterium is through the consumption of feces-contaminated water, fish or crustaceans [9].
In addition to rehydration therapy, the first-line antimicrobial agent used against cholera is doxycycline, prescribed for a period of 1-3 days in order to reduce the severity of the symptoms [10,11]. Other antimicrobials which have been demonstrated to be effective in humans include cotrimoxazole, erythromycin, tetracycline, chloramphenicol, furazolidone and norfloxacin [11,12]. Unfortunately, widespread use and misuse of these and other antimicrobials have resulted in the selection of multidrug-resistant bacterial variants [13] which potentially compromise chemotherapeutic efficacy towards cholera [14]. The different mechanisms by which bacteria show resistance to antimicrobial agents include (a) biofilm production, (b) drug inactivation, (c) ribosome protection, (d) reduced permeability, (e) target alteration [15] and (f) active efflux [16]. One of the active efflux pumps of V. cholerae is EmrD-3, which belongs to the major facilitator superfamily (MFS) and is a drug/H+ antiporter with 12 transmembrane domains [17]. Another efflux pump encoded in the genome of V. cholerae is VceB [18]. Drug efflux pumps are integral membrane transporters that actively efflux toxic compounds and antibiotics out of the bacterial cell and confer resistance against multiple antibacterial agents [19][20][21]. The presence of the cholera toxin (CT), the Vibrio pathogenicity island (VPI), and the toxin co-regulated pilus (TCP) within the O1 serogroups of V. cholerae makes these strains more virulent and pandemic than their non-O1 counterparts [22]. A significant basis for their pathogenicity is attributed to the cholera toxin-encoding genes. Other genes important for enhancing virulence in these organisms are ace, psh, PIIICTX, zot and cep, which are implicated in phage morphogenesis [5,23,24]. The Vibrio pathogenicity island-1 (VPI-1) confers toxin release, biofilm formation, and attachment to disease vectors for transmission to humans, and it encodes the receptor of CTX. The Vibrio pathogenicity island-2 (VPI-2) helps the cholera toxin to gain entry into the intestinal epithelium by unmasking GM1 gangliosides in the lining of the human intestine. The absence of VPI-1 and VPI-2 in non-O1 serogroups of V. cholerae makes them less pathogenic than the O1 serogroups [25]. Even though non-O1 V. cholerae strains carry certain virulence genes, the severity of disease is less compared to O1/O139 V. cholerae [8]. The non-O1 serogroups of V. cholerae are known as the non-agglutinating Vibrios (NAGs) because they lack the genes coding for CT and TCP [26,27]. The presence of multidrug resistance (MDR) transporters confers resistance to ampicillin, chloramphenicol and tetracycline in non-O1 and non-O139 serogroups of V. cholerae [14]. The ABC transporters present in PS15 V. cholerae are predicted to transport phosphate molecules across the periplasm and may be essential for protein synthesis, amino acid exchange, and transport of fatty acids [28]. We previously determined the genome nucleotide sequence of the non-O1 non-toxigenic V. cholerae PS15 (GenBank Accession No. AIJR00000000) [28]. Here, we compared non-O1 PS15 with the genetic information of virulent strains. The genome of V. cholerae PS15 is composed of 3,910,387 base pairs (bp) organized into 3,512 open reading frames with a G+C content of 47.55% [28]. We chose to focus our comparative analysis of V. cholerae PS15 [29] on V. cholerae El Tor N16961 because this latter genome has been completely sequenced [30].
N16961 is made up of 4,033,460 base pairs (bp) organized and distributed into two chromosomes, with a G+C content of 46.9% in chromosome 1 and 47.7% in chromosome 2 [30]. Even though the non-O1 V. cholerae bacterium possesses some virulence genes responsible for causing gastrointestinal infections, wound infections, septicemia and cellulitis in humans, little is known about the mechanisms that confer virulence in this microorganism. The aim of this work was to identify differences in the genetic elements between the genomes of the virulent N16961 and non-virulent PS15 strains of V. cholerae in order to uncover novel virulence mechanisms that may eventually serve as potential therapeutic targets, for the ultimate purpose of fostering conditions that reduce dissemination of disease-causing virulent serotypes of V. cholerae through populations.

Comparison of non-O1 PS15 and O1 N16961 Vibrio cholerae genomes using RAST and UniProt A function-based genome comparison was performed between a non-toxigenic, non-O1 V. cholerae PS15 environmental isolate (courtesy of Dr. Charles Kaysner) from sediment sampled in Puget Sound, WA [28,31] and O1 V. cholerae N16961 [30], using the RAST (Rapid Annotation using Subsystem Technology) database and Seed Viewer to predict protein function [32], focusing on comparison of categories and subsystem groupings pertaining to virulence, disease, defense, membrane transport, DNA metabolism, regulons, dormancy, sporulation, phages, prophages, transposable elements, and plasmids for the genomes of both the O1 and non-O1 V. cholerae microorganisms. The open reading frames (genes) encoding functional roles associated with a subsystem are referred to as functioning parts, and a subsystem is defined as a set of predicted abstract functional roles [32]. The screening of predicted proteins encoded by elements of both genomes was performed with BLAST analysis of the amino acid sequences using UniProt [33].

Phylogenetic analysis The non-O1 V. cholerae PS15 genome sequence [28] (GenBank Accession no. AIJR00000000) was analyzed using BLAST [34] in order to generate phylogenetic trees harboring genomes of closely related organisms and virulence factors of the O1 serotypes. BLAST pairwise alignment using the Neighbor Joining tree method [35] was used to compare the genome of PS15 to other complete Vibrio genome sequences in the database, and the result is represented in Figure 1.

CGView The CGView server was used for comparative genome analysis [36]. A graphical circular genome map was constructed using CGView by BLAST analysis of the DNA sequence of V. cholerae non-O1 PS15 (3,910,387 base pairs) against the complete DNA sequence of V. cholerae El Tor N16961 (4,033,460 base pairs) [28,30].

Results The genome of non-O1 V. cholerae PS15 is distantly related to O1 V. cholerae genomes We previously determined the whole genome sequence of a non-toxigenic, non-O1 V. cholerae isolate from Puget Sound, strain PS15 [28]. It had been shown that genomes of toxigenic O1 V. cholerae bacteria were highly related [30], possibly implying that non-O1 genomes would be more distantly related. We tested this prediction by comparing non-O1 V. cholerae PS15 with other microorganisms by constructing a phylogenetic tree using BLAST pairwise alignment, in order to represent the genomes that are most closely related to V. cholerae non-O1 PS15 and to establish the relatedness of PS15 to these microorganisms (Figure 1). Although the non-O1 V. cholerae PS15 genome sequence is most closely related to those of V.
cholerae LMA 3984-4, O395, O1 strains 2010EL-1786, MJ-1236, O1 biovar El Tor strain N16961, IEC224, and M66-2, the non-O1 V. cholerae PS15 strain is, nonetheless, the most distantly related member within this cluster. Tentative chromosome assignment in non-toxigenic, non-O1 V. cholerae PS15 Since the two chromosomes of the toxigenic O1 V. cholerae strain N16961 were elucidated [30], we predicted that genomic sequence alignment with the non-toxigenic, non-O1 V. cholerae strain PS15 would implicate chromosome assignment in this bacterium as well. A circular genome representation was generated using the CGView server to plot the structural genome arrangement with BLAST analysis of the non-O1 V. cholerae PS15 genome with that of the O1 V. cholerae N16961 using their respective genomic nucleotide sequences in a FASTA format (Figure 2). Using the genome sequence data from V. cholerae N16961 to compare with the genome of V. cholerae PS15, chromosomes 1 and 2 were implicated for the non-toxigenic PS15 strain and are shown in Figure 2. The majority of genes in the O1 N16961 and non-O1 PS15 V. cholerae genomes are shared We have shown above that although the non-O1 V. cholerae PS15 genome is distantly related to the genomes of toxigenic O1 V. cholerae, the PS15 genome is still closely related to genomes of the Vibrio genus. This implies a striking similarity between the non-O1 and O1 genomes, specifically regarding the commonalities within the gene space. To test this, we used RAST Seed Viewer and UniProt to compare the genome sequences of O1 V. cholerae N16961 and non-O1 V. cholerae PS15, the general features of which are shown in Table 1. The O1 and non-O1 V. cholerae genomes shared 766 genes (open reading frames) that are predicted to code for proteins within functional categories pertaining to virulence, disease, defense, membrane transport, phages, prophages, transposable elements, plasmids, DNA metabolism, dormancy, sporulation and regulons. Interestingly, when compared to the N16961 genome, the V. cholerae PS15 genome appears to be truncated sporadically throughout by approximately 120 kbp ( Table 1 and Figure 2). In Table 2 we listed 58 of 766 genes that share 98% identity between both genomes. The remaining genes are listed in Supplement Table S1. Even though non-O1 V. cholerae PS15 is believed to be nonpathogenic compared to the known virulent O1 V. cholerae N16961 strain, their genomes shared 90 genes in common that code for functions pertaining to virulence, disease and defense. Some of these genes included accessory colonization factor (acfD), TCP pilus virulence regulatory protein (tcpN), toxin coregulated pilus biosynthesis protein E (tcpE), TCP pilus virulence regulatory protein (toxT) and accessory colonization factor (acfC). In addition to these virulence-associated genes, both genomes shared 287 genes encoding functional properties in the DNA metabolism category, 8 genes encoding proteins for dormancy and sporulation, 366 genes encoding membrane transporters, 12 genes in the categories of phages, prophages, transposable elements and plasmids, and 3 genes pertaining to regulons. Among these shared genomic elements encoding membrane transporters are genes known to express multidrug resistance efflux pumps, including AcrA of the RND superfamily [37], SugE of the SMR superfamily [38], and NorM of the MATE superfamily [39]. Genes present in O1 V. cholerae N16961 genome and absent in the non-O1 PS15 genome The pathogenicity of the O1 V. 
cholerae serotypes suggests that they harbor genomic elements that confer virulence. For instance, the cholera toxin of toxigenic V. cholerae strains is the primary virulence factor in endemic and pandemic cholera cases [40]. Thus, in order to establish the association between presence of virulence-encoding genomic elements and pathogenicity, we compared the functional determinants between both PS15 and N16961 genomes. Our analysis revealed that of the 619 genes absent in the non-O1 V. cholerae PS15 genome [29], 56 of these genes, when compared to O1 V. cholerae N16961, are in the categories including virulence, disease and defense, membrane transport, DNA metabolism, dormancy and sporulation ( Table 3). The virulence genes which were present in O1 serotypes but largely absent in the non-O1 strains, including the PS15 strain, include the accessory cholera enterotoxin (ace), the cholera enterotoxin subunit B (ctxB), the cholera enterotoxin subunit A (ctxA), and the zona occludens toxin (zot). Comparison of the predicted proteins encoded of both PS15 and N16961 genomes using UniProt revealed the absence of other virulence genes in PS15, which include genes predicted to encode accessory colonization factors A and B (acfA and acfB), and the genes encoding VceA and VceB proteins shown to confer resistance to antimicrobial agents ( Table 3) [41]. Notably, the gene demonstrated to confer multidrug resistance and encoding a drug efflux pump, EmrD-3, of the MFS is present in N16961 but absent from the non-O1 V. cholerae PS15 genome [17,21]. A phylogenetic tree, which was generated by BLAST for bacterial genomes that share the cholera toxin, indicated the absence of the cholera toxin gene in the non-O1 V. cholerae PS15 bacterium (Figure 3). The most closely-related microorganisms that shared the DNA encoding the cholera toxin include V. cholerae IEC224, O1 biovar El Tor strain N16961, O395, MJ-1236 and the O1 strain 2010EL-1786. Other genes that were absent in non-O1 V. cholerae genome but present in O1, include genes that encode glycerolipid and glycerophospholipid metabolism, and genes that code for VPI [25] ( Table 3). Additional genes that are absent in non-O1 V. cholerae PS15 include those coding for the Rst operon essential for the synthesis of phage related replication protein (RstA), phage related integrase (RstB), phage related antirepressor (RstC), phage related transcriptional repressor (RstR) [24], and sulfur metabolism. Other genes that are found in O1 V. cholerae but absent in non-O1 include those coding for TsaE, a protein required for the synthesis of threonylcarbamoyladenosine in the presence of tRNA [42]. Genes present in the non-O1 V. cholerae PS15 genome and absent in the O1 N16961 genome Because the non-O1 V. cholerae PS15 environmental isolate is considered to be nontoxigenic [31,43], this implies that genes unique to this microorganism, compared to the toxigenic N16961 bacterium, possibly encode non-virulent functions. To test this hypothesis, we performed a function based genome comparison using RAST and UniProt for PS15 and N16961. This comparative analysis revealed that 113 genes were excluded in N16961 but present within the PS15 genome ( Table 4). 
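Presence/absence comparisons of this kind reduce, at their simplest, to set operations over the annotated gene inventories of the two genomes. The sketch below assumes two hypothetical tab-separated annotation exports (one per genome, with a gene identifier in the first column); it is a simplified illustration of the comparison, not the RAST/UniProt workflow used to build Tables 3 and 4.

```python
# Minimal sketch: genes annotated in one genome but absent from the other, given two
# hypothetical annotation tables (tab-separated, gene identifier in column 1).
import csv

def gene_set(path):
    """Collect gene identifiers (first column) from a tab-separated annotation table."""
    genes = set()
    with open(path, newline="") as handle:
        for row in csv.reader(handle, delimiter="\t"):
            if row and not row[0].startswith("#"):
                genes.add(row[0].strip())
    return genes

n16961 = gene_set("N16961_annotations.tsv")  # hypothetical export for the O1 strain
ps15 = gene_set("PS15_annotations.tsv")      # hypothetical export for the non-O1 strain

shared = n16961 & ps15
only_in_n16961 = n16961 - ps15  # candidates analogous to Table 3
only_in_ps15 = ps15 - n16961    # candidates analogous to Table 4

print(f"shared: {len(shared)}  N16961-only: {len(only_in_n16961)}  PS15-only: {len(only_in_ps15)}")
```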
The three known genes (characterized) that are present in PS15, but absent in N16961, include the oligopeptide ABC transporter called periplasmic oligopeptide-binding protein (OppA) [44], a proteinexport membrane protein (SecF) [45], and the UvrABC system protein A (uvrA) [46], all of which belong to the membrane transport category. Remaining genes annotated as uncharacterized hypothetical proteins as per UniProt are surprisingly predicted to code for proteins involved in functions related to virulence, pathogenesis, defense, solute transport, and DNA metabolism ( Table 4). Conclusions Upon comparison of the non-O1 V. cholerae PS15 genome, a non-toxigenic strain, to that of an O1 V. cholerae N16961, a toxigenic strain, we found that of the 619 missing genes, 56 of these missing genomic elements encode dormancy, sporulation, ribosome modulation in persister cells, lipid metabolism, phage infection, nucleoside metabolism, and sulfur metabolism which in turn is essential for biosynthesis of amino acids, vitamins and prosthetic groups [43]. As non-O1 V. cholerae lacks genes coding for metabolism of sulfur, the non-O1 serotype is predicted to be unable to convert naturally available sulfur to sulfide, which could then be incorporated into various sulfur containing metabolites. Sulfur is critical for the biosynthesis of many important compounds like amino acids (cysteine and methionine), vitamins (biotin, thiamin), and prosthetic groups (Fe-S clusters) [43]. These genetic elements and their putative gene products represent novel and promising targets for modulation of gene expression or activity and therapeutic efforts [47], in order to effectively reduce conditions that foster virulence and dissemination of V. cholerae pathogens through populations. These determinants, therefore, clearly also warrant further studies in order to elucidate the complete molecular mechanisms of pathogenesis in cholera infections. Not surprisingly, also among the 56 missing genes in the non-O1 PS15 genome are those that are known to confer virulence, such as the cholera toxin [40], colonization factors [48], and antimicrobial resistance mechanisms [16]. We thus confirm that the genes encoding the cholera toxin are absent from the genome of the non-toxigenic V. cholerae PS15. We confirm, however, the presence of other genes predicted to encode distinct toxins and colonization factors, as previously shown for the non-O1 V. cholerae strain NRT36S [49]. This latter study and our findings here are consistent with previous work demonstrating that aquatic environments are reservoirs for O1 and non-O1 V. cholerae [50], predicting that such environments allow genetic exchange between unrelated strains. In order to gain valuable insights into enhancing chemotherapeutic efficacy against cholera, it is imperative to study and gain understanding into the modes of action of the toxicity-inducing factors combined with other antibacterial resistance factors in toxigenic V. cholerae [51]. Interestingly, we found that the genome of the nontoxigenic V. cholerae PS15 strain harbors genes absent from the genome of its toxigenic counterpart, N16961. Such determinants mainly include still uncharacterized genetic elements that are predicted to encode proteins that confer virulence, disease, defense, membrane solute transport and DNA metabolism, suggesting that PS15 may be pathogenic to organisms excluding humans, perhaps in environments such as estuary waters [52,53]. 
Among the genetic determinants unique to PS15 that have been experimentally characterized include OppA, an oligopeptide primary active transporter [44], and SecF, a protein exporter [12]. We propose that these unique genetic elements represent good targets for future development of new therapies against V. cholerae infections in animals other than humans. The genome of non-O1 V. cholerae PS15 shares >97% identity with El Tor O1 biovar V. cholerae strain N16961, as per BLAST analysis at the nucleotide level. Based on the alignment of the non-O1 PS15 genome with that of O1 N16961, chromosomes 1 and 2 were assigned to the PS15 genome (Figure 2). This tentative chromosome assignment will require confirmation with additional experimental work. Even though the genomes of both strains are highly similar to each other, the non-O1 PS15 microorganism is considered to be non-pathogenic, compared to the O1 N16961 strain, possibly due to the absence of the cholera toxin in PS15, which is responsible for endemic and pandemic diseases [54]. More recent genomic analysis, however, has demonstrated that other genetic elements are also critical for conferring pathogenesis such as genes coding for housekeeping, homeostasis, metabolism, energy generation, and antimicrobial resistance-type functions [55]. Our phylogenetic and genome comparison analyses between the toxigenic and non-toxigenic V. cholerae microorganisms support both of these contentions. Further work with additional variants, such as atypical El Tor [56], NRT36S [49], and CT-producing non-O1 strains [57], will be necessary to definitively gain a complete picture of the relationships between pathogenic versus non-pathogenic V. cholerae. Remarkably, we found that both of the toxigenic and non-toxigenic V. cholerae strains harbor a variety of genes that have previously been demonstrated to confer multidrug resistance via active drug efflux pump systems, such as AcrAB, NorM / VcmA, SugE, and VcaM [58]. All six RND transporters in V. cholerae N16961 have been studied physiologically [59], and our data showed that V. cholerae PS15 was missing only one of these pumps, called VexA. Additionally, we found a shared but uncharacterized genetic element, VC_A0083 in the toxigenic strain and OSU_1537 in the non-toxigenic strain, tentatively called multidrug resistance protein D and predicted to encode an MFS drug efflux pump. These multidrug resistance mechanisms may be important because of their potential selection and maintenance in environments containing antimicrobial agents, their genetic mobility to other microorganisms, and dissemination within populations [60][61][62][63][64]. We conclude that the study and comparison of the genomic sequences between pathogens and their non-virulent counterparts will help discover genes encoding both the classical virulence factors and those encoding novel virulence factors. Future work will focus on the study of solute transport and antibacterial resistance mechanisms of V. cholerae pathogenic strains and on the identification of novel housekeeping genes which may be equally significant in contributing towards the microorganisms' pathogenicity [17,65,66]. Supplementary Material Refer to Web version on PubMed Central for supplementary material. Included in this table are genetic elements that are absent in the non-Ol genome but present in the O1 genome, which have putative functions in virulence, disease and defense, membrane transport, DNA metabolism and dormancy and sporulation. 
In the table, the first column gives gene descriptions as per UniProt. The second and fourth columns give abbreviated gene identifiers; the third and fifth columns give accession numbers for the listed genes. * denotes proteins that have functions in virulence, disease and defense. † denotes proteins that are putative membrane transporters. § and ∥ denote proteins that have putative functions in the DNA metabolism and dormancy/sporulation categories, respectively.
Exosomes in osteoarthritis and cartilage injury: advanced development and potential therapeutic strategies Articular cartilage injury is a common clinical problem that can lead to joint dysfunction, significant pain, and secondary osteoarthritis (OA), for which major surgical procedures are often required. Exosomes, endosome-derived membrane-bound vesicles that participate in intercellular communication under both physiological and pathophysiological conditions, have attracted great interest in many fields. Recently, the significance of exosomes in the development of OA has gained increasing attention, and the therapeutic value of exosomes in cartilage repair and OA treatment has gradually been revealed. The functional differences between exosomes of different types and cellular origins are determined by their specific contents. Herein, we provide a comprehensive overview of exosomes and OA, including how exosomes participate in OA, the therapeutic value of exosomes for cartilage injury/OA, and related bioengineering strategies for future therapeutic design. Introduction The occurrence of osteoarthritis (OA) implies an imbalance between degradation and synthesis involving chondrocytes, the extracellular matrix (ECM) and subchondral bone, but the detailed molecular mechanisms remain unclear. Exosomes, as a type of extracellular vesicle (EV), play a role in tissue-tissue and cell-cell communication in homeostasis and disease, and the mechanisms of exosomal involvement in the development of OA have only recently been reported. Thus, understanding how exosomes participate in the process of OA will help identify novel approaches to OA treatment. Traditional non-surgical treatments for OA can improve symptoms but cannot restore articular cartilage or modify degenerative processes [1]. Although surgical arthroplasty results in long-term functional improvement and improves quality of life, it is suitable only for end-stage disease, and instability and infection are its most common limitations, necessitating further revision surgery. Cell-based therapy, especially with mesenchymal stem cells (MSCs), has driven rapid advances in regenerative medicine for OA/cartilage injury in recent years. However, the clinical application of stem cells raises considerable concerns, such as teratoma formation, immune rejection, batch-to-batch variability and dose-dependent effects [2][3]. The biological effects of MSCs are exerted mainly through paracrine signaling, especially via the exosomes they produce [4]. Thus, exosome-based therapy may be a promising substitute for stem cell therapy in cartilage injury/OA, offering the possibility of "cell-free" therapy. This review will focus on the biological characteristics of exosomes, and the involvement of exosomes in the pathology of OA will be discussed in detail. We will also discuss the evidence showing how exosomes can be used as a "cell-free" therapy for OA/cartilage injury, and detailed exosome-based tissue engineering strategies. Finally, we will discuss future developments in this exciting field. The biological characteristics of exosomes Exosomes are endosome-derived membrane-bound vesicles with diameters of 50-150 nm, released by cells in all living systems under physiological and pathophysiological conditions [5]. Exosomes originate from endosomes that are generated by endocytosis of the cytoplasmic membrane. 
Later, inside the cell, cargoes such as mRNA or proteins accumulate inside the endosomes, thus forming multivesicular endosomes or intraluminal vesicles [6] . After further processing, exosomes are finally released through membrane fusion [7] . At early ages, exosomes were regarded as useless cellular metabolic waste, but it has since been recognized that exosomes carry proteins, lipids, and nucleic acids, including mRNA, microRNA (miRNA), and long non-coding RNA, which play important roles in intercellular communications and cellular immune response [8] . The biogenesis and release of exosomes is a complex symphony involving a series of factors, representatively including the endosomal sorting complexes required for transport (ESCRT), ALIX (also known as programed cell death 6-interacting protein, PDCD6IP), phospholipase, vacuolar protein sorting-associated 4 (VPS4), Rab GTPase proteins, sphingomyelinase and ceramide. Recently, the inhibition effect of exosome release was reported by sustained activation of mechanistic target of rapamycin complex 1 (mTORC1) in both cells and animal models, while inhibition of mTORC1 stimulated the release of exosomes, which occurred concomitantly with autophagy [9] . The recognition of membrane receptors is the basis for the exosome-cell reaction and interactions. The activation of the receptors and subsequently lead to the activation of associated signaling pathways, subsequently the fusion of exosomes with plasma membrane or the endocytosis of exosomes occurs. Through above-mentioned internalization pathways, exosomes can either release their cargos into the recipient cells to exert their functions, or be directly degraded by lysosome for recycling [10] . Thus, the function and biological characteristic of exosomes are determined by their specific contents. Pathological related exosomes in OA Although many beneficial biological effects are based on exosomes, in OA joint, chondrocytes, synoviocytes and immune cells may also deliver pathogenic signals to each other through exosomes, and these communications break the balance of joint microenvironment and further aggravate OA. In OA tissues, an increased number of immune cells associated with pro-inflammatory cytokine expression, including tumor necrosis factor (TNF)-α, interleukin (IL)-1β, IL-6 and IL-22, was detected from OA synovial tissue [11] , while matrix metalloproteinase (MMP), is thought to be the major mediator of ECM breakdown, which causes the majority of the pathologies seen in OA [12] . Analysis the relationship between exosomes and these important pathological factors may provide insights for our understanding of OA pathology (Figure 1). Exosomes in OA pathology Exosomes in OA joint fluid were firstly analyzed by many research groups and it indeed usually links with pathological effects. Exosomes from OA joint fluid could influence the gene expression of chondrocytes negatively. Kolhe et al treated healthy articular chondrocytes with OA-derived exosomes, showing decreased anabolic genes expression and elevated expression of catabolic and inflammatory genes [13] . In addition, exosomes from OA joint fluid could activate inflammatory cells. Domenis et al found that synovial fluid-derived exosomes significantly stimulated the release of several inflammatory cytokines, chemokines and metalloproteinases by M1 macrophages [14] . 
Incubating macrophages with exosomes from synovial fluid of OA, Song et al demonstrated that the exosomes could induced proliferation and osteoclast formation without macrophage colony-stimulating factor (M-CSF) and receptor activator of nuclear factor kappa-B ligand (RANKL) [15] . Furthermore, some researches have studied the role of exosomes in the communication between chondrocytes and other cells in OA. Nakasa et al reported that when exosomes derived from chondrocytes treated with IL-1β were applied to fibroblast-like synoviocytes, there was a nearly three-fold increase in MMP-13 production as compared with exosomes derived from chondrocytes without IL-1β stimulation [16] . In turn, Kato et al found exosomes from IL-1β stimulated human synovial fibroblasts (SFB) up-regulated MMP-13 and ADAMTS-5 expression in articular chondrocytes and down-regulated collagen alpha 1 (Col2a1) and aggregated proteoglycan core protein (ACAN) in vivo [17] . In one chip-assay study, 50 miRNAs were identified in exosomes in response to IL-1βstimulated SFB compared to in non-stimulated SFB, and among them, miR-4454 and miR-199b are related to inflammatory stimulation [18] and cartilage formation [19] , respectively. Ni et al found the exosome-like vesicles from IL-1β-pretreated chondrocytes could promote mature IL-1β production of macrophages [20] . Kolhe et al reported the differential expression of miRNAs between exosomes from OA synovial fluid and normal synovial fluid [13] . They found the gender differences of the expression of exosomal miRNAs, with only one (miRNA-504) existing in both genders. Interestingly, the authors also demonstrated that female OA-specific exosomal-miRNAs from synovial fluid were estrogen-responsive and targeted toll-like receptor (TLR) signaling pathways, which might relate to the increased prevalence of OA in post-menopausal females. Furthermore, the patients with OA had lower levels of exosomal miR-193b in plasma than normal control subjects [21] . The differential expression of miRNA of exosomes from chondrocytes in the inflammatory microenvironment which related to OA has been reported recently. Exosomal miR-92a-3p expression was significantly reduced in the OA chondrocyte-secreted exosomes. Mao et al found that miR-92a-3p suppressed the activity of a reporter construct containing the 3'-UTR and directly targeted WNT5A in both MSCs and chondrocytes [22] , and WNT5A plays important role in both chondrogenic differentiation and cartilage degradation [23] . Mao et al also found another exosomal miRNA (miR-95-5p) was down-regulated in OA chondrocytes [24] . Furthermore, they demonstrated that miR-95-5p could regulate cartilage development and homoeostasis by directly targeting histone deacetylase (HDAC)2/8. HDAC2/8 tends to impede cartilage development by inhibiting the expression of cartilage-specific genes [25] . Many exosomal miRNAs are founded related to OA development, there are exosomal proteins related to OA too. Recently, Varela-Eirín et al found that overexpression of channel protein connexin43 (Cx43) in chondrocytes increased senescence and exosomal Cx43 levels [26] . In this study, OA chondrocytes showed increased levels of Cx43 within their EVs in comparison to the EVs isolated from healthy donors. The Diagnostic Value of Exosomes for OA Theoretically, the variation levels of exosomal miRNAs or proteins mentioned above have potential to become biomarkers for OA diagnose. However, the value of exosomes as a diagnostic tool of OA is still under discussion. 
Recently, Zhao et al investigated the diagnostic value of exosomes from plasma and from synovial fluid in patients with OA in distinguishing the early stage of OA from progressive stage of OA [27] . They found that in synovial fluid, the expression of exosomal lncRNA PCGEM1 was markedly higher in late-stage OA than in early-stage OA, and markedly higher in early-stage OA than normal controls, demonstrating that exosomal lncRNA PCGEM1 from synovial fluid might be a powerful indicator in distinguishing early-stage from late-stage OA. IncRNA PCGEM1 acts as a sponge lncRNA targeting miR-770 and then stimulates proliferation of osteoarthritic synoviocytes [28] . However, the general evaluation of plasma and synovial fluid exosome was not viable for identifying OA stages, and the clinical application of exosomes in the diagnosis of OA remains challenging. Exosomes with therapeutic effects for OA and cartilage injury The exosomes from mesenchymal stem cells Exosomes are one of the key secretory products of MSCs, resembling the effect of parental MSCs, and can be directly used as therapeutic agents for various disease models, such as cutaneous wound [29] , osteonecrosis of the femoral head [30] , and neurological injury [31] . Recent years, exosomes from different unmodified MSCs were reported to have exact therapeutic effects on OA and cartilage injury (Figure 2). Exosomes from different types of MSCs Cartilage regeneration from bone marrow mesenchymal stem cells (BMSCs) is the core of microfracture technology. Exosomes from BMSCs have been studied in recent years. Cosenza et al demonstrated the protective effect of exosomes from BMSCs in the collagenase-induced OA model [32] . Furthermore, they found exosomes from BMSCs could restore the expression of chondrocyte markers (type II collagen, aggrecan) while inhibiting catabolic (MMP-13, ADAMTS5) and inflammatory (iNOS) markers in OA-like chondrocytes in vitro. Zhu et al also revealed that BMSCs derived exosomes could protect chondrocytes from apoptosis and senescence [33] . Furthermore, Qi et al observed the uptake of exosomes from BMSCs by chondrocytes [34] , and exosomes from BMSCs could inhibit mitochondrialinduced apoptosis of chondrocytes in response to IL-1β, with p38, ERK, and Akt pathways involved. Exosomes from embryonic MSC (ESCs) have shown the potential of alleviating matrix degradation and promoting cartilage repair in some animal models [35][36][37] . Further in vitro, exosomes from ESCs could maintain the chondrocyte phenotype by increasing collagen type II synthesis and decreasing ADAMTS5 expression [36] . Zhang et al demonstrated that the joint repair effects of exosomes from ESCs could be attributed to adenosine activation of protein kinase B(AKT), extracellular signal-regulated kinase (ERK) and adenosine monophosphate-activated protein kinase (AMPK) signaling [37] . In addtion, they also found that the joint repair effects might be related to exosomal CD73 expression which can convert extracellular AMP to adenosine, as well as exosomal transforming growth factor-β (TGF-β) and insulin growth factor (IGF). Due to the feasibility to obtain human infrapatellar fat pad from OA patients by arthroscopy, using exosomes from adipose-derived mesenchymal stem cells (ADSCs) has gradually gain more attentions. Tofiño-Vian et al found exosomes from ADSCs could down-regulated senescence-associated β-galactosidase activity and the accumulation of γH2AX foci, and reduced the production of inflammatory mediators from OA osteoblasts [38] . 
They also reported that exosomes from ADSCs could reduce the production of inflammatory and catabolic mediators from OA chondrocytes stimulated with IL-1β [39] . The chondroprotection role could be the consequence of a lower activation of nuclear factor-κB and activator protein-1. In other study, Wu et al attributed the cartilage protection of exosomes from ASCs to the high level of exosomal miR-100-5p [40] , because the exosomal miR-100-5p could bind to the 3′-untranslated region (3'UTR) of mTOR, then significantly enhance autophagy level in OA chondrocytes via mTOR inhibition. Therapeutic contents in exosomes It is significant to figure out the therapeutic contents in exosomes. Recently, Liu et al verified that lncRNA KLF3-AS1 in human MSCs and exosomes derived from human MSCs (MSC-Exos) by qRT-PCR analysis [41] , and it might be the key molecule with therapeutic effect. In their study, treating rat chondrocytes with the MSC-Exos whose lncRNA KLF3-AS1 expression was knocked down could reverse the normal chondroprotection of MSC-Exos. The knee joint cartilage damage of rat OA model was also deteriorated by the MSC-Exos without lncRNA KLF3-AS1 expression. These data suggested that the therapeutic effect of MSC-Exos on OA is related to the newly discovered exosomal lncRNA KLF3-AS1. Comparison of the effects of exosomes More importantly, exosomes from different cell types may have different effects, and this topic is still under investigation. Zhu et al compared the effects of exosomes secreted by synovial membrane MSCs (SMMSC-Exos) and exosomes secreted by induced pluripotent stem cell-derived MSCs (iMSC-Exos) in treating OA [42] . They found both exosomes attenuated OA in the mouse OA model, but iMSC-Exos had a superior therapeutic effect compares to SMMMSC derived exosomes, and iMSC-Exos exerted a stronger effect on chondrocyte migration and proliferation in vitro. In another study, Chen et al found that exosomes derived from chondrocytes (CC-Exos) increased collagen deposition and minimized vascular ingrowth in engineered constructs, and efficiently and reproducibly developed into cartilage, while the BMSC-Exos treated tissue engineered construct was characterized with hypertrophic differentiation accompanied by vascular ingrowth [43] . In vitro, CC-Exos could stimulate cartilage progenitor cells proliferation and significantly promoted chondrogenesis-related factors at the mRNA and protein levels. Exosomes from molecular engineered cells The strategies of utilizing exosome loading technology to obtain customized drug-loaded exosomes are gradually applied in the field of OA therapy (Figure 3). There are two ways to load drugs into exosomes, one is to load drugs into the donor cell of exosomes, such as using transfection and co-incubation; the other is to load drugs into exosomes after they are secreted, such as direct mixing, which the loading efficacy is a big concern. At present, most researchers prefer to obtain the exosomes for OA therapy with high expression of miRNA or lncRNA from modified MSCs [22,[44][45][46] . Exosomal miR-92a-3p [22] , exosomal lncRNA-KLF3-AS1 [44] , exosomal miR-140-5p [45] and exosomal miR-320c [46] from transfected MSCs have been reported to have significant therapeutic effects on OA in vivo and in vitro. Furthermore, it was reported that primary chondrocytes also could be modified to serve as the donor cells of exosomes [24] . 
In that study, exosomes derived from miR-95-5p-overexpressing primary chondrocytes promoted cartilage developpment and cartilage matrix expression by directly targeting HDAC2/8 in MSCs induced to undergo chondrogenesis and chondrocytes, respectively. In addition to transfection, treating the donor cells with proper growth factors such as TGF β1 can also improve the productivity of therapeutic exosomes [47] . Tissue engineering strategies for exosomes Exosome-based tissue engineering technology representing an advanced strategy, has attracted increasing attention in many fields ( Table 1). In this section, we will focus on some representative exosome-based tissue engineering strategies ( Figure 4) and discuss their application potential in OA therapy/cartilage repair. 3D culture for exosome generation In a multicellular organism, tissue cells are highly organized in a 3-dimensional (3D) fashion and are surrounded by the extracellular matrix (ECM) [48] . Under 2-dimensional (2D) conditions, cells lack the in vivo spatial polarization and architecture, leading to changes in cellular morphology, proliferation and functionalities, such as the processing and function of EVs [49] . Therefore, for better understanding the roles of exosomes in OA development and treatment, the technology of 3D culture, has attracted increasing attention in order to improve on the inadequate reproduction of the in vivo microenvironment by 2D culture. Nanoparticles Nanoparticles can influence and assist the production and function of exosomes. Kasper et al found silica nanoparticles could decrease secretion of ICAM/E-selectin bearing exosomes/microvesicles when exposed to the inflamed endothelium [58] . Roma-Rodrigues et al found that gold nanoparticles (AuNPs) functionalized with thiolated oligonucleotides anti-RAB27A could decrease the release of exosomes due to specific gene silencing [59] . In addition, nanoparticles can also affect the sorting of exosomal cargos. Liang et al demonstrated that Sphk2 gene silencing induced by siRNA loaded nanoparticles could reduce miRNA-21 sorting into exosomes [60] . Nanoparticles can also enhance the targeting ability of exosomes. Khongkow et al reported that the surface modification of AuNPs with brain-targeted exosomes derived from genetically engineered mammalian cells enhanced their transport across the blood-brain-barrier [61] . Moreover, nanoparticles can also enhance the drug loading ability of exosomes. In order to solve the problem of low efficiency of exosomes in encapsulation of large nucleic acids, Lin et al developed a kind of hybrid nanoparticle combining exosome and liposome via simple incubation, which efficiently encapsulates large plasmids [62] . 3D biomaterials and exosome retention 3D scaffolds could be further used as working platforms for the exosomes while attempting to control the release of exosomes in the tissue repaired area [63][64] . Hydrogel has been widely used as 3D scaffold due to their unique features, such as high water content, biocompatibility, swelling behavior, and modulated 3D networks, in many restorative areas, such as cardiac repair [65] , vascular disease [66] , wound healing [67] . Recently, Zhang et al reported that chitosan hydrogel could notably increase the stability of proteins and miRNAs in exosomes [68] . In OA therapy, hydrogel materials also have been widely to better fill the cartilage defect and provide a mode and mechanical support for cartilage regeneration [69] . 
Interestingly, hydrogel materials have been proven to have good exosome retention and sustained release function. Schneider et al found that many types of proteins secreted by chondrocytes encapsulated within photoclickable poly(ethylene glycol) hydrogels have been reported to be present within cell-secreted exosomes [70] . They suggested that the ability of diffusing through the hydrogel of smaller exosomes contributed to the results. Liu et al exploited a photoinduced imine crosslinking hydrogel glue as an exosome scaffold to prepare an acellular tissue patch for cartilage regeneration [71] . They found that most of the encapsulated exosomes were retained inside the hydrogel (>90%) after immersing in PBS for 14 days. In addition, the tissue patch could release low concentration of exosomes showing positive regulation to the surrounding cells. 3D bio-printing 3D printing has been well used in cartilage tissue repair, and to adopt the great bioeffects of exosome into 3D printing technology, there are two break points for further development. One is to improve the productivity and functions of exosomes by using optimized 3D culture microenvironment, which can be precisely designed and printed with 3D bio-printers [57] . The other is to design advanced bio-mimic scaffolds with more optimized geometric structure and better incorporate with exosomes, thus enhancing the therapeutic effects of exosomes or EVs [72][73] . For instance, Chen et al reported the interaction between 3D printing scaffold and exosome in cartilage repair [74] . They fabricated a 3D printed cartilage ECM/gelatin methacrylate/exosome scaffold with radially oriented channels using desktop-stereolithography technology. They found the 3D printed scaffold could effectively retain exosomes for 14 days in vitro and could retain exosomes for at least 7 days in vivo. They also found that the 3D printed scaffold could recruit chondrocytes, which was mainly attributed to ECM, and that exosomes could further enhance this effect. Drug-loading techniques The cell-friendly biological feature of exosome decides that the future drug application via exosome may provide a higher delivery efficiency. The recent research tend is to modify the exosome-secreting cells with transfection to load miRNA(s) into exosomes, thus can be used for OA therapy/cartilage repair [23,[44][45][46] . However, the transfection method in loading RNA has many limitations, such as the unstable productivity of RNA and the unclear factors influencing the RNA level and loading [75] , which means the application is still far from clinic. Other drug loading techniques applied for OA treatment/cartilage repair have not been reported yet, therefore it is significant to develop new drug loading technologies. One of the direction is to use nanoparticles, which could improve the drug loading capacity of exosomes [62] . All in all, to explore the cargo-loading mechanism of exosomes and search for more efficient and stable drug loading technology, will accelerate the development of exosome-based OA therapy/cartilage repair. Local sustained release system Most recent studies demonstrate that exosomes play a significant role in OA therapy/cartilage repair both in animal models and preclinical trials. Nevertheless, the current therapy requests repeatedly local administration for maintaining the effective concentration, which increase pain and the risk of side effects to the subjects. 
To solve this problem, local controlled release of exosomes from bio-scaffolds has begun to attract attention [74]. Controlled delivery of therapeutic agents to the local joint lesion offers two advantages. One is that controlling the degradation time of the drug vehicle/biodegradable scaffold maintains the effective concentration of the drug, thus prolonging its functional time at the desired dosage. The other is that the exosome working platform can be combined with other sustained drug release technologies according to the degree of cartilage injury and osteoarthritis, which can be designed into personalized therapeutics in the future. Personalized and "point-to-point" treatment Over the past few years, the arrival of liquid biopsy technology has made generating a database of OA patients much easier, including information concerning exosomes. With a gradual understanding of the mechanisms by which exosomes are involved in OA and adequate information from such a database, each patient's articular condition can be analyzed individually. It would then be possible to assemble the needed cargoes and drugs into modularized exosomes to achieve personalized treatment with maximal therapeutic effect. In addition, the key concept of precision medicine is to treat or "ablate" the pathological condition without damage to normal tissues. As a consequence, exosomes may become a potential biomarker and therapeutic tool for personalized and precision OA therapy/cartilage repair in the future. Conclusion Exosomes derived from chondrocytes, synovial cells, and synovial fluid have been shown to be involved in the pathogenesis of OA. Meanwhile, many studies have shown that exosomes from unmodified cells, especially MSCs, can maintain chondrocyte homeostasis and ameliorate the pathological severity of OA, demonstrating the potential therapeutic effect of exosomes for OA/cartilage injury. In addition, exosomes from modified cells produced with drug loading technologies have shown improved therapeutic effects. Tissue engineering techniques are also used in exosome-based OA therapy/cartilage repair. Biological scaffolds, especially hydrogels, have been shown to provide good sustained exosome release in cartilage repair. 3D printing technology can be used to construct more suitable 3D culture microenvironments for exosomes and can contribute to the design of scaffolds with more optimized geometric structure. We believe that drug loading, sustained release and individualized treatment will be the main directions of exosome-based OA therapy/cartilage repair in the near future.
The replicator dynamics of generalized Nash games Generalized Nash Games are a powerful modelling tool, first introduced in the 1950's. They have seen some important developments in the past two decades. Separately, Evolutionary Games were introduced in the 1960's and seek to describe how natural selection can drive phenotypic changes in interacting populations. In this paper, we show how the dynamics of these two independently formulated models can be linked under a common framework and how this framework can be used to expand Evolutionary Games. At the center of this unified model is the Replicator Equation and the relationship we establish between it and the lesser known Projected Dynamical System. Introduction Nash Games as we know them were first introduced in 1950 by Nash [17] and have a wide array of applications in applied sciences, most notably economics and engineering. The Generalized Nash Game, the subject of this paper, was described only four years later by Arrow and Debreu [1], but took much longer to unravel and has not yet gained the currency of its precursor. A Generalized Nash Game (GN) seeks to describe a situation where each player's choice of strategy somehow affects the choices available to his/her opponents. However, since everyone takes their turn all at the same time, this leads to games that cannot be played by normal people, at least not in the traditional sense. To illustrate, a classic example of a Nash Game is rock-paper-scissors. A GN version of this is rock-paper-scissors where ties are prohibited, i.e. if one player picks rock, another cannot also pick rock. But this game is impossible to play with another individual because a player cannot possibly know in advance what their opponent is going to pick, thus one cannot knowingly adhere to the mentioned restriction. For this reason, Ichiishi calls them pseudo-games [13]. Despite this artificiality, there are settings where outside forces ensure the satisfaction of constraints and, moreover, the model has explanatory value even in circumstances where this is not the case [9]. In general, it is difficult to find the equilibrium points of a GN. However the issue becomes easier if each player is subject to identical constraints (as far as variables under that player's control are concerned). This is known as a GNSC (Generalized Nash Game with Shared Constraints). There are many computational methods for finding some equilibria of the GNSC [9] and recent work gives a method for extracting all such points [15,6]. Another type of game we consider is the Evolutionary Game, first described by Maynard Smith [19]. Evolutionary games seek to model the evolution of phenotypes as a function of natural selection, such as in the well-known Hawk-Dove game [12]. The dynamics of these games can often be described by the Replicator Equation of Taylor and Jonker [20]. In this paper we build a bridge between the most fundamental type of Evolutionary Game and the GNSC, by establishing a connection between their Nash Equilibria via the Replicator Equation. This bridge allows us to extend the existing model to accommodate new types of problems, as GNSC's are richer and more diverse than population games. We build this bridge by extending the Replicator Equation so that it may be applied to GNSCs and we derive this extension using an analogy between the Replicator Equation and what is known as the Projected Dynamical System. 
The Projected Dynamical System (PDS) is a type of discontinuous dynamical system that was first introduced in the 70's by Henry [11], studied further in the 90's by Dupuis and Nagurney [8] and extended in the early 2000's by Cojocaru and Jonker [5] to Hilbert spaces of any dimension. PDS are intimately linked to GNSCs in that the steady state set of a PDS is a subset of the Nash Equilibria of its associated GNSC. The PDS is useful to us in this paper because it gives a known, distinct, game dynamic that is already applicable to Evolutionary Games and GNSCs alike. The relationship between the Projection Dynamic and the Replicator Dynamic was first studied by Lahkar and Sandholm [18,14]. In their papers, the authors elucidate similarities between the revision protocols implied by the two dynamics, and establish some properties of the solution trajectories of these systems for population games. In this paper we aim to expand this analogy beyond just population games, by extending the Replicator Equation and showing that a key theorem still holds that relates the rest points of our extended Replicator Equation to those of the Projected Dynamical System. In doing so, we allow for these two dynamics to be considered for GNSCs, which are varied and more general than population games. To show that our extension of the Replicator Equation is useful, we prove a part of the Folk Theorem of Evolutionary Game Theory, namely that every stable rest point of the Replicator Equation is a Nash Equilibrium of the corresponding Population Game [7,12]. As such, our Theorem 4.2 generalizes this aspect of the Folk Theorem so that it may be applied to any GNSC defined on a polytope. Then, after associating these two concepts, we show how we can extend Evolutionary Games under our new framework in Section 5. But to accomplish this, we first need a way to frame population dynamics problems as Nash Games, which we illustrate in Section 3 for a standard Nash Game, and in Section 5 for a GNSC. 2. Brief mathematical background 2.1. Convex Analysis. We will recall some basic definitions used in convex analysis, for ease of reading. Most of these definitions and results are drawn from or based on those found in [3]. Given a finite set of vectors β = {x 1 , . . . , x m } where x i ∈ R n , we say that a vector y is an affine combination of β if we can find λ 1 , . . . , λ m ∈ R such that y = λ 1 x 1 + . . . + λ m x m , with λ 1 + . . . + λ m = 1. If each λ i ≥ 0, then we say that y is a convex combination of β. A set K ⊆ R n is said to be convex if K contains every convex combination of vectors in K. Given a set K ⊆ R n , we can construct a convex set by taking all convex combinations of vectors in K, or construct an affine set by taking all affine combinations. We call these the convex hull and affine hull of K respectively, and we formally define them as conv(K) = {λ 1 x 1 + . . . + λ m x m : m ∈ N, x 1 , . . . , x m ∈ K, λ i ≥ 0, λ 1 + . . . + λ m = 1} and aff(K) = {λ 1 x 1 + . . . + λ m x m : m ∈ N, x 1 , . . . , x m ∈ K, λ 1 + . . . + λ m = 1}. The affine hull is important for defining the relative interior of a set. In optimization we are often working with low dimensional sets embedded in higher dimensional spaces, so we need a more general notion for the interior of a set: the relative interior fulfills this role. Given some set K, the relative interior of K is defined as ri(K) = {x ∈ K : B ε (x) ∩ aff(K) ⊆ K for some ε > 0}, where B ε (x) is the open ball of radius ε, centered at x. We also often consider the normal cone of a convex set. Given some convex set K ⊆ R n , we define the normal cone of K at some point x ∈ K to be N K (x) = {v ∈ R n : v · (y − x) ≤ 0 for every y ∈ K}. In addition to convex sets, we also consider convex functions. 
Given some convex set K ⊆ R n , a function θ : K → R is said to be convex if for every t ∈ [0, 1] and x 1 , x 2 ∈ K, we have Polytopes. Now we will give a brief review of polytopes. A bounded, convex polytope is defined as the convex hull of a non-empty finite set of real points. For simplicity, we will just call this a polytope for the remainder of our paper. Let P be a polytope. The set s = {x 1 , . . . , x n } such that P = conv(s) is not unique; however, there does exist a unique minimal spanning set. We call this set ext(P ) and its elements are called vertices of P [14]. A convex subset F of P is called a face when for every distinct x, y ∈ P , if the line segment connecting x and y intersects F at some point other than x or y, then F contains the entire line segment. A face is itself a polytope, the convex hull of some subset of the vertices of P , therefore we can denote a face F r where r ⊆ ext(P ). Note that P = F ext(P ) is a face of itself. If the line segment connecting two vertices V i , V j is a face of K, we call them adjacent vertices and denote this relationship V i V j . Theorem 2.1. Suppose P is a polytope and F r is a face of P . Let x * ∈ F r and r = {V 1 , . . . , V n }. Then for every V i ∈ r, we have that span( Proof. This follows from Corollary 11.7 in [10]. For any face F r , we define the dim(F r ) as the dimension of the vector space associated with the affine hull of F r . Equivalently, for every x * ∈ F r , we have that dim(F r ) = dim(span(F r − x * )). Theorem 2.2. Suppose F r is a face of some polytope K and x ∈ F r . Then F r is the lowest dimensional face containing x iff x ∈ ri(F r ). If F r is a face of F r and dim(F r ) < dim(F r ) we say that F r is a proper face of F r . If dim(F r ) = dim(F r ) − 1 we call F r a facet of F r . Theorem 2.3. Suppose F r is a face of some polytope K and dim(F r ) ≤ dim(K) − 2. Then F r is the intersection of at least two facets of K with K. If dim(F r ) = dim(K) − 1 then F r is the intersection of exactly one facet of K with K. We say that two distinct faces F r and F r are adjacent if F r ∩F r is nonempty and dim(F r ) = dim(F r ). Lemma 2.4. If F r is a facet of K, x ∈ F r and x / ∈ ri(F r ), then x ∈ F r for some facet F r adjacent to F r . Proof. Since x / ∈ ri(F r ), then by Theorem 2.2 we can find a face F r * with dim(F r * ) < dim(F r ) such that x ∈ F r * . Then dim(F r * ) ≤ dim(K) − 2, hence by Theorem 2.3 F r * is the intersection of at least two facets of K. Any such facet is adjacent to F r and contains x. We can equivalently describe a convex bounded polytope K in terms of a set of affine functions H = {g 1 , . . . , g n }, where K = {x : g 1 (x) ≥ 0, . . . , g n (x) ≥ 0}. We call this the halfspace representation, and just like with the vertex representation, the choice of H is not unique. The fact that we can express bounded polytopes in these two different ways was first proved by Krein and Milman [16]. Any face of K is just K intersected with some combination of hyperplanes g i (x) = 0 where g i ∈ H. Given such a representation of K, we can define the faces based on which functions g i are nonzero somewhere on that face. Specifically, we can denote the faces of K as We will refer to the set h as the halfspace identifier of the face F h with respect to H. We now have both a top-down and bottom-up representation for a face. 1]. Since x * ∈ ri(F h ) we can extend this segment beyond x * slightly so that γ(λ) ∈ F h for all λ ∈ [0, 1 + ] for some > 0. 
Since g i (x) is affine, this gives us: Variational Inequalities and Projected Dynamical Systems. Given a subset K of R n and a mapping F : R n → R n , the Variational Inequality Problem (VI) is to find x * ∈ K such that (see [5]) F (x * ) · (y − x * ) ≥ 0 for every y ∈ K. (2.1) A Projected Differential Equation is defined as follows: Given some closed convex set K ⊆ R n , we define the projection operator P K at a point x as the unique element P K (x) ∈ K such that ‖x − P K (x)‖ = inf y∈K ‖x − y‖. We then define the vector projection operator from a vector v ∈ R n to a vector x ∈ K as Π K (x, v) = lim δ→0 + (P K (x + δv) − x)/δ. Note that this is equivalent to taking the Gateaux derivative of the projection operator onto K, in the direction of v. Now, for some vector field −F : R n → R n , the class of differential equations known as Projected Differential Equations takes the form ẋ(t) = Π K (x(t), −F (x(t))). (2.2) Last but not least, it is known from [5] that if K is closed and convex, then any x * such that Π K (x * , −F (x * )) = 0 is a solution of the VI (2.1), and the converse is also true. In general the projected system we introduced here has a discontinuous righthand side, though existence and qualitative studies have been known since the first work by Henry in the 1970's [11] and Aubin and Cellina in the 80's [2], followed up by many others (see [8,5], see [4] and the extensive references therein). There is a similar notion of a projected equation (see [2,21]), defined by ẋ = P K (x − αF (x)) − x, with α > 0. (2.3) The righthand side of this equation is continuous and, with good values of α, solutions of this projected equation amount to "smoothed out" (differential) approximations of the trajectories of (2.2). In general it is up to practitioners to choose one of the two versions of the projected system. If continuity in the equation's vector field is desired, then equation (2.3) can be considered. We take here the point of view of the PDS (2.2). Games and the Replicator Equation A Generalized Nash Game (GN) is characterized by a finite set of players {P 1 , . . . , P n }, where player P i controls the variable x i and has an objective function θ i (x 1 , . . . , x n ). The goal of each player is to minimize their objective function subject to some constraint set K i (x −i ), which may depend on the strategies x −i of the other players. The key feature here is that each K i depends on variables beyond Player i's control. A Nash Equilibrium is any strategy (x 1 , . . . , x n ) where no player can lower their objective function by unilaterally altering their strategy, i.e. for every i ∈ {1, . . . , n} and every y i ∈ K i (x −i ), θ i (x i , x −i ) ≤ θ i (y i , x −i ). Here is the basic form of a Generalized Nash Game: given the other players' strategies x −i , each player P i solves min x i θ i (x i , x −i ) subject to x i ∈ K i (x −i ). For the remainder of this paper we will assume that K i (x −i ) is closed and convex and ∇θ i is Lipschitz continuous for each i. If there additionally exists a convex set K such that for each i we have that K i (x −i ) = {x i : (x i , x −i ) ∈ K}, then we call the game a GNSC, or a Generalized Nash Game with Shared Constraints, named because in the case of sets defined by inequalities, it is easy to see that this restriction amounts to saying that each player's strategy set K i (x −i ) can be defined by the exact same set of inequality constraints. Now, let us turn to a more specific kind of problem, Evolutionary Games. An evolutionary game is a game where there is a population of agents, whose strategies evolve according to some rule that may model various adaptation processes. In the simple two-player symmetric case, Evolutionary Games are matrix games where each member of a population has a choice among n pure strategies {e 1 , . . . , e n }. In the associated matrix A, we have that A ij = π(e i , e j ) is just the payoff of playing pure strategy e i in a population that exclusively plays strategy e j [7]. All strategies must belong to the simplex ∆ n = {(p 1 , . . .
, p n ) : A Nash Equilibrium is any strategy x ∈ ∆ n such that [12] π(p, x) ≤ π(x, x) for every p ∈ ∆ n . Although these games are usually described in the literature as we did above, it is easy to see that they are essentially solving the following Nash Game: where K = ∆ n . In this system y is just a shadow variable that tests strategy x against all other strategies, and its objective function ensures x = y at all solutions. This system is known from [9] to share its Nash Equilibria with the solutions to the Variational Inequality Note that at any solution to (3.1) we must have that y * − x * = 0, otherwise we could always find some y ∈ R n to make the inner-product negative. Therefore this variational inequality shares its solutions which is known from [8] to share its solutions with the rest points of the Projected Dynamical Systeṁ For simplicity of notation, let us denote x := x * going forward, hence we writė It is also known [7] that the Replicator Equation associated to our game iṡ We therefore have two different dynamics we can use to study evolutionary games, (3.3) and (3.4). In [18,14], the authors throughly study the relationship between these two dynamics and the revision protocol implied by each. We would like to extend these dynamics to the much more general problem of GNSCs. The projection dynamic of course, is already used extensively in the study of GNSCs. However the Replicator Equation has not to our knowledge ever been applied to GNSCs; equation (3.4) only gives us the dynamics of very elementary evolutionary games. Versions of (3.4) have been adapted for more sophisticated kinds of population games (see [7] for an overview), however so far there is no analogue for anything as broad as GNSCs. In the next section we build this analogue, by devising a method to derive a Replicator Equation from a given Projected Dynamical System. Extending the Replicator Equation. It is known by the Folk Theorem of Evolutionary Game Theory [7], that any stable rest point of (3.4) in K = ∆ n must be a Nash Equilibrium. This implies that such a point is also a rest point of (3.3), which raises the question: what is the relationship between the system in (3.4) and the system in (3.3)? Notice that we can rewrite (3.4) in the following way ([·] denotes the Iverson bracket)ẋ where e i and e j are just the coordinates of two adjacent vertices on the simplex ∆ n , Ax·(e i −e j ))(e i −e j ) is the un-normalized projection of Ax onto the line connecting these two vertices and x i and x j are just the constraints which aren't identically zero on that line. This system is similar to (3.3), however we exclusively project onto the edges of the polytope K instead of the tangent cone of the entire set. This system is continuous, but the cost of this continuity is that we generate new rest points which are not necessarily equilibria of the original game. However we can find some, but not all, of these equilibria via stability tests [7]. In the spirit of this process, let us now try and apply this technique to a more general type of problem. Suppose K is a bounded convex polytope in R n (See Chapter 2 for background on polytopes). Then K has some half-space representation where each g i is affine and K = {x : g 1 (x) ≥ 0, . . . , g k (x) ≥ 0}. Let −F : R n → R n be Lipschitz continuous. Consider the Projected Differential Equatioṅ Number the vertices of K as {V 1 , . . . V m }. For each V i V j , we can find a halfspace identifier in H for the edge connecting these two vertices h ij ⊆ H. 
Let g ij = g∈hij g. Then, mirroring the procedure we used with the Replicator Equation, we can consider the following classical dynamical systeṁ [12] for a short and simple proof), there is no obvious way to extend this proof to GNSCs. Therefore the next subsection is dedicated to proving that a result analogous to a part of the Folk Theorem holds for our system in (4.3) (Theorem 4.2). Equipped with this theorem we can then relate the rest points of our system in (4.3) to the Nash Equilibria of our original GNSC, which we do in Section 5. Connecting the extended Replicator Equation to GNSCs. For absolute clarity, in the theorems that follow when we say stable rest point we mean the usual definition: a rest point x * is stable if and only if, for every > 0, there exists a δ > 0 such that for every solution x(t), if t 0 is such that x(t 0 ) − x * < δ, then x(t) − x * < for every t ≥ t 0 . Also, when we say face invariant, we mean that if any solution x(t) lies on some face at time t 0 , then it will remain on that face for all future t ≥ t 0 (usually called forward invariance). Before we can prove these theorems, we need four more lemmas about polytopes. Lemma 4.3. Let F r be a face of some polytope K and x * ∈ F r . We have that span( ) and x / ∈ F r − x * . Now consider any y ∈ ri(F r − x * ). By convexity we have that λx + (1 − λ)y ∈ span(F r − x * ) ∩ (K − x * ) for every λ ∈ [0, 1]. Since y ∈ ri(F r − x * ), then for some λ ∈ (0, 1) we must have that λx + (1 − λ)y ∈ F r − x * . Therefore by the definition of a face, F r contains x, a contradiction. The (⇐) direction is obvious. Lemma 4.5. Suppose F r is a facet of F r . Then for every x * ∈ F r there is a unit vector n r (x * ) ∈ (F r − x * ) called the inner-normal of F r at x * on F r , such that for every v ∈ span(F r − x * ) n r (x * ) · u = 0, for every u ∈ span(F r − x * ) (4.4) v = u + kn r (x * ), for some u ∈ span(F r − x * ), k ∈ R (4.5) If x * ∈ ri(F r ), then n r (x * ) is a feasible direction, (4.6) Then we can take any orthonormal basis of span(F r − x * ) and extend it to an orthonormal basis of span(F r − x * ) by the addition of a single vector, call it w. Then from the last paragraph, we know we can write v 1 = u 1 + k 1 w and v 2 = u 2 + k 2 w, with u 1 , u 2 ∈ span(F r − x * ) and k ∈ R. If k 1 = 0 or k 2 = 0, this would contradict Lemma 4.3. Assume that k 1 < 0 < k 2 . Then the line connecting v 1 and v 2 intersects F r , but contains points that aren't in F r , contradicting the fact that F r is a face. Therefore k 1 and k 2 must have the same sign, and so by choosing n r (x * ) = w or n r (x * ) = −w as appropriate, we get that (4.4) and (4.5) hold. Lemma 4.6. Suppose F r is a polytope, and F r is a proper face of F r . Then for every If there doesn't exist V j ∈ r \ r with the desired property then and therefore span(F r − x * ) = span(F r − x * ), a contradiction. Equipped with the above Lemmas, we can now prove Theorem 4 and 5. Proof of Theorem 4.1. Then h ⊆ h. Therefore g ij (x) = 0. Taking this together with Theorem 2.2 ensures that we remain on F h and therefore every face to which x belongs. Proof of Theorem 4.2. If K is a singleton, then the result is trivial, therefore assume dim(K) ≥ 1. Suppose x * is not a rest point of (4.2), but is a rest point of (4.3). Then −F (x * ) / ∈ N K (x * ). Let F r be the lowest dimensional face containing x * such that −F (x * ) / ∈ N Fr (x * ). We have thaṫ Where the sum is over all i and j such that i < j, V i V j and V i , V j ∈ r. 
Since x * is a rest point of (3.4) we must have thatẋ * = 0. If x ∈ ri(F r ), this means that However since for any fixed V i we have that F r − x * ⊆ span{V i − V j : V j ∈ r , V i V j } (Theorem 2.1) then this contradicts the fact that −F (x * ) / ∈ N Fr (x * ). Therefore x * / ∈ ri(F r ), hence there must exist some facet F r * of F r such that x ∈ F r * (Theorem 2.2). Now suppose x * / ∈ ri(F r * ). Then we can find another facet, F r adjacent to F r * such that x * ∈ F r (Lemma 2.4). We must have that −F (x * ) ∈ N F r * (x * ) and −F (x * ) ∈ N F r (x * ) (otherwise we've found a lower dimensional face that doesn't have this property). However by Lemma 4.4, this implies that −F (x * ) ∈ N Fr (x * ), a contradiction. Therefore x * ∈ ri(F r * ) and x * belongs to only one facet of F r . Let n r * (x * ) be the unit inner-normal of F r * on F r at x * (obtained from Lemma 4.5). which is ≥ 0. Now fix V p ∈ r, V q ∈ r * \ r such that V p V q (exists by Lemma 4.6). By Lemma 4.3 we have that n r * (x * ) · (V p − V q ) = 0. This implies that (−F (x * ) · n r * (x * ))(n r * (x * ) · (V p − V q )) 2 > 0. Then by continuity of F , we can find an open ball B 0 (x * ) in F r such that for some γ > 0 We can further constrain 0 so that B 0 (x * ) ∩ F r * ⊆ ri(F r * ) and B 0 (x * ) contains no other facets of F r aside from F r * (we can do this second part because x * belongs to only one facet). Therefore within this neighbourhood we have Where the sums are over all i and j such that i < j, V i V j and V i , V j ∈ r. Consider any , δ such that 0 < δ ≤ < 0 . We know that n r (x * ) is a feasible direction by Lemma 4.5, therefore let y(0) = k 0 n r (x * ) be a solution to the IVP, where k 0 > 0 is sufficiently small so that hence U contains no facets of F r . Thereore U ⊆ ri(F r ) (Theorem 2.2). Thus we can find λ > 0 such that g pq (x) > λ for all x ∈ U (continuity and Lemma 2.5). Since d((x−x * )·nr) dt > 0 on ri(F r ) and (4.2) is face invariant (Theorem 4.1) we know that y(t) is also U invariant. Therefore contradicting Lyapunov stability. Note that the converse to Theorem 4.2 is not true (see [12], exercise 7.2.2). With this theorem we show that stable rest points of our extended replicator equation are rest points of the projection dynamic. Since it is known that the rest points of PDS are Nash Equilibria, then this taken together with Theorem 4.2 allows us to ultimately say that stable rest points are Nash Equilibria, and hence a key part of the Folk Theorem applies to our extension of the replicator equation. While Lahkar and Sandholm [17] could rely on an already established link between their games and the Replicator Equation, we needed to reprove that this link is still there for our extension. Our hope is that this result potentially paves the way for the type of analyses conducted by Lahkar and Sandholm [18] to be extended into this more general setting. Examples and Extensions In this section we will consider examples that illustrate how (4.3) achieves three basic purposes. First, it recovers the standard Replicator Equation, showing that what we have is in fact a generalization of that concept. Second, it allows us to incorporate the shared constraints of Generalized Nash Games into elementary Evolutionary Games. Finally, it enables us to express a given GNSC as a classical dynamical system, regardless of whether that GNSC corresponds to any particular Evolutionary Game. 
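Before recalling the two methods and working through the examples, the first of these purposes can already be checked numerically. The sketch below is ours, not the paper's code: since the displayed form of (4.3) is not reproduced in the extracted text, the code encodes one plausible reading consistent with the edge-wise rewrite of (3.4) — a sum over polytope edges V_iV_j of g_ij(x)(−F(x)·(V_i − V_j))(V_i − V_j), with g_ij the product of the affine constraints that do not vanish identically on that edge — and merely verifies that, on the simplex with −F(x) = Ax, this reading reproduces the classical replicator right-hand side exactly. The payoff matrix is an illustrative example.

```python
import numpy as np
from itertools import combinations

def extended_replicator_rhs(x, minus_F, vertices, edges, edge_constraints):
    """Assumed reading of (4.3): xdot = sum over polytope edges (Vi, Vj) of
    g_ij(x) * (-F(x) . (Vi - Vj)) * (Vi - Vj), with g_ij the product of the
    affine constraints that are not identically zero on that edge."""
    v = minus_F(x)
    out = np.zeros_like(x)
    for (i, j) in edges:
        d = vertices[i] - vertices[j]
        g_ij = np.prod([g(x) for g in edge_constraints[(i, j)]])
        out += g_ij * (v @ d) * d
    return out

# Check on K = Delta_3: on the edge e_i e_j only x_i >= 0 and x_j >= 0 are not
# identically zero, so g_ij(x) = x_i * x_j and the classical replicator equation
# xdot_i = x_i ((Ax)_i - x.Ax) should be recovered exactly.
n = 3
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])            # illustrative rock-paper-scissors payoffs
vertices = [np.eye(n)[i] for i in range(n)]
edges = list(combinations(range(n), 2))
edge_constraints = {(i, j): [lambda x, k=i: x[k], lambda x, k=j: x[k]] for (i, j) in edges}

x = np.array([0.5, 0.3, 0.2])
lhs = extended_replicator_rhs(x, lambda x: A @ x, vertices, edges, edge_constraints)
rhs = x * (A @ x - x @ (A @ x))               # classical replicator right-hand side
print(np.allclose(lhs, rhs))                  # True
```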
We should recall that our method for finding the extended Replicator Equation associated to a game and vice versa, consists of several steps. For clarity, and since they are used to solve the examples that follow, we will enumerate these steps. First, to find the extended Replicator Equation associated to a Generalized Nash Game we must (call this Method 1): (1) Find the Variational Inequality associated to the Generalized Nash Game (2) Find the Projected Dynamical System associated to this Variational Inequality To take an Evolutionary Game and find the extended Replicator Equation we must (call this Method 2): (1) Express the Evolutionary Game as a Generalized Nash Game (as at start of Section 4) (2) Find the Variational Inequality associated to the Generalized Nash Game We can of course skip Variational Inequalities, and just move straight to a Projected Dynamical System, however we include this procedure for clarity of analysis and to better mirror our exposition in the previous sections. We will now apply these steps to three examples. Example 5.1. The 1-species evolutionary game in (2.1) can be extended to an arbitrary number of species [7]. In the simplest case, we have two species, call them Species A and Species B, each of whose members can choose between n and m possible pure strategies {e 1 , . . . , e n } and {f 1 , . . . , f m } respectively. In this case we have two associated matrices A ∈ R n×(m+n) and B ∈ R m×(m+n) , where e T i A(p, q) represents π 1 (e i , (p, q)), the payoff of playing pure strategy e i in a population where Species A adopts mixed strategy p and Species B adopts mixed strategy q. π 2 (f i , (p, q)) has a similar meaning for Species B. The strategies of Species A and Species B respectively must belong to the simplices ∆ n = {(p 1 , . . . , p n ) : A Nash Equilibrium is defined as any strategy x ∈ ∆ n and y ∈ ∆ m such that [6] π 1 (p, (x, y)) ≤ π 1 (x, (x, y)) for every p ∈ ∆ n , π 2 (q, (x, y)) ≤ π 2 (y, (x, y)) for every q ∈ ∆ m . Conclusions and Future Work In this paper we generalize the Replicator Equation so that it may applied to any GNSC defined on a Polytope. Theorem 4.2 relates the stable rest points of this extended Replicator Equation with the rest points of a Projected Differential Equation. This connection allows us to expand certain Evolutionary Games by introducing shared inter-species constraints via the GNSC. Currently there are many different variations of the standard two player population game, for example games where the payoff is not a matrix or multiplayer games (see [7] for an overview), each of which has its own version of the Folk Theorem of Evolutionary Game Theory. If further work is done to adapt parts 2 and 3 of the Folk Theorem to our model, then we would have a complete general version of the Folk Theorem for which these all could be considered special cases. In the literature, the Replicator Equation has already been unified with other models such as the Price equation and the Generalized Lotka-Volterra equation [14]. With this result we make yet another such connection, however it should be noted that Generalized Nash Games are extremely broad and are a superset of all classical Nash Games. We also point out that our connection is reciprocal, we don't just place the Replicator Equation under the umbrella of the Projected Dynamical System, it actually gives us a new way of looking at GNSCs. 
This new perspective offers an alternative to the Projected Dynamical System, in that the Replicator Equation frames these problems as classical dynamical systems (with right-hand sides of class C 1 ). Further work could investigate whether our results hold on an arbitrary convex set, not just a bounded convex polytope. We are optimistic that such a generalization is achievable; it is worth noting that the Projected Dynamical System itself was originally shown to work only on polyhedral constraints in [8] before being extended to arbitrary convex sets eleven years later in [5]. Another possible direction would be to adapt our method to the much broader class of Generalized Nash Games without shared constraints. This would perhaps be the most ambitious way to continue, since large theoretical obstacles remain before such games can be solved in general. More specifically, we would need a way to determine the Replicator Dynamics of a quasivariational inequality, which is a much less well understood mathematical object.
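As a self-contained numerical companion to the projection-dynamic side of this correspondence, the following sketch (ours, with illustrative payoffs and step sizes) projects onto the simplex, approximates the vector projection operator Π_K(x, v) by the difference quotient (P_K(x + δv) − x)/δ, and integrates ẋ = Π_K(x, −F(x)) with explicit Euler steps. A smoothed variant with continuous right-hand side is indicated in a comment, although the exact form used in (2.3) is not reproduced above and the expression shown is an assumption.

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection P_K(y) onto the simplex K = {x : x >= 0, sum(x) = 1}."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(y) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(y + theta, 0.0)

def Pi(x, v, delta=1e-7):
    """Vector projection operator Pi_K(x, v), approximated by the difference
    quotient (P_K(x + delta*v) - x)/delta (the Gateaux derivative of P_K at x)."""
    return (project_simplex(x + delta * v) - x) / delta

A = np.array([[-1.0, 2.0],
              [ 0.0, 1.0]])                    # illustrative hawk-dove-type payoffs
F = lambda x: -A @ x                           # VI field for the evolutionary game

# explicit Euler integration of the projected dynamical system  xdot = Pi_K(x, -F(x))
x, dt = np.array([0.9, 0.1]), 0.01
for _ in range(5000):
    x = project_simplex(x + dt * Pi(x, -F(x)))
print(x)                                       # approaches the mixed equilibrium (0.5, 0.5)

# a "smoothed" variant with continuous right-hand side (assumed form, cf. (2.3)):
alpha = 0.5
smoothed_rhs = lambda x: project_simplex(x - alpha * F(x)) - x
```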
8,218
2021-05-18T00:00:00.000
[ "Mathematics" ]
WPO-Net: Windowed Pose Optimization Network for Monocular Visual Odometry Estimation Visual odometry is the process of estimating incremental localization of the camera in 3-dimensional space for autonomous driving. There have been new learning-based methods which do not require camera calibration and are robust to external noise. In this work, a new method that do not require camera calibration called the “windowed pose optimization network” is proposed to estimate the 6 degrees of freedom pose of a monocular camera. The architecture of the proposed network is based on supervised learning-based methods with feature encoder and pose regressor that takes multiple consecutive two grayscale image stacks at each step for training and enforces the composite pose constraints. The KITTI dataset is used to evaluate the performance of the proposed method. The proposed method yielded rotational error of 3.12 deg/100 m, and the training time is 41.32 ms, while inference time is 7.87 ms. Experiments demonstrate the competitive performance of the proposed method to other state-of-the-art related works which shows the novelty of the proposed technique. Introduction Autonomous vehicles, including unmanned aerial vehicles (UAV), unmanned ground vehicles (UGV), and unmanned underwater vehicles (UUV), are increasingly used to explore the different difficult and dangerous environments to minimize human interaction. In addition, mobile robots became an integral part of the present industry evolution for logistics and supply chain management. Estimating the ego-motion or continuous localization of the robot in an environment is a fundamental long-standing challenge in autonomous navigation. Traditionally, continuous localization is performed using sensors, such as global positioning systems (GPS), inertial sensors, and wheel encoders for ground robots. Traditional methods suffer from accumulated drift and GPS is constrained to only open environments. Recent studies expressed immense interest to perform the localization task using cameras due to vast information. The method of performing the continuous localization using cameras or visual-only sensors is known as visual odometry (VO). The applications of visual odometry vary widely from scene reconstruction [1], indoor localization [2], biomedical applications [3], and virtual and augmented reality [4] to self-driving vehicles [5]. VO acts as a fundamental block of a similar set of algorithms, such as visual simultaneous localization and mapping (VSLAM) and structure from motion (SfM). State-of-the-art are the earliest methods of VO algorithms and are classified into sparse methods [6,7] and dense methods [8] based on the minimization objectives. Sparse methods use the features extracted from consecutive images to estimate the motion by minimizing reprojection errors. Dense methods concentrate on individual pixels of consecutive images to reconstruct a more comprehensive scene and work on the principle of photometric consistency. Though the state-of-the-art methods are efficient in estimating the motion, these methods require a series of complex pipelines consisting of individual components addressing the multi-view geometric tasks which require hard tuning based on the environment. A slight malfunctioning of a subcomponent can result in the degradation of the entire pipeline. However, estimating visual odometry is a multi-view geometric problem and requires knowledge of the underlying 3-dimensional (3D) structure. 
In addition, these methods are less generalized, which means they are not intelligent to learn from the different modalities of environments. Considering the above shortcomings of the state-of-the-art methods, researchers of the computer vision community concentrated on alternative algorithms based on the learning paradigm. Learning-based algorithms gained massive attention due to their capability of implicitly learning the hidden representations with more generalization ability. Recently, methods using deep learning revealed superior performance over traditional methods in object classification, detection, and recognition [9,10]. Earlier learning-based methods used recurrent neural networks to improve the long-term temporal dependencies that mitigate pose drift problems [11]. On the other hand, some methods used optical flow estimates extracted from images to feed the networks [12]. The resultant of either of these are larger network parameters with high computational time. Current work deals only with monocular videos and learning-based methods using left-right consistency for training are not included in the evaluation [13,14]. The main aim of this paper is to improve pose predictions derived from convolutional neural networks given a set of images stacks and ground truths using windowed optimization. This is achieved by multiple forward passes from multiple inputs and a single back-propagation based on cumulative loss. From a point, the proposed network can be viewed as multiple siamese networks that share the same parameters among the same networks. The main contributions of this paper are: 1. A new learning-based optimization method without any additional modifications to the network is proposed. 2. Proposed network is independent of optical flow preprocessing and temporal processing modules, such as recurrent neural networks. Most importantly, WPO-Net is relatively small and consists of only 0.48 million parameters. 3. Experiments are performed to emphasize the importance of data augmentation in learning-based VO methods and the effect of varying window sizes in the proposed optimization framework. 4. Comparative experiments showcase the competitive performance of the proposed method with other geometric or state-of-the-art methods, supervised and unsupervised learning-based methods. The paper is organized as follows: Section 1.1 presents an overview of the published related works. Section 2 describes the building blocks of the method, including network architecture, windowed pose optimization technique, and loss function. Section 3 presents details of training and testing datasets, hardware, and software environments. In addition, this section also presents the evaluation of the present method on the KITTI dataset, data augmentation, and ablation tests. Related Work VO estimation is a long-standing multi-view geometry problem. Over the years, there have been several approaches that are being used to address the task of VO estimation. These algorithms can be classified into two distinctive types, namely state-of-the-art methods and learning-based methods. State-of-the-art methods are also referred to as geometric or traditional methods, alternatively. State-of-the-Art Methods State-of-the-art or geometric methods are further classified into the sparse of featurebased methods and direct or dense methods. As discussed, feature-based methods work by minimizing the reprojection error between features from consecutive frames. The feature extracted can be edges, lines, or blobs. 
Most famous feature extraction methods are ORB [15], FAST [16], and SURF [17]. Some of the early feature-based methods, such as in Reference [7], used filtering techniques to simultaneously optimize the map points and position of the robot. The major drawback associated with filtering-based VO/VSLAM is the increase in computational cost as the map grows. This issue was addressed by keyframebased algorithms, which use independent threads for mapping and tracking threads [4]. These keyframe-based methods use bundle adjustment as the backbone of optimizing the position and map points to reduce drifts. Down the road, these algorithms became more efficient and are highly dependent on the robustness of feature extractors. ORB-SLAM [6] and VISO2 [18] are some of the most efficient real-time feature-based VO/VSLAM algorithms. Nevertheless, feature-based algorithms suffer from textureless and noise-induced regions. On the other hand, direct methods minimize the pixelwise reprojection error from consecutive images. Direct methods can reconstruct more comprehensive 3D scenes but are computationally expensive and limit the real-time usability of these algorithms [8,19]. A combination of direct and feature-based methods are also developed to estimate the pose using the features and the regions surrounding the pixels, and these are known as semidirect methods [20]. However, the direct method works on the principle of photometric consistency and is not designed to deal with large viewpoint changes. Learning-Based Methods Learning-based methods are the most recent VO algorithms. Due to the continuous increase in the availability of graphic processing units (GPUs), benchmark datasets, such as KITTI [21], and synthetic data generation frameworks, such as CARLA [22] and Tar-tanAir [23], there has been a shift in increased research towards learning-based algorithms. Learning-based methods are robust to unmodeled noise and environmental changes and work by learning the hidden feature representations. Learning-based methods are further classified into supervised and unsupervised based on the learning paradigms. One of the main challenges of learning-based methods is adapting to the architectures that were being used for 2D tasks, such as classification, recognition, and localization. These architectures operate by taking a single image as input, but the VO estimation requires a stack of consecutive images. Supervised learning-based methods rely on the ground truth 6 degrees of freedom (DOF) poses to optimize the parameters. Earliest learning-based method can be dated back to 2008 [24]. Later, the VO estimation was recognized as a regression task. The invention of architectures, such as PoseNet [25], used to regress the absolute 6 DOF pose, and FlowNet [26], used for optical flow extraction between two images, provided great support for learning-based VO estimation algorithms. Supervised learning-based methods learn the hidden mapping by taking optical flow or raw images. LS-VO [27] and Flowdometry [12] learn to predict the pose by used optical flow. However, these methods involve computationally expensive preprocessing to extract the optical flow from images. Methods, such as DeepVO [11] and PCGRU [28], used recurrent neural networks to minimize the prediction errors. Another interesting development includes uncertainty quantification in the pose prediction process [29]. DeepVO estimates the covariance matrix along with pose estimation. 
This work is highly motivated by the fact that this uncertainty quantification can be used to adaptively weigh the translation and rotational components of the pose estimates. Reference [30] estimates the 2 DOF pose for ground vehicles by neglecting the less significant movement along the other four axis. The proposed WPO-Net inherits some architectural design philosophies, such as rectangular convolutions from Reference [30]. On the other hand, unsupervised methods work on the foundational principle of single view image synthesis. These methods operate in complex end-to-end format involving several networks to address tasks, such as depth estimation, dynamic region masking, and pose estimates. SfMLearner [31] is designed to estimate the depth and pose by neglecting unexplainable pixels. GeoNet [32] further included the dynamic object compensation to avoid the erroneous pose estimates. CM-VO [33] proposed a confidence quantification and refining the trajectory based on the confidence. Though unsupervised methods eliminate the requirement of ground truths, the performance of these methods is not on par with the supervised learning-based methods. To address the above problems in learning-based methods, a windowed optimization approach is presented in this paper. The proposed method optimizes the pose of a short window of images using the trajectory consistency constrain and is analogous to windowed bundle adjustment in traditional methods. Methodology This section includes the introduction to subcomponents of the proposed method. The entire framework is composed of two subcomponents, namely a feature encoder and pose regressor. The feature encoder transforms the high-level gray images into a compact global feature descriptor. The extracted feature descriptor is transformed into a 6 DOF pose estimate by the pose regressor. Further, CNN-based windowed pose optimization and loss function used for training are explained in Sections 2.4 and 2.5, respectively. Preprocessing The original raw grayscale input images of size 1241 × 376 are resized to 640 × 192 to meet the specifications of the proposed network and to reduce the memory consumption of the GPU. A general procedure of standardizing the images about mean and variance is used to narrow down the distribution and to pace up the convergence. Two consecutive images are stacked along the channels to serve as the input to the feature encoder. A temporal skipping strategy for augmenting the data is used by selecting a consecutive random frame within an interval of 0 to 4 in the forward direction to learn more distinctive and complicated mapping. Feature Encoder VO or continuous ego-motion estimation requires consecutive image pairs. In traditional methods, this is performed by feature matching or photometric consistency across the frames of the sequence. In learning-based methods using deep learning, the hidden representations of the images are automatically extracted to estimate the 6 DOF pose. The proposed feature encoder takes in a stack of two grayscale images of size 640 × 192 at each training step. The details of the architecture of the feature encoder used for this method are presented in Table 1. Feature encoder consists of seven layers using the rectangular kernels, except the last one. A combination of different strides and dilations are used to efficiently reduce the size of the network by extracting the features with greater receptive coverage. 
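A minimal sketch of the preprocessing and input-stacking step of Section 2.1 may be helpful at this point (the encoder description continues below). The code is ours rather than the authors'; the function name, tensor shapes, and the exact reading of the 0-to-4 forward temporal-skipping interval are assumptions based on the description above.

```python
import random
import torch
import torch.nn.functional as F

def make_input_stack(frames, t, max_skip=4):
    """Preprocessing sketch: resize raw 1241x376 grayscale frames to 640x192,
    standardize about mean/variance, and stack two temporally separated frames
    along the channel axis.  `frames` is assumed to be a float tensor of shape
    (T, 1, 376, 1241); the skipping convention below is our interpretation."""
    skip = random.randint(0, max_skip)                     # temporal-skipping augmentation
    j = min(t + 1 + skip, frames.shape[0] - 1)
    pair = frames[[t, j]]                                  # (2, 1, H, W)
    pair = F.interpolate(pair, size=(192, 640), mode="bilinear", align_corners=False)
    pair = (pair - pair.mean()) / (pair.std() + 1e-8)      # standardize
    return pair.reshape(1, 2, 192, 640), j                 # 2-channel stack for the encoder

# Illustrative use with random data standing in for a KITTI sequence.
frames = torch.rand(10, 1, 376, 1241)
stack, j = make_input_stack(frames, t=0)
print(stack.shape)                                          # torch.Size([1, 2, 192, 640])
```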
The last layer is a special convolutional pooling layer to downsample the dimensions of the descriptor. Batch normalization and ELU (exponential linear unit) are used for every layer to accelerate the convergence. Pose Regressor The extracted global feature descriptor from the feature encoder is transformed into a 6 DOF pose estimate by feeding into a two-layered MLP (multilayer perceptron). The first layer consists of 256 nodes with ELU activation. The output or the second layer of the pose regressor consists of 6 nodes with linear activation. The output vector represents the translations and rotations in Euler angles about each axis. The predicted values are quantitatively used to estimate the loss with the labeled ground truth. Windowed Pose Optimization Proposed approach adopts a unique strategy motivated by the benefits of windowed bundle adjustment in reducing drifts. The proposed networks use four images of the video sequence and stack them into 3 overlapping samples to feed the network. Let {I t , I t+1 , I t+2 , I t+3 } be the four consecutive images stacked into {I t,t+1 , I t+1,t+2 , I t+2,t+3 }, as shown in Figure 1. First, each training iteration consists of forward propagating a triplet network using three consecutive image stacks. Second, the gradients are propagated backward by estimating the cumulative loss of predictions from triplets. A detailed explanation of the formulated loss function used for training is presented in Section 2. Consider u = [x, y, z, ω 1 , ω 2 , ω 3 ] ∈ se(3), where (x, y, z) and (ω 1 , ω 2 , ω 3 ) representing the translations and Euler angles. The corresponding generators of se(3) representing the derivatives of translations and rotations about each axis can be formulated as Equation (1): For mathematical convenience, we denote translations u and rotations ω separately. The linear combinations of generators can written as Equation (2): where G 1 , G 2 , G 3 are partial derivatives of translations about X, Y, Z axis with linear combinations p = xG 1 + yG 2 + zG 3 , respectively. G 4 , G 5 , G 6 are partial derivatives of Euler angles (ω 1 , ω 2 , ω 3 ) on the X, Y, Z axis with linear combinations ω = ω 1 G 4 + ω 2 G 5 + ω 3 G 6 , respectively. The linear combinations of generators representing δ = (p ω) ∈ se(3) are transformed to SE(3) by applying the exponential mapping Using Taylor expansion, exponential map of ω and V can be formulated as: where θ = |ω|, ω x is the skew-symmetric matrix from the linear combination of rotational generators. Similarly, T = R t 0 1 , where T ∈ SE(3). R ∈ SO(3) and t ∈ R 3 are translational and rotational elements and can be inverted to the logarithmic map using: where θ is the axis angle calculated from Equation (5). ω can be recovered from the offdiagonal elements of ln(R) and p = V −1 t. These pose estimates from SE(3) composition layers are referred to as unrelated stacks due to the reason that these are estimated based on the predicted poses of {T t→t+1 , T t+1→t+2 , T t+2→t+3 } corresponding to image stacks {I t,t+1 , I t+1,t+2 , I t+2,t+3 } in the forward pass from: where represents the dot product. Loss Function The training process consists of adjusting the network parameters θ by minimizing the deviation between predicted u t and ground truth u t poses. 
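Before formulating the loss, the se(3)/SE(3) machinery of Equations (1)-(6) can be made concrete with a short numerical sketch. This is our illustration, not the paper's SE(3) composition layer; the composition-order convention (T_{a→b} maps frame-b coordinates into frame a) and the example pose values are assumptions.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix w_x built from a 3-vector w."""
    return np.array([[0.0, -w[2],  w[1]],
                     [w[2],  0.0, -w[0]],
                     [-w[1], w[0],  0.0]])

def se3_exp(p, w):
    """Exponential map se(3) -> SE(3) for u = (p, w): R = exp(w_x) by the Rodrigues
    formula and t = V p, in the spirit of Equations (3)-(4)."""
    th = np.linalg.norm(w)
    wx = hat(w)
    if th < 1e-8:                               # small-angle fallback
        R = np.eye(3) + wx
        V = np.eye(3) + 0.5 * wx
    else:
        A = np.sin(th) / th
        B = (1.0 - np.cos(th)) / th**2
        C = (th - np.sin(th)) / th**3
        R = np.eye(3) + A * wx + B * (wx @ wx)
        V = np.eye(3) + B * wx + C * (wx @ wx)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ p
    return T

# Composite pose used by the windowed loss, e.g. T_{t->t+2} = T_{t->t+1} @ T_{t+1->t+2};
# the composition order assumes T_{a->b} maps frame-b coordinates into frame a.
T01 = se3_exp(np.array([0.10, 0.00, 1.00]), np.array([0.00, 0.02, 0.00]))
T12 = se3_exp(np.array([0.00, 0.00, 0.90]), np.array([0.00, 0.01, 0.00]))
T02 = T01 @ T12
print(T02[:3, 3])                               # composite translation over two steps
```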
The conditional probability of the VO problem can be formulated, and optimal parameters θ * can be estimated by maximizing the following objective: This method uses a homoscedastic uncertainty-based loss function to automatically choose the weighting coefficient between translational and rotational counterparts. The selected homoscedastic loss function consists of two uncertainty quantification regularization terms (ŝ p ,ŝ ω ) as given in Equation (8): where L p = p t − p t 2 2 and L ω = ω t − ω t 2 2 are the euclidean distance between ground truth (p t , ω t ) and predicted ( p t , ω t ) translational and rotational elements, respectively. Standard networks solely minimize the relative transformational errors. Optimizing the nearest frames by enforcing the geometric constraints using composite poses jointly is the key to maintain lesser drifts. The total loss term consists of directly estimated relative poses with estimated composite poses are written as Equation (9): Loss relative = Loss t→t+1 + Loss t+1→t+2 + Loss t+2→t+3 , Loss composite = Loss t→t+2 + Loss t→t+3 + Loss t+1→t+3 , Loss total = Loss t→t+1 + Loss t+1→t+2 + Loss t+2→t+3 + Loss t→t+2 + Loss t→t+3 + Loss t+1→t+3 , Loss total (DA) = Loss t→t+j + Loss t+j→t+k + Loss t+k→t+l + Loss t→t+k + Loss t→t+l + Loss t+k→t+j , where Loss total (DA) is the loss function for samples with data augmentation (DA), and j, k, l are the random values ranging from 0 to 4. Experiments This section presents the details of the performance evaluation of the proposed method. First, the software and hardware environment used to train and test the proposed method with a set of selected hyperparameters are presented. Second, details of the benchmark and evaluation metrics associated are described. Next, the importance of DA in the VO task is presented by choosing the varying amount of augmented data. Performance of the related works is compared relatively to current method to evaluate the efficiency and accuracy of the current windowed deep optimization technique. Finally, a detailed ablation study is performed on the network to visualize the importance of windowed optimization with a detailed run-time analysis. Implementation Details The network was trained and tested using PyTorch framework in Python on Nvidia 2080S GPU with a memory of 8 GB and Intel i9-10900F at 2.80 GHz. An Adam optimizer with default setting of β 1 = 0.9, β 2 = 0.999 was used, as presented in Reference [34]. The initial learning rate of 0.001 with a half decay rate for every 30 epochs until 150 epochs was selected to train the network. Even though our model only consumes one-fourth of the total GPU available, batch size remained at 32 for training and testing. Dataset We used the KITTI VO benchmark [21] to train and test WPO-Net. The dataset consists of 21 sequences composed of 23,201 images; 11 of the 21 sequences are available with ground truth pose estimates. For this work, we adopted a split used in Reference [31][32][33][35][36][37], which reserves 00-08 sequences for training and 09, 10 sequences for testing. A station wagon is used to collect the dataset in outdoor environments with a frequency of 10 frames per second and compromises of challenging scenarios with dynamic objects. The default image size of the images in the dataset is 1241 × 376, and the images are resized to half for training and testing the proposed network to constrain the computational cost. Training data is augmented using a temporal skipping technique, and no DA is involved while testing the network. 
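The windowed objective of Equations (8)-(9) can be sketched in a few lines; this is our illustration rather than the released training code, the exponential homoscedastic weighting is the standard uncertainty-based formulation and is assumed here, and the tensors are placeholders.

```python
import torch
import torch.nn as nn

class HomoscedasticPoseLoss(nn.Module):
    """Sketch of the adaptive loss of Equation (8); the exponential weighting below is
    the standard homoscedastic-uncertainty formulation and is an assumption here."""
    def __init__(self):
        super().__init__()
        self.s_p = nn.Parameter(torch.zeros(()))   # learned regularizer for translation
        self.s_w = nn.Parameter(torch.zeros(()))   # learned regularizer for rotation
    def forward(self, p_hat, p_gt, w_hat, w_gt):
        L_p = ((p_hat - p_gt) ** 2).sum(-1).mean() # squared Euclidean translation error
        L_w = ((w_hat - w_gt) ** 2).sum(-1).mean() # squared Euler-angle rotation error
        return (torch.exp(-self.s_p) * L_p + self.s_p
                + torch.exp(-self.s_w) * L_w + self.s_w)

# One optimization step per window of four frames (Equation (9)): sum the three relative
# and the three composite pose losses, then take a single backward pass.
criterion = HomoscedasticPoseLoss()
pairs = [(torch.randn(8, 3), torch.randn(8, 3), torch.randn(8, 3), torch.randn(8, 3))
         for _ in range(6)]                        # placeholders for the six pose pairs
loss_total = sum(criterion(p_hat, p_gt, w_hat, w_gt) for p_hat, p_gt, w_hat, w_gt in pairs)
loss_total.backward()
```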
Three evaluation metrics, namely absolute trajectory error (ATE(m)), translational error (t rel (%)), and rotational error (r rel (deg/100 m)), are used to efficiently evaluate within various sizes of samples of the present method and related works. Translational and rotational errors are obtained by averaging the subsequence errors from 100 to 800 m with an interval of 100 m. Effects of Data Augmentation Data is one of the crucial components for any learning-based paradigm, such as deep learning. This section emphasis on a long-standing yet challenging problem in training deep networks. The majority of supervised learning works adapted a manual weighting approach to tune the balance between the rotational and translational elements, which is time-consuming and needs an extensive parameter search space. However, it is very difficult to derive a quantitative measure between rotational and translational samples in the VO task, and, to avoid these data-related uncertainties and to adaptively weight the elements, a homoscedastic based loss is used. Another interesting direction is to increase the size of the available dataset with techniques, such as random sampling, cropping, and noise addition. A temporal skipping technique is used for this study to augment the data, and the effects of different percentages of augmentation with respect to evaluation metrics are shown in Table 2. The predicted trajectories of the best model DA (30%), second-best DA (10%) are plotted against the ground truth in Figure 2. The overall estimated trajectory trained with DA 30 percent performed well on ATE and translational error (t rel ). This study considers ATE as one of the significant evaluation metrics in the aspects of VO tasks to reduce the drift and is often underemphasized. From the experiments, it is evident that increasing the dataset by augmenting does not always result in higher accuracies, especially in a complex multi-view geometry problem, such as VO. The best model for comparison with other related works is chosen to be the dataset with DA (30%). Though the dataset with DA (10%) performed superior to other splits in terms of translational error, the dataset with DA (30%) outperformed it over the other two evaluation metrics. Rotational and translation errors of models trained on different augmentation split and tested on sequences 09, 10 for subsamples are shown in Figure 3. From Figure 3c,d, it can be observed that the model trained on DA with 30 percent is stable and accurate compared to other splits. Similarly, from Figure 3a,b, DA (30%) performed superior to other splits. Though DA (30%) is lagging behind DA (10%) in a singular case (translational error (t rel )), overall performance of DA (30%) is better compared to others, and this model is used to compare with the related works in the next section. Comparison with Related Works This section evaluates the proposed method with other significant published works. The proposed WPO-Net is evaluated across three different algorithms. First, Monocular VISO2 [18] and ORB-SLAM [6] are used to evaluate against the state of art algorithms. Second, a supervised version of Reference [35], DeepVO [11], and Flowdometry [12] are employed to compare with the supervised learning-based methods. Though DeepVO and Flowdometry are some of the most prominent supervised learning-based methods, different splits were used for training and testing. 
To effectively deal with such train-test split discrepancies in comparison with other methods, the average translation, and rotational errors across all sequences are used. Finally, unsupervised learning-based methods, such as in References [31][32][33]36,37], are included in the comparison with WPO-Net in Table 3. Although the performance of WPO-Net is slightly unsatisfactory on sequence 09 against VISO2M, the overall performance advantage is higher and accurate. In addition, the current method avoids the complex pipeline involving numerous subsystems, such as VISO2M and ORB-SLAM. On the other hand, WPO-Net performed significantly better on sequence 09 than any other learning-based methods used for comparison. Supervised learning-based methods take the advantage of implicitly learning the scale during the training process. The overall rotational error is minimal in comparison with other methods. This experiment verifies the ability of the learning-based windowed pose optimization technique in improving the accuracy of the system. Ablation Study This section includes the experimentation on the proposed WPO-Net to examine the efficiency of learning-based windowed pose optimization. The conclusion is drawn by training and testing the network with three different window sizes (WS). The WS defines the number of consecutive images used for every single backpropagation. Let WS be equal to n images, and the number of times the network is forward propagated is given by (n − 1) with a single backpropagation. When WS = 2, the network by default acts as a standard supervised network with one sample input and one sample output. The three different window sizes are selected to observe the efficiency of windowed pose optimization by examining the evaluation metrics. Figure 4 illustrates the number of images used for a single iteration as the windows slide towards the right. All the networks used for comparison in this section are trained and tested with the same split, as mentioned in Section 3.3, with 30 percent of DA. The network with WS = 4 was the one used to compare with related work, and the data is derived from Section 3.4. The results of the evaluation metrics of different WS's are presented in Table 4. This experiment provides clear evidence of increased performance while using windowed optimization. This technique also can be viewed as a resemblance to windowed bundle optimization used in state-of-the-art VO methods. It is also important to consider the computational overheads during training with a larger WS. Thus, to limit the total training time of WPO-Net, WS is limited to 4. Furthermore, the predicted trajectories of WS = 2, 3, 4 are illustrated in Figure 5. Time taken for inference and training are measured by using a batch size of 2 averaged over hundred iterations. The inference, training time on GPU is 3.98, 19.54 and CPU is 7.87, 41.32 ms, respectively. The total parameter count of WPO-Net is 0.48 million, which makes it a light and affordable network to run on embedded controllers. Comparison of run-time analysis of WPO-Net with other methods is not included because the hardware used is different from method-to-method. Conclusions In this paper, an optimization method for learning-based VO is proposed. The proposed method can reduce overall trajectory drift and improves the accuracy of the system. From experiments, it was clear that increasing the data augmentation over a specific point degrades the performance. 
The proposed method outperformed most of the unsupervised methods included in the comparison on the KITTI dataset and achieved the lowest rotational error of all compared methods. The mean rotational error was improved by 13.06% relative to Reference [36], the best of the related works used in the comparison. It is also worth noting that the learning-based methods included in the evaluation have considerably more parameters than WPO-Net. The inference time of the proposed method on the CPU is 7.87 ms. In future work, we will validate the real-time performance of the proposed WPO-Net, along with generalization tests. Data Availability Statement: The KITTI dataset [21] used for this study is openly available at http://www.cvlibs.net/datasets/kitti/ (accessed on 15 June 2021). Conflicts of Interest: The authors declare no conflict of interest.
6,202
2021-12-01T00:00:00.000
[ "Computer Science" ]
Computer-Aided Medical Diagnosis Using Bayesian Classifier-Decision Support System for Medical Diagnosis This study employs a Bayesian framework to construct a Web-based decision support system for medical diagnosis. The purpose is to help users (patients and physicians) with issues pertinent to medical diagnosis decisions and to detect diseases with highest probability through the Bayesian framework. Users could perform a more accurate diagnosis with the prior/conditional probabilities obtained from selected data sets and compute the posterior probability using the Bayes theorem. The proposed system identifies diseases by analyzing symptoms or by analyzing medical test results. Currently the system detects different types of diseases that people suffer in their day-to-day lives (general diseases) with an average detection accuracy of 92.56%. System also detects complex diseases (e.g.: heart disease 83.67%, breast cancer 80.98%, liver disorders 79.43%, lung cancer 71.00%, primary tumor 78.02%, etc.) based on the analysis of the medical test results. The proposed system enhances the quality, accuracy and efficiency of decisions in medical diagnosis since the use of Bayesian theorem allows this system to offer more accurate platform than the conventional systems. Other than that this web-based system provides value-added services in conjunction with CAD system, such as; e-Chat & e-Channeling. More importantly, the targeted user group will be able to access the system as a software element freely and quickly. In this way the goal of this study – which is to provide a web-based medical diagnosis system is effectively achieved. INTRODUCTION Computer-aided medical diagnosis (CAD) has become far more widespread in the world and provides real world business solutions to users in areas ranging from automated medical diagnosis (Chang, 1998) to the extended applications such as decision support tools, clinical diagnosis, prediction of diseases, etc. In medical science, Bayes' theorem can be used as the logical process of performing medical diagnosis, particularly in automated medical diagnosis decision support systems (Sahai, 1991).This research also incorporates the theoretical framework of Bayesian classifier to implement a web based medical diagnosis decision support system to perform medical diagnosis and find appropriate recommendations and solutions when encountering medical diagnosis problems.This web-based system also provides value-added services in conjunction with the CAD system, such as; e-Chat & e-Channeling.The proposed system provides users with following facilities.i. Effectively access the knowledge and provide solutions when needed from clinical databases.ii. Help doctors when decision making.iii. Facilitate a communication channel between two peers over a web-browser.v. E-channeling. Importance & Advancements of Bayes theorem in Automated Medical diagnosis When considering medical science, Bayes theorem plays a major role in intelligence systems by modelling the underlying process of medical diagnosis.When considering about clinical medicine, clinical diagnosis is very crucial. Medical diagnosis is based on several different parameters like symptoms, allergies, signs, etc. 
Physicians diagnose a disease by making a subset of all the possible diseases depending on the symptoms provided.Similarly, the advancement of mathematics & computer engineering has achieved a greater success in automating computer-aided medical diagnosis (Sahai, 1991).Accuracy of the computer-aided medical diagnosis depends on the wide range of information used to calculate the probability (Sahai, 1991;Chung & Lu, 2009).(Sahai, 1991). Medical decision support systems can be traced back to the Dombal's acute abdominal pain diagnosis system and Shortliffe's MYCIN (blood infectious diseases diagnosis system). The research was done with 304 case studies to successfully identify most of the acute appendicitis patients.However, in 6 cases it generated misjudgments; non-specific abdominal pain for acute appendicitis. MYCIN is another decision support system that offers recommendations about the treatments and type of dosage (Shortliffe, 1990;Shabot & Gardner, 1994).This is a system that separates the problem-solving rules and inference engine of domain knowledge, using rule-oriented syntax through program code.Because of the birth of MYCIN, two other expert systems were derived from it: CLOT (for diagnosing abnormal bleeding) and PUFF (examining lung functions).This study employs a Bayesian framework to construct a web-based decision support system for medical diagnosis because the Bayes theorem has been frequently used in many studies and has performed remarkably well in clinical applications against the independent assumptions. Computer Technology and Computeraided Medical diagnosis decision support systems Computer-aided Medical decision support systems could be classified into Probability systems and Knowledge based systems (Sahai, 1991).Due to the advancement of computeraided Medical decision support systems, it could assist medical practitioners basically in two areas: medical diagnosis and data interpretation (Chung & Lu, 2009).Now we could implement computer-based or web-based programs to generate more accurate output via computation of various mathematical formulae and help medical practitioners to extract vital information to arrive at better diagnosis, minimizing misjudgments and effectively use the computational analysis in their medical operations (Chung & Lu, 2009).Features of these systems are listed as follows: i. Access the knowledge and provide solutions when needed from clinical databases effectively.ii. Store patient's historical medical records.iii. Help doctors with decision analysis.iv. Determine suitable medications. In our work, we use the Bayesian Classifier for accurate medical diagnosis and provide valueadded services like e-Chat & e-Channeling in conjunction with CAD system. Bayes theorem and System Architecture of CAD system The proposed system uses probability distributions of symptoms/medical tests of twenty-five (25) diseases and uses the Bayesian classifier to predict the presence of a disease.It has the capacity to predict whether a disease is positive or not for a new set of measurements by using different measures obtained from conducting various tests per disease. Figure 1 and Figure 2 represent the prototype implementation of the CAD system along with e-Chat and e-Channeling functionalities and the data that the system can access when performing the computer-aided diagnosis respectively.With the use of Bayes theorem, given the symptoms, the posterior probability of a disease being positive could be computed (Sahai, 1991). 
Therefore, we calculate the probability of a particular disease as follows: where P(+) and P(−), are the prior probability distributions of a particular disease.When generating the probability distributions, the prior probability was assumed as 0.5 as there are two (2) possible outcomes in our situation; either the presence of a disease or not. Peer-to-peer communication over web browser As an additional feature of the system, when implementing the video streaming chat, web real-time (WebRTC) technology is used as the theoretical basis as it provides high accuracy level, flexible support in different browsers/platforms and real-time communication capabilities over JavaScript APIs'.WebRTC enables browser to browser communication starting from small scale group up to multipart communication and is very inexpensive. Peer-to-peer live video streaming chat through the web browser is one of the most suitable option that provides a live communication channel for patients and doctors to communicate and establish a high-level of relationship. WebRTC provides 3 types of APIs' for developers (David, 2014);  Media Stream (Get User Media): identifies and captures the end-user camera and microphone for use in video chat.  RTC Peer Connection (Peer Connection): enables audio/video call setup. Although Real-Time Communications (RTC) provide such services like call control, call handling and present modifications to enable browser-based video communication (David, 2014), WebRTC enables users such services without installing any plugin. WebRTC provides several features along with the above mentioned APIs'; such as, effective end-to-end performance tests, compatible with most of the platforms, flexible implementation process and accurate issue identification & active feedback (David, 2014). WebRTC technology also provides service benefits like; implement a solution for a problem as expected, observe customer experience by increasing customer satisfaction, reduce issues regarding time and cost, achieve customer requirements and ready the product with a confident (David, 2014). Access symptoms to diagnose general diseases Access medical test results to diagnose complex diseases Detect disease with highest probability CAD system Computer-aided medical diagnosis using Bayesian classifier e-Channeling functionality over the web browser window To increase the accuracy and the usability of the proposed system, we have implemented a traditional e-Channeling gateway along with the CAD system. The channeling sub-section provides users with several facilities like channeling physicians, getting appointments for medical check-ups, generating remainders to keep track of patient's medications along with SMS and email generating facilities, another four (4) interactive health tools such as BMI (Body Mass Index) calculation, pregnancy calculation, calculate paracetamol dosage for kids and firstaid help over several video tutorials. RESULTS & DISCUSSION In this experiment, we use the Bayesian Classifier for accurate medical diagnosis.We generated probability distributions for selected twenty-five (25) diseases by collecting data and using online databases.Selected twenty-five (25) diseases were categorized into two (2) sectors; general diseases (diseases that people suffer in their day-to-day lives) and complex diseases (diseases that cause long-term harmful effects to the human body). 
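Before turning to the data, the core posterior computation described above — Bayes' rule P(D+|S) = P(S|D+)P(D+) / (P(S|D+)P(D+) + P(S|D−)P(D−)) with a prior of 0.5 and conditionally independent symptoms — can be illustrated with a short sketch. The code, symptom names, and likelihood values below are illustrative assumptions and are not the study's data.

```python
def posterior_disease_probability(symptoms, likelihood_pos, likelihood_neg, prior_pos=0.5):
    """Naive-Bayes diagnosis sketch: multiply the prior by the per-symptom likelihoods
    for the disease-positive and disease-negative cases, then normalize."""
    p_pos, p_neg = prior_pos, 1.0 - prior_pos
    for s, present in symptoms.items():
        p_pos *= likelihood_pos[s] if present else (1.0 - likelihood_pos[s])
        p_neg *= likelihood_neg[s] if present else (1.0 - likelihood_neg[s])
    return p_pos / (p_pos + p_neg)

# Compute the posterior for each candidate disease and report the disease with the
# highest probability (hypothetical numbers for illustration only).
symptoms = {"fever": True, "cough": True, "rash": False}
diseases = {
    "influenza": ({"fever": 0.90, "cough": 0.80, "rash": 0.05},
                  {"fever": 0.20, "cough": 0.30, "rash": 0.05}),
    "measles":   ({"fever": 0.85, "cough": 0.50, "rash": 0.90},
                  {"fever": 0.20, "cough": 0.30, "rash": 0.05}),
}
scores = {d: posterior_disease_probability(symptoms, lp, ln) for d, (lp, ln) in diseases.items()}
print(max(scores, key=scores.get), scores)
```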
Data used for the general diseases category were collected at several medical centers in the Western Province, Sri Lanka, with the relevant permissions from doctors, and only patients who were willing to contribute to this research were examined. Data used for the complex diseases category were obtained from the online source 'UCI Machine Learning Repository: Data Sets' (Asuncion & Newman, 2007). The independence of all possible inputs/data/measurements was checked mathematically by obtaining the reduced row echelon form of the data matrix, since the Bayes theorem requires independent measurements.

Results analysis - Generated by Bayes theorem
For testing the performance of the system, a considerable number of data samples were used as test data, and validation techniques were applied to reduce the error rate of the system. The system diagnoses a particular disease from the given symptoms by computing the posterior probability for each disease and choosing the disease with the highest probability. The results for general and complex diseases are tabulated in Table 1 and Table 2, respectively. e) To provide ethical clearance of the data, we did not collect any personal details of patients (such as NIC numbers, names, gender, etc.) other than the symptoms/measurements required for the probability calculations and testing. f) We used questionnaires to gather data for general diseases by attending several medical centers after obtaining prior approval from doctors, and only patients who were willing to contribute to the questionnaire were examined.

Limitations of medical diagnostic support programs
Some users may face difficulties when interacting with web-based systems due to a lack of computer literacy. It can also be very hard for doctors to convey their complex understanding of the patient to a computer program efficiently.

Advantages of e-Chat and e-Channeling
Users can perform two further important functions, chatting with physicians over a web-based communication channel and e-Channeling, in a single window alongside the CAD system. All activities performed by the user via the system are notified either by email or by SMS. The implemented functionalities are time saving and very convenient for users.

CONCLUSION
Use of the Bayesian theorem as the theoretical basis, with prior and conditional probabilities used to determine the posterior probability, helps users to perform accurate medical diagnosis. The implemented CAD system verifies the value of the Bayesian theorem in medical decision support systems. Integration of the above theories with web technology provides a quick and efficient way of providing treatment to users. Physicians can use this system to make better decisions in medical diagnosis, and users have the opportunity to use these functionalities (CAD system, e-Chat, and e-Channeling) quickly and efficiently in a single browser window.

Figure 1. The prototype implementation of the CAD system along with e-Chat and e-Channeling functionalities. Figure 2. The data that the system can access when performing the computer-aided diagnosis. Table 1. Detection rates of general diseases. Table 2. Detection rates of complex diseases.
2,650.2
2017-01-28T00:00:00.000
[ "Medicine", "Computer Science" ]
Einstein clusters as models of inhomogeneous spacetimes We study the effect of small-scale inhomogeneities for Einstein clusters. We construct a spherically symmetric stationary spacetime with small-scale radial inhomogeneities and propose the Gedankenexperiment. An hypothetical observer at the center constructs, using limited observational knowledge, a simplified homogeneous model of the configuration. An idealization introduces tensions and side effects. The inhomogeneous spacetime and the effective homogeneous spacetime are given by exact solutions to Einstein equations. They provide a simple toy-model for studies of the effect of small-scale inhomogeneities in general relativity. Introduction The concept of idealization is one of basic tools of modern physics. Macroscopic physical systems could be modelled only if unimportant details are neglected. Unfortunately, it is not always easy to decide which elements in the construction of the model are essential and which are not. It is believed that decisive role is played by observational or experimental falsification. Again, this is not always straightforward. The most famous example is the model of our Universe. Its foundations have been proposed hundred years ago. This extremely simple model, which extrapolated by many orders of magnitude our faith in applicability of general relativity, turned out to be very successful. A hundred years later the model is alive and able to accommodate enormous flux of observational data provided by advances of modern technology. But, what some people see as a pure success for others is a failure: 96% of the energy content of the model has not been previously known and is seen only via gravitational interactions. This apparent contradiction motivated broad studies of validity of a basic assumption of the model -exact spatial homogeneity. In our article, we take on this topic. Our approach is restricted to a simple exact toy-model and, as such, it is only indirectly relevant for cosmology (for other studies based on exact solutions see also [2,12,13,14,10,11]) Einstein cluster is a class of solutions to Einstein equations which was discovered by Albert Einstein in 1939 [5]. It provides an effective description of a cloud of massive particles moving in randomly inclined circular geodesics under the collective gravitational field of all the masses (see figure 1). The spacetime is spherically symmetric and stationary. The radial pressure vanishes because the whole system is centrifugally supported. Einstein clusters have been studied extensively in literature (see [7] and references there). In the astrophysical context, they have been proposed as models of galactic dark matter haloes [3]. The vanishing of radial pressure allows to construct stationary spacetimes with small-scale inhomogeneities without introducing unphysical equation of state. This property suits our purpose superbly. The problem of small-scale inhomogeneities may be split into two topics: the effect of inhomogeneities on geodesics (light, gravitational waves, test bodies) which alters interpretation of our observations and the so-called backreaction effect which alters the structure of spacetime in a sense which will be explained below. The backreaction problem is usually formulated as a fitting problem [6]. In this approach, one asks how to fit an idealized solution to a realistic ('lumpy') spacetime. The aim of this approach is to find covariant procedure which uniquely assigns the best effective spacetime to a realistic one. 
However, it is more common in down-to-earth scientific work to assume an effective model a priori. A physicist who wants to describe a complicated system usually neglects 'details' and proposes a simplified model. This model is later tested in experiments or against observations. In cosmology, spatial isotropy and homogeneity of the universe were a natural first guess. These assumptions led to our standard cosmological model, ΛCDM. This model, with free parameters estimated by astronomers, constitutes the 'effective spacetime'. (Figure 1: The solution called 'Einstein cluster' describes a spacetime filled with a high number of massive particles moving in their own gravitational field on randomly inclined and directed circular orbits. The Cartesian coordinates x, y, z were scaled for simplicity.) Therefore, instead of looking for a fitting procedure one may formulate the backreaction problem in an alternative way and ask what kind of errors have been introduced by the idealization. In this alternative approach, the effective spacetime is known from the beginning. An idealised geometry does not fit the matter content exactly and a discrepancy arises between the left-hand side (geometry) and the right-hand side (the energy-matter content) of the Einstein equations. If one assumes that the Einstein equations hold, then additional or missing terms are incorrectly interpreted as a contribution to the energy-momentum tensor. These artificial terms are known as a backreaction tensor. Since the ΛCDM energy-momentum tensor is dominated by dark matter and dark energy - the forms of energy and matter detected so far only through their gravitational interactions - the backreaction effect has the potential to clarify our understanding of the Universe. The results presented in the article [8] suggest that this potential has not been realised in nature: under appropriate mathematical conditions the backreaction tensor is traceless, thus it may mimic radiation, but it cannot mimic a cosmological constant or cold dark matter. In the context of the ΛCDM model this implies that the backreaction effect introduces a minor correction and is definitely not the 'order of magnitude' effect (which would be needed to explain cosmological observations without a cosmological constant or other forms of dark energy). One may, at least formally, find relevant examples of spacetimes with small-scale inhomogeneities to which the formalism of [8] cannot be directly applied, e.g. a vacuum cosmological model with all the mass concentrated in statistically homogeneously distributed black holes [1]. Moreover, even if the backreaction vanishes, the effect of small-scale inhomogeneities may still alter the interpretation of our observations, as will be illustrated by our example. The aim of the article is to conduct a Gedankenexperiment. We construct an exact solution to the Einstein equations which contains small-scale inhomogeneities. We show that in our model the backreaction vanishes (in the sense of the Green-Wald framework). Moreover, we present a heuristic analysis which implies that the backreaction vanishes in all possible models constructed within the Einstein cluster class. Next, we adopt the point of view of an astrophysicist who would like to model our inhomogeneous spacetime by available idealised exact solutions. We argue that astronomical observations interpreted within the simplified model would lead to a misinterpretation of the energy content of the model. 
Our analysis is restricted to a particular class of solutions to the Einstein equations, but it illustrates what the effect of small-scale inhomogeneities could be in principle. Setting Any spherically symmetric stationary spacetime can be written in the form ds^2 = -e^{ν(r)} dt^2 + e^{λ(r)} dr^2 + r^2 (dθ^2 + sin^2θ dφ^2), where ν, λ are functions of r only. Two of the Killing fields can be immediately read off from the form of the metric: ∂_t, ∂_φ. For the centrifugally supported cloud of massive particles (the so-called Einstein cluster) the non-vanishing components of the energy-momentum tensor are [5] T^t_t = -ρ, T^θ_θ = T^φ_φ = p, where ρ = ρ(r) is the energy density and p = p(r) is the tangential pressure. The Einstein equations imply e^λ = 1 + r ν' (3) and ν'' + (ν')^2 + 2ν'/r = 8πρ (1 + r ν')^2 (4). In order to find a particular solution one may set ρ(r), solve (4) for ν(r) and calculate λ(r) from (3). The standard pseudopotential analysis reveals [7] that the radial stability conditions have the form 0 < r ν' < 2 and ν'' - (ν')^2 + 3ν'/r > 0 (5). The equation (4), if written in terms of the auxiliary function λ = ln ζ [using (3)], reduces to the Bernoulli differential equation ζ' + P ζ = Q ζ^2 (6), where P = -1/r, Q = -1/r + 8πrρ. The substitution ζ → 1/µ leads to a linear equation of the form µ' + µ/r = 1/r - 8πrρ (7), which has the solution µ(r) = 1 - (8π/r) ∫_0^r ρ(s) s^2 ds (8), where the integration constant is fixed by regularity at the center (it depends on the form of ρ). Therefore, for a given density profile the solution to the Einstein equations is given by ν'(r) = (1 - µ)/(r µ) (9) and ds^2 = -e^{ν(r)} dt^2 + dr^2/µ(r) + r^2 (dθ^2 + sin^2θ dφ^2) (10), where µ is given by (8). This solution may be matched to the Schwarzschild exterior. The active gravitational mass inside the sphere with an area radius r is given by [7] M(r) = 4π ∫_0^r ρ(s) s^2 ds = r[1 - µ(r)]/2 (11). Small-scale inhomogeneities Let ρ contain a 'high frequency component', ρ(r) = ρ_l(r), where ρ_l is an oscillating function and l is a constant setting its scale (small l corresponds to high-frequency oscillations). Using equations (8), (9), (10) one may calculate the metric functions ν(r), λ(r) that correspond to ρ(r) given by the formula above. In this way, we construct a spacetime with small-scale inhomogeneities. It is not the aim of this paper to model any realistic astrophysical system, but in order to gain physical intuition one may pretend that our inhomogeneous spacetime describes a galactic halo. For simplicity, we choose ρ(r) in such a way that it oscillates about a constant density ρ_0 for a class of stationary observers. Moreover, our system constitutes a finite configuration: at some radius r = R it is matched to the Schwarzschild solution. We assume that a hypothetical astrophysicist living at the center of the system does not know ρ(r) precisely, but knows that ρ(r) is 'approximately' constant and that the configuration is finite. Both facts would become basic assumptions of his idealised model: the Einstein cluster with a constant energy density and anisotropic pressure, from now on called model A. It follows from the Birkhoff theorem that the vacuum spacetime outside a spherically symmetric configuration is given by the Schwarzschild metric. Thus, any effective spacetime must also be matched to the Schwarzschild solution. Observations of trajectories of satellite stars and dwarf galaxies, in such a hypothetical configuration, would allow one to estimate the gravitational mass of the system M. We assume that this parameter is known. Moreover, we assume that the astrophysicist observes the most distant stars (at the matching surface r = R) and knows their blueshift z. 
To sum up, the assumptions and hypothetical observational results which are made/known to our astrophysicist are: • the spacetime is stationary, • the spacetime is spherically symmetric, • the observer is at the center, • the matter is distributed uniformly on average, ρ(r) = const, • the configuration is finite (vacuum outside), • the gravitational mass M of the configuration is known (based on observations of satellite dwarf galaxies and orbits of stars encircling the halo), • the blueshift z of the most distant stars in the halo is known (we assume that z has been corrected for the perpendicular Doppler shift), • state-of-the-art observations are not good enough to resolve individual inhomogeneities (their density profiles, etc.) - the observer may detect only the cumulative effects. It will be more instructive for the reader to start with the description of a constant density Einstein cluster (our effective and background spacetime - our approach does not distinguish between these two concepts). Model A: constant density Einstein cluster A constant density profile ρ(r) = ρ_A and the equation (7) give (see also [4]) µ = 1 - a_A r^2, where a_A = 8πρ_A/3 is a constant and where the integration constant was chosen to satisfy regularity at the center r = 0. We have from (9), (10) e^ν = (1 - a_A r^2)^{-1/2}, where without loss of generality we have chosen the additive constant in ν so that ν(0) = 0. Finally, the metric reads ds^2 = -(1 - a_A r^2)^{-1/2} dt^2 + (1 - a_A r^2)^{-1} dr^2 + r^2 (dθ^2 + sin^2θ dφ^2) (16). The metric is regular and of Lorentzian signature for 0 ≤ r < 1/√a_A. The Ricci and Kretschmann scalars blow up at r = 1/√a_A, so there is a curvature singularity. From now on we assume that 0 ≤ r ≤ R_A < 1/√a_A. For r = R_A the spacetime is matched to the vacuum exterior Schwarzschild solution - the active gravitational mass inside the sphere with an area radius r is given by (11). For a constant density profile M(r) = a_A r^3/2. The radial stability conditions (5) reduce to 0 < 3a_A r^2 < 2 and a_A r^2 < 4/3, which gives an additional restriction on the matching hypersurface r = R_A. Inhomogeneous spacetime The toy-model studied in this paper is constructed as follows. We assume that ρ(r) = ρ_l(r) = 2ρ_0 cos^2(2πr/l + π/4), where ρ_0 and l are constants. The parameter ρ_0 is the average density as measured by stationary observers in our coordinate system. We introduce an auxiliary constant a such that ρ_0 = 3a/(8π). The frequency of density perturbations is fixed by l (we assume that this parameter is small, l ≪ 1, which corresponds to high-frequency oscillations). Moreover, we assume that nl = R for some integer number n. This condition, together with the choice of phase π/4, eliminates local variations of the gravitational potential - the energy density at the center (at the position of the observer) and at the matching surface R corresponds to its average value ρ_0. We split µ into two parts: one which does not depend on l and a second one which is O(l): µ = µ_0 + µ_l. Using (8) we find µ_0 = 1 - a r^2 (18) and an explicit O(l) oscillatory expression for µ_l (19), where the integration constant was chosen to satisfy regularity at the center. We have ν = ν_0 + ν_l, where from (9) ν_0 = -(1/2) ln(1 - a r^2) (20) and ν_l is the O(l) remainder (21). Therefore, we have g_tt = -e^ν, g_rr = 1/(µ_0 + µ_l) and the standard angular part g_θθ = r^2, g_φφ = r^2 sin^2θ (23). One may show that the Ricci and Kretschmann scalars blow up at µ = 0, so there is a curvature singularity. From now on we assume that 0 ≤ r ≤ R < 1/√a. The term µ_l, which is proportional to l, can be made arbitrarily small, so the metric is regular and of Lorentzian signature in R × (0, R) × S^2. For r = R the spacetime is matched to the vacuum exterior Schwarzschild solution - the active gravitational mass inside the sphere with the area radius r is given by (11). 
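As a sanity check on the formulas above, the construction (8)-(10) can be implemented in a few lines of numerical code. The sketch below is our own illustration and is not part of the paper; the function names and the sample parameter values are arbitrary assumptions. It integrates (8) for µ and (9) for ν for a user-supplied density profile, verifies that a constant profile reproduces µ = 1 - a_A r^2, and then builds the oscillatory profile of the inhomogeneous model.

```python
# Minimal numerical sketch (illustration only): build mu(r), nu(r) and M(r)
# from a density profile rho(r) via equations (8), (9) and (11).
import numpy as np

def build_cluster(rho, R, num=200001):
    r = np.linspace(1e-9, R, num)
    integrand = rho(r) * r**2
    # cumulative trapezoidal integral of rho(s)*s^2 from 0 to r
    F = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))
    mu = 1.0 - 8.0 * np.pi * F / r                     # equation (8)
    dnu = (1.0 - mu) / (r * mu)                        # equation (9)
    nu = np.concatenate(([0.0], np.cumsum(0.5 * (dnu[1:] + dnu[:-1]) * np.diff(r))))
    M = 4.0 * np.pi * F                                # mass function, equation (11)
    return r, mu, nu, M

# Constant density: mu should equal 1 - a_A r^2 with a_A = 8*pi*rho_A/3.
rho_A, R_A = 1.0e-4, 10.0
r, mu, nu, M = build_cluster(lambda x: rho_A * np.ones_like(x), R_A)
a_A = 8.0 * np.pi * rho_A / 3.0
print(np.max(np.abs(mu - (1.0 - a_A * r**2))))         # ~0 (up to integration error)
print(M[-1], a_A * R_A**3 / 2.0)                       # both ~ a_A R_A^3 / 2

# Oscillatory profile of the inhomogeneous model: rho = 2*rho0*cos^2(2*pi*r/l + pi/4),
# with a = 1 and R chosen close to 3M (a relativistic configuration).
rho0, R, n = 3.0 / (8.0 * np.pi), np.sqrt(2.0 / 3.0), 100
r, mu, nu, M = build_cluster(lambda x: 2.0 * rho0 * np.cos(2.0 * np.pi * x * n / R + np.pi / 4.0)**2, R)
print(M[-1], 0.5 * R**3)                               # total mass vs homogeneous a*R^3/2; they differ at O(l)
```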
The radial stability conditions (5) have a complicated form for this solution, but once the parameters of the model are fixed, one may verify by direct calculation that they hold. Green-Wald framework Our model corresponds to a one-parameter family of solutions to the Einstein equations (with l being a free parameter). One may verify by inspection that it satisfies all assumptions of the Green-Wald framework [8] with the background spacetime g^(0) which corresponds to g_A with a_A → a, R_A → R, ρ_A → ρ_0. We define h(l) = g(l) - g^(0). The non-zero components of h_αβ for small l are h_tt = -e^{ν_0}(e^{ν_l} - 1) and h_rr = 1/(µ_0 + µ_l) - 1/µ_0. It follows from the equations (19), (21) that µ_l, ν_l and the first derivative of ν_l vanish in the high frequency limit l → 0 (or n → ∞). The derivative ∂_r µ_l is not pointwise convergent, but it remains bounded. We have lim_{l→0} h_αβ = 0, as expected. Although w-lim_{l→0} (∇_δ h_αβ ∇_γ h_κι) does not vanish for δ = α = β = γ = κ = ι = r, the backreaction tensor is zero (w-lim denotes a weak limit as defined in [8] and the connection is associated with the spacetime g^(0)). In summary, the one-parameter family of spacetimes (23) has a high frequency limit lim_{l→0} g = g_A. It satisfies the assumptions of the Green-Wald framework [8]. Although one component of ∇_δ h_αβ is not pointwise convergent, the backreaction tensor vanishes. The vanishing of the backreaction gives rise to another interesting question: do there exist one-parameter families of solutions within the Einstein cluster class [different choices of ρ(r)] with non-trivial backreaction in the Green-Wald framework? We think that the answer to this question is no. We justify it as follows. The possible source of backreaction is the nonlinear term (ν')^2 in (4). In order to be a source of the backreaction it would have to be non-zero in the high-frequency limit - it should be at least O(l^0). However, if ν' does not vanish for l → 0, then it follows from (3) that λ is not pointwise convergent, which contradicts one of the Green-Wald assumptions about the behaviour of h_αβ as l → 0. Taking the high-frequency limit is a covariant procedure provided that the background (effective) spacetime is fixed. Therefore, all one-parameter families of Einstein clusters to which the Green-Wald framework may be applied have vanishing backreaction. Effective spacetime Our inhomogeneous spacetime is defined by three parameters: • an average energy density ρ_0, • a size - an area radius R, • a frequency of inhomogeneities l, or a number n of inhomogeneous regions (such that nl = R). These parameters are fixed. The effective model A is defined by two analogous parameters: ρ_A, R_A. We assume that the available observational data allow one to determine the gravitational mass M of the system and the gravitational blueshift z of stars at the boundary of the configuration. The observer is at the center of the configuration. From (11), we have for the inhomogeneous spacetime M = 4π ∫_0^R ρ(r) r^2 dr. Since the spacetime is spherically symmetric and stationary, the blueshift z is given by z = -1 + e^{[ν(0) - ν(R)]/2} (26), where ν(R) = ν_0(R) + ν_l(R) must be computed numerically from (20), (21). For the effective spacetime g_A given by (16) the mass M and the blueshift z may be calculated as follows. Let a_A = 2M/R_A^3; then at r = R_A the metric g_A matches the Schwarzschild solution. 
Since we also have a_A = 8πρ_A/3, then M = (4π/3) ρ_A R_A^3. The blueshift is z = -1 + (1 - 2M/R_A)^{1/4}. Finally, the unknown parameters of model A (the effective spacetime), namely ρ_A, R_A, in terms of the 'observational parameters' M, z and the parameters of the inhomogeneous spacetime ρ_0, R, n, are given by R_A = 2M/[1 - (1+z)^4] and ρ_A = 3M/(4π R_A^3), where M and z are computed from the inhomogeneous spacetime. It follows from the radial stability inequalities (5) that the most compact stable/metastable configurations [7] in the homogeneous case correspond to R_A = 6M and R_A = 3M, respectively. Using (29) one may show that the blueshifts for these configurations are given by z = -1 + (2/3)^{1/4} and z = -1 + (1/3)^{1/4}. We start our analysis with an inhomogeneous spacetime with z ≈ -1 + (1/3)^{1/4} ≈ -0.24016, thus a relativistic system. For R = 3M and n = 100 we get from (26) z ≈ -0.23926. Using observational data in the form of M and z within the homogeneous model, the observer at the center will estimate ρ_A to a value different from the physical ρ_0. Therefore, other measurements of the energy density, e.g. from radiation that originates in the decay of dark matter particles of the galactic halo (assuming that this will be known one day), will lead to a disagreement with ρ_A. The inhomogeneity effect is presented in figure 2. The effect of inhomogeneities for tens of inhomogeneous regions is of the order of a few percent. As the number of inhomogeneities grows and the amplitude decreases, the effect vanishes (in the limit l → 0 or n → +∞), in accordance with the analysis within the Green-Wald framework (the high-frequency limit). The density contrast remains constant in this limit. Interestingly, the effect of inhomogeneities slightly increases with the ratio R/M (the area radius over the gravitational mass of the system) - see figure 3. Since there is no backreaction in the sense of the Green-Wald framework, the misinterpretation of the energy content is of a trivial nature. It reduces to a misinterpretation of the parameters of the model. A new form of the energy content cannot appear here because the backreaction vanishes. It was not an aim of our paper to model a realistic astrophysical system, but we find it instructive to calculate the effect of inhomogeneities for parameters corresponding to the dark matter halo of our Milky Way. We assume that in geometrized units the mass is M = 10^12 M_⊙ = 1.477 × 10^15 m and the radius R = 400000 ly = 3.784 × 10^21 m, which gives R/M = 2.563 × 10^6. The Schwarzschild radius, 2M = 0.312 ly, is one order of magnitude smaller than typical interstellar distances. The energy density for the system compressed a million times to the minimal configuration R = 3M would be 5.45 × 10^-6 kg/m^3, which qualifies as a high vacuum by Earth standards. If the local clustering scale is assumed to be l ≈ 1 kpc (the size of satellite dwarf galaxies), then n ≈ 40. For these parameters the inhomogeneity effect is small, (ρ_0 - ρ_A)/ρ_0 ≈ 1%. Summary We have constructed a spherically symmetric stationary Einstein cluster with small-scale radial inhomogeneities and applied the Green-Wald framework to show that there is no backreaction. Next, we have conducted the Gedankenexperiment: an observer at the center of this configuration modelled the surrounding spacetime by an effective solution - a homogeneous Einstein cluster. The parameters of this effective spacetime are based on two straightforward astronomical 'observations': the gravitational mass of the system M and the blueshift of stars at its outer boundary. The idealization of the inhomogeneous spacetime resulted in a misinterpretation of the energy content. The effective energy density is lower than the original average energy density. 
The effect is not bigger than a few percent and, as expected, it vanishes in the limit in which the size of the inhomogeneous regions is reduced while the density contrast is kept constant.
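To make the Gedankenexperiment concrete, the sketch below is a hedged, end-to-end numerical illustration written for this summary; it is not the authors' code, the parameter choices are assumptions, and the exact output depends on the resolution of the integration. It computes M and the blueshift z from (26) for the oscillatory profile with R close to 3M and n = 100, and then inverts the model A relations R_A = 2M/[1 - (1+z)^4] and ρ_A = 3M/(4πR_A^3) to exhibit the mismatch between ρ_A and ρ_0.

```python
# Hedged end-to-end illustration of the Gedankenexperiment (not the authors' code).
import numpy as np

def cumtrapz(y, x):
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

a = 1.0                                     # density scale: rho0 = 3a/(8*pi)
rho0 = 3.0 * a / (8.0 * np.pi)
R = np.sqrt(2.0 / (3.0 * a))                # chosen so that R is close to 3M
n = 100                                     # number of inhomogeneous regions
l = R / n

r = np.linspace(1e-9, R, 400001)
rho = 2.0 * rho0 * np.cos(2.0 * np.pi * r / l + np.pi / 4.0) ** 2
F = cumtrapz(rho * r ** 2, r)
mu = 1.0 - 8.0 * np.pi * F / r              # equation (8)
nu = cumtrapz((1.0 - mu) / (r * mu), r)     # equation (9), with nu(0) = 0
M = 4.0 * np.pi * F[-1]                     # gravitational mass, equation (11)
z = -1.0 + np.exp((nu[0] - nu[-1]) / 2.0)   # blueshift of boundary stars, equation (26)

R_A = 2.0 * M / (1.0 - (1.0 + z) ** 4)      # size inferred within model A
rho_A = 3.0 * M / (4.0 * np.pi * R_A ** 3)  # density inferred within model A
print(z)                                    # should come out close to -0.24
print((rho0 - rho_A) / rho0)                # percent-level misinterpretation of the density
```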
4,943.4
2018-12-28T00:00:00.000
[ "Mathematics", "Physics" ]
Wi-Fi based smart wireless flipchart A notice board plays a vital role in many places like educational institutes, railway stations, schools, colleges and many other locations. Presently a person sticks a notice on a notice board in offices, schools and colleges. This takes a lot of time to put the information on the notice board. To reduce this time we built a new device, a wireless flip chart; by using this flip chart we reduce the time, manual effort, paper and printer ink. The main aim of this paper is to display information on an LCD screen from the user's mobile device through the internet. Introduction The online notice board is an application which will automate a lot of activities in a school, college or office, depending upon the usage expected by different organizations. If it is a school, they can use it for displaying information related to different extracurricular events and winners' info. They can display info on all teachers in various departments, display the timetable for students, and display results of students. They can display info related to any holidays, info related to any fee collection scenarios, or any common regulations that are announced by the management. In the same way, it can be used by colleges also. This paper presents a flip chart board that is operated by a mobile and shows a message on it. Previously, manual effort was used to display any information on a notice board; this causes a delay before the information becomes known. We avoid this problem by producing a flip chart board that is connected to a mobile. A message from the mobile is sent to a microcontroller, and the microcontroller shows the information on an LCD screen. This project can be used anywhere in public places, such as schools, shopping malls, bus stations or airports, for showing any information. In this paper, we aim to provide a way to automate how notice board messages are updated, deleted or removed. Access is provided to students, professors, administrative officers, etc. to different features which provide various information. We also provide roles, and based on these roles permissions are granted to add or remove data on the notice board. This paper establishes the communication between a microcontroller and a mobile phone, which shows information on the notice board when a notice is sent from the Android device of the user. The operation of this device can be attained by an Android application, through a graphical user interface [8, 9, 10]. The functionality is achieved by sending a message from the user interface through the mobile phone; the cloud searches for the messages, and if any message is found, the cloud transfers the data to the receiver. This work is done to reduce the manual work of changing the notice board daily. Instead, we can display the messages through a mobile phone from remote areas with high security. 3.1 Block Diagram The implemented module is integrated with a NodeMCU, which is interfaced with an Android application that sends the information to the cloud. The cloud collects the data and sends the information to the NodeMCU. Thereby the information is displayed on the LCD screen. A power supply is also given to the NodeMCU. 4.1 Flow Chart Description Step 1: To send any message to print and notify all, we need to open the mobile application and enter the application PIN/password. 
Step 2: The application password is known only to the admins; if the entered password matches, a new screen will be displayed. Step 3: Then, after entering the text message and submitting it, the message will be shown on the 16x2 LCD. Step 4: If the entered password is wrong, an invalid user text is shown on the screen. Step 5: Once the text is set on the LCD, we can clear the message with the help of the clear option provided. The message is displayed according to the size of the LCD; if the LCD is 16x2, it displays 32 characters at a time, and larger LCDs can take more data. Experimental results When grains are placed on the load cell, the load cell measures the weight of the seeds and the value is displayed on the LCD. Once the weight is measured, the relay switches on the dryer. After the dryer goes off, the grains are collected in the container. Conclusion The flip chart board using Wi-Fi is achieved as an association of software and hardware through which much of the complexity is reduced, and the system size and cost are also reduced. This system is very efficient, as anyone can send a message from a remote place without any human intervention. The Android application developed in this paper makes the user experience great, as it is very simple and easy to use. The project was finished using very simple and easily available components, making it lightweight and portable.
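As a rough illustration of the flow described in the steps above, the sketch below mimics the PIN check and the splitting of a submitted message into 16-character rows for a 16x2 LCD. It is a hypothetical example written for this description, not the firmware used in the paper; ADMIN_PIN, paginate and handle_submission are made-up names, and a real deployment would run equivalent logic on the NodeMCU and push the rows to the LCD driver.

```python
# Hypothetical sketch of the notice-board message flow (not the paper's firmware).
ADMIN_PIN = "1234"          # placeholder; the real PIN is known only to the admins

def paginate(message, cols=16, rows=2):
    """Split a message into screens of rows x cols characters for the LCD."""
    chunks = [message[i:i + cols] for i in range(0, len(message), cols)]
    return [chunks[i:i + rows] for i in range(0, len(chunks), rows)]

def handle_submission(pin, message):
    if pin != ADMIN_PIN:
        return [["invalid user", ""]]       # shown when the entered password is wrong
    if not message:
        return [["", ""]]                   # 'clear' option: blank the display
    return paginate(message)

for screen in handle_submission("1234", "Exam timetable released on Monday"):
    print("\n".join(screen))                # stand-in for pushing the rows to the LCD
    print("----")
```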
1,166.2
2020-12-05T00:00:00.000
[ "Computer Science" ]
Buildings and constructions safety operation on karstic territories of the Samara city During the last 20-25 years the geological structure of the city has considerably changed due to different anthropogenic and technological factors. The geotechnical situation of the Samara city needs to be specified. In the first place, this concerns karstic territories that are under active man-induced impact. Water utilities leakage, snow melting and rainfalls cause soakage of carbonates, which results in carbonate damage and the appearance of karst, accompanied by cavities in the earth shell of different size and shape. Karst changes the current ecological and geological system - the natural landscape. These deformations are highly disadvantageous for buildings and constructions on karstic territories, as they can lead to serious consequences - partial or full building failures. Besides, karst and suffosion have recently become more frequent, which directly endangers human safety and even life. This paper presents results of the analysis of karst and suffosion development on the territory of Samara. It observes geotechnical problems of using karstic grounds as structure bases, and gives reasons for necessary geotechnical monitoring at the stage of examination, as well as reasons for the implementation of anti-karstic engineering and technical measures during building erection and operation. Introduction Nowadays, more sites with difficult ground conditions, including karstic territories, are developed for construction in Samara. Water utilities leakage, snow melting and rainfalls cause soakage of carbonates, which results in carbonate damage and the appearance of karst, accompanied by cavities in the earth shell of different size and shape. Karst changes the current ecological and geological system - the natural landscape. These deformations are highly disadvantageous for buildings and constructions on karstic territories, as they can lead to serious consequences - partial or full building failures. Besides, karst and suffosion have recently become more frequent, which directly endangers human safety and even life. Thus, it is necessary to monitor the conditions and dynamics of karst appearance at the stages of examination, design, erection and maintenance of buildings, as well as to introduce effective anti-karstic measures. The aim of this work is to assess the condition and dynamics of karst formation in Samara at the stages of research, design, construction and operation of buildings and to classify the most effective anti-karstic measures. Materials and Methods At the beginning of the 90s of the 20th century, a map of geological conditions unfavorable for building appeared for the territory of the Samara city. The results of the engineering examination analysis were introduced by the staff of the Department of Engineering Geology, Baselines and Foundations in the Institute of Architecture and Civil Engineering with the support of the Kuibyshev Trust of Construction-Engineering Examination (Fig. 
1).The map also included karstic areas of the city.During the last 20-25 years, the geological structure of the city has considerably changed due to different internal and external, primarily anthropogenic factors.The geotechnical situation of the Samara city needs to be specified.This study covers the following tasks: -it reveals characteristic features complicating the construction of buildings and structures in Samara at present; -it specifies karst and karst processes negative impact on geotechnical properties of building foundations; -it assess karst-suffosion processes state and growth on the territory of Samara; -it gives analysis of materials illustrating failures of some construction projects in Samara; -it reveals the most important causes of karst processes on the territory of Samara; -it justifies the choice of the most effective engineering activities reducing risk of karstsuffosion processes formation and growth in buildings and structures foundations in Samara. To achieve these results the researchers applied methods of scientific analysis and of factual material classification. 3Results The analysis conducted by the authors showed that at present building construction in Samara is determined by the number of features [1]. One of them is a geotechnical situation.Almost all the sites with good structure bases are in use today in the Samara city.That is why new building process is forced to be introduced on the territories with unfavorable engineering and geological conditions with foundationson grounds prone to non-uniform deformation. Geological condition of the Samara city is defined as extremely difficult and determined by difference in age, origin and geological structure of the solid,which results in development of dangerous geological processes such as underflooding, karst and suffosion, slides, sagging, etc., related tochanges in hydrogeologic situation. 
In recent years, more sites are developed in high flood basins in water conservation districts, on the slopes on the banks of rivers Volga and Samara.It causes certain geotechnical and ecological problems and restrictions.The Article 65 of the Water Code of the Russian Federation [2] says that water conservation areas are territories adjacent to coastlines of seas, rivers, streams, canals, lakes, reservoirs where special regime of economical or other activity is set with purpose to prevent pollution, contamination, muddying of the mentioned water objects and depletion of their waters, and to save habitat of water biological resources and other objects of flora and fauna.Within their boundaries, designing, location, construction and reconstruction, putting to operation and maintenance of the economical or other structures are allowed on conditions that these constructions are equipped with facilities, providing water objects with protection from pollution, contamination and depletion of waters according to water laws in the region of environment protection [3].Within protected shoreline belts, along with the introduced restrictions for water conservation districts, the following activities are prohibited: land plowing; dumping spoil ground, cattle pasturage with summer camps and baths arranged.During building process, connected with stacking, damage of natural stratification can happen as the result of additional load on the slope.In such a case, slope stability can be disturbed which follows by the earth slide process.Beside slope processes, karst and suffosion activity of the sites can have a negative influence on new building as well as hotspot construction.It often results in non-uniform deformation of the erected buildings. Another particularity is land shortage, i.e. lack of sites for new constructions.According to the General layout, development of the Samara city should undergo by means of internal territory reserves.One of the variants of the city territory development is to actively use the territory within the current urban district boundaries.This variant allows not to go beyond the boundaries of the present city infrastructure.The modern post-soviet period of the Samara city development was characterized by Samara architect V.G.Karkaryan as the total absence of any city-planning idea [4].Indeed, buildings are introduced in a hotspot, hasty, random way and generally in central parts of the city.This results in the tendency of increasing the number of floors in a building and constructions, which increases the load on structure base.In addition, city urbanization leads to the necessity to construct underground facilities different in purpose, design planning, constructive nature; buildings go deep underground [5]. 
The third particularity is that the last 15-20 years reconstruction and modernization of existing constructions take place more frequently, together with new building process.In this regard, building is often introduced in tight working space of ready built-up area of the city, which means close to existing buildings [6].Reconstruction (redesign, add-on structures etc.), demolition of small, shabby dwellings with high-rise buildings erectioninstead also leads to higher load on structure base.In addition, the increase in city construction density and number of floors result in higher load on city utilities that have long been in need of major reconstruction and modernization.In most cases, man-caused leakages and additional pressure on structure base leads to deterioration of their qualities and as a result to partial or full building failures of separate structures or construction in general. It should be noted that intensity and variety of human economic activities at the modern stage cause significant negative impact on the geological environment of variable types of man-caused influence.They activate old and bring about new dangerous geological processes.Thus, it is critical to assess the geological environment reaction to man-caused impacts and to define its stability. The most pressing issue for the Samara city is the impact of karst and karstic processes on structure bases of buildings and constructions, the reason for it being construction growth in less favorable engineering and geological conditions and human economic activity.In last decades, karstic territories of the city started to be actively utilized. The staff of the Department of Engineering Geology, Baselines and Foundations in the Institute of Architecture and Civil Engineering introduced a research at the end of 80sbeginning of 90s of the XX century.The research showed that karstic solids occupy up to 70% of the Samara territory [7,8].Karst seriously complicates building process and makes it more expensive.Thus, study of karst and its aspects has an important practical meaning.Karstic territories are the ones with water-soluble geological formations (limestone, dolomite, marlstone, chalk, plaster, anhydrite, mineral salt, etc.) and territories with existing or possible surface and (or) underground karst. Analysis of archive data and engineer and geological research of the authors [1,7-8] of karstic territories of the Samara city in the last 20-25 years showed that karstic processes date back to ancient times and cover huge depths of Permian and coal formations.They are characterized [9] by solids with increased fracture zones and masses of decayed rock; favorable recharge hydrogeologic situation, underground water draining into cavities and fractures; existence (or lack) of covers for karstic formations with complicated lithography. Karst and suffosion processes are widespread within the Samara city on slopes of the riversides Volga and Samara.Karst processes here are accompanied by ground wash away, suffosion, deformation of earth surface and construction structure bases, changes in qualities of ground covering formation, particular circulation and mode of underground and surface waters. Type of rock bedding and terrain pattern of the ground features lowland karst with horizontal bedding.Karst can be characterized by its composition as carbonate (lime stone, dolomite, marl stone, chalk); carbonate-sulphate (lime stone, dolomite, plaster and anhydrite of Permian age); sulphate (plaster and anhydrite). 
Carbonates within the territory of the Samara city are highly fractured and partly destroyed to earthly marl and dolomite powder.As the result of karst process, karst and suffosion sinkholes appear on the surface, gulleys.The areas with karst and suffosion process on the territory of the Samara city are situated generally on water-dividing plateau of the rivers Samara and Volga.Their geological structure is defined with Permian formation of the Tatarian Stage up to the depth of 50-70 m.On the surface, they are covered with Quarternary protective formation with different depths in the form of alluvial and eluvial-deluvial formations and spread almost everywhere.They are mostly clays and sandy clays, less often clay sands and sands.Below are Permian formations of the Tatarian Stage, spread over the Permian formations of the Kazanian Stage. In the majority, the abovementioned grounds are solid structure bases for buildings and constructions, but they have a number of disadvantages.First of all, it is Permian formation clay watering and their marl quality.Marl stones are different in proportions of clay, silt and sand particles and always contain calcite.Some types of them quickly and instantly swell in water during soaking.The process is accompanied by suffusion particles removal with water trickles into cavities that appear around and underneath.Water leakage in water and sewerage systems, heat traces go high-scale in some cases and lead to extra- SPbWOSCE-2016 2029 moisturized ground.There is a danger of near complete wash away of ground particles from their resting place.Consequently, such sites could be reasonably supposed as zones with serious possibility of karst-suffusion breakdown. In general, sinks represent the greatest threat to the stability of buildings and structures in karstic areas because of the suddenness of their occurrence.Sinks are forming at a depth for a long period of time, whereas on the surface the breakdown process passes very quickly.A weakened zone is noted around the breakdowns, which reduced load-bearing capacity of soil, cracks and slight surface subsidence.Gradual subsidence of the earth's surface, resulting, for example, from undermining the territory or global change of groundwater levels, presents relatively low risk compared with the sinks. 
In recent years, due to the dry summer-spring periods, the occurrence of karst-suffusion processes, which threaten safety and even human life, has become more frequent. For example, a sink having an oval shape with a maximum diameter of 7.8 m and a tapered shape in cross-section with a depth of approximately 13 m was detected during the examination of construction projects on the sites of the Samara slope on Maloyaroslavskaya Street in the Zheleznodorozhny district of the city [9]. The estimated volume of dissolved and groundwater-borne rocks was about 800 m3. According to the results of research conducted at the neighboring sites, it was determined that all of the surrounding area was full of karst sinkholes. Thus, the integrity of the active base zone is destroyed gradually but progressively over time, which leads to unacceptable deformations of buildings and, possibly, to a complete failure. Figure 2 shows a well-known house in the city of Samara, located on Aurora Street in the Zheleznodorozhny district. The house is in a critical condition, and progressive deformation has been continuing for several years. In March 2016, the ground next to the specified house broke down and formed a pit with a diameter of 6 m and a depth of 10 m. Fig. 2. Residential house. Samara, Zheleznodorozhny district, 20 Aurora Street. Analysis of the geotechnical material suggests that there has recently been an activation of karst and pseudo-karst in many areas of the city due to man-made processes. Intensive construction and anthropogenic activities have increased direct and indirect effects on the geosphere, which has led to a significant reduction in the strength and deformation properties of the soil at the base of buildings and structures. There is still a serious problem with regulating atmospheric discharge, sewage and flood water, and unacceptable leakage of liquids from pipelines and canals. Figures 3 and 4 show the photographic evidence of the roadway breaks caused by heating main bursts on Lenin Avenue in the Octyabrsky district of Samara, which took place in 2011 and 2016. The first incident, of August 23, 2011, took the life of the driver whose car fell down. In the second case, on October 18, 2016, the breakdown area was about 15 m2 and two cars fell into the resulting hole. Discussion Engineering and construction development of karstic territories must be based on an objective assessment of the karst danger in order to make a decision about the development of these territories and to foresee the likelihood of accidents and disasters. This problem can be solved through scientifically based geotechnical monitoring at the stage of pre-construction research and anti-karst engineering and technical activities during construction and operation of buildings and structures. 
In the current situation, in the absence of a complete picture of the present geological situation, it is necessary to conduct geological and hydrogeological monitoring throughout the whole territory of Samara, which will make it possible to predict the behavior of and changes in karstic grounds in the base of structures under the influence of various factors. In the first place, we should identify areas where karstic rocks are spread and areas of actual or potential activation of karst processes, as well as natural and man-made sources of soakage. Observations can be carried out with all available resources. It is advisable, for example, to combine the drilling of exploration wells with GPR investigations using ground-penetrating radars. The drilling will directly determine the strength and nature of the soil strata and the level and direction of groundwater; it will detect karst cavities and provide samples to determine the parameters of their physical and mechanical properties. GPR research will help outline the picture of the spread of cavities in the built-up and operating areas, and identify the sources of man-made water leaks from the mechanical, electrical and plumbing networks. The creation of a special geotechnical monitoring service in Samara is necessary to gather information on research and its analysis; the aim is to develop an up-to-date unified geotechnical information base for construction. During the period of construction and operation of buildings and structures on karstic territories, it is necessary to eliminate all the factors that lead to soakage sources and increase the permeability of karst rocks. Engineering and technical measures to reduce the risk of development of karst-suffusion processes include the following [1,10,11]: - Decreasing the load of structures on the base. In the given soil conditions, it is reasonable to build low-rise buildings on foundations that can redistribute the load on the base more evenly (slab, cross-belt). The horizontal and vertical layout of the buildings should be rational. Construction of "heavy" buildings, infill development and additional loads on the foundation may lead to increased fracturing in karstic rocks; - Using methods of foundation installation that do not have any dynamic effect on the soils of the base, i.e. avoiding impact methods. First of all, this refers to pile driving, using foundations in tamped foundation pits, creating artificial bases with tamping, etc.; the impact load in these cases can also enhance fracturing and, consequently, the permeability of the karsted rock; - Using "dry" technology in the construction of the underground part and of the whole building; - Not allowing soil soakage of the base during the construction of the zero cycle; protecting the foundation pits of buildings and the trenches for mechanical, electrical and plumbing networks from flooding; - Taking waterproofing measures on construction sites. This is achieved by a rational layout of the master plan; vertical layout of the territory, which provides runoff of surface waters; drainage devices, watertight diaphragms and screens; - Continuous monitoring of the technical condition, and immediate repair in case of damage to protective blind areas around buildings, asphalt pavements and roads; - Laying water communications in special channels; constant control of the technical condition of the equipment and of possible leakages from water supply and sewerage systems (for example, using ground-penetrating radar); maintaining the city storm water sewerage in good operating condition. According to the academician F.P. Savarenskii et al. 
[12][13][14][15], adverse geological conditions are not in themselves as dangerous during the construction of engineering structures as the lack of knowledge of these conditions and the inability to evaluate them in terms of a particular engineering measure. This statement can be fully attributed to construction in the karstic areas of Samara. Conclusions 1. Building construction conditions on the karst territories of Samara should be described as complicated. Karst processes affect the properties of the foundations of buildings and other structures. 2. To improve the quality of construction products and extend their service life, geotechnical investigations must be carried out accurately and fully according to current regulations; buildings and facilities must be erected and maintained professionally and competently. Non-compliance with these requirements may result in failure of the "base-building" system and, as a consequence, in significant material losses and a threat to human safety. 3. Following the recent government documents for the construction industry, it is important to note the current guidelines for construction in the fields of engineering survey, design, construction and operation, which ensure the reliability and safety of buildings and structures. They include the following: - Scientific support at the stages of engineering survey, design, construction and operation of the building objects; - Geo-monitoring of environmental components in the construction area; - Monitoring the technical state of the building base, construction structures and systems of engineering support during construction and operation of a building or structure; - Monitoring the status of adjacent buildings and structures within the area of influence of the building object during its construction and operation.
4,419.8
2017-01-01T00:00:00.000
[ "Environmental Science", "Engineering", "Geology" ]
Structure and origin of the Vaivara Sinimäed hill range , Northeast Estonia Rein Vaher The distribution, structure and origin of the hill range of Vaivara Sinimäed and bedrock folds are discussed. Many narrow (150–500 m), eastto northeast-trending folds with 1–10 km long axes are found in the studied area. Anticlines resulting from diapiric processes are prevailing. Cambrian clay-, siltor sandstones are cropping out in the centre of the anticlines. Terrigenous outcrop zones are surrounded by Middle Ordovician carbonate rocks, which enabled us to use low-resistivity anomalies for tracing the distribution of the proven anticlines. The Vaivara Sinimäed, a 3.3 km long and 200–300 m wide range of three elongated hills, rise 20–50 m above the surrounding land. The tops of Tornimägi, Põrguhauamägi and Pargimägi hills are 69, 83 and 85 m a.s.l., respectively. Two saddles between the hills are on the level of 50–55 m a.s.l. Uplift of Middle Ordovician carbonate rocks at Pargimägi Hill is mostly due to the thickening of Cambrian claystone. Its core, and most likely also the cores of Tornimägi and Põrguhauamägi hills, consist of squeezed-out and folded sedimentary bedrock, diapirs, which are probably of glaciostatic origin. The dominant glaciotectonic feature is a glacial erratic. The surrounding bedrock and cores of the hills are covered with a thin blanket of Quaternary deposits: till, glaciofluvial gravel and sand, glaciolacustrine silt and clay. The Vaivara Sinimäed as a whole represent a diapir, modified by glaciers. INTRODUCTION As a part of the vast East European Plain, Estonia is characterized by a rather flat surface topography with small relative and absolute heights.Structurally, it lies mostly within the boundaries of the southern slope of the Fennoscandian (Baltic) Shield.The dominant macrostructures of glaciotectonic origin are thrust faults along which sheets have been displaced in front of the advancing glacier (van Gijssel 1987).Thrust faults are very rare in Estonia, but glacial erratics are often found.Glaciotectonic structures may be created in the substratum under ice or in the foreland region in front of the ice cover.In most cases in Estonia they seem to have been dragged along in the lower part of the glacier and left behind when they were released from the ice by basal melting.Ridges, hills and composite massifs composed of Quaternary deposits appear to be the most common glaciotectonic landforms in Estonia (Rattas & Kalm 1999). The hill range of Vaivara Sinimäed (Blue Hills) is located in the northeasternmost part of Estonia (Fig. 1).A 3.3 km long and 200-300 m wide range of three elongated hills rises 20-50 m above the surrounding land, forming, besides the Baltic Klint, a most prominent land form in northeastern Estonia.Since the beginning of the previous century, a number of researchers have studied these hills (Hausen 1913;Granö 1922;Jaansoon-Orviku 1926;Tammekann 1926;Stumbur 1959;Sammet 1961;Miidel et al. 1969;Raukas et al. 1971;Malakhovskij & Sammet 1982;Rattas & Kalm 2004;Suuroja 2005Suuroja , 2006)).They have posed several hypotheses about the structure (push moraine, large sedimentary bedrock erratics, horst, fold or diapir) and origin (glaciotectonic, glaciostatic or tectonic) of the Vaivara Sinimäed.This paper is based on our interpretation of the available drilling and electricalresistivity data, aimed at solving these problems. 
MATERIALS AND METHODS Numerical data (location and altitude of the borehole mouth, thickness of sediments and rocks) of 570 boreholes and the results of resistivity prospecting were obtained from unpublished reports stored in the Depository of Manuscript Reports of the Geological Survey of Estonia (GSE) and the Estonian Land Board. The data were interpreted manually by interpolating between higher and lower values, and contour maps were drawn taking the resistivity maps into account. Electrical mapping in 1961-1965 for locating anticlines was made by the GSE using the resistivity profiling and sounding technique with the Schlumberger array AMNB, where the separation between the current electrodes on profiles was AB = 60 m, and between the potential probes MN = 15 m. Resistivity data were surveyed at a traverse spacing of 250 m in 25 m steps along the line of the array. Ten boreholes were drilled and a 60 m long trench was dug out for the interpretation of resistivity anomalies. Additional resistivity measurements were made in 2007-2008 as profiling with the axial dipole-dipole array ABMN, where AB = MN = 20 m and the distance between the inner electrodes BM = 40 m, and as sounding with the Schlumberger array. GEOLOGICAL BACKGROUND The studied area covers about 150 km2 in northeastern Estonia (Fig. 1). Structurally it belongs to the Russian Platform, East European Craton. The ~1.88 Ga crystalline basement is formed of Palaeoproterozoic Svecofennian orogenic rocks (mostly Al-rich gneisses). The gentle (10′) southeast-dipping top surface of the basement lies at a depth of 200 to 240 m below sea level (b.s.l.). The basement is overlain by an up to 260 m thick sequence of subhorizontally layered 470-600 Ma old Neoproterozoic and Palaeozoic sedimentary rocks (Table 1). Detailed information on the local stratigraphy is available in Raukas & Teedumäe (1997). The Ediacaran, Cambrian and Lower Ordovician sections are dominated by sandstone, siltstone and claystone. The Middle Ordovician is represented by carbonate rocks (mostly limestone and marlstone). The regional low-angle (8-11′) southerly-dipping homoclinal structure of the sedimentary bedrock is diversified by local folds (generally diapirs). The most prominent bedrock and land surface feature is the 20-35 m high west-east-ranging Baltic Klint where Cambrian and Ordovician strata crop out. South of the klint the bedrock lies mainly 25-35 m above sea level (a.s.l.). The Quaternary is represented by mostly 1-5 m thick accumulations of glacial (Pleistocene) and postglacial (Holocene) deposits (till, clay, silt, sand, gravel, pebbles and boulders, peat). The plain south of the klint lies mainly 30-35 m a.s.l. Besides the Baltic Klint, the three ridges of the Vaivara Sinimäed, rising 20-50 m above the surrounding land, are the most noticeable landforms in the area. DIAPIRS A number of narrow (150-500 m) east- to northeast-trending folds are found in the study area (Fig. 2), whereas anticlines are prevailing (Vaher & Mardla 1969). Most of the fold axes are 1-3 km long, but some extend to 10 km. Cambrian claystone or siltstone (predominantly), or Cambrian and Lower Ordovician sandstone (rarely), crops out in the centre of the anticlines. In b.h. 314 (Fig. 
3) the Ediacaran strata are practically undisturbed above the basement of normal altitude.The claystone of the overlying Lontova Stage is disturbed by numerous slickensides increasing upwards in number.Siltstones of the Dominopol′ Regional Stage are severely disturbed and nearly three times as thick as normal.Evidently, the claystone was locally squeezed upwards along the lines of minimum resistance forming a diapir (Vaher & Mardla 1969;Puura & Vaher 1997). MORPHOLOGY AND STRUCTURE OF THE VAIVARA SINIMÄED The 3.3 km long and 200-300 m wide range of three elongated hills (Fig. 5), named the Vaivara Sinimäed after blue-looking forest that once covered them, rises 20-50 m above the surrounding land.The tops of the Tornimägi (Tower Hill), Põrguhauamägi (Hell Pit Hill) and Pargimägi (Park Hill) (Fig. 6) ridges are 69, 83 and 85 m a.s.l., respectively.Two saddles between the hills are on the level of 50-55 m a.s.l.A glaciofluvial plain, some 8 km 2 in area and 35-45 m a.s.l., lies to the south of the hills.A clear scarp at an altitude of 34-37 m a.s.l. and a fragmentary one at an altitude of 39-42 m a.s.l can be followed at the southern border of the plain.Possibly these are coastal scarps of the Baltic Ice Lake, with the heights at Laagna (Fig. 2B) and Vaivara (6 km west of b.h.168) of 38 and 38-39 m (stage BI), and 32-33.5 and 33-34 m a.s.l.(stage BIII), respectively (Saarse et al. 2007).The westernmost and lowest Tornimägi Hill is 800 m long and rather narrow (up to 200 m).The western half of the steeper northern hillside includes an up to 15 m high vertical wall: an exposure of Middle Ordovician limestone (from Kunda to Uhaku regional stages) (Suuroja 2005).The altitude of the top of the Kunda Regional Stage on this wall (Fig. 5, exposure 1) is 45.25 m a.s.l.(Jaansoon- Orviku 1926).The normal position of that surface in b.h.168 is 18.1 m a.s.l., i.e. 27 m lower.Figure 7A shows a section of an exposure (Fig. 5, exposure 2) of Lower and Middle Ordovician rocks on the western hillside.The Lower Ordovician sandstone strata at the northern end of the exposure are vertical.Clear traces of ice pressure are observed here: the upper parts of the vertical beds below the till are turned to the south according to ice movement (Fig. 8).The adjacent slightly wavy Middle Ordovician beds are close to vertical.The middle part of the exposure consists of mostly shattered limestone strata dipping 60-70° SSE.An overturned syncline occurs in the southern part of the exposure (left half of Fig. 9).The right half of Fig. 9 shows a flat anticline.The central Põrguhauamägi Hill is 800 m long and 300-400 m wide.An oval depression (Fig. 5, exposure 3) about 300 m long, 150 m wide and 20 m deep, called Põrguauk (Hell Pit), is observed on the top of the hill.The southern slope of the depression is an exposure of south-dipping (60°) Middle Ordovician limestone, with the Uhaku Regional Stage on the top edge (Suuroja 2005(Suuroja , 2006)).On the eastern side of Põrguhauamägi a 4 m high exposure (Fig. 5, exposure 4) of Cambrian sandstone was seen in 1924.In the eastern part of the southern hillside (Fig. 5, exposure 5) limestone of the Aseri Regional Stage and on the southwestern hillside (Fig. 5, exposure 6) in several 2 m deep trenches south-southwest-dipping (65°) limestone (from Volkhov to Lasnamägi regional stages) was exposed (Jaansoon- Orviku 1926).The easternmost and highest Pargimägi Hill (Fig. 
6) is 2300 m long and 400-500 m wide.Middle Ordovician limestone of the Uhaku Regional Stage crops out on top of the hill (Suuroja 2005).Several blindages (Fig. 5, exposure 7) were dug out of the limestone of the Lasnamägi Regional Stage in the northeastern hillside; here the limestone strata dip 20° SSE (Jaansoon- Orviku 1926).In three boreholes an outcrop of Cambrian siltstone (Dominopol′ Regional Stage) was found on the northern (Fig. 5 hillsides.Five vertical electrical soundings (VES) on the southern hillsides (Fig. 5, VES 10L, 67L, 68L, 74L and 75L) belong to a low-resistivity zone which corresponds to an outcrop area of Cambrian silt-and sandstones.This is proved by an exposure (Fig. 5, exposure 8; Fig. 10) of Cambrian sandstone on the southern hillside.According to the gamma-ray log and drill core of b.h.2333 on the southwestern hillside, the Phanerozoic sequence (from the Uhaku Regional Stage on top to the Ediacaran System on the bottom) is normal.The top of the Lower Ordovician lies here at about 36 m a.s.l., i.e. about 28 m higher than the position of that surface in drill cores at the northern (b.h.310) and southern (b.h.168 and 2339) feet of the hill (7.9, 8.7 and 8 m a.s.l., respectively).The mode of occurrence of Ediacaran QUATERNARY DEPOSITS Quaternary deposits are represented mainly by the till of the last (Late Weichselian) glaciation, distributed in all three hills and to the north and west of them.In the east and south the area is covered with glaciofluvial (coarse sand, gravel and pebbles) and glaciolacustrine deposits (sand, silt and clay) (Fig. 5).Glaciofluvial deposits are spread also at the proximal side of Põrguhauamägi and Pargimägi hills.A small settlement exists at Pargimägi and therefore there are more drillings than in other hills.The thickness of Quaternary deposits in the upper part of Pargimägi is only half a metre, increasing up to 10 m on the hillsides.Among these deposits up to 7 m thick till is prevailing.Figure 7B demonstrates an exposure (Fig. 5, exposure 9) of Quaternary deposits on the western hillside.The section of this exposure shows three about 25 m long limestone erratics being 10-15 m apart.Limestone strata of two erratics are nearly horizontal, but vertical in the eastenmost one.Four limestone erratics of the same size, with dip of the strata 45°S, 22°NE, 25°S and 16°SSW, were found also on the southern hillside.The composition of material between bedrock erratics is changing rapidly. Two types of tills could be distinguished.The upper till is greyish-brown or grey, in places containing yellowish loamy sand with poorly rounded carbonate clasts 8-40 cm in diameter.The content of rough material (over 10 mm in diameter) is up to 60%.Even rather large crystalline boulders were found (Fig. 11).The crystalline clasts represent Vyborg rapakivi, Suursaari quartz porphyries and helsinkites, showing ice movement from north to south (Miidel et al. 1969).The lower loamy till (at least 3.6 m thick) is greenish or bluishgrey with violet streaks and interlayers, resembling blue claystone of the Cambrian Lontova Regional Stage.Both till beds are highly deformed.The upper till is underlain by 1.5-4.9m thick sand, silt or clay void of organic matter.Therefore it is not possible to date Quaternary deposits more exactly than to the Late Weichselian. On the southern hillsides the upper limit of coarsegrained glaciofluvial delta deposits (pebbles, gravel and coarse sand, Fig. 12) lies at 50 m a.s.l., marking, according to Ramsay (1929) and Vassiljev et al. 
(2005), the level of the Baltic Ice Lake stage AI.The layers of delta deposits are dipping to the south at an angle of 8-24°.The grain size and sorting coefficient of the deposits are diminishing southwards (Miidel et al. 1969).South of Pargimägi, the delta diverges into two branches, with 3-9.6 m thick deposits of variable composition.The 2-2.5 m thick coarse-grained deposits are underlain by loam, sandy loam and sand.An up to 3 m thick layer of variegated sand can also be seen above loamy material or inside the loamy bed in the form of lenses.To the south of Põrguhauamägi, up to 5.5 m thick fine sand covers the undulating till bed. The area south of Tornimägi is rather well investigated.Here the till is covered by gravelly-pebble deposits of the glaciofluvial delta, which are cemented in the proximal part, resembling a conglomerate.The thickness of deposits is growing southwards from 2.5 to 7 m, whereas up to 4 m thick coarse deposits cover up to 3 m thick sand.To the south of the railway coarse-grained material is often covered with sand or loamy sand.In lateral direction the thickness of deposits is variable: 3.5-6.5 m thick fine-grained yellowish-brown sand lies on 2.5 m thick clay or loam.The whole Quaternary cover is here at least 12 m thick.The delta deposits are dominated by carbonate clasts (over 80%), and crystalline clasts have a high content of Vyborg rapakivi (some 45%).To the south glaciofluvial deposits are replaced by glaciolacustrine deposits, which are represented by loam and silt (in the east) and varved clays (in the west) (Fig. 5). DISCUSSION Since the beginning of the last century the majority of scientists (Hausen 1913;Granö 1922;Jaansoon-Orviku 1926;Tammekann 1926;Miidel et al. 1969;Raukas et al. 1971;Malakhovskij & Sammet 1982;Rattas & Kalm 2004;Suuroja 2005Suuroja , 2006) ) have supported the glacial origin of the Vaivara Sinimäed, mostly interpreting them as push moraines.However, in the course of geological mapping Stumbur (1959) discovered several uplifts of Cambrian claystone (Sinimäed included).He claimed that their formation could not be explained by glaciotectonics only, and most likely tectonic movements played the principal role.Sammet (1961) favoured a concept that these hills are seemingly a tectonic fold.Later he suggested that the Vaivara Sinimäed had much in common (erratics, folds, etc.) with Duderhoff and Kirchoff hills (near St Petersburg, Russia), formed in front of the advancing glacier (Malakhovskij & Sammet 1982). According to K. Orviku (Jaansoon-Orviku 1926), the assumed huge limestone erratics within the Vaivara Sinimäed were broken by ice from the edge of the North Estonian Klint and transported for 4-5 km to the south or southwest.Rattas & Kalm (2004, p. 17) reached a similar conclusion: 'Large glacial rafts of pre-Quaternary bedrock, transported for some kilometres away from the cliff, are located at Sinimäe in NE Estonia.Three bedrock blocks of Lower Ordovician limestone form the cores of three hills at Sinimäed'.Later K. Orviku (Orviku 1960) changed his former viewpoint, and (based on the results of the above geological mapping) suggested that at the Vaivara Sinimäed Cambrian claystone as more plastic rock was squeezed upwards along the existing tectonic zone by ice pressure.Suuroja (2005Suuroja ( , 2006, p. 208, p. 208) assumed that 'the Vaivara Blue Hills arose due to upward squeezing (diapiring) of Cambrian claystone ("blue clay") within a tectonic fault zone under the pressure of a 2-3-km-thick continental glacier'. 
In the present paper the diapiric origin of Pargimägi Hill has been proved for the first time.This conclusion arises from the fact that the 28 m uplift of Middle Ordovician carbonate rocks on the southwestern side of Pargimägi (b.h.2333) is mostly due to the 25 m thickening of Cambrian claystone, while the surface of the Ediacaran rocks is at normal height of 100 m b.s.l.As seen from Fig. 3, the core of Pargimägi Hill consists of pushed-up carbonate bedrock (diapir).An unusually high altitude of the inclined Middle Ordovician limestone strata, observed at Tornimägi and Põrguhauamägi, may be caused by (1) huge erratics broken by ice from the edge of the North Estonian Klint or (2) diapirism.According to Suuroja (2005), the klint cannot be a source of such erratics, because carbonate rocks at the klint are about 5 m thick and the youngest limestone there is of Kunda age, while at the Vaivara Sinimäed they are over 25 m thick and the youngest one belongs to the Uhaku Regional Stage.More likely the cores of Tornimägi and Põrguhauamägi hills are diapirs as well.Three vertical electrical soundings of low resistivity on the southern side of Tornimägi Hill, indicating an outcrop area of Cambrian sand-and siltstones, support this conclusion. Diapirs may have been formed due to diapir-inducing load caused by glacier ice and/or earlier deposited thick overburden.Although no final decision between these two alternatives is possible at present, we believe that the diapirs are very likely of glaciostatic origin.If glacier ice is involved, the diapirs were formed rather during the first glaciation than the later ones.The diapiric process and glaciotectonic disturbances may have been favoured by altitude differences in the Viru plateau and at the bottom of the Gulf of Finland, which generated the change of load and increase in the pressure on the edge of the ice cover. Most likely the Vaivara Sinimäed developed during several glaciations.A great part of the diapirs was eroded.However, we cannot prove that two different tills belong to different glaciations.The strong influence of the last glacier on the development of the hills, giving them final shape, is clear.The diapirs are covered with a thin blanket of Quaternary deposits (mostly till).During the melting of the ice on the southern hillsides, a glaciofluvial delta was formed, passing in the south into a glaciolacustrine plain.Thus, the assemblage of the landforms and deposits in the Vaivara Sinimäed and surrounding area resembles a complex of typical glacial marginal formations (Miidel et al. 1969).However, the diapiric core of the hills together with small thickness of the Quaternary cover shows (Fig. 3B) that the Vaivara Sinimäed cannot be considered as a classical push moraine.The glaciofluvial and glaciolacustrine landforms were probably formed near the margin of the glacier. CONCLUSIONS Anticlines caused by diapiric processes are prevailing among the folds of the studied area.In the centre of the anticlines Cambrian clay-and siltstone or sandstone are cropping out.They are surrounded by the outcrop area of Middle Ordovician carbonate rocks, which enabled us to use the low-resistivity anomalies for tracing the distribution of the proven anticlines. 
The uplift of Middle Ordovician carbonate rocks at Pargimägi is caused mostly by the thickening of Cambrian claystone, while the surface of the Ediacaran rocks lies at its normal level of 100 m b.s.l. This means that the core of Pargimägi Hill consists of squeezed-out and folded sedimentary bedrock (a diapir). Thus the diapiric origin of Pargimägi has been proved for the first time. Most likely the cores of Tornimägi and Põrguhauamägi hills are diapirs as well. The Vaivara Sinimäed as a whole are a diapiric composite ridge modified by glaciers. The diapir and the bedrock of its surroundings are covered with a thin blanket of Quaternary deposits: till, glaciofluvial gravel and sand, and glaciolacustrine silt and clay. The glaciofluvial and glaciolacustrine landforms were probably formed near the margin of the glacier.

Fig. 1. Location of the study area in northeastern Estonia.
Fig. 8. Sandstone with graptolite argillite interlayers of the Pakerort Regional Stage on the western side of Tornimägi Hill. The topmost part is inclined in the direction of glacier movement and is covered with stratified lodgement till. Photo by A. Raukas.
Fig. 9. Glacially folded and densely fissured carbonate rocks on the western side of Tornimägi Hill. Photo by A. Miidel.
Fig. 11. A large boulder in the distal part of Tornimägi Hill. Photo by A. Raukas.
Fig. 12. Inclined bedrock strata in the distal part of Tornimägi Hill are covered with poorly sorted and stratified gravel and pebbles. Photo by A. Miidel.
4,785.2
2013-06-01T00:00:00.000
[ "Geology" ]
Transitioning from Discrete to Continuous Distribution, Mathematica vs. Excel — An Example

Frequencies of the repeated integers among the first n digits of, e.g., π are tallied using commercial software. The resulting discrete distribution is used to evaluate its statistical moments. The distribution is then fitted with a polynomial, generating a continuous replica of the discrete one, whose statistical moments are evaluated and compared with the former. The procedure clarifies the mechanism of transiting from a discrete to a continuous domain. Applying Mathematica, the fitted polynomial is replaced with an interpolated function with a controlled smoothing factor, refining the quality of the fit and of its corresponding moments. The knowledge gained assists in understanding the standard procedure for calculating such moments.

Introduction

Tabulating statistical information, such as distribution moments, for discrete and/or continuous ensembles is of paramount interest when working with either abstract mathematical data or data collected in the natural sciences. While it is fairly trivial to evaluate the moments of a discrete data set, it is not obvious how to transit systematically from the discrete to the continuous situation. One objective of this report is, by way of example, first to show how the moments are evaluated for a discrete mathematical ensemble, and then to extend the same conceptual method to the continuous case. To achieve this goal, we select one of the ~32,000 known constants in science, namely the value of π. The same procedure may be applied identically to any other chosen constant, for instance e, the Euler constant γ, the golden ratio φ, etc. In this report we have chosen π. We form an ensemble comprised of the first n digits of π; naturally, this is a set of discrete integers. We then show how the statistical moments of this set are evaluated. Taking advantage of commercially available software, e.g., Excel [1], we tally the data to obtain the needed distribution function. Having the distribution function on hand, we evaluate the moments: the first, second, third, etc.
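To make the discrete step concrete before the detailed procedure, the short sketch below tallies the first 50 digits of π and computes the first three moments of the resulting frequency distribution. It is written in Python rather than the Excel/Mathematica workflow used in the report, the digits are hard-coded, and the moments are taken about the origin (the report does not state whether raw or central moments are meant), so it should be read as an illustrative sketch only.

```python
from collections import Counter
from math import sqrt

# First 50 digits of pi, hard-coded so the sketch has no external dependency.
PI_DIGITS = "31415926535897932384626433832795028841971693993751"

digits = [int(d) for d in PI_DIGITS]
n_max = len(digits)          # 50 elements, matching the pi_List length used in the report

# Tally: frequency of each digit 0..9 (the discrete distribution).
freq = Counter(digits)
for d in range(10):
    print(f"digit {d}: frequency {freq[d]}")

# Raw moments about the origin, weighted by the normalized frequencies.
def raw_moment(k):
    return sum((d ** k) * freq[d] for d in range(10)) / n_max

mean = raw_moment(1)
rms = sqrt(raw_moment(2))
third = raw_moment(3)
print(f"mean = {mean:.4f}, RMS = {rms:.4f}, 3rd raw moment = {third:.4f}")
```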
Excel is an excellent numeric-based program, but it has certain limitations. For instance, because of its limited numeric precision it displays the digits of π only up to about 16 significant figures, which limits the number of elements of π_List. To circumvent this, one may use commercially available scientific software, e.g., Mathematica [2], which allows the number of digits in π_List to be extended practically without limit. To transit from discrete to continuous, and hence to evaluate the moments, we form an extended π_List with, say, 50 elements. This list is then imported into Excel and used as the basis for forming the continuous distribution function by fitting a polynomial to it. Here again Excel is limited, to a polynomial of at most 6th order. For the sake of consistency, when we use Mathematica we apply the same polynomial order; this gives an identical result. However, Mathematica has a useful option for smoothing the quality of the fit, and utilizing this option we improve the fit. We include tables with the values of the calculated moments for all the scenarios. This report is comprised of four sections. In addition to Section 1, the introduction that outlines the motivation and goals, Section 2 is the procedure, a description that includes Mathematica code, charts and tables as well as selected Excel charts. The interested reader may easily duplicate the steps and modify the code as needed; for further information cf. [3] [4]. Section 3 contains the conclusions and comments on what we learned.

Procedure

For the sake of efficiency we begin with Mathematica, so we first form π_List, a list of digits of π. Nmax defines the number of desired significant digits, e.g., 50. The program is crafted such that, given this input parameter, a single keystroke runs the entire program and produces the needed output.

Nmax=50;
pi=First[RealDigits[N[π,Nmax]]];

Next, we tabulate the tallied digits (see Table 1):

table=TableForm[Tally[pi]/.{p_,q_}→{q,p},TableHeadings→{Automatic,{"Frequency","digit/Event"}}]

By defining a few auxiliary quantities we display the frequency vs. the range; this is shown in Figure 1. As shown, these are identical. These steps ensure the accuracy of our program and lay the basis for transiting to the continuous scenario. We also evaluate the 3rd moment, making the point that evaluating the nth-order moment presents no challenge.

To transit to the continuous domain, and to be compatible with the capabilities of Excel, we consider a 5th-order polynomial for the model. Note that, by trial and error, the 5th and 6th orders proved to be indistinguishable.

model = a + b x + c x^2 + d x^3 + e x^4 + f x^5;

The numeric coefficients of the fitted polynomials obtained with Mathematica and with Excel are the same. Figure 2 and Figure 3 show that the fitted polynomials have the correct trend; however, they are not fully satisfactory. Taking these polynomials at face value, we evaluate their corresponding statistical moments. To do so, and to fulfill one of our stated objectives, i.e., transiting from discrete to continuous, we take the following steps: since the normalized discrete distribution is subject to $\sum_i F_i / N_{\max} = 1$, where $F_i$ is the number of events for digit $i$, we replace the sum over digits by an integral over the digit range, $\sum_i \to \int \mathrm{d}x$, with $F_i/N_{\max} \to F(x)\,\mathrm{d}x$, where $F(x)$ is the fitted polynomial. With these substitutions the normalization condition reads $\int F(x)\,\mathrm{d}x = 1$. And because the fitted polynomial, or any fitted model function, is not normalized, we enforce the normalization by multiplying the model function by a constant $c$ chosen such that $c\int F(x)\,\mathrm{d}x = 1$. The summary of the output is tabulated in Table 3.
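A compact numerical illustration of this discrete-to-continuous step is sketched below, again in Python for consistency with the earlier sketch rather than in Excel or Mathematica. It fits a 5th-order polynomial to the digit frequencies, normalizes it so that its integral over the digit range equals one, and computes the continuous moments by numerical integration. The integration range 0–9 and the use of raw moments are assumptions of the sketch, not values taken from the report.

```python
import numpy as np
from collections import Counter

PI_DIGITS = "31415926535897932384626433832795028841971693993751"
digits = [int(d) for d in PI_DIGITS]
freq = Counter(digits)

x = np.arange(10)                                   # digit values 0..9
f = np.array([freq[d] for d in x], dtype=float)     # observed frequencies

# 5th-order polynomial model F(x), mirroring the Excel-compatible fit.
coeffs = np.polyfit(x, f, deg=5)
F = np.poly1d(coeffs)

# Enforce normalization: multiply by c so the integral over the digit
# range equals 1 (the range [0, 9] is an assumption of this sketch).
grid = np.linspace(0, 9, 2001)
c = 1.0 / np.trapz(F(grid), grid)
p = lambda t: c * F(t)                              # normalized continuous distribution

# Continuous raw moments by numerical integration.
def cont_moment(k):
    return np.trapz(grid ** k * p(grid), grid)

mean = cont_moment(1)
rms = np.sqrt(cont_moment(2))
third = cont_moment(3)
print(f"continuous: mean = {mean:.4f}, RMS = {rms:.4f}, 3rd moment = {third:.4f}")
```

Comparing these values with the discrete moments from the previous sketch reproduces, qualitatively, the discrete-versus-continuous differences discussed next.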
The tabulated values of Table 2 and Table 3 reveal the differences between the discrete distribution and its corresponding continuous distribution.

As pointed out, Figure 2 and Figure 3 are not quite to our satisfaction. If Excel were our ultimate tool, these would have been our best fits, with the accompanying moments given in Table 3. Mathematica, however, offers a procedure for improving the fit quality: the limited discrete data (the blue dots in Figure 1) may be interpolated, generating unlimited implicit data and making the fit much more satisfactory. After trial and error we found that an interpolation order of 4 works best; Figure 4 shows the result. As shown, the interpolated function fits the data exactly: the dots on the right plate are doubly overlapped, meaning the smoothed data coincide exactly with the discrete ones. The green continuous curve is the 5th-order polynomial smoothed by Mathematica. Having such a good fit, i.e., a continuous distribution, we calculate its moments. Beforehand the green curve, call it $G(x)$, must be normalized, i.e., multiplied by a constant $c$ such that $c\int G(x)\,\mathrm{d}x = 1$, exactly as was done for the polynomial model. Table 4 lists the calculated moments for the three scenarios considered in this report; it embodies the core results of the work, comparing the moments of the discrete, continuous, and improved continuous distributions. To form an opinion about the quality of these moments, and about the quality of the associated distribution functions, one needs to keep Figure 4 in mind.

Conclusions

We set two aims in crafting this investigation. For practical purposes, we show by way of example how the common procedure for evaluating statistical moments of a discrete distribution is extended to evaluating the corresponding moments of a continuous distribution. The steps shown here for transiting between these two kinds of distributions fill a gap that is overlooked in the literature. Our approach also identified the shortcomings of a useful program such as Excel, justifying the use of a replacement such as the powerful scientific program Mathematica. To keep the discrete list of integers manageable we set its length to 50; as mentioned, Mathematica, contrary to Excel, can extend the list length practically without limit. Although the given example is applied to π, identical steps may be taken for, e.g., e, the Euler constant γ, etc., and for combinations of them such as π γ or e π, so there is no limitation in replicating the example. The lesson learned is that, with a solid understanding of the steps needed to calculate the moments of a continuous distribution, calculations of the moments of distributions encountered in the physical sciences, e.g., the Maxwell-Boltzmann speed distribution [5] or the probability distribution of a quantum system, may easily be carried out as well.

Figure 2. The blue dots are the data shown in Figure 1; the red dots are the frequencies predicted by the polynomial, i.e., our model.
Figure 3. Excel fitted curve with the explicitly employed polynomial.
Table 3. Average, RMS and third moment of the continuous distribution of the first 50 digits of π. The left plate is the same as Figure 2; the right plate is the interpolated fit including the smoothing factor.
Table 4. Summary of the first three moments associated with the three scenarios. A description of each case is embedded in the text.
2,000.6
2022-01-01T00:00:00.000
[ "Mathematics" ]
Using a Structural Root System Model to Evaluate and Improve the Accuracy of Root Image Analysis Pipelines Root system analysis is a complex task, often performed with fully automated image analysis pipelines. However, the outcome is rarely verified by ground-truth data, which might lead to underestimated biases. We have used a root model, ArchiSimple, to create a large and diverse library of ground-truth root system images (10,000). For each image, three levels of noise were created. This library was used to evaluate the accuracy and usefulness of several image descriptors classically used in root image analysis softwares. Our analysis highlighted that the accuracy of the different traits is strongly dependent on the quality of the images and the type, size, and complexity of the root systems analyzed. Our study also demonstrated that machine learning algorithms can be trained on a synthetic library to improve the estimation of several root system traits. Overall, our analysis is a call to caution when using automatic root image analysis tools. If a thorough calibration is not performed on the dataset of interest, unexpected errors might arise, especially for large and complex root images. To facilitate such calibration, both the image library and the different codes used in the study have been made available to the community. INTRODUCTION Roots are of utmost importance in the life of plants and hence selection on root systems represents great promise for improving crop tolerance to biotic and abiotic stresses (as reviewed in Koevoets et al., 2016). As such, their quantification is a challenge in many research projects. This quantification is usually two-fold. The first step consists in acquiring images of the root system, either using classic imaging techniques (CCD cameras) or more specialized ones (microCT, X-Ray, fluorescence,...). The next step is to analyse the pictures to extract meaningful descriptors of the root system. To paraphrase the famous Belgian surrealist painter, René Magritte: " Figure 1A is not a root system." Figure 1A is an image of a root system and that distinction is important. An image is indeed a two-dimensional representation of an object, which is usually three-dimensional. Nowadays, measurements are generally not performed on the root systems themselves, but on the images, and this raises some issues. Image analysis is the acquisition of traits (or descriptors) describing the objects contained in a particular image. In a perfect situation, these descriptors would accurately represent the biological object of the image with negligible deviation from the biological truth (or data). However, in many cases, artifacts might be present in the images so that the representation of the biological object is not accurate anymore. These artifacts might be due to the conditions under which the images were taken or to the object itself. Mature root systems, for instance, are complex branched structures, composed of thousands of overlapping ( Figure 1B), and crossing segments ( Figure 1C). These features are likely to impede image analysis and create a gap between the descriptors and the data. Root image descriptors can be separated into two main categories: morphological and geometrical descriptors. Morphological descriptors refer to the shape of the different root segments forming the root system (Table 1). They include, among others, the length and diameter of the different roots. 
For complex root system images, morphological descriptors are difficult to obtain and are prone to error as mentioned above. Geometrical descriptors give the position of the different root segments in space. They summarize the shape of the root system as a whole. The simplest geometrical descriptors are the width and depth of the root system. Since these descriptors are mostly defined by the external envelope of the root system, crossing and overlapping segments have little impact on their estimation and hence they can be considered as relatively errorless. Geometrical descriptors are expected to be loosely linked to the actual root system topology, since identical shapes could be obtained from The cumulative length of all root axes mm tot_2+_order_length The cumulative length of all lateral roots mm mean_1_order_length The mean first-order roots length mm mean_2+_order_length The mean lateral root length mm n_1_orders The total number of first order roots -n_2+_orders The total number of lateral roots -mean_2+_order_density The mean lateral root density: for each first-order root, the number of lateral roots divided by the axis length (total length). mm-1 mean_1_order_diam The mean diameter of the first-order roots mm mean_2+_order_diam The mean diameter of the lateral roots mm mean_2+_order_angle The mean insertion angle of the lateral roots • different root systems (the opposite is true as well). They are usually used in genetic studies, to identify genetic bases of root system shape and soil exploration. Several automated analysis tools were designed in the last few years to extract both types of descriptors from root images (Armengaud et al., 2009;Galkovskyi et al., 2012;Pierret et al., 2013;Bucksch et al., 2014). However, the validation of such tools is often incomplete and/or error prone. For technical reasons, the validation is usually performed on a small number of groundtruth images of young root systems. In agreement, most analysis tools are specifically designed for this kind of root systems. In the few cases where validation is performed on large and complex root systems, it is usually not on ground-truth images, but in comparison with previously published tools (measurement of X with tool A compared with the same measurement with tool B). This might seem a reasonable approach, regarding the scarcity of ground-truth images of large root systems. However, the inherent limitations of these tools, such as scale or root system type (fibrous-vs. tap-roots) are often not known. Users might not even be aware that such limitations exist and apply the provided algorithm without further validation on their own images. This can lead to unexpected errors in the final measurements. One strategy to address the lack of in-depth validation of image analysis pipelines would be to use synthetic images generated by structural root models (models designed to recreate the physical structure and shape of root systems). Many structural root models have been developed, either to model specific plant species (Pagès et al., 1989), or to be generic (Pagès et al., 2004(Pagès et al., , 2013. These models have been repeatedly shown to faithfully represent the root system structure (Pagès and Pellerin, 1996). In addition, they can provide the ground-truth data for each synthetic root system generated, independently of its complexity. 
However, they have not been used for validation of image analysis tools (Rellán-Álvarez et al., 2015), with one exception performed on young seedling unbranched roots (Benoit et al., 2014). Here we (i) illustrate the use of a structural root model, Archisimple, to systematically analyse and evaluate an image analysis pipeline and (ii) use the model-generated images to improve the estimation of root traits. Nomenclature Used in the Paper Ground-truth data: The real (geometrical and morphometrical) properties of the root system as a biological object. They are determined by either manual tracking of roots or by using the output of simulated root systems. (Image) Descriptor: Property of the root image. It does not necessarily have a biological meaning. Root axes: First order roots, directly attached to the shoot. Lateral roots: Second-(or lower) order roots, attached to another root. Creation of a Root System Library We used the model ArchiSimple, which was shown to allow the generation of a large diversity of root systems with a minimal amount of parameters (Pagès et al., 2013). To produce a large library of root systems, we ran the model 10,000 times, each time with a random set of parameters (Figure 2A). For each simulation, the growth and development of the root system were constrained in two dimensions. The simulations were divided into two main groups: fibrous and tap-rooted. For the fibrous simulations, the model generated a random number of root axes and secondary (radial) growth was disabled. For tap-root simulations, only one root axis was produced and secondary growth was enabled (the extent of which was determined by a random parameter). The root system created in each simulation was stored in a Root System Markup Language (RSML) file. Each RSML file was then read by the RSML Reader plugin from ImageJ to extract ground-truth data for the library . These ground-truth data included geometrical and morphological parameters ( Table 1). For each RSML data file, the RSML Reader plugin also created three JPEG images (at a resolution of 300 DPI) for each root system. To simulate one type of image degradation, we added different levels of noise to the images (using the Salt and Pepper Filter in ImageJ) ( Figure 2D). For each root system, we computed overlapping index as the number of root segments having an overlap with other root segments over the total number of root segments. Root Image Analysis Each generated image was analyzed using a custom-made ImageJ plugin, Root Image Analysis-J (or RIA-J). For each image, we extracted a set of classical root image descriptors, such as the total root length, the projected area, and the number of visible root tips ( Figure 2E). In addition, we included shape descriptors such as the convex-hull area or the exploration ratio (see Supplemental File 1 for details of RIA-J). The list of traits and algorithms used by our pipeline is listed in Table 2. Distribution of the different descriptors is given in the Supplemental Figure 2. Data Analysis Data analysis was performed in R (R Core Team) 1 . Plots were created using ggplot2 (Wickham, 2009) and lattice (Sarkar, 2008 The Mean Relative Errors (MRE) were estimated using the equation: where n is the number of observations, y i is the ground-truth and y i is the estimated ground-truth. Random Forest Framework A random forest is a state-of-the-art machine learning algorithm typically used for making new predictions (in both classification and regression tasks). 
Random Forests can perform nonlinear predictions and, thus, often outperform linear models. Since its introduction by Breiman (2001), Random Forests have been widely used in many fields from gene regulatory network inference to generic image classification (Huynh-Thu et al., 2013;Marée et al., 2016). Random Forest relies on growing a multitude of decision trees, a prediction algorithm that has shown good performances by itself but, when combined with other decision trees (hence the name forest), returns predictions that are much more robust to outliers and noisy data (see bootstrap aggregating, Breiman, 1996). In a machine learning setting one is given a set The learning task is to find a model that predicts the data in a good way, where goodness is measured with regard to an error function L. A decision tree T D is a machine learning method that, for a dataset D, constructs a binary tree with each node representing a binary question and each leaf a value of the response space. In other words, a prediction can be made from an input value by looking at the set of binary questions that leads to a leaf (e.g., is the first-order root bigger than q1 and if yes is the number of second-order roots smaller than q2 and if no, . . . ). Each decision is based upon exactly one feature and is used for deciding which branch of the tree a given input value must take. Hence a decision tree splits successively the set D into smaller subsets and assigns them a value y i = T D (x i ) of the response space. The choice of the feature used for splitting depends on a relevance criterion. In our setting, the default relevance criterion from the randomForest R package (CRAN randomForest, 2015), namely the Gini index, has been used. A Random Forest consists of l decision trees T D,k , where several key parameters such as the feature space, are chosen randomly (hence the word Random in the algorithm name). While using a random subspace strongly accelerates the growth of a single tree, it can also decrease its accuracy. However, the use of large number of trees counterbalance advantageously those two effects. The final prediction for each input value x i corresponds to the majority vote of all the decision trees of the forest T D,k (x i ) in a classification setting while an average of all predicted values is used in a regression task. Framework Description Our method consists of three typical steps: Preprocessing Missing values in our dataset might arise due to highly noisy images, where the measurement of certain descriptors has been infeasible. To deal with this issue, we first replaced missing values. This is done using the imputation function of the randomForest R package. It replaces all missing values of a response variable by the median and then a Random Forest is applied on the completed data to predict a more accurate value. We favored 10 trees for computing the new value over the default value of 300 as we found that it offered sufficiently accurate results for our application while being much faster. Model Generation In the model generation step, for each of the response variables, several forests with different number of trees and different number of splits (t i , m j ) are tested. In practice, the training set D train is divided into m j disjunct subsets D m j train and on each of those, a Random Forest F D m j train is trained on a growing number of t i random trees. 
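As a concrete, simplified illustration of this model-generation step, the sketch below trains Random Forest regressors for one response variable over a small grid of tree counts and training-subset splits. It uses Python with scikit-learn rather than the randomForest R package used in the study, the descriptor matrix X and trait vector y are synthetic placeholders, and the grid values are arbitrary, so it sketches the procedure rather than reproducing the authors' implementation. The selection of the best (t, m) pair by test-set RMSE, described next, is included at the end.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: rows = root images, columns = image descriptors,
# y = one ground-truth trait (e.g. total root length). Real use would load
# the descriptor table produced by the image analysis and the RSML ground truth.
X = rng.random((1000, 10))
y = X @ rng.random(10) + 0.1 * rng.standard_normal(1000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

best = None
for n_trees in (10, 50, 100):          # t_i: number of trees (arbitrary grid)
    for n_splits in (1, 2, 4):         # m_j: number of disjoint training subsets
        subsets = np.array_split(rng.permutation(len(X_train)), n_splits)
        forests = [
            RandomForestRegressor(n_estimators=n_trees, random_state=0).fit(
                X_train[idx], y_train[idx]
            )
            for idx in subsets
        ]
        # Prediction = average over the forests trained on each subset.
        pred = np.mean([f.predict(X_test) for f in forests], axis=0)
        rmse = mean_squared_error(y_test, pred) ** 0.5
        if best is None or rmse < best[0]:
            best = (rmse, n_trees, n_splits)

print(f"selected (t, m) = ({best[1]}, {best[2]}) with test RMSE = {best[0]:.3f}")
```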
Model Selection Given a new data point x, each model predicts a response variable y by averaging the predicted values F D m train (x), i.e., Then in a final step an estimate of the root-mean-square (RMSE) generalized error on the test set D test is computed, where RSME is defined as for D test ={(x 1 ,y 1 ),(x 2 ,y 2 ),...,(x n ,y n )}. Finally, the model with the parameter pair (t,m) having the minimal error (on the separate test set) is chosen in order to make the predictions. Data Availability All data used in this paper (including the image and RSML libraries) are available at the address http://doi.org/10.5281/ zenodo.208214 An archived version of the codes used in this paper is available at the address http://doi.org/10.5281/zenodo.208499 An archived version of the machine learning framework is available at the address https://github.com/FaustFrankenstein/ RandomForestFramework/releases/tag/v1.0 Production of a Large Library of Ground-Truth Root System Images We combined existing tools into a single pipeline to produce a large library of ground-truth root system images. The pipeline combines a root model (ArchiSimple, Pagès et al., 2013), the Root System Markup Language (RSML) and the RSML Reader plugin from ImageJ . In short, ArchiSimple was used to create a large number of root systems, based on random input parameter sets. Each output was stored as an RSML file (Figure 2A), which was then used by the RSML Reader plugin to create a graphical representation of the root system (as a. jpeg file) and a ground-truth dataset ( Figure 2B). Details about the different steps are presented in the Materials and Methods section. We used the pipeline to create a library of 10,000 root system images, separated into fibrous (multiple first order roots and no secondary growth) and tap-root systems (one first order root and secondary growth). The ranges of the different ground-truth data are shown in Table 3 and their distribution is shown in the Supplemental Figure 1. We started by evaluating whether fibrous and tap-root systems should be separated during the analysis. We performed a Principal Component Analysis on the ground-truth dataset to reduce its dimensionality and assess if the type grouping influenced the overall dataset structure (Figure 3A). Fibrous and tap-root systems formed distinct groups (MANOVA p < 0.001), with limited overlap. The first principal component, which represented 30.9% of the variation within the dataset, was mostly influenced by the number of first-order axes. The second principal component (19.1% of the variation) was influenced, in part, by the root diameters. These two effects were consistent with the clear root system type grouping, since they expressed the main difference between the two groups of root-system types. Therefore, since the type grouping had such a strong effect on the overall structure, we decided to separate them for the following analyses. To demonstrate the utility of a synthetic library of ground-truth root systems, we analyzed every image of the library using a custom-built root image analysis tool, RIA-J. We decided to do so since our purpose was to test the usefulness of the synthetic analysis and not to • assess the accuracy of existing tools. Nonetheless, RIA-J was designed using known and published algorithms, often used in root system quantification. A detailed description of RIA-J can be found in the Materials and Methods section and Supplemental File 1. 
We extracted 10 descriptors from each root system image ( Table 2) and compared them with the ground-truth data. For each pair of descriptor-data, we performed a linear regression and computed its r-squared value. Different types of information are highlighted in Figure 4. First, using a ground-truth image library allows for a quick and systematic analysis of all the descriptors extracted by the image analysis pipeline. Second, it allows researchers to identify which traits can be accurately evaluated (or not) and by which descriptors. Third, for some groundtruth data, such as the mean length of second order roots or the number of first order roots, it shows that none of the classical descriptors gave a good estimation (Figure 4, highlighted with arrows). Finally, the figure highlights that some correlations were different for fibrous-and tap-root systems. As an example, the correlation found between the mean_2+_order_diameter and diam_mean estimators was better for fibrous roots than within the tap-root dataset. Consequently, validation of the different image analysis algorithms should be performed, at least, for each group. An algorithm giving good results for a fibrous root system might fail when applied to tap-rooted ones. FIGURE 4 | Heatmap of the r-squared values between the different image descriptors and the ground-truth values, for the images without any noise. Black represents an r-squared value of 1; white represents a value of 0. Upper panel: fibrous root dataset. Lower panel: tap-root dataset. Arrows highlight the ground-truth data that cannot be accurately described with the different descriptors. The arrows were doubled when it was the case for both fibrous and tap-rooted root systems. Errors from Image Descriptors Are Likely to Be Non-linear across Root System Sizes and Image Qualities In addition to being related to the species of study, estimation errors are likely to increase with the root system size. As the root system grows and develops, the number of crossing and overlapping segments increases (Figure 5A), making the subsequent image analysis potentially more difficult and prone to error. However, a systematic analysis of such error is seldom performed. Estimation errors are also likely to increase as the image quality decreases. Here we artificially added one type of noise (random "salt and pepper" particles) to the images, with two intensity levels. It should be noted that virtually any type of image degradation could be added to the original images using custom image filters (e.g., using ImageJ). Different types of degradation are expected to generate different levels of estimation errors. Figure 5 shows the relationship between the ground-truth and descriptor values for three parameters: the total root length (Figure 5B), the number of roots (Figure 5C), and the root system depth (Figure 5D). For each of these variables, we quantified the Mean Relative Error (see Materials and Methods for details) as a function of the overlap index. This was done for three levels of noise added to the images ("null, " "medium, " and "high"). We can observe that for the estimation of both the total root length and the number of lateral roots, the Mean Relative Error increased with the size of the root system (Figures 5B-C). As stated above, such increase of the error was somehow expected with increasing complexity. Moreover, depending on the metric of interest, such as the number of root tips, low image quality can result in high level of error. 
For other traits, such as the root system depth, no errors were expected (depth is supposedly an error-less variable) and the Mean Relative Error was close to 0 whatever the size of the root system and image quality. The results presented here are tightly dependent on the specific algorithms used for image analysis and hence might be different for other published tools. However, they are a call for caution when analyzing root images: unexpected errors in ground-truth estimation can arise. Our image library can be used to better identify the errors generated by other analysis tools, current or future. Roadmap for Root Image Analysis Tools Calibration To improve the calibration and validation of future root image analysis tools, we propose the following procedure: 1. Develop the new root image analysis pipeline; 2. Use it to analyse the images from the synthetic root library described here; 3. Compare the results from the new analysis with the corresponding ground-truth; 4. Identify, and clearly state, the type of root systems for which the pipeline works accurately; 5. When releasing the new pipeline, inform the users about the possible errors identified. Using the Synthetic Library to Train Machine Learning Algorithms The main advantage of creating a synthetic library is to generate paired datasets of image descriptors and their corresponding ground-truth values. Having information on both can, in theory, be used to either calibrate the image analysis pipeline or to identify the best descriptors for the ground-truth traits of interest. Here, we explored the second approach and used a Random Forest algorithm to find which combination of descriptors would best describe each ground-truth data (see Material and Methods for details). In short, we randomly divided the whole dataset into training (3/4) and testing subsets (1/4). The training set was used to create a Random Forest model for each ground-truth data, which was then we applied to the test set. The accuracy of these new predictions was then compared to the accuracy of the direct method (single descriptors) ( Figure 2C). Figure 6 shows the comparison of the accuracy (both the rsquared values from linear regressions and the Mean Relative Error, MRE) of both methods for each ground-truth data. We can clearly see that the Random Forest approach performed always better (sometimes substantially) than the direct approach, even for images with high level of noise. In addition, for most traits, the r-squared and MRE values were above 0.9 and below 0.1 respectively, which is very good, especially for such a wide range of images. In addition, the Random Forest approach allowed the correct estimation of traits that were difficult to estimate with the direct approach (such as the number of first-order axes or the mean second-order root density). Figure 7 shows the detailed comparison of both methods for the estimation of the total root length. Again, a clear improvement was visible with the Random Forest method, leading to small errors, even with large root systems and noisy images. Here we presented how machine learning algorithms (Random Forest), could be used in combination with a synthetic image library to improve the estimation of root system traits. Although both the training and test datasets used were made of synthetic images, we believe this approach presents an interesting perspective for the analysis of experimental images. 
Indeed, a root architectural model can be used to build a custom library of synthetic images from a set of parameters evaluated on a small number of plants from the experimental dataset. Such library could then be used to train the machine learning model which, in turn, will enable the automatic evaluation of root traits from the remaining experimental images. Alternatively, the algorithm could be directly trained on a subset of experimental data obtained by manual or semi-automatic analyses to be then automatically applied to the rest of the dataset. One must keep in mind that the output of the machine learning strongly depends upon the quality of the dataset used for its training and hence must be analyzed carefully. FIGURE 6 | Comparison between the direct trait and the Random Forest approach, for the different root system types and the different levels of noise. For each metric, we computed both the r-squared value from the linear regression between the estimation and the ground-truth (left panels), as well as the Mean Relative Error (right panel). The gray points represent the values obtained with the direct estimation (best descriptor, no noise). Color points represent the values obtained with the Random Forest approach, for different levels of noise. The dotted lines show the 0.9 (r-squared) and 0.1(MRE) thresholds. CONCLUSIONS The automated analysis of root system images is routinely performed in many research projects. Here we used a library of 10,000 synthetic images to estimate the accuracy and usefulness of different image descriptors extracted with a homemade root image analysis pipeline. Our study highlighted some limitations and biases of the image analysis process. We found that the type of root system (fibrous vs. tap-rooted), its size and complexity, as well as the quality of the images had a strong influence on the accuracy of some commonly used image descriptors and their meaning and relevance for ground-truth extraction. So far, a large proportion of the root research has been focused on seedlings with small root systems and has de facto avoided such errors. However, as the research questions are likely to focus more on mature root systems in the future, these limitations will become critical. We showed that synthetic datasets can be used for calibration or modeling (machine learning) steps that allow ground-truth extraction from comparable images. We then hope that our library will be helpful for the root research community to evaluate and improve other image analysis pipelines.
6,003.2
2017-04-03T00:00:00.000
[ "Computer Science" ]
Distinctive Collider Signals for a Two Higgs Triplet Model The extension of the Standard Model (SM) with two complex $SU(2)_{L}$ scalar triplets enables one to have the Type II seesaw mechanism operative consistently with texture-zero neutrino mass matrices. This framework predicts additional doubly charged, singly charged and neutral spinless states. We show that, for certain values of the model parameters, there is sufficient mass splitting between the two doubly charged states ( $H_1^{\pm\pm}, H_2^{\pm\pm}$) that allows the decay $H_1^{\pm\pm} \to H_2^{\pm\pm} h $, and thus leads to a unique signature of this scenario. We show that the final state $2(\ell^{\pm} \ell^{\pm}) + 4b + \mET~$ arising from this mode can be observed at the high energy, high luminosity (HE-HL) run of the 14 TeV Large Hadron Collider (LHC), and also at a 100 TeV Future Circular Collider (FCC-hh). I Introduction A 125 GeV scalar, with a striking resemblance to the Higgs boson proposed in the Standard Model(SM) has been observed at the Large Hadron Collider(LHC) [1,2]. In spite of being a very successful phenomenological theory, the SM, however, cannot generate neutrino masses as suggested in various observations [3][4][5]. A popular set of mechanisms for generating such masses are the three types of seesaw mechanism [6][7][8][9][10][11][12][13][14][15]. Their experimental confirmation, on accelerators in particular are also of considerable interest [16][17][18][19][20][21]. Out of the three suggested types of seesaw, Type II involves an extension of the scalar sector with an additional complex SU (2) L triplet scalar with hypercharge Y = 2. This triplet couples to leptons via interactions which violate lepton number by two units [9-13, 22, 23] and thus generates Majorana masses for neutrinos. The most striking phenomenological consequence of such a triplet scalar is the presence of a doubly charged scalar. The triplet vacuum expectation value (vev), denoted here by w, is rather tightly constrained by the ρ-parameter to values less than about 5 GeV [24]. The doubly charged scalar can decay either to produce same-sign dilepton peaks for w < 10 −4 GeV, or to same-sign W -pairs for w > 10 −4 GeV [25][26][27]. A rather strong lower limit of 770 -800 GeV exists on the doubly charged scalar mass in the former case, from same-sign dilepton searches at the LHC [28]. There is no such bound yet on doubly charged scalar mass for w > 10 −4 GeV [29]. This is because (a) a relatively large triplet vev implies small ∆L = 2 Yukawa couplings from a consideration of neutrino masses, and (b) overcoming standard model (SM) backgrounds for the final state driven by same-sign W-pairs is a challenging task, and requires a large integrated luminosity. Several works can be found in the literature, dwelling on strategies for unravelling the triplet scenario, both before [18][19][20][21][30][31][32] and after [33][34][35][36][37][38][39][40][41] the discovery of the 125 GeV scalar. From some special angles, however, a single triplet is inadequate for consistent neutrino mass generation in the Type-II seesaw model. For example, the somewhat different mass and mixing patterns in the neutrino sector (as compared to those in the quark sector) calls for studies in neutrino mass matrix models. One class of such models consists of zero textures, having some vanishing entries in the mass matrix, thus leading to relations between mass eigenvalues and mixing angles, and ensuring better predictiveness in the neutrino sector. 
It has been shown, that zero textures are inconsistent with Type-II seesaw models with a single scalar triplet [42]. Such inconsistency is removed when one has two such triplets, as has been demonstrated in [43]. This of course opens up the possibility of new collider signals which has been only partially investigated. In this work, we study the LHC signals that can decidedly tell us about the existence of two complex triplet scalars(∆ 1 , ∆ 2 ). For example, in [43] searches via ∆ ±± 1 → ∆ ± 2 W ± decay mode have been discussed. It should be noted that such a decay is disfavored in the single-triplet scenario, since the ρ-parameter restricts the mass splitting among fields having different electric charges. Another decay channel that opens up in this scenario is ∆ ±± 1 → ∆ ±± 2 h, h being the SM-like Higgs. As has been shown in [44], this mode prevails especially in the presence of at least one CP-violating phase. We neglect CP-violating phases in the present study. Regions of the parameter space answering to both ∆ ±± 1 → ∆ ± 2 W ± and ∆ ±± 1 → ∆ ±± 2 h have been identified, and the corresponding signals have been predicted. Both these channels can lead to the final state 2 ± ± + 4b + E T / , where, ≡ e, µ. We carry out a detailed analysis to estimate signal significance of this scenario in the regions of the parameter space, consistent with all current limits both at the high energy high-luminosity (HE-HL) run of the LHC with √ s = 14 TeV and at the proposed √ s = 100 TeV Future Circular Collider (FCC-hh) at CERN [45] or the Super Proton-Proton Collider (SPPC) in China [46]. Further details on the physics potential of the 100 TeV collider can be found, for example, in [47]. We also comment on how to differentiate the two-triplet scenario from a single-triplet one using the signal analysis in this work. The paper is organized as follows. In Section II we discuss a little bit about the well-motivated Y = 2 single triplet scenario. Relevant phenomenological features of the two-triplet scenario are presented in Section III. Results of the collider analysis are reported in Section IV. We summarize and conclude in Section V. II The Y = 2 single triplet scenario In this section we briefly describe the single triplet scenario. Along with the SM fields, there is an extra SU (2) L complex triplet scalar field ∆ with hypercharge Y = 2. where The vevs of the doublet and the triplet are given by respectively and the electroweak vev is given by The most general scalar potential involving φ and ∆ can be written as whereφ ≡ iτ 2 φ * . In general, both v and w can be complex. However, since we want to avoid all CP-violating effects, we choose both the vevs to be real and positive, which as a result implies that t has to be real. It should be remembered that the choice a < 0, b > 0 ensures the primary source of spontaneous symmetry breaking to be the vev of the scalar doublet. At the same time, the ρ-parameter has to be very close to its tree-level SM value of unity, as required by the latest data, namely, ρ = 1.0004 +0.0003 −0.0004 [48] for w v. Also the doublet-triplet mixing has to be small and the perturbativity of all quartic couplings at the electroweak scale has to be guaranteed. All the aforementioned constraints drive us to choose the following orders of magnitude for the parameters in the potential: The mass terms for singly-charged scalars in this model are given by where Diagonalization of the matrix should yield one zero eigenvalue, corresponding to the Goldstone boson. 
The singly-charged mass-squared matrix is whereas the doubly-charged scalar mass is In the limit w v, one obtains Electroweak precision data imply ∆M ≡| m ∆ ±± − m ∆ ± | 50 GeV [49,50] assuming a light SM Higgs boson of mass m h = 125 GeV and top quark mass M t = 173 GeV. Hence, the decay mode ∆ ++ → ∆ + W + is kinematically not allowed with a single triplet scalar. III Extension with two triplets The single-triplet scenario can sometimes turn out to be inadequate. For example, the somewhat novel kind of bi-large mixing in the neutrino sector motivates people to link such a mixing pattern with the neutrino mass matrix itself. The number of arbitrary parameters in such an investigation is reduced, and the mass eigenvalues and mixing angles are linked in a predictive manner, if some elements of this matrix vanish. It is with this in view that various texture zero neutrino mass matrices have been proposed, for example, through the imposition of certain Abelian symmetries. Two-zero textures constitute a popular subset of such models, which have been widely used in various contexts [42,[51][52][53][54][55][56][57][58][59][60][61][62][63][64]. In the specific context of Type II seesaw, however, inconsistencies arise when texture zeros (especially two-zero textures) are attempted [65]. Such inconsistency can be avoided, as already mentioned, when two scalar triplets are present. In such a scenario, one extends the SM with two Y = 2 triplet scalars ∆ 1 , ∆ 2 : The vevs of the scalar triplets are given by The scalar potential in this scenario involving the Higgs doublet and the two triplets can be written as where k, l = 1, 2. As mentioned in the previous section, v, w 1 , w 2 as well as t 1 , t 2 are taken to be real and positive. One can also use for illustration, without any loss in the generality of the results, With We redefine the following 2 × 2 matrices and vectors: The minimization of the potential (12), neglecting all terms quartic in the triplet vevs, yields where we have used t · w = k t k w k . Solving Eq. (15) and Eq. (16) simultaneously, w k (k = 1, 2) are obtained as After diagoalization of different kinds of scalar mass matrices following electroweak symmetry breaking (EWSB), we obtain the charged scalars (H ±± 1 , H ±± 2 ), singly charged Higgs(H ± 1 , H ± 2 ), and the neutral CP-even(h, H 1 , H 2 ) and CP-odd(A 1 , A 2 ) scalars. Among them h is the SM-like Higgs. The mass matrix of the doubly-charged scalars is given by which can be diagonalized by yielding the mass eigenstates: The singly charged scalar mass-squared matrix comes from where Using equations (15) and (16) we get, This serves as a consistency check that the singly charged mass matrix has to have an eigenvector with zero eigenvalue that corresponds to the would-be-Goldstone boson. It is evident from Eq. (13) that b is of the order of v 2 . Therefore, in a rough approximation one can safely ignore the t k and the triplet vevs in the mass matrix M 2 ± . In that limit, also a + cv 2 = 0 and the charged would-be-Goldstone boson is equivalent to φ ± . There is no mixing with the δ ± k . The singly charged mass matrix can be diagonalized by Where G ± is the charged would-be-Goldstone boson. Interactions with the W-boson are given by Here g is the SU (2) L gauge coupling constant. 
Changing the gauge basis into mass basis allows us to compute the decay rates of H ++ The (∆L = 2) Yukawa interaction Lagrangian involving the triplets and the leptons is where L i denote the left-handed lepton doublets, C is the Dirac charge conjugation matrix, the h (k) ij are the symmetric neutrino Yukawa coupling matrices of the triplets ∆ k , and the i, j = 1, 2, 3 are the summation indices over the three neutrino flavours. 1 When the triplets acquire vev from Eq. (27) one can generate the neutrino mass matrix as : This connects the Yukawa coupling constants h ij and the triplet vevs w 1 , w 2 . In this work we use, once more as illustration, the normal hierarchy of the neutrino mass spectrum and set the lowest neutrino mass eigenvalue to zero. The elements of The neutrino mass matrix M ν can be obtained by using the observed central values of the various lepton mixing angles and by diagonalising as where U is the PMNS matrix given by [66] andM ν is the diagonal matrix of the neutrino masses. We have dropped possible Majorana phases for simplicity. Global analyses of data can be used to resolve the various entries of U [67]. The left-hand side of Eq. (27) is reliably represented, at least in orders of magnitude, by the central values of all angles, including that for θ 13 as obtained from the recent Daya Bay and RENO experiments [68,69]. The actual mass matrix thus constructed has some elements at least one order of magnitude smaller than the others, thus suggesting texture zeros. IV Analysis Let us now look for smoking gun collider signals of doubly charged scalars of this scenario with w k ∼ O(1) GeV. The spectacular l ± l ± decay channels are suppressed in this case. The doubly charged scalars now mainly decay into the following final states: 1 We assume the charged-lepton mass matrix to be already diagonal. [75]. The decay mode as mentioned in Eq.( 31) is absent in the single-triplet model, since there is only one doubly charged scalar particle. Also, the equivalent of Eq.( 33), namely, H ±± 1 → H ± 1 W ± is kinematically disfavored since the mass splitting between singly and doubly charged scalar is restricted by the ρ parameter constraint [48][49][50]. Hence, in order to distinguish between the single triplet and the double triplet scalar model, it is advantageous to investigate channels in Eq.( 31) and Eq.( 33), since the corresponding event topologies cannot be faked by a singletriplet scenario. The production of W ± in association with the SM-like Higgs boson leads to the following final state: where = e, µ, which arise from W ± → ± ν (ν ) and h → bb decay modes. Since the doublet-triplet mixing is small in this model, there is no noticeable difference in production rate in the gluon fusion channel and also the tree-level decay of the SM-like Higgs. However, the presence of H ±± 1 , H ±± 2 and H ± 1 , H ± 2 modify the loop-induced h → γγ decay significantly. Detailed analyses of such modification can be found in [24,49,[70][71][72][73][74]. Here we just ensure that our benchmark points are consistent with the limits on the diphoton signal strength of the Higgs at the 2σ level [75]. IV.1 Benchmark Points Our collider analysis uses three benchmark points. Each of them is determined by thirteen model parameters as defined in the scalar potential in Eq. (12). The choices of these parameters for our three bench-mark points are shown in the Table 1. 
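As a numerical cross-check of the neutrino-sector construction described above, the short Python sketch below builds the Majorana mass matrix as M_ν = U diag(m1, m2, m3) U^T for normal hierarchy with the lightest mass set to zero and all CP phases dropped, as assumed in the text. The oscillation parameters are representative central values, not necessarily the exact global-fit numbers used here, so the output only illustrates the hierarchy among matrix elements that motivates the texture-zero assignment.

```python
import numpy as np

# Representative central values (assumed for illustration, not the exact inputs of this work).
s12sq, s23sq, s13sq = 0.31, 0.55, 0.022     # sin^2 of the mixing angles
dm21sq, dm31sq = 7.4e-5, 2.5e-3             # mass-squared differences [eV^2]

th12, th23, th13 = (np.arcsin(np.sqrt(s)) for s in (s12sq, s23sq, s13sq))

def rot(i, j, th):
    """Real rotation in the (i, j) plane; CP phases are dropped, as in the text."""
    R = np.eye(3)
    R[i, i] = R[j, j] = np.cos(th)
    R[i, j] = np.sin(th)
    R[j, i] = -np.sin(th)
    return R

U = rot(1, 2, th23) @ rot(0, 2, th13) @ rot(0, 1, th12)   # PMNS matrix, no phases

# Normal hierarchy with the lightest neutrino mass set to zero.
m_diag = np.diag([0.0, np.sqrt(dm21sq), np.sqrt(dm31sq)])  # eV

M_nu = U @ m_diag @ U.T                                    # Majorana mass matrix in eV
print(np.array_str(M_nu, precision=4, suppress_small=True))
print("|M_ee| / |M_mumu| =", abs(M_nu[0, 0]) / abs(M_nu[1, 1]))
```

With these inputs the (e, e) entry comes out roughly an order of magnitude below the mu-tau block, in line with the statement above that some elements of the constructed mass matrix are much smaller than the others, suggesting texture zeros.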
One can see from the table that the values of some of the parameters are remained fixed while others (B, D, E, F, c) have been varied, since the scalar masses are strongly dependent on the latter. Table 2 lists the corresponding values of neutral, singly charged and doubly charged scalar masses. All three benchmark points are consistent with the observed Higgs signal strengths. Even after fixing fixed all model parameters in Table 1, the Yukawa coupling matrices h (1) and h (2) still remain indeterminate (Eq. 28). We fix the matrix h (2) by choosing one suitable value for all elements of the µ-τ block and keeping the rest of the elements one order smaller. It is emphasized that this ad hoc convention does not affect the generality of our results. One may thus write IV.2 Collider search at the LHC We finally turn to signals of this scenario at the high energy high luminosity (HE-HL) run of the LHC. The production and decay chains leading to Eq.( 35) are We calculate the event rates in Madgraph5(v2.4.3) [76] with the appropriate Feynman rules obtained via FeynRules [77]. The signal as well as all the relevant standard model background events are calculated at the lowest order (LO) with CTEQ6L [78] parton distribution functions, setting the renormalization and factorization scales at M Z . They are subsequently multiplied by the next-to-leading order (NLO) K-factors for the signal and the SM background processes, taken as 1.25 [79] and 1.3 [80][81][82] respectively. For the showering and hadronization of both the signal and the SM background events we use the Pythia(v6.4) [83], and the detector simulation is done in Delphes(v3) [84], where jets are constructed using the anti-K T algorithm [85]. The cut-based analyses are done using the MadAnalysis5 [86]. The production of ttZ, ttW ± and tth constitute the dominant SM background for our signal. While generating events, we select jets and leptons (electron and muon) using the following kinematical acceptance cuts : The presence of four b-jets in the signal makes b-tagging an important issue. For this we comply with the efficiency formula proposed by the ATLAS collaboration [87] for both the signal and background processes as follows: In addition, a mistagging probability of 10% (1%) for charm-jets (light-quark and gluon jets) as a b-jet has been taken into account. For lepton isolation, we abide by the criteria defined in Ref. [88] where the electrons are isolated with the Tight criterion defined in Ref. [89] and the muons are isolated using the Medium criterion defined in reference [90]. One point worth mentioning at this point is that we consider the all inclusive decay channels for both the signal and background event generation, not only the deptonic decay of the SM W ± boson. This is true for all the subsequent analyses. Before reporting the results in detail, let us examine some kinematic distributions relevant for the analysis, starting with transverse momenta (p T ) of the two leading leptons as depicted in Figure 1 for BP1. The signal and background distributions are shown in blue and red respectively. In the signal events, these leptons originate from the decay of the W boson, while for the SM background processes, they come from the decay of W ± or Z-bosons. From the shape of the p T distribution one can see that it is evidently difficult to impose any selection cut on the p T of these two leading leptons to distinguish the signal from the SM backgrounds. This general feature is also found for other two benchmarks points. 
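As a quick numerical cross-check of the statement that the leading-lepton p T offers little discrimination, one can scan candidate thresholds and compare the surviving signal fraction with the background rejection. The helper below is a generic sketch; the per-event arrays would come from the Delphes output and no sample values are hard-coded here.

```python
import numpy as np

def pt_cut_scan(sig_pt, bkg_pt, thresholds):
    """Scan a lower cut on a per-event observable (e.g. leading-lepton pT).

    sig_pt, bkg_pt : 1D arrays of the observable for signal / background events
    thresholds     : candidate cut values in GeV
    Returns a list of (threshold, signal efficiency, background rejection).
    """
    sig_pt = np.asarray(sig_pt)
    bkg_pt = np.asarray(bkg_pt)
    results = []
    for t in thresholds:
        eff = np.mean(sig_pt > t)          # fraction of signal surviving the cut
        rej = 1.0 - np.mean(bkg_pt > t)    # fraction of background removed
        results.append((t, eff, rej))
    return results

# usage (arrays taken from the analysis ntuples):
# for t, eff, rej in pt_cut_scan(sig_lead_pt, bkg_lead_pt, [10, 20, 30, 40, 50]):
#     print(f"pT > {t:>3} GeV : signal eff = {eff:.2f}, background rejection = {rej:.2f}")
```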
Hence, we only put the basic acceptance cut on the p T of the leptons, p T > 10 GeV. In Figure 2 we show the P T distributions of four b−jets for both signal and background events. In the signal events, they come from decays of two SM-like scalars each of which is produced via either from H ±± On the other hand, the background b-jets have their sources mostly in tth, ttW ± and ttZ. In the case of ttW ± production, two b-jets come from the top quarks, while a pair of light quark jets may fake as b-jets. The leading b-jet of our signal events is found to be harder than that of those in the background. Thus we demand the leading b-jet to have p T (j 1 ) > 80 GeV, and p T of the subsequent 3 b-jets to be p T (j 2 ) > 60 GeV, p T (j 3 ) > 30 GeV and p T (j 4 ) > 20 GeV. The normalized distribution of the missing transverse energy (E T / ) are in the left panel of Figure 3. One should note that the E T / for both signal and backgrounds is due to either neutrinos or the mismeasurement of the jet and lepton momenta. Consequently, the shape of the distributions for both the signal and the background look very similar. The small rightward shift of the peak of the signal distribution can be attributed to fact that W ± are boosted as they are produced from the decays of much heavier parent scalars. Hence, we find that a moderate requirement of E T / > 30 GeV is sufficient to improve the signal to background ratio. The right panel of Figure 3 shows the normalized distribution of angular separation ∆R( ± ± ) between two same-sign leptons for both the signal and the background events. For our signal events, both leptons come from same-sign W ± . The leptons in the signal tend to have small opening angle due to the spin correlation between the parent W ± W ± pair [91,92]. For the SM background, on the other hand, they come from W ± /Z bosons radiated from top/anti-top quarks, and can have wider separations. An upper cut on ∆R( ± 1 ± 2 ) < 1.5 thus enhances our signal-to-background ratio. The cuts are summarised below, in the order in which they are imposed: • (C-1): We want the leading b-jet to have p T (b 1 ) > 80 GeV. This is motivated by the top-left panel of Figure 2 and it immediately reduces the SM background arising from ttV, V ≡ W ± , Z processes by almost 50%. • (C-2): Given the fact that the second leading b-jet for the signal is not very hard, we demand that p T (b 2 ) > 60 GeV. This cut also enhances the signal to background ratio to a reasonable extent. • (C-4): It is evident from the bottom-right panel of Figure 2 that the fourth b jet is very soft. So, the choice of p T (b 4 ) > 20 GeV ensures the presence of four b jets in the signal. • (C-7): E T / > 30 GeV is imposed. This also takes care of fake E T / . • (C-8): The most effective cut to reduce the SM backgrounds is the angular separation between the same sign dileptons. The requirement of ∆R( ± 1 ± 2 ) < 1.5 considerably improves the signal to noise ratio. With these cuts imposed, we obtain the statistical significance of the signal HE-HL LHC with √ s = 14 TeV and also present some tentative predictions for the proposed FCC-hh with √ s = 100 TeV. IV.2.1 Collider search at the LHC at √ s = 14TeV Table 3 contains the cut flow for √ s = 14 TeV. The statistical significance is given by where n s (n b ) denotes the number of signal (background) events after implementing all the cuts at a specific luminosity. The signal significance can be seen for three benchmark points, assuming an integrated luminosity L int of 3 ab −1 . 
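For concreteness, the selection just described can be sketched as a small event filter. The staggered b-jet p T thresholds, the E T / > 30 GeV requirement and the ∆R( ± ± ) < 1.5 requirement are the ones quoted above, while the event-record layout and the simple n s / sqrt(n s + n b ) estimator are our own illustrative assumptions; the exact significance formula used in the analysis is the one defined in the text.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation Delta R = sqrt(d_eta^2 + d_phi^2), with phi wrapped to [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def passes_selection(event):
    """event: dict with 'bjet_pt' (descending list, GeV), 'met' (GeV) and
    'lep' = [(eta, phi), (eta, phi)] for the two same-sign leptons.
    Illustrative layout only, not the actual Delphes/MadAnalysis record."""
    pts = event["bjet_pt"]
    if len(pts) < 4:
        return False
    # b-jet pT thresholds quoted in the text (cuts C-1, C-2 and C-4, plus the 30 GeV requirement)
    if not (pts[0] > 80 and pts[1] > 60 and pts[2] > 30 and pts[3] > 20):
        return False
    if event["met"] <= 30:                 # cut C-7
        return False
    (eta1, phi1), (eta2, phi2) = event["lep"]
    return delta_r(eta1, phi1, eta2, phi2) < 1.5   # cut C-8

def significance(n_s, n_b):
    """Simple s/sqrt(s+b) estimator (an assumed form, used here only for illustration)."""
    return n_s / math.sqrt(n_s + n_b) if (n_s + n_b) > 0 else 0.0
```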
We obtain the highest statistical significance of S = 5.68 among the three benchmark points.
Table 3: Effective cross section obtained after each cut for both signal ( 2( ± ± ) + 4b + E T / ) and background, and the respective significance reach at 3 ab −1 integrated luminosity at the 14 TeV LHC.
In Figure 4 we present the doubly charged scalar (H ±± 1 ) mass and the integrated luminosity (L int ) required to reach 5σ (red solid line) and 3σ (blue solid line) significance at 14 TeV. It is evident from this figure that 'discovery' is not possible beyond m H ±± 1 ∼ 425 GeV even with 3 ab −1 . However, one can probe the doubly charged scalar mass up to 470 GeV at the 3σ level.
Figure 4: The integrated luminosity L int required for the 5σ discovery (red solid line) and 3σ evidence (blue solid line) of the signal studied, as a function of the doubly charged scalar mass, at the 14 TeV LHC.
It is worth mentioning here that, for the single-triplet case with w ∼ 1 GeV, doubly charged scalar masses up to 300 GeV can be explored at the 5σ level at the LHC with an integrated luminosity of 3 ab −1 [41]. We next make a tentative estimate of the potential of the proposed 100 TeV FCC-hh to investigate the two-triplet scalar scenario. In our numerical analysis, we apply the same set of selection cuts as those used for the 14 TeV LHC. Since our signal is produced through electroweak processes, the NLO K-factors, too, are taken to be the same, an assumption justifiable in the light of [93]. Table 4 summarises the effect of the different selection cuts on the signal and the SM backgrounds for all three benchmark points. At this energy, all of them can be probed with S ≥ 5σ at 3 ab −1 . A comparison with the 14 TeV LHC, where a 5σ reach with 3 ab −1 is possible only for BP1, shows that a distinct improvement is foreseen for the FCC-hh. This is reflected in Figure 6, which shows that one can probe doubly charged Higgs masses up to 490 (525) GeV at the 5σ (3σ) significance level with 3 (2.4) ab −1 .
Table 4: Effective cross section obtained after each cut for both signal ( 2( ± ± ) + 4b + E T / ) and background, and the respective significance reach at 3 ab −1 integrated luminosity at a 100 TeV pp collider.
Figure 6: Same as in Figure 4 for the 100 TeV FCC-hh.
The signal topology studied in this work strongly depends on m H ±± 1 , m H ±± 2 and their mass splitting. This is accentuated in Figure 7, where we show the dependence of the signal significance on these two masses, once more using colour-coded significance regions.
Figure 7: Same as in Figure 5 for the 100 TeV FCC-hh.
V Conclusion
We have extended the Type-II seesaw model with an extra SU (2) L complex triplet scalar and allowed a small mixing of the triplets with the SM Higgs doublet, as required by electroweak precision measurements. After EWSB, one has a rather rich scalar spectrum including two each of doubly and singly charged scalars. A collider signal distinctly reflecting the presence of two triplets is identified in the form of 2 ± ± + 4b + E T / . This channel can arise both from H ±± 1 → H ±± 2 h and H ±± 1 → H ± 2 W ± , following the characteristic decay modes in the case where the Yukawa couplings are too small (large triplet vev) to trigger the dominant same-sign dilepton channels. Three benchmark points, consistent with all phenomenological constraints, have been used as illustration. We have estimated the potential of the high-luminosity LHC and of a 100 TeV pp collider in identifying this signal.
We conclude after a cut-based analysis that a 5σ discovery reach is possible with 3 ab −1 of integrated luminosity at the 14 TeV LHC for the mass of the heavier doubly charged scalar up to about 425 GeV. For the proposed FCC-hh collider operating at √ s = 100 TeV, this reach can be extended up to 490 GeV. Though we have not taken into account some experimental issues such as jets faking leptons, lepton charge misidentification and photon conversions into lepton pairs, these effects are unlikely to affect the predictions qualitatively. Therefore, with appropriate refinement, our predictions should help in probing this scenario, which is relevant for framing predictive models of neutrino masses and mixing.
VI Acknowledgement
NG would like to acknowledge the Council of Scientific and Industrial Research (CSIR), Government of India, for financial support. BM acknowledges financial support from the Department of Atomic Energy, Government of India, for the Regional Centre for Accelerator-based Particle Physics (RECAPP), Harish-Chandra Research Institute. DKG and NG thank RECAPP for hospitality while this work was being carried out. BM would like to thank the Indian Association for the Cultivation of Science for hospitality during the concluding part of the project.
Reconfigurable Intelligent Surface (RIS) in the Sub-6 GHz Band: Design, Implementation, and Real-World Demonstration Here, we first aim to explain practical considerations to design and implement a reconfigurable intelligent surface (RIS) in the sub-6 GHz band and then, to demonstrate its real-world performance. The wave manipulation procedure is explored with a discussion on relevant electromagnetic (EM) concepts and backgrounds. Based on that, the RIS is designed and fabricated to operate at the center frequency of 3.5 GHz. The surface is composed of 2430 unit cells where the engineered reflecting response is obtained by governing the microscopic characteristics of the conductive patches printed on each unit cell. To achieve this goal, the patches are not only geometrically customized to properly reflect the local waves, but also are equipped with specific varactor diodes to be able to reconfigure their response when it is required. An equivalent circuit model is presented to analytically evaluate the unit cell’s performance with a method to measure the unit cell’s characteristics from the macroscopic response of the RIS. The patches are printed on six standard-size substrates which then placed together to make a relatively big aperture with approximate planar dimensions of $120 \times 120$ cm2. The manufactured RIS possesses a control unit with a custom-built system that can control the response of the reflecting surface by regulating the performance of the varactor diode on each printed patch across the structure. Furthermore, with an introduction of our test-bed system, the functionality of the developed RIS in an indoor real-world scenario is assessed. Finally, we showcase the capability of the RIS in hand to reconfigure itself in order to anomalously reflect the incoming EM waves toward the direction of interest in which a receiver could be experiencing poor coverage. I. INTRODUCTION Wireless communication engineers envision a fully connected world where there is a seamless wireless connectivity for Everyone and Everything. Current 5G and future 6G wireless networks will be required to fulfil an ever-increasing demand for connectivity at an unprecedented scale. This will require all future generations of wireless networks to be smart, intelligent and efficient. Traditionally, all the dynamic and adaptive features of a typical mobile network are controlled either by the base The associate editor coordinating the review of this manuscript and approving it for publication was Sandra Costanzo . station or the user equipment, while the wireless propagation environment remains unaware of various communications processes going through it. The existing mobile network operators face a significant challenge of not only ensuring seamless connectivity in harsh propagation environments but of supporting an ever-increasing number of mobile users, which sometimes, are unevenly distributed in a network. Though large-scale antenna systems can fulfil some of these requirements but having several large obstacles like buildings and trees along with multipath can degrade the quality of the received signal severely at certain locations. Relay nodes, which have conventionally been used to mitigate some of these problems, result in an increased power consumption and certain effects on the signal to noise ratio that are undesirable but inevitable. 
A considerable amount of research has been done to address these challenges, however, it remains an open topic of research and evaluation among the industrial and academic fraternity to design and evaluate technologies that can impart some level of intelligence to the otherwise passive radio propagation environment. Reconfigurable Intelligent Surface (RISs) [1], also referred to as Intelligent reflecting surfaces (IRS) [2], Large Intelligent Surface (LIS) [3], or Hypersurfaces [4], are a promising technology that can address the aforementioned challenges by sensing the environment, recycling the existing radio waves and enabling non-line-of-sight (NLoS) communications. RIS can achieve this by manipulating those electromagnetic (EM) waves which are impinging on it and redirect them to the desired angle with a relatively low power consumption. From the cellular communication points of view, this means a huge saving in resources as operators will not need to invest time and money to install new BSs for coverage provisioning on network's blind spots. RISs are typically composed of a metasurface sheet [5] backed by a control unit. The metasurface is a subgroup of periodic structures [6] where the smallest geometry that is repeated in a fashion is called unit cell. Each unit cell contains a (a number of) conductive printed patch(es), known as scatterer(s), where the size of each scatterer is a small proportion of the wavelength of the operating frequency. The macroscopic effect of these scatterers defines a specific impedance surface [7] and by controlling this impedance surface, the reflected wave from the metasurface sheet can be manipulated. Each individual scatterer or a cluster of them can be tuned in such a way that the whole surface can reconstruct EM waves with desired characteristics. In order to reconfigure the response of the structure, each scatterer must be equipped with a component (which is commonly placed on it) that can be tuned electronically. The control unit is responsible for governing the performance of this tunable component. In EM-engineering, the response tunability can be obtained via different approaches such as by using of the liquid crystals [8], graphene [9], microelectromechanical systems (MEMS), PIN diodes [10], or varactor diodes [11]. A substantial amount of research has been conducted regarding the theoretical modelling of the RIS which of them, some recently published are [12]- [16], but when it comes to an actual implementation, there are very few test-bed systems that have been realized so far to evaluate the realistic functionality of a RIS. In [17], a couple of reflecting surface prototypes are introduced; one for 2.3 GHz and the other for 28.5 GHz. On each unit cell, there are five PIN diodes which makes the manufacturing process relatively complex where a capacitor and an inductor are employed to bias each PIN diode. With this combination, authors were able to modify the tilt angle out of their designed reflecting sheet. In their test-bed system, the transmitter (Tx) is placed close to the reflecting surface's aperture which makes the structure more similar to a reflectarray rather than a RIS. In a realistic mobile network, the base station is typically quite far away from the reflecting surface. In [18], the Tx is placed at a distance of around 80λ 0 far from the surface where λ 0 is the free space wavelength at the operating frequency. This relatively far distance makes the structure more compatible with the concept of RIS. 
The element response is regulated via four switching states due to the presence of four hardcoded delay-lines. This prototype is operating at f = 5 GHz. A programmable surface is presented in [19], operating at around f = 11 GHz with reconfigurable polarization response and focusing beam. Each unit cell contains a PIN diode and a biasing circuit. In order to obtain various responses out of the surface, the distance between Tx and the surface is adjustable. In line with this, when the structure is aimed to make a focused beam, the Tx should be located close to the surface aperture to enable it to reflect the waves to the desired focal point. On the other hand, a relatively farther Tx makes the dynamic polarization response. Although the authors of [20] did not call their design a RIS or something similar (as this work dates back to the time when this concept was not introduced to telecommunication systems), but their prototype is one of the most fundamental structures related to the topic in hand. With four varactor diodes attaching at four sides of a square-patch scatterer in each unit cell, a tunable impedance surface is obtained, which is forming a beam-reconfigurable reflectarray that operates at 3.5 GHz. After this work, several researches have been carried out on beam-scanning reflectarrays which are listed in [21]. A couple of varactor diodes are employed in [22], [23] on each unit cell to obtain a programmable surface with the operating frequency of f = 4.25 GHz and a dual-polarized structure respectively. In both structures, the Tx is adjusted close to the reflecting surface with the receiver (Rx) at relatively farther distance. So, these structures also resemble a reconfigurable reflectarray in practice. Note that although the traditional reflectarrays [24] and the RISs share many EM backgrounds, there are some considerable conceptual and technical differences between them which are explained in Section II-A. In this paper, a RIS structure is proposed with a relatively straightforward manufacturing process. With the operating frequency of f = 3.5 GHz, the Tx is located more than 350λ 0 away from the surface while the response of the RIS is altered dynamically. We focus on RIS performance in a real-world scenario and present the corresponding key design considerations. Practicable EM wave manipulation is explored by using of the designed and implemented RIS to demonstrate its substantial effect on a wireless link. Specifically, an experimental case study is presented where the Rx is located on the blind-spot of the Tx so that the receiving signal cannot be decoded properly without introducing the RIS to the link. The remainder of this paper is organized as follows. The next section begins with EM concepts and backgrounds on the RIS. Then, we detail practical design considerations to implement a RIS; and based on the discussed considerations, a RIS is designed and introduced. After that, we demonstrate the real-word performance of the developed RIS in an indoor scenario and the paper will be concluded in the final section. II. WAVE MANIPULATION BY THE RIS In this section, we begin with a discussion on the EM-based concepts of the RIS and present the design procedure and considerations. Then, a RIS is proposed and implemented with the operating frequency of f = 3.5 GHz. A. EM CONCEPT AND DESIGN GUIDELINE When a wavepacket impinges an interface, it reflects based on the Snell's law of reflection on the plane of incidence. 
This means the relative angle between the plane formed by a unit vector normal to the reflecting interface and the vector in the direction of incidence (θ i ) is kept constant for both incoming and outgoing rays so that θ i = θ r where the subscript r indicates the reflected rays. A RIS provides this opportunity to break this phenomenon and incline the reflecting waves to the direction of interest, making θ i = θ r . This concept is the so-called engineered reflection (or anomalous reflection) which is schematically illustrated in Fig. 1. To deliver this kind of reflection, the reflection phase must be linearly dependent on the corresponding coordinate along the surface interface. This can be fulfilled by using the principle of the ''generalized Snell's law'' [25] or the ''holographic technique'' [26]. Here in this work, we apply the first technique to regulate the EM-response of the structure. In order to further elaborate, let us assume a planar wavefront incidences on an electrically large but smooth plate at the angle of θ i . This leads to forming surface-guided waves along the projection of the wave vector to the plate which is denoted by x t . These surface waves are characterized by the ratio between the tangential components of the local electric and magnetic fields. This corresponding ratio is known as surface impedance η s toward the direction of the bounded-travelling waves. The local reflection coefficient at the interface is given by = η s −η 0 η s +η 0 where η 0 is the free space wave impedance of the incident plane wave. The surface impedance is purely imaginary and can be formulated as where r = (sin θ i − sin θ r )k 0 x t with k 0 being the free space wave number. Using a RIS, it is possible to manipulate the η s in a way that for a given values of θ i and θ r , Eq. (1) becomes valid so that the anomalous reflection happens and the reconstructed beam forms toward the angle of interest. Considering the method discussed in [27], it is possible to represent the generalized law formulation as below: with n r and n i indicating the refractive indexes of the surrounding medium (vacuum in our case) and dψ(x)/dx specifies the phase variation across the x axis where the phase is progressed on the plate. Eq. (2) offers a more straightforward process to derive the phase distribution on a given plate with a known θ i and a desired θ r and consequently to synthesis the RIS. It must be noted that the RIS is conceptually different from traditional beamformers [28] in several ways. The most distinguished one is that there is no RF-chain like it is applied in phased arrays [29]. Therefore, the structure is completely on its own in terms of aiming the reflected wave vectors. Based on this concept, RISs are considered as RF-chain-free wireless systems. In electromagnetic theory, a sole EM-aperture can do the beam-forming (without getting assisted by several RF-chains) provided that its physical profile contains a periodicity. In periodic structures, the space domain repetition of a pattern in/on the body of the structure can be expanded to a Fourier series. The spatial Fourier expansion in this regard is known as Floquet spatial harmonic expansion [6]. This is an expansion from spatial properties of structure to wave vector which means controlling the periodicity will regulate the direction of waves, introducing the concept of beam-forming. As the smallest repeating geometry is the unit cell, the periodicity of the structure can indeed be controlled by characterizing the unit cell. 
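A minimal numerical sketch of how Eq. (2) is applied in practice is given below: for a chosen pair of incidence and reflection angles, the required reflection phase grows linearly along x and is wrapped modulo 2π at the centre of every unit cell. The pitch and cell count in the example are illustrative placeholders rather than the final design values, and the sign convention may be flipped relative to the equations above.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light [m/s]

def phase_profile(freq_hz, theta_i_deg, theta_r_deg, n_cells, dx):
    """Per-cell reflection phase from the generalized Snell's law.

    dpsi/dx = k0 * (sin(theta_r) - sin(theta_i)) for a surface in free space,
    sampled at the centre of each unit cell of pitch dx and wrapped to [0, 2*pi).
    """
    k0 = 2.0 * np.pi * freq_hz / C0
    theta_i = np.radians(theta_i_deg)
    theta_r = np.radians(theta_r_deg)
    x = (np.arange(n_cells) + 0.5) * dx
    psi = k0 * (np.sin(theta_r) - np.sin(theta_i)) * x
    return np.mod(psi, 2.0 * np.pi)

# e.g. normal incidence, reflection towards 45 degrees, 90 cells across the aperture
# (pitch value is illustrative only):
profile = phase_profile(3.5e9, 0.0, 45.0, n_cells=90, dx=12.7e-3)
```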
There are two common approaches to realize the performance of a unit cell with periodic boundary conditions; the first approach is based on analysing the dispersion diagram [30] and the second one is to find out the reflection/transmission characteristics of a wave illuminating the unit cell [31]. To design a reflecting structures, the second approach is mostly used. RISs are also sometimes compared with reconfigurable reflectarray antennas but the way in which RISs handle the EM-wavefront in a wireless network makes them different. In the reflectarrays, the initial source of EM-waves, which is usually a horn antenna, referred to as the feeder, is kept relatively close to the reflecting aperture. This is done in such a way that the phase center of the feeder is approximately located at the focal point of the equivalent curved reflector which is a counterpart of the reflectarray [32]. Considering the radiation pattern and the relative position of the feeder to the reflecting surface, the aperture efficiency is mainly defined based on two factors i.e. illumination and the spill over efficiencies [33]. Thus, it is possible to adjust the feeder in order to make the aperture reach its maximum possible efficiency so that the reflected beam is formed properly. However, in the case of RIS, this initial source of EM-waves is located far from the reflecting aperture and therefore offers no flexibility to be included into the design procedure. Another important point to be noted is regarding the power-fluxdensity. When the reflecting aperture is far from the source, it will collect a small portion of the EM-wave's energy to manipulate with and to redirect it toward the direction of interest. Under this circumstance, forming a well-shaped reflected beam will be a much more challenging task as compared to that of a conventional reflectarry as there is no specific control on the aperture efficiency. To address this, the reflecting surface can have a substrate with a relatively high permitivity and thickness as it will help the structure to capture the impinging space waves into the substrate more strongly in order to form a better aperture which will ultimately influence the unit cell design procedure. There are three factors that characterize the unit cell performance: the substrate's permitivity, thickness, and the geometry of the printed patch. Since there are some limitations on the first two factors for a RIS, as explained earlier, the geometry of the printed patch plays an important role in obtaining a proper response from the unit cell. With the above-mentioned backgrounds on the RIS, we aim to design a structure which can make the reflected beam and tilt it to the angle of interest at the operating frequency of 3.5 GHz. To achieve this goal, the following pre-considerations must be taken into account: • The fidelity of a reflecting surface's response is directly linked to its size, the larger the surface, the better the performance. At f = 3.5 GHz, the free space wavelength is around λ 0 = 8.5 cm; as a result, the surface must be big enough to contain several wavelengths. • The response of the surface must be reconfigurable. This response is regulated by the unit cell properties which means that the unit cell must have a tunable element. • Other than the above-mentioned substrate's requirements, it must not have a high dielectric loss to prevent the amplitude of the local reflections to drop severely. 
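To put the first of these considerations in numbers, a short estimate suffices (using the roughly 120 cm aperture adopted for the design described below):

```python
freq = 3.5e9
lam0 = 299_792_458.0 / freq           # free-space wavelength, about 0.086 m
aperture = 1.20                        # approximate side length of the surface [m]
print(f"lambda_0 = {lam0 * 100:.1f} cm, aperture = {aperture / lam0:.1f} wavelengths per side")
# -> roughly 8.6 cm and about 14 wavelengths, i.e. "several wavelengths" as required
```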
Among the electronic components that can bring the tunability [8]- [11], the varactor diodes can offer a continuous variation of surface response as the states of scatterer on each unit cell can vary continuously. Moreover, considering the environmental changes that happens in cellular networks, the innate characteristics of varactor diodes will almost undergo no alternation comparing to other aforementioned options presented in Section I. As a result, we use varactor diodes to enable the surface to modify its response. B. IMPLEMENTATION Having the pointed pre-considerations in mind, the design procedure is as below: Based on our studies, a reflecting surface with dimensions of around 120 × 120 cm 2 (the exact size is L x × L y = 1140 × 1116 mm 2 ) can provide a proper response at f = 3.5 GHz. The assembled prototype is presented in Fig. 2 (a) with a zoomed-in view of the unit cell in Fig. 2 (b). As the designed surface has a very large printed area and it is quite difficult to find a PCB prototyping machine that can accept such a big laminate, the surface is divided into six tiles (Tile 1 is highlighted in Fig. 2 (a)) with each tile being carefully aligned side by side to form the complete reflecting surface. This RIS is containing the designed reflecting surface and the developed control unit, with a portable controller (see Fig. 2 (c)) which is designed and implemented to have a remote connection with the control unit via WiFi. The control unit is responsible for tailoring the response of the surface. The unit cell consists of a conducting scatterer composed of two D-shape patches, connecting to each other by a surface mount tuning varactor diode with model number of SMV2202-040LF. The geometrical parameters of the unit cell are w p = 7.13, w i = 4, l u = 12.7, l p = 8.99 mm which are set to optimize the phase and amplitude range of the reflected waves. With a relatively low reverse voltage, the mentioned diode can provide a specific high capacitance ratio which makes it an appropriate candidate to regulate the phase response of the unit cell. Without having this diode on the scatterer, the reflected wave's vector defines a specific constant phase in comparison with that of incident waves. However, by altering the reactance value of the varactor diode, a manipulated phase is added to this constant phase which leads to a tailored range of phase without modifying the physical properties of the scatterer. The reflecting surface comprises of 2430 scatterers; their macroscopic interaction defines the surface response. The DC voltage to the two D-shape patches in each scatterer is regulated via the control unit resulting in a voltage difference of V D across each diode. The diodes are configured in reverse bias mode where V D governs the capacitance value of the diodes which is denoted by C V . The unit cells are printed on an F4BT450 substrate with a thickness of h = 1.524 mm and r = 4.5, backed by a copper ground plane. This material is a micro dispersed ceramic PTFE composite with a woven fibreglass reinforcement procedures. The substrate's permitivity is high enough to properly construct the reflected beam while it is more durable comparing with most of other laminates in the market with the same value of permitivity. The proposed unit cell can be modeled by the equivalent circuit of Fig. 3 (a) where ζ 0 and ζ d (=ζ 0 / √ r ) are representing the characteristic impedance of free space and dielectric slab respectively with the scatterer impedance denoted by Z S . 
In order to obtain a wider perspective on the performance of the designed unit cell (see Fig. 2 (b)), it is possible to split Z S in two series impedance of Z P and Z D to refer to the printed patch's and the varactor diode's impedance respectively [34], [35]. Under this circumstance, Z P = R P + jX P is governed by the printed layout geometry of the unit cell. With the diode operating at reverse bias, Z D can be estimated by the circuit presented in Fig. 3 (b) with R D = 3 , L D = 0.45 nH, and C D = 0.075 pF [36]. The input impedance (Z in ) of the grounded dielectric slab at normal incidence is: Recording the reflection coefficient (S 11 ) at the reference plane Ref (see Fig. 3 (a)) makes it possible to read Z S as below: Note that Eq. (4) is derived from transmission line theory [37]. Considering Fig. 3 (a), Eq. (4) can lead to calculating Z P as long as Z D = 0. Hence, by making the diode out of function in a full-wave simulator (here we use CST Studio Suite R ), it is possible to read the corresponding S 11 and then apply it to Eq. (4) with Z in = j42.75 (derived by Eq. (3)). This results in characterizing Z P with R P = 3.27 and C P = 45.46 pF at f = 3.5 GHz as the equivalent lumped elements of the proposed unit cell's printed layout. The corresponding circuit in the advanced design system (ADS) simulator software is shown in Fig. 3 (c) with ζ d = ζ 0 / √ r = 177.72 and slab's electrical length of EL = h/λ d = 13.58 • (λ d is the wavelength in the dielectric slab). For different capacitance values of the varactor diode, with the range of C V = 0.5 − 2 pF, the unit cell response is presented in Fig. 4 at f = 3.5 GHz. This figure shows the simulated results of CST Studio Suite R alongside with that of ADS software (of the derived circuit presented in Fig. 3 (c)), as well as the measured data. The results show that the designed unit cell can cover the required variation of phase with a relatively low reflection loss throughout the entire range of capacitance variation. To measure the unit cell response, one approach is to fabricate a cell and attach it into a waveguide to read the phase shift and loss of the reflected waves [38]. In this approach, the periodic boundary condition that is applied in the simulation software cannot be imitated in the measurement setup. Here in our work, we use the test setup proposed in Fig. 5 (a) to read the macroscopic response of the surface which can provide us the characteristics of the unit cell. As the first step, a horn antenna illuminates the RIS when it is switched off and the scattered waves are captured by a vector network analyzer (VNA). In this situation, the VNA is calibrated so that the effects of the environment and the switched-off RIS are cancelled. Then, a constant value of C V (for example 0.5 pF) is applied to all unit cells across the surface. 1 In this case, the VNA shows deviation from the calibrated signal in both amplitude and phase which indeed represents the influence of the applied C V on the surface. After that, C V of all cells are set to the next desirable value and the above-mentioned procedure is repeated to read the response accordingly. The corresponding diffraction geometry is presented in Fig. 5 (b) with the Huygens-Fresnel diffraction integral at a distance r 1 The procedure of converting C V to V D and applying that to the surface is explained in Section III-B. as below: is the E-filed value at the observation point with the aperture on the z = 0 plane. Based on Eq. 
(5), at a constant location of (x 0 , y 0 , z 0 ), the phase and amplitude of E-filed would vary provided that a filament of − → E (x, y, 0) on the aperture varies. As a result, with the unit cell of dimensions x × y as the smallest building block of the reflective aperture, it is expected that the macroscopic response of the surface will be in line with the unit cell response; this statement is in agreement with the measured curves presented in Fig. 4. Note that in order to anomalously tilt the reflected beam toward the desired angle, the control unit calculates the required ψ(x)/ x ( x is the unit cell length across the x axis) using Eq. (2) and then set the proper voltage difference on each unit cell (to generate the required C V ). More technical details on this procedure are discussed later on Section III. III. PERFORMANCE EVALUATION AND PROOF OF CONCEPT In this section, we first explore how to control the macroscopic response of the surface based on the properties of the proposed unit cell. After that, the designed control unit is presented from the functional points of view. Finally, we demonstrate a real-word performance of the designed RIS in a test-bed where the Tx is located more than 350λ 0 (at the operating frequency) away from the reflecting surface as this was the maximum distance that we could measure in our testing environment. The demonstration replicates a scenario where the Rx lies in a coverage blind spot with insufficient coverage and therefore the receiving signal cannot be demodulated properly. But when the RIS is introduced to the scenario and is configured to reflect the beam to the angle of interest, a strong-enough signal is captured by the Rx. This study shows that such entities can be used in 5G as well as future 6G networks for ensuring an enhanced coverage footprint and a seamless connectivity to the users. A. REGULATING THE MACROSCOPIC RESPONSE OF THE SURFACE As mentioned before, the macroscopic response of the surface is regulated by properly determining the microscopic performance of the unit cells. To achieve this goal, one can consider the following procedure: • First, the relative reflection angle (θ r ) needs to be set as an input data for the control unit. • Then, with a Tx located far from the surface and by considering the angle of incidence θ i (normal to the RIS aperture in our study), the phase distribution can be calculated across each cluster of scatterers using Eq. (2). • Subsequently, it is required to map the calculated phase distribution to the corresponding C V values by using the unit cell response shown in Fig. 4. • Thereafter, the derived C V values must be set in full-wave simulators such as Ansys HFSS or CST Studio Suite R by modifying the reactance value of a lumped element, attached to the scatterer to mimic the effect of diode. • Finally, the whole surface must be simulated, and the radar cross-section (RCS) pattern of the surface should be studied as the surface response for different cases of θ r . Here, we use CST Studio Suite R to simulate the structure. The required phase distribution is derived by Eq. (2) and presented in Fig. 6 (a) the designed surface can properly incline the reflected beam toward the direction of interest. B. CONTROL UNIT To actualize the designed RIS, it is required to convert the calculated capacitance values of C V to specific voltage difference values of V D across the SMV2202-040LF diodes by considering the respective characteristics of the diode on its datasheet [36]. 
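The two mappings used in this procedure, from reflection phase to C V via the unit-cell curve of Fig. 4 and from C V to V D via the varactor characteristics, amount to simple table look-ups. The sketch below shows one way to chain them with interpolation; the tabulated sample points are placeholders standing in for the measured curve and the SMV2202-040LF datasheet, not the actual values.

```python
import numpy as np

# Placeholder unit-cell curve: reflection phase [deg] vs varactor capacitance [pF]
# (stand-in for the curve of Fig. 4; values are NOT the measured data).
phase_deg_tab = np.array([300.0, 240.0, 180.0, 120.0, 60.0, 10.0])
cap_pf_tab    = np.array([  0.5,   0.8,   1.1,   1.4,  1.7,  2.0])

# Placeholder varactor C-V characteristic: capacitance [pF] vs reverse bias [V]
# (stand-in for the SMV2202-040LF datasheet curve).
cap_cv_tab  = np.array([2.0, 1.6, 1.2, 0.9, 0.7, 0.5])
vbias_v_tab = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])

def phase_to_vd(required_phase_deg):
    """Map a required per-cell reflection phase to a varactor bias voltage V_D."""
    # np.interp needs monotonically increasing x values, hence the reversed tables.
    cv = np.interp(required_phase_deg,
                   phase_deg_tab[::-1], cap_pf_tab[::-1])      # phase -> C_V
    vd = np.interp(cv, cap_cv_tab[::-1], vbias_v_tab[::-1])    # C_V  -> V_D
    return vd

# Per-cell phases from Eq. (2) (see the earlier sketch) -> per-cell bias voltages:
# vd_table = [phase_to_vd(np.degrees(p)) for p in profile]
```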
This data set of V D must be loaded as a look-up table in the control unit for each angle of reflection. Then, V D is quantized to 4096 levels by a sequence of 12 bits in a signed-fixed-point format. This quantized data is transferred to a 96-channel digital to analog converter (DAC) with model number of DAC60096 [39] via an ATMega32 microcontroller using synchronous serial peripheral interface (SPI) protocol. The DAC provides ±10.5v unbuffered, bipolar voltage outputs. Hence, a set of operational amplifiers (OpAmps) with model number of OPAx277 are used to achieve a buffered voltage. These OpAmps provide a high common-mode rejection, ultra-low offset drift and voltage, power supply rejection, quad output, and a relatively wide swing of output voltage. The produced buffered V D would then be applied to bias the diodes in order to reconfigure the response of the whole surface. The schematic view of the control unit's main subsections is presented in Fig. 7. C. TEST-BED SETUP AND EXPERIMENTAL RESULTS To validate the performance of the developed RIS, a measurement campaign was conducted to emulate a real-world blockage scenario. The measurement was performed across two rooms where there was no LoS between the Tx and the Rx. 8 shows the respective environmental setup. The Tx system was fixed in Room#1 with a horn antenna at 1.6 m above the ground. This antenna was beamed to Room#2 in such a way that Room#2 was partially under coverage. The RIS was in Room#2 under the coverage area of the Tx. The Rx system, with a horn antenna like that of Tx, was kept in Room#2 where there was not a sufficient level of coverage. We employed a software-defined-radio (SDR) system to stream a video signal over the carrier frequency of 3.5 GHz using QPSK modulation with a forward error correction (FEC) rate of 7/8. It is worth noting that the use of a directional antenna can minimize the scattering effects from the surrounding environment to make sure that the received signals are predominantly reflected beams from the RIS and not from the potential multipath signals. At the Rx side, without the RIS, the signal level was lower than the minimum required sensitivity level for the video stream to be decoded. The designed RIS was then introduced into the scenario such that the surface can ''see'' the Tx. Then, the RIS configures itself in order to direct the reflected waves toward the Rx at a specific angle. As a result, the Rx was able to receive the signal with a strength high enough to decode the video successfully. Fig. 9 shows the received signal characteristics at the SDR demodulator when the RIS is switched-off and when the RIS is configured for the case of θ r = 45 • . In order to analyze the received power pattern of the reflected beams in the measurement scenario, the received power was recorded at steps of 5 • for a constant transmit power. The relative angle between the surface aperture and Rx was changed from 10 • to 90 • while the radial distance between the surface and Rx was kept constant. The relative angle has been precisely tuned by using a laser meter along with the angle measurement tool. Based on the measurements conducted for the prototype RIS, three cases of θ r = {15 • , 30 • , 45 • } are being presented in Fig. 10 to show the measured normalized received power. It can be seen from this figure that the surface has the capability to generate a desired response for different respective configurations. 
During the measurement campaign, it was observed that though the reflected beams are dominant, there are some occasional strong components caused by the objects in our indoor environment especially at θ = 30 • . The impact of this can be seen in the measurements for all configurations of the RIS, as shown in Fig. 10, including for θ r of 15 • and 45 • and even when the surface is in the off-state. It is worth mentioning that when the surface reconfigures itself corresponding to a specific direction of interest, the signal level at the receiver side was observed to be enhanced by more than 15 dB comparing to the case when the surface is powered off. IV. CONCLUSION The ever-increasing demand for seamless connectivity is a major factor that is driving the evolution of a new ecosystem in wireless communications. As part of the current 5G and future 6G wireless networks, several new technologies are expected to be introduced for this new paradigm. RIS is one among such technologies that can be a game-changer in ensuring coverage provisioning for future wireless networks. In this article, we present a realistic evaluation of this technology along with a real-world demonstration to showcase how this technology can provide connectivity at coverage blind-spots. The proposed RIS consists of a reflecting surface comprising of the designed unit cell and a controlling system to reconfigure the system's response at the operating frequency of 3.5 GHz. The reflecting surface is made up of 2430 conducting scatterers, with a diode mounted on each one of them. The controlling system governs the DC voltage difference across the diodes so that the desired macroscopic responses can be obtained for each configuration. An equivalent circuit model is derived to provide a perspective on the unit cell's performance. Furthermore, a method is proposed to measure the unit cell characteristics using the macroscopic response of the reflecting surface. The fabricated structure can incline the reflected beam toward the direction of interest, based on the principles of the generalized Snell's law. RAHIM TAFAZOLLI (Senior Member, IEEE) has been a Professor of mobile and satellite communications, since April 2000, and the Director of ICS, since January 2010. He is the Founder and the Director with the 5G Innovation Centre, University of Surrey, U.K. He has more than 25 years of experience in digital communications research and teaching. He has authored and coauthored more than 500 research publications. He is co-inventor on more than 30 granted patents, all in the field of digital communications. He is regularly invited to deliver keynote talks and distinguished lectures to international conferences and workshops. In 2011, he was appointed as a fellow of Wireless World Research Forum (WWRF) in recognition of his personal contributions to the wireless world and the heading one of Europe leading research groups. He is regularly invited by many governments for advice on 5G technologies. He was an Advisor to the Mayor of London in regard to the London Infrastructure Investment 2050 Plan, from May to June 2014. He has given many interviews to international media in the form of television, radio interviews, and articles in international press. VOLUME 10, 2022
Near-Threshold Voltage Design Techniques for Heterogenous Manycore System-on-Chips : Aggressive power supply scaling into the near-threshold voltage (NTV) region holds great potential for applications with strict energy budgets, since the energy e ffi ciency peaks as the supply voltage approaches the threshold voltage (V T ) of the CMOS transistors. The improved silicon energy e ffi ciency promises to fit more cores in a given power envelope. As a result, many-core Near-threshold computing (NTC) has emerged as an attractive paradigm. Realizing energy-e ffi cient heterogenous system on chips (SoCs) necessitates key NTV-optimized ingredients, recipes and IP blocks; including CPUs, graphic vector engines, interconnect fabrics and mm-scale microcontroller (MCU) designs. We discuss application of NTV design techniques, necessary for reliable operation over a wide supply voltage range—from nominal down to the NTV regime, and for a variety of IPs. Evaluation results spanning Intel’s 32-, 22- and 14-nm CMOS technologies across four test chips are presented, confirming substantial energy benefits that scale well with Moore’s law. Introduction Near-threshold computing promises dramatic improvements in energy efficiency. For many CMOS designs, the energy consumption reaches an absolute minimum in the NTV regime that is of the order of magnitude improvement over super-threshold operation [1][2][3]. However, frequency degradation due to aggressive voltage scaling may not be acceptable across all single-threaded or performance-constrained applications. The key challenge is to lock-in this excellent energy efficiency benefit at NTV, while addressing the impacts of (a) loss in silicon frequency, (b) increased performance variations and (c) higher functional failure rates in memory and logic circuits. Enabling digital designs to operate over a wide voltage range is key to achieving the best energy efficiency [2], while satisfying varying application performance demands. To tap the full latent potential of NTC, multi-layered co-optimization approaches that crosscut architecture, devices, design, circuits, tool flows and methodologies, and coupled with fine-grain power management techniques are mandatory to realize NTC circuits and systems in scaled CMOS process nodes. The overarching goal of this work is to advance NTV computing, demonstrate its energy benefits, to quantify and overcome the barriers that have historically relegated ultralow-voltage operation to niche markets. We present four multi-voltage designs across three technology nodes, featuring many-core SoC building blocks. The IPs demonstrate wide dynamic power-performance range, including reliable NTV regime operation for maximum energy efficiency. Key innovations in NTV Packet-switched routers are communication systems of choice for modern many-core SoCs [12,13]. The third NTV design describes a 2 × 2 2-D mesh network-on-chip (NoC) fabric (Figure 1c) which incorporates a 6-port, 2-lane packet-switched input-buffered wormhole router as a key building block [10]. The resilient NTV-NoC and router incorporates end-to-end forward error correction code (ECC) and within router recovery from transient timing failures using error-detection sequential (EDS) circuits and a novel architectural flow control units (FLIT) replay scheme. 
The router operates across a wide frequency (voltage) range from 1 GHz (0.85 V) to 67 MHz (340 mV), dissipating 28.5 mW The final NTV prototype showcases a wireless sensor node (WSN) platform that integrates a mm-scale, 0.79 mm 2 NTV IA-32 Quark™ microcontroller (Figure 1d) (MCU) [14,15], built using a 14-nm 2nd generation tri-gate CMOS process. The WSN platform includes a solar cell, energy harvester, flash memory, sensors and a Bluetooth Low Energy (BLE) radio, to enable always-on always-sensing (AOAS) and advanced edge computing capabilities in Internet-of-Things (IoT) systems [11]. The MCU features four independent voltage-frequency islands (VFI), a low-leakage SRAM array, an on-die oscillator clock source capable of operating at sub-threshold voltage, power-gating and multiple active/sleep states, managed by an integrated power management unit (PMU). The MCU operates across a wide frequency (voltage) range of 297 MHz (1 V) to 0.5 MHz (308 mV) and achieves 4.8× improvement in energy efficiency at an optimum supply voltage (V OPT ) of 370 mV, operating at 3.5 MHz. The WSN, powered by a solar cell, demonstrates sustained MHz AOAS operation, consuming only 360 µW. This paper is organized as follows: Section 2 describes various NTV design techniques for SRAM and logic circuits. Architecture driven adaptive mechanisms to address higher functional failure rates and variation-tolerant resiliency at NTV for SoC fabrics are described in Section 3. Section 4 presents the tools, flows and recipes for wide-dynamic range design. In addition, solutions for multi-voltage global clock generation and distribution are introduced. Key experimental results from measuring all four prototypes are presented, analyzed, and discussed in Section 5. Finally, Section 6 concludes the paper and suggests future work. NTV Circuit Design Methodology The most common limit to voltage scaling is failure of SRAM and logic circuits. SRAM cells fail at low voltage because device mismatches degrade stability of the bit-cell for read, write or data retention. SRAM cells typically use the smallest transistors. Also, they are the most abundant among all circuit types on a die. Therefore, the V min of the SRAM cell array limits V min of the entire chip. Logic circuits, clocking, and sequentials fail at low voltage because of noise and process variations. Alpha and cosmic ray-induced soft errors cause transient failure of memory, sequentials, and logic at NTV. Frequency starts degrading exponentially as the supply voltage approaches V T . This sets a limit on V min . This limit can be alleviated to some extent by tri-gate transistors. Since they have a steeper sub-threshold swing, they can provide a lower V T for the same leakage current target. Aging degradations cause failure of SRAM cells at low voltages since different transistors in the cell undergo different amounts of V T shift under voltage-temperature stress and thus worsen device mismatches in the bit-cells. All these effects degrade and limit V min . The following sections describe low-voltage design techniques used for SRAM memory, combinational cells, sequentials and voltage level shifters circuits. SRAM Memory and Register File (RF) Optimizations An 8-T SRAM cell (Figure 2a) is commonly used in single-V DD microprocessor cores, particularly in performance critical low-level caches and multi-ported register-file arrays. The 8-T cell offers fast simultaneous read and write, dual-port capability, and generally lower V min than the 6-T cell. 
With independent read and write ports in the 8-T cell, significantly improved read noise margins can be realized over the traditional 6-T SRAM cell, at an additional area expense. The noise margin improvement is due to the elimination of the read-disturb condition of the internal memory node by the introduction of a separate read port in the SRAM cell. As a result, variability tolerance is greatly enhanced, making it a desirable design choice for ULP SRAM memory operating at lower supply voltages down to NTV and energy-optimum points. The 8-T bit-cell is still prone to write failures due to write contention between a strong PMOS pull-up and a weak NMOS transfer device across PVT variation. This contention becomes worse as V DD is lowered, limiting V min . A variation-tolerant dual-ended transmission gate (DETG) cell is implemented on the 22-nm NTV-SIMD register file array by replacing the NMOS transfer devices with full transmission gates (Figure 2b). This design enables a strong "1" and "0" write on both sides of the cross-coupled inverter pair. The DETG cell always has two NMOS or two PMOS devices to write a "1" or "0" on nodes bit and bitx. This inherent redundancy averages the random variation effect across the transistors, improving both contention and write-completion. Moreover, the cell is symmetric with respect to PMOS and NMOS skew, which reduces the effect of systematic variation. DETG cell simulations show a 24% improvement in write delay, allowing a 150 mV reduction in write V min . However, the DETG cell is contention limited at its write V min , which can be reduced by the shared P/N circuits. An always "ON" PMOS and NMOS pair is shared across the virtual supplies of eight DETG cells (Figure 2e). The shared P/N circuit limits the strength of the cross-coupled inverters across variations, reducing write contention by 22%. This circuit optimization results in an additional 125 mV write V min reduction compared to DETG, enabling an overall 275 mV write V min reduction when compared to the 8-T SRAM cell. Caches in the 32-nm NTV-CPU use a modified, single-ended and fully interruptible 10-T transmission gate (TG) SRAM bit-cell (Figure 2c), which allows for contention-free write operations. This topology enables a 250 mV improvement in write V min over an 8-T bit-cell. With this improvement, bit-cell retention now becomes a key V DD limiter. The simulated retention voltage data for the 10-T TG SRAM, as a function of keeper device size (m9, m10) and in the presence of random variations (5.9σ, slow skew, −25 °C), is shown in Figure 2d. Clearly, larger keeper devices lower the retention voltage.
The keeper device is increased from 140 nm to 200 nm to realize a 550 mV retention V min target. For reliable read operation, the bit-lines incorporate a scan-controlled, programmable stacked keeper, which can be configured to three or four PMOS device stacks to reduce read contention and improve read V min across the wide operating voltage/frequency range.

To achieve low standby power in the WSN, all on-die memories and caches on the 14-nm NTV-MCU use a custom 8-T (Figure 2a), 0.155 µm² bit-cell, built using 84-nm gate pitch ultra-low power (ULP) transistors [14]. The 8-T bit-cell provides a well-balanced trade-off in V min and area over the 6-T and 10-T SRAM cells. The ULP-transistor-optimized memory arrays are designed for low standby leakage. However, as summarized in Table 2, a 5× performance slowdown is estimated over a standard-performance (SP) transistor 8-T memory at 500 mV, which is still fast enough for edge-compute applications. Context-aware power gating of each 2 KB array is supported for further leakage reduction, with no state retention. The ULP array also enables 26× lower leakage (at a 500 mV supply) at a 55% area cost over an SP-based 8-T memory array drawn on a 70 nm gate pitch. The ULP memory leakage scales from 114 pA per bit at 1 V down to 8.28 pA per bit at the retention limit of 308 mV, as measured at room temperature (25 °C).

Process, voltage and temperature (PVT) and aging-adaptive on-die boosting of the read word-line (RWL) and write word-line (WWL), a common circuit-assist technique for further lowering SRAM V min, is described in [16,17]. Boosting the RWL enables a larger read "ON" current without forcing a larger PMOS keeper. Boosting the WWL helps write V min for two reasons: it improves contention without upsizing the NMOS pass device (or lowering its V TH), and it improves write completion by writing a "1" from the other side. At iso-array area, on-die WL boosting achieves twice as much V min reduction as bit-cell upsizing [16]. However, word-line boosting requires an integrated charge pump, or another method for generating a boosted voltage on die.
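To put the per-bit retention leakage figures above into perspective, the short sketch below scales them up to array level. It only multiplies bits by per-bit current and supply voltage, so it is a floor estimate that ignores peripheral and word-line driver leakage; the 64 KB SMEM and 2 KB bank sizes are the ones quoted for this MCU later in the paper.

```python
# Back-of-envelope standby power for the ULP SRAM using the per-bit retention
# leakage quoted above (8.28 pA/bit at 308 mV, 114 pA/bit at 1 V). Peripheral
# and word-line driver leakage are ignored, so these are floor estimates only.

BITS_PER_KB = 1024 * 8

def array_standby_power(size_kb, leak_per_bit_a, vdd_v):
    """Bit-cell leakage power of a memory array: bits * per-bit current * supply."""
    bits = size_kb * BITS_PER_KB
    return bits * leak_per_bit_a * vdd_v  # watts

# 64 KB SMEM held at the 308 mV retention limit vs. kept at 1 V.
for vdd, i_bit in [(0.308, 8.28e-12), (1.0, 114e-12)]:
    p = array_standby_power(64, i_bit, vdd)
    print(f"64 KB SMEM @ {vdd * 1e3:.0f} mV: ~{p * 1e6:.2f} uW of bit-cell leakage")

# A single 2 KB bank that is power-gated contributes essentially nothing,
# which is why per-bank, context-aware power gating is worthwhile.
print(f"one 2 KB bank @ 308 mV: ~{array_standby_power(2, 8.28e-12, 0.308) * 1e9:.0f} nW")
```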
Combinational Cells Design Criteria

Circuits are optimized for robust and reliable ultra-low-voltage operation. A variation-aware pruning is performed on the standard cell library to eliminate circuits which exhibit DC failures or extreme delay degradation at NTV due to reduced transistor on/off current ratios and increased sensitivity to process variations. Simulated 32-nm normalized gate delays as a function of V DD for logic devices in the presence of random variations (6σ) are presented in Figure 3. Complex logic gates with four or more stacked devices and wide transmission-gate multiplexers with four or more inputs are pruned from the library because they exhibit more than 108% and 127% delay degradation compared to three-stack gates and three-wide multiplexers, respectively (Figure 3a,b). Critical timing paths are designed using low-V T devices because high-V T devices show a 76% higher delay penalty at a 300 mV supply in the presence of variation (Figure 3c). All minimum-sized gates with transistor widths less than 2× the process-allowed minimum (Z MIN) are filtered from the library due to a 130% higher variation impact (Figure 3d), and the use of single-fin-width devices is limited in the 22-nm and 14-nm logic designs.

Figure 3. Simulated 32-nm normalized gate delays (y-axis) vs. supply voltage for logic devices in the presence of random variations (6σ). To limit excessive gate delays at NTV, the data indicate that: (a) transistor stack sizes need to be limited to three, as do (b) pass-gate multiplexer widths; (c) high-V T devices have a 76% higher delay penalty over nominal-V T flavors due to variations; and (d) minimum-width (1×, Z MIN) devices show 130% higher delay at 500 mV, requiring restricted use.
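The pruning criteria above amount to a simple rule-based filter over the cell library. The sketch below expresses those rules in code; the cell records, their delay-degradation numbers and the exact thresholds are illustrative stand-ins, not the actual 32-nm library data.

```python
# Schematic sketch of the variation-aware library pruning rule described above.
# Each record carries the cell's simulated 6-sigma delay degradation at an NTV
# corner relative to its nominal delay; the entries are made-up examples.

CELLS = [
    {"name": "nand2_x2", "stack": 2, "mux_width": 1, "min_width": False, "delay_degradation": 0.45},
    {"name": "aoi4_x1",  "stack": 4, "mux_width": 1, "min_width": False, "delay_degradation": 1.10},
    {"name": "mux4_tg",  "stack": 2, "mux_width": 4, "min_width": False, "delay_degradation": 1.30},
    {"name": "inv_x0p5", "stack": 1, "mux_width": 1, "min_width": True,  "delay_degradation": 1.35},
    {"name": "nor3_x2",  "stack": 3, "mux_width": 1, "min_width": False, "delay_degradation": 0.60},
]

MAX_STACK = 3          # prune four or more stacked devices
MAX_MUX_WIDTH = 3      # prune four-wide (or wider) pass-gate muxes
MAX_DEGRADATION = 1.0  # prune >100% delay degradation at the NTV corner

def keep_cell(cell):
    """Return True if the cell survives the NTV pruning criteria."""
    return (cell["stack"] <= MAX_STACK
            and cell["mux_width"] <= MAX_MUX_WIDTH
            and not cell["min_width"]                 # restrict 1x Z_MIN devices
            and cell["delay_degradation"] <= MAX_DEGRADATION)

ntv_library = [c["name"] for c in CELLS if keep_cell(c)]
print("cells retained for NTV synthesis:", ntv_library)
```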
Sequential Circuit Optimizations

At lower supply voltages, degradation of the transistor I on/I off ratio and random and systematic process variations affect the stability of storage nodes in flip-flops. Conventional transmission-gate-based master-slave flip-flop circuits typically have weak keepers for the state nodes and larger transmission gates. During the state-retention phase, the on-current of the weak keeper contends with the off-current of the strong transmission gate, affecting state-node stability. Additionally, charge sharing between the internal master and slave nodes (write-back glitch) can result in a state bit-flip due to reduced noise margins at low V DD. The NTV-CPU therefore employs custom sequential circuits to ensure robust operation at lower voltages under process variations. A clocked CMOS-style flip-flop design (Figure 4) replaces the master and slave transmission gates with clocked inverters, thereby eliminating the risk of data write-back through the pass gates. In addition, keepers are upsized to improve state-node retention and are made fully interruptible to avoid contention during the write phase of the clock, thus improving V min.

Level Shifter Circuit Optimizations

NTV designs operating at low supply voltages require level shifters to communicate with circuits at higher voltages (e.g., I/O). Similar to register file writes, conventional CVSL level shifters are inherently contention circuits. The need for a wide-range, ultra-low-voltage level shifter to a high supply voltage further exacerbates this contention. The ultra-low-voltage split-output (ULVS) level shifter decouples the CVSL stage from the output driver stage and interrupts the contention devices, thus improving V min by 125 mV (Figure 6). Full interruption of the contention devices occurs for voltages V in ≥ V out, while for V in < V out the contention devices are only partially interrupted, which is still beneficial at low voltages. For equal fan-in and fan-out, the ULVS level shifter weakens the contention devices, thereby reducing power by 25% to 32%.
Architecture Driven NTV Resilient NoC Fabrics

Architectural techniques can help regain some of the performance lost from aggressive V DD reduction. The limits of NTC-based parallelism for reclaiming performance have been discussed in [18]. Dynamic adaptation techniques have been shown to monitor the available timing margin and guard bands in the design and to dynamically modulate the voltage/frequency (V/F), thus preventing the occurrence of timing errors [19]. Architecture-assisted resilient techniques, on the other hand, are more aggressive with the V/F push: errors are allowed to happen, they are detected, and they are then corrected using appropriate replay mechanisms. Replica-path-based methods such as tunable replica circuits (TRC) have been proposed [20] for error detection in flip-flop-based static CMOS logic blocks. In this approach, a set of replica circuits is calibrated to match the critical-path pipeline stage delay, and timing errors are detected by double-sampling the TRC outputs. The key requirement is that the TRC must always fail before the critical path fails. The TRC is an area-efficient and non-intrusive technique, but it cannot leverage the probabilities of critical-path activation, multiple simultaneous switching at the inputs of complex gates, or worst-case coupling from adjacent signal lines. An alternative, in-situ approach for timing error detection uses error-detection sequentials (EDS) in the critical paths of the pipeline stage. Timing errors are detected by a double-sampling mechanism using a flip-flop and a latch (Figure 8b) [21]. Errors are corrected by performing a replay operation at a higher V or lower F. The V/F can also be adapted by monitoring the error rate and accounting for error-recovery overheads.
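The double-sampling idea behind EDS can be captured in a few lines of behavioral pseudocode: the main flip-flop samples the data at the clock edge, a shadow element samples it again a short window later, and a mismatch flags a late transition. The sketch below is such a toy model; timing is reduced to two pre-computed sample values, and the class and function names are ours for illustration, not from any EDS library or from the designs above.

```python
# Behavioral toy model of error-detection sequentials (EDS): the main flip-flop
# samples at the clock edge, while a shadow element samples again after a
# detection window. If the late sample differs from the edge sample, a timing
# error is flagged and the pipeline must replay. Values are illustrative.

from dataclasses import dataclass

@dataclass
class CaptureResult:
    data: int
    error: bool

def eds_capture(value_at_edge: int, value_after_window: int) -> CaptureResult:
    """Compare the edge-sampled value with the late (shadow) sample."""
    late_arrival = value_at_edge != value_after_window
    return CaptureResult(data=value_after_window, error=late_arrival)

# Cycle 1: data settled before the edge, so both samples agree (no error).
print(eds_capture(value_at_edge=1, value_after_window=1))
# Cycle 2: data arrived inside the detection window, so the samples mismatch;
# the error flag triggers a replay at higher V or lower F.
print(eds_capture(value_at_edge=0, value_after_window=1))
```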
NoCs have rapidly become the accepted method for connecting a large number of on-chip components, and packet-switched routers are key building blocks of NoCs [13]. Margins on the operating V/F used to guarantee error-free operation limit the achievable energy efficiency and performance at V OPT. While error-correction codes (ECC) have previously been used to mitigate transient failures in routers [22], the associated performance and energy overheads can be significant for the detection and correction of multi-bit failures. Timing error detection using EDS has been used for processor pipelines with minimal overhead [21]. An NTV router, designed in a 22-nm node and enhanced with EDS and a FLIT replay scheme, provides resilience to multi-bit timing failures for on-die communication. The goal is to evaluate the performance and energy benefits of a single-error-correction, double-error-detection (SECDED) ECC method against an EDS-based approach, from nominal V DD down to NTV.

Resilient Router Architecture and Design

The 6-port packet-switched router in the 2 × 2 2-D mesh NoC fabric communicates with the traffic generator (TG) via two local ports and with neighboring routers using four bidirectional, 36-bit, 1.5 mm long on-die links (Figure 1c). Inbound router FLITs are buffered in a 16-entry, 36-bit wide FIFO (Figure 8a). The most critical timing path in the router consists of request generation, lane and port arbitration, and FIFO read, followed by a fully non-blocking crossbar (XBAR) traversal. Any failure in this timing path is detected by the EDS circuit (Figure 8b) embedded in the output pipe stage (STG 2). The two-cycle EDS-enhanced router can be run in two modes, with and without error detection. The TG contains SECDED logic which appends or retrieves nine ECC bits from a packet's tail FLIT, thus allowing end-to-end detection and correction of errors in the payload. A programmable noise injector [21] is introduced at each node on the V NoC supply to induce noise events during packet transmission.
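As a side note on the nine ECC bits mentioned above: for a classic Hamming-style SECDED code, r check bits plus one overall parity bit cover up to 2^r − r − 1 data bits, so nine check bits are enough for payloads of up to 247 data bits. The router's exact payload partitioning is not spelled out here, so the payload sizes in the sketch below are merely examples of that arithmetic.

```python
# SECDED check-bit budget for a Hamming-style code: r parity bits such that
# 2**r >= k + r + 1, plus one overall parity bit for double-error detection.
# The example payload sizes are illustrative; the text only states that nine
# ECC bits ride in the packet's tail FLIT.

def secded_check_bits(data_bits: int) -> int:
    r = 1
    while (1 << r) < data_bits + r + 1:
        r += 1
    return r + 1  # +1 overall parity bit for the "DED" part

for k in (32, 64, 128, 247):
    print(f"{k:4d} data bits -> {secded_check_bits(k)} SECDED check bits")
```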
The router control logic recovers from timing failures by saving critical state for the last two FLIT transmissions (Figure 9a). In the event of a timing failure, the Error signal generated by the EDS circuit in STG 2 is captured along with the erroneous FLIT in the recipient's FIFO, which is modified to accommodate an additional error bit as shown. Forward error correction is achieved by qualifying the FIFO output with the Error flag. In the router with the timing failure, the Error signal is latched to mitigate metastability. This synchronized Error flag is then used to roll back the arbiters and FIFO read pointers to the previous functionally correct state, and the current FLIT is forwarded again as part of the replay. Error synchronization and roll-back incur two clock cycles of delay between an error event and successful recovery (Figure 9b). To avoid min-delay failures at STG 2, a clock with scan-tunable duty-cycle control is implemented for the EDS latches. Additional min-delay buffers are inserted in the crossbar data path for added hold margin, at a 2.4% area cost. In addition, the resilient router incurs the following overheads: (a) about 2.5% of the router sequentials are converted to EDS; (b) enabling replay causes a 10.5% increase in sequential count with a 1.6% area overhead; and (c) the power overhead for the entire router is 8.7%, with a 2.8% area cost.
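The replay mechanism trades a small per-error penalty for the ability to run without large voltage/frequency guard bands. A first-order way to see where the gains saturate is to charge each detected error the two recovery cycles quoted above plus the retransmission itself; the sketch below does exactly that. The per-FLIT error probabilities are made-up what-if values, and the 151 MHz figure is simply the 400 mV F MAX reported later in Section 5, used here as a convenient clock number.

```python
# First-order throughput model for the EDS + replay router: every FLIT that
# hits a timing error costs REPLAY_CYCLES extra cycles (the two-cycle
# roll-back quoted above) plus one cycle for the retransmission.
# The per-FLIT error probabilities are arbitrary what-if values.

REPLAY_CYCLES = 2

def effective_throughput(fclk_hz: float, p_error: float) -> float:
    """Accepted FLITs per second when each error adds a replay penalty."""
    cycles_per_flit = 1.0 + p_error * (REPLAY_CYCLES + 1)  # bubbles + retry
    return fclk_hz / cycles_per_flit

for p in (0.0, 0.01, 0.1, 0.5):
    bw = effective_throughput(151e6, p)   # 151 MHz, the 400 mV F_MAX from Section 5
    print(f"error rate {p:>4.0%}: ~{bw / 1e6:6.1f} MFLIT/s per port")
```

The measured bandwidth curve in Figure 20 includes additional effects (correlated multi-bit failures, control-bit corruption past PoFF), so this model only illustrates the saturation trend, not the silicon data.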
Designing for Wide-Dynamic Range: Tools, Flows and Methodologies

Device optimizations need to work in concert with automated CAD design flows for optimal results. The 14-nm NTV-WSN design uses all four transistor families available in the 14-nm second-generation tri-gate SoC platform technology [14]: HP, standard-performance (SP), ULP and thick-gate (TG). To minimize variation-induced skews, the clock distribution is designed entirely with HP devices; the lower threshold voltage (V T) of the HP devices allows improved delay predictability on the clock paths at NTV. SP devices are used for 100% of the logic cells to achieve sufficient speed during the active mode of operation, with the memory using ULP transistors for low standby power. The bidirectional CMOS I/O circuits are designed using high-voltage (1.8 V) TG transistors.

The cell library for the wide operational range is characterized at 0.5 V, 0.75 V and 1.05 V V DD corners for design synthesis and timing convergence, and is optimized for robust and reliable ultra-low-voltage operation. Statistical static timing analysis (SSTA), a method which replaces the normal deterministic timing of gates and interconnects with probability distributions and provides a distribution of possible circuit outcomes, is employed [23,24]. As discussed in Section 2.2, a variation-aware SSTA study is performed on the standard cell library to eliminate circuits which exhibit DC failures or extreme delay degradation due to reduced transistor on/off current ratios and increased sensitivity to process variations. As a result, the standard cell library was conservatively constrained for use in the NTV-optimized designs.

Achieving the performance targets across the entire voltage range is challenging, since critical-path characteristics change considerably due to the non-linear scaling of device delay and the disproportionate scaling of device versus interconnect (wire) delay. It is critical to identify an optimal design point such that the targeted power and performance are achieved at a given corner without a significant compromise at the other corner. Synthesis corner evaluations for the NTV-CPU (Figure 10a) suggest that 0.5 V, 80 MHz synthesis achieves the target frequency at both 0.5 V (80 MHz) and 1.05 V (650 MHz). In comparison, 1.05 V synthesis does not sufficiently size up the device-dominated data paths which become critical at lower voltages, resulting in 40% lower performance at 0.5 V. Although 1.05 V synthesis achieves lower leakage and better design area, the 0.5 V corner was selected for the final design synthesis of the NTV prototypes, considering its low-voltage performance benefits and its promise for a wide operational range. Performance, area and power metrics at the two extreme design corners in a 32-nm node are presented in Figure 10b. For subsequent NTV prototypes, a multi-corner design performance verification (PV) methodology that simultaneously co-optimizes timing slack across all three performance corners was developed. This PV approach ensures that performance targets are met across the wide voltage operational range.
The method accounts for the non-linear scaling of device delays in the critical path versus interconnect delay scaling across the wide V DD range. At low voltages, the severe effects of process variations result in path-delay uncertainties and may cause setup (max) or hold (min) violations. Setup violations can be corrected by frequency binning; hold violations, however, can cause critical functional failures. The design timing-convergence methodology is therefore enhanced to consider the effect of random variations and to provide enough variation-aware hold-margin guard-bands for robust NTV operation.
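For intuition on why the statistical treatment matters here, the sketch below contrasts a deterministic per-gate worst-case sum with an SSTA-style combination of independent random variations along a single path: with independent per-stage variation the path sigma grows with the square root of the stage count, so the statistical bound is far less pessimistic. The stage count, nominal delay and sigma are invented for illustration and are not taken from the designs above.

```python
# Sketch of the statistical treatment behind SSTA: for a chain of N gates with
# independent random delay variation, the path sigma grows as sqrt(N) rather
# than N, so a statistical 6-sigma bound is far less pessimistic than summing
# per-gate worst cases. All numbers below are invented for illustration.

import math

N_STAGES = 20
MEAN_GATE_DELAY_PS = 250.0   # per-gate nominal delay at an NTV corner (assumed)
SIGMA_GATE_PS = 75.0         # per-gate random sigma at that corner (assumed)
K = 6.0                      # design to 6 sigma, as in the analyses above

nominal = N_STAGES * MEAN_GATE_DELAY_PS
worst_case = N_STAGES * (MEAN_GATE_DELAY_PS + K * SIGMA_GATE_PS)
statistical = nominal + K * SIGMA_GATE_PS * math.sqrt(N_STAGES)

print(f"nominal path delay      : {nominal:8.0f} ps")
print(f"per-gate worst-case sum : {worst_case:8.0f} ps")
print(f"SSTA-style 6-sigma bound: {statistical:8.0f} ps")
```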
NTV Clocking Architecture

A calibrated ring oscillator (CRO) serves as a low-power, on-chip, high-frequency (MHz) clock source for the 14-nm NTV-MCU. The CRO is a frequency-locked loop (Figure 11a) that uses an RTC as a reference to generate a MHz clock output. Internally, the CRO tracks the frequency of oscillation of a ring oscillator and generates a delay code that adjusts the oscillation frequency to closely match the target frequency based on the reference clock. The CRO can operate in (1) closed-loop mode, where it accurately tracks the target frequency, as well as in (2) open-loop mode at ultra-low voltages, producing a clock in the tens of kHz, enough for always-on (AON) sensing operation on the MCU. Silicon characterization data for the CRO are presented in Figure 11b. The on-die CRO locks to a wide range of target frequencies from 1 V down to 0.4 V. The CRO dissipates 60 µW (450 mV) while generating a 16 MHz output to clock the MCU at V OPT. In open-loop operation, the CRO is functional down to a deep sub-threshold voltage of 128 mV, dissipating 3.8 µW while generating a 7 kHz clock output. The CRO achieves a measured clock-period jitter of 4.6 ps at 400 MHz operation.

The low-V DD global clock distribution network on the NTV-CPU (Figure 11c) is designed with low-V T devices to minimize clock skew across logic and memory voltage-domain crossings, across the entire operating voltage range, and considering the effect of random variations. The clock tree incorporates two-stage level shifters and programmable delay buffers in the clock path. The level shifters in the clock path track the delay of the data-path level shifters. In addition, programmable lookup-table-based delay buffers can be tuned to compensate for any inter-block skew variations. SSTA (6σ) variation analysis shows a 50% skew reduction at 0.5 V from clock delay tuning (Figure 11d).
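The closed-loop behavior of the CRO can be illustrated with a few lines of control pseudocode: count ring-oscillator cycles inside an RTC reference window and step the delay code until the count matches the target. Everything in the sketch below is a placeholder for illustration (the 32.768 kHz reference, the linear code-to-frequency curve, the function names); the silicon tuning curve and loop behavior are of course different and are not described at this level of detail in the text.

```python
# Behavioral sketch of a calibrated ring oscillator (CRO) loop: count RO
# cycles inside an RTC reference window and bump the delay code up or down
# until the count matches the target. The linear code-to-frequency model is a
# stand-in, not the real oscillator's tuning curve.

TARGET_HZ = 16e6              # e.g., the 16 MHz clock used at V_OPT
RTC_WINDOW_S = 1.0 / 32768    # one period of an assumed 32.768 kHz RTC reference

def ro_frequency(delay_code: int) -> float:
    """Placeholder tuning curve: higher code means shorter delay, higher frequency."""
    return 4e6 + 50e3 * delay_code

def fll_lock(target_hz: float, steps: int = 500) -> int:
    target_count = target_hz * RTC_WINDOW_S
    code = 0
    for _ in range(steps):
        count = ro_frequency(code) * RTC_WINDOW_S  # cycles seen in one window
        if count < target_count:
            code += 1      # too slow: reduce delay
        elif count > target_count:
            code -= 1      # too fast: add delay
        else:
            break
    return code

code = fll_lock(TARGET_HZ)
print(f"locked delay code: {code}, output ~{ro_frequency(code) / 1e6:.2f} MHz")
```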
NTV-CPU Results

The NTV processor is fabricated in a 32-nm CMOS process technology with nine layers of copper interconnect. Figure 12a shows the IA-32 die and core micrographs, with a core area of 2 mm². Figure 12b shows the packaged IA processor and the solar cell (1 square inch in area) used to power the core. The IA core is operational over a wide voltage range from 280 mV to 1.2 V. Figure 13 shows the measured total core power and maximum operational frequency (F max) across the voltage range, measured while running the Pentium Built-In Self-Test (BIST) in a continuous loop. Starting at 1.2 V and 915 MHz, core voltage and performance scale down to 280 mV and 3 MHz, reducing the total power consumption from 737 mW to a mere 2 mW. With the dual-V DD design, the memories stay at their measured V DD-min of 0.55 V while the IA core logic scales further down to 280 mV.

Figure 13 also plots the measured total energy per cycle across the wide voltage range, along with its dynamic and leakage components. Minimum-energy operation is achieved at NTV, with the total energy reaching a minimum of 170 pJ/cycle at 450 mV (V OPT), demonstrating a 4.7× improvement in energy efficiency compared to the V DD-max (1.2 V) corner. The pie charts in Figure 14 show the total core power breakdown across the super-threshold, near-threshold and sub-threshold regions. The contribution of logic dynamic power drops drastically, from 81% at V DD-max to only 4% at V DD-min (280 mV). The chip leakage power contribution as a proportion of total power starts increasing in the near-threshold voltage region and accounts for 42% of the total core power at V DD-opt. At the V DD-min point, the memories continue to stay at a higher V DD than the logic (550 mV), thus contributing 63% of the total core power.
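As a quick sanity check on these figures, energy per cycle is simply total power divided by clock frequency. The sketch below evaluates that ratio at the two quoted endpoints and compares the 1.2 V endpoint against the reported 170 pJ/cycle minimum; the quoted values are rounded, so the ratios are approximate.

```python
# Quick consistency check using only the measured figures quoted above:
# energy per cycle is total power divided by clock frequency. The quoted
# values are rounded, so the ratios are approximate.

points = {
    "1.2 V / 915 MHz": (737e-3, 915e6),
    "280 mV / 3 MHz":  (2e-3,   3e6),
}

for label, (power_w, freq_hz) in points.items():
    print(f"{label}: {power_w / freq_hz * 1e12:6.0f} pJ/cycle")

# Reported minimum is 170 pJ/cycle at 450 mV (V_OPT); relative to the
# 1.2 V endpoint this works out to roughly 4.7x, matching the stated
# energy-efficiency improvement.
print(f"improvement vs 1.2 V endpoint: {737e-3 / 915e6 / 170e-12:.1f}x")
```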
Figure 14. Measured NTV-CPU power breakdown across the wide voltage range. Note that the memory supply scales down to 550 mV, while the core logic operates well into the sub-threshold regime.

NTV-SIMD Engine Results

The SIMD permutation engine operates at a nominal supply voltage of 0.9 V and is implemented in a 22-nm tri-gate bulk CMOS technology featuring high-k metal-gate transistors and strained silicon. Figure 15 shows the die micrograph of the chip, with a total compute die area of 0.048 mm². The permutation engine with two-dimensional shuffle results in 36% to 63% fewer register file reads, writes and permutes compared to a conventional 256b shuffle-based implementation. The SIMD engine contains 439,000 transistors.
Frequency and power measurements for the SIMD engine components are presented in Figure 16, obtained by sweeping the supply voltage from 280 mV to 1.1 V in a temperature-stabilized environment at 50 °C. Chip measurements show that the register file and crossbar operate from 3 GHz (1.1 V) down to 10 MHz (280 mV). The register file dissipates 227 mW (1.1 V) and 108 µW (280 mV), respectively, while the permute crossbar consumes 69 mW down to 19 µW over the same V DD range. The maximum energy efficiency of 154 GOPS/W (1 OP = three 256b reads and one 256b write) is obtained at a supply voltage of 280 mV (V OPT) and is 9× higher than the efficiency at nominal voltage. The 256b byte-wise any-to-any permute crossbar executes horizontal shuffle operations down to supply voltages of 240 mV. A peak energy efficiency of 585 GOPS/W (1 OP = one 32-way 256b permutation) is achieved at a supply voltage of 260 mV, also a 9× improvement over the nominal-voltage efficiency.

NTV-NoC Measurement Results and Learnings

The 2 × 2 2-D mesh-based resilient NoC prototype is fabricated in a 22-nm, 9-metal-layer technology. Each router port features bidirectional, 36-bit wide, 1.5 mm long on-die links. The die area is 2.4 mm², with a NoC area of 0.927 mm² and a router area of 0.051 mm², as highlighted in the NoC die and layout photographs (Figure 17a,c). There are approximately 31,400 cells in each router. The experimental setup and key design characteristics are shown in Figure 17b,d. Silicon measurements are performed at 25 °C for a representative NoC traffic pattern, with FLIT injection at each router port every clock cycle at 10% data activity. The 2 × 2 NoC is functional over a wide operating range (Figure 18), with a maximum frequency (F MAX) of 1 GHz (0.85 V), 734 MHz (0.7 V) and 151 MHz (400 mV), scaling down to 67 MHz (340 mV). A 3.3× improvement in energy efficiency is achieved at a V OPT of 400 mV, with an aggregate router bandwidth (BW) of 3.6 GB/s. The measured NoC silicon logic-analyzer trace (Figure 19) shows a supply-noise-induced timing failure on the control bits of the packet header FLIT, followed by two cycles of bubble (null) FLITs and persistent retransmission (replay) of the FLIT until successful recovery. As shown, timing-error synchronization and roll-back incur a 2-cycle delay between an error event and successful recovery. Figure 20 plots the measured BW for the resilient router at 400 mV in the presence of a 10% V NoC droop induced by the on-die noise injectors. The number of erroneous FLITs increases exponentially with F CLK.
To account for such a droop, a non-resilient router must operate with 28% (700 mV) and 63% (400 mV) F CLK margins, respectively, thus limiting F MAX. The resilient router reclaims these margins and offers near-ideal BW improvement until higher error rates and FLIT replay overheads limit the overall BW gains. Past the point-of-first-failure (PoFF), both control and data bits are corrupted. While ECC can identify data-bit failures, control-bit failures can invalidate the entire FLIT, rendering any ECC scheme ineffective. Even if the control paths are designed with enough timing margin that the control bits do not fail, the F CLK gain from SECDED ECC is only 7% beyond PoFF, since several data bits fail simultaneously. In contrast, at 400 mV, the EDS scheme provides tolerance to multi-bit failures over a 9× wider F CLK range past PoFF. Compared to a conventional router implementation, the resilient router offers 28% higher bandwidth for a 5.7% energy overhead at 700 mV, and 63% higher bandwidth with a 14.6% energy improvement at 400 mV.
Resilience to Inverse Temperature Dependence Effects

As the supply voltage approaches V T, elevated (lowered) silicon temperature results in increased (decreased) device currents. This phenomenon is generally known as inverse temperature dependence (ITD) [25]. With process scaling and the introduction of high-κ/metal-gate stacks, devices exhibit a higher (negative) temperature coefficient along with weaker mobility temperature sensitivity [26]. This inverts the impact of a temperature rise on delay, particularly as V DD is lowered, where a small change in V T results in a large current change, requiring large timing margins for NTV designs. As device and V DD scaling exacerbate ITD, the need for characterizing and understanding ITD, and for incorporating adaptive architectures, becomes even more imperative. Measurements on the 22-nm NoC prototype indicate that ITD effects are observed at NTV, with router timing failures increasing as the die temperature decreases. The data in Figure 21 show that, at 400 mV operation, a 30 °C temperature decrease (from 40 °C to 10 °C) causes the percentage of failing FLITs to increase rapidly. However, the resilient router recovers from these transient timing failures through EDS error detection and the FLIT replay mechanism, improving BW and F CLK margins by 50% at 10 °C when compared to a non-resilient router design.
NTV-MCU Measurement Results and WSN Operation

The MCU is fabricated in a 14-nm tri-gate CMOS technology with nine metal interconnect layers (Figure 22). The MCU cell count is approximately 160 K and the die area is 0.79 mm² (0.56 mm × 1.42 mm). The surface-mount ball-grid-array (BGA) package has 24 pins and an area of 4.08 mm² (2.46 mm × 1.66 mm). The die photograph with key IP blocks identified, together with the design characteristics, is shown in Figure 23. This diminutive low-power MCU can serve as a key component of future autonomous, self-powered "smart dust" WSNs [27], which can sense, compute and wirelessly relay real-time information about the ambient environment.

The IA MCU is functional over a wide operating range (Figure 24), from 297 MHz (1 V) scaling down to 0.5 MHz (308 mV) at 25 °C. While the entire MCU is functional down to 308 mV, SMEM functionality was validated down to 300 mV by independently writing to and reading from it via the TAP debug interface. The ROM and the AHB logic are found to be functional down to 297 mV. With the MCU continuously executing a data-encryption workload (AES-128), the minimum energy point is observed at 370 mV (V OPT) at T = 25 °C. At V OPT, the MCU operates at 3.5 MHz and dissipates 58 µW, which translates to an energy-efficiency metric of 17.18 pJ/cycle. Compared to super-threshold operation at 1 V, NTV operation at V OPT achieves a 4.8× improvement in energy efficiency.
The MCU integrates 8 KB of instruction cache (I$) and 8 KB of data tightly coupled memory (DTCM). The DTCM functions as a local scratch-pad memory, offering low-latency (single-cycle) and deterministic access, which is particularly valuable for data-intensive workloads. For typical WSN workloads with a code footprint of ~16 KB, MCU energy can be further improved by enabling the I$ and DTCM. Enabling them exploits any code and data locality present in the application, thereby reducing the active power consumed in the AHB interconnect and in accesses to the large (64 KB) SMEM. Our experiments show that a 40% energy improvement is achievable from enabling both the I$ and the DTCM.

The WSN incorporating the NTV CPU operates continuously using the energy harvested by a 1 cm² solar cell from indoor light (1000 lux), with sensor data transmitted over the BLE radio. The measured WSN power profile in AOAS mode over a 4-minute interval is shown in Figure 25. In the AOAS operating mode (with BLE advertising and sensor polling every four seconds), the average power (P AVG) for the entire WSN is 360 µW, with the MCU contributing 290 µW (13 MHz, 0.45 V). The MCU power further drops to 120 µW in the deep-sleep state, in which the core (IA + AHB) and CRO domains are power gated while the AON logic remains powered on and driven by the RTC clock.
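A simple duty-cycling model helps interpret these numbers. Reading the 290 µW figure as the MCU's power while awake at 13 MHz and 120 µW as its deep-sleep power (the text is not fully explicit about this split), the average MCU power over a polling period depends on how long each wake-up lasts. That active-burst duration is not given, so the sketch below simply sweeps it as a what-if.

```python
# Duty-cycle model for the AOAS mode: the MCU wakes every POLL_PERIOD_S to
# service the sensor/BLE event and otherwise sits in deep sleep. Active and
# sleep power are the figures quoted above; the active-burst duration per
# wake-up is NOT given in the text and is swept here as a what-if parameter.

POLL_PERIOD_S = 4.0    # BLE advertising + sensor polling every four seconds
P_ACTIVE_W = 290e-6    # MCU while awake at 13 MHz, 0.45 V (assumed reading)
P_SLEEP_W = 120e-6     # MCU deep-sleep power

def average_mcu_power(active_time_s: float) -> float:
    duty = active_time_s / POLL_PERIOD_S
    return duty * P_ACTIVE_W + (1.0 - duty) * P_SLEEP_W

for t_active in (0.1, 0.5, 1.0, 2.0):
    print(f"{t_active:>4.1f} s active per 4 s period -> "
          f"~{average_mcu_power(t_active) * 1e6:5.1f} uW average MCU power")
```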
Conclusions and Future Work

NTV computing with a wide dynamic operational range offers the flexibility to provide performance on demand for a variety of workloads while minimizing energy consumption. The technology has the potential to permeate the entire range of computing, from ultra-energy-efficient servers, personal and mobile computing to self-powered WSNs. It allows the advantages of continued Moore's law scaling to be exploited to provide the highest energy efficiency for throughput-oriented parallel workloads without compromising performance. The overheads of NTV design techniques in complex SoCs must be carefully balanced against their impact on power-performance at the higher end of the operating regime. Adaptive designs with in-situ monitoring circuitry can help detect and fix timing errors dynamically, but at an added cost. Four case studies highlighting novel resilient architecture and circuit techniques, multi-voltage designs, and variation-aware design methodologies are presented for realizing robust NTV SoCs in scaled CMOS process nodes. In general, designs can trade off performance for reduced leakage power to realize better energy gains at NTV. The results demonstrate 3-9× energy benefits at NTV, and the proposed design automation methodology can indeed help achieve greater energy reduction. As future work, we intend to build unified reliability models for NTC circuits and systems and validate them against experimental data obtained across a wide voltage range.
15,507.8
2020-05-14T00:00:00.000
[ "Engineering", "Computer Science" ]
On the Effects of Social Class on Language Use: A Fresh Look at Bernstein's Theory Basil Bernstein (1971) introduced the notion of the Restricted and the Elaborated code, claiming that working-class speakers have access only to the former but middle-class members to both. In an attempt to test this theory in the Iranian context and to investigate the effect of social class on the quality of students' language use, we examined the use of six grammatical categories, including noun, pronoun, adjective, adverb, preposition, and conjunction, by 20 working-class and 20 middle-class elementary students. The results of chi-square operations at p < .05 corroborated Bernstein's theory and showed that working-class students were different from middle-class ones in their language use. Consistent with Bernstein's theory, the results obtained for the use of personal pronouns indicated that middle-class students were more person-oriented and working-class ones more position-oriented. Findings thus call for teachers' deliberate attention to learners' sociocultural variation to enhance mutual understanding and pragmatic success. Introduction The relationship between language and social class is both theoretically and empirically a key issue in critical discourse studies and sociolinguistic research. A major concern in the analysis of language and social class has been how language variation acts as a marker and instrument for social and racial stratification. As a result, language has been analyzed variously by linguists and sociologists. In the 1970s, the British sociologist Basil Bernstein conducted a study of working- and middle-class children. He argued for the existence of two quite distinct varieties of language use in society: the elaborated code and the restricted code, which he claimed to account for the relatively poor performance of working-class pupils in language-based subjects while they were scoring just as well as their middle-class peers in mathematical subjects. According to Atherton (2002), the essence of the distinction between the two codes is in what language is suited for. The restricted code works better than the elaborated code in situations where there is a great deal of shared and taken-for-granted knowledge in the group of speakers. This code is economical and rich, conveying a vast amount of meaning with few words, each of which has a complex set of connotations and acts like an index, pointing the hearer to a lot more information which remains unsaid. On the contrary, the elaborated code spells everything out, not because it is better, but because it is necessary so that everyone can understand it. It has to elaborate because the circumstances do not allow the speaker to condense. The elaborated code works well in situations where there is no prior or shared understanding and knowledge, and where more thorough explanation is required. If one is saying something new to someone s/he has never met before, s/he would most certainly communicate it in the elaborated code (Spring 2002). The sections that follow aim at shedding more light on Bernstein's theory by analyzing the effects of social class on language use in general, and on his proposed dichotomies between the two linguistic codes and modes of socialization (personal and positional) in particular.
Theoretical Framework Bernstein's (1971) theory can be explained in terms of three basic concepts of language codes, class, and control.He reformulated Restricted and Elaborated codes.The restricted code "employs short, grammatically simple, and often unfinished sentences of poor syntactic form; uses few conjunctions simply and repetitively; employs little subordination; tends toward a dislocated presentation of information; is rigid and limited in the use of adjectives and adverbs, makes infrequent use of impersonal subject pronouns; confounds reasons and conclusions; uses idioms frequently and makes frequent appeals to "sympathetic circularity" (Wardhaugh, 1992: 317).In contrast, the elaborated code "makes use of accurate grammatical order and syntax to regulate what is said; uses complex sentences that employ a range of devices for conjunction and subordination; employs prepositions to show relationships of both a temporal and logical nature; shows frequent use of the pronoun I; uses with care a wide range of adjectives and adverbs; is likely to arise in a social relationship which raises the tension in its members to select from their linguistic resources a verbal arrangement which closely fits specific referents" (Wardhaugh, 1992: 317). 'Control' refers to the role of families and their social control, the way of decision making in families and the relationship among the members.Bernstein (1972b) made a distinction between position-oriented and person-oriented families.In the former, language use is closely related to such matters as close physical contact among the members, a set of shared assumptions, and a preference for implicit rather than explicit meaning in communication.In personoriented families, on the other hand, language use depends on these factors less, and communication is more explicit and context-free.That is, it is less dependent for interpretation on such matters as physical surroundings.According to Bernstein, position orientation leads to a strong sense of social identity with some loss of personal autonomy, whereas person orientation fosters personal autonomy.Wardhaugh (1992, P. 360) Finally, Bernstein used Brandis's (1970) Social Class Index through which he analyzed the working-class and the middle-class by considering the frequencies of use of grammatical categories.The present study also uses these concepts and frameworks in its investigation of the relationship between language use and one's social class. 
Review of the Literature Bernstein's theory of language codes is perhaps one of the most challenging theories in sociolinguistics in that it received both support and criticism in the field.Influenced by his ideas, many researchers have commented on the different ways in which adults from various social classes respond linguistically to their children.Hess and Shipman (1965) studied middle-class and lower working-class mothers, helping their four-year-old children in either blocksorting tasks or the use of Etch-A-Sketch.The study revealed important differences, with the middle-class mothers far better able to help or instruct their children than the lower working-class ones, who were unable to offer much assistance to their children.Robinson and Rackstraw (1967) also found that middle-class mothers, far more often than the lower working-class mothers, tried to answer their children's Wh-questions (which are considered as information seeking questions) with genuine explanations.Bernstein and Henderson (1969) reported social class differences in the emphasis placed on the use of language in two areas of children's socialization: interpersonal relationships and the acquisition of basic skills.The results showed that middle-class mothers placed much greater emphasis on the use of language in the person area, relative to their working class counterparts, whereas working-class mothers put greater emphasis on the use of language in the transmission of basic skills.Newson and Newson (1970) found that working class mothers invoke authority figures such as police officers in threatening their children.Cook (1971) found that lower working-class mothers used more commands to their young children and often relied on their positional authority to get their way than did middle-class mothers, who preferred to direct their children attention to the consequences of what they were doing.To search for a relationship between social class and mothers ' speech, Henderson (1972) investigated the language used by a hundred mothers to their seven-year-old children.The mothers were divided into middle-class and working-class groups.He reported that relative to the working-class mothers, the middle-class mothers favored the use of abstract definitions, explicit rather than implicit definitions, and information giving strategies in answering children's questions.They also used language to transmit moral principles and to indicate feelings.In Jay, Routh and Brantley's (1980) study twenty-five mothers of all social class levels were asked to tell, as if to a six-year-old child, stories suggested by several cartoon picture sequences.These stories were then played to a hundred six-year-old children of high and low social class levels, who were then asked standard comprehension questions about their content.An analysis of the comprehension scores revealed a significant main effect of the social class of the adult speakers and of the social class of the child listeners.In a more recent study, Rodríguez and Hines Montiel (2009) tried to describe and compare the communication behaviors and interactive reading strategies used by Mexican American mothers of low and middle socioeconomic status (SES) backgrounds during shared book reading with their preschool children.Significant differences between different SES groups regarding the frequency of specific communication behaviors were revealed.Middle-SES mothers used positive feedback and yes/no questions more often than did low-SES mothers.Mexican American mothers also used a 
variety of interactive reading strategies with varying frequencies, as measured by the Adult/Child Interactive Reading Inventory.They enhanced attention to text some of the time, but rarely promoted interactive reading/supported comprehension or used literacy strategies.All the above-mentioned studies were concerned with how adults from different social classes respond linguistically to their children.The results of these studies are consistent with that of Bernstein's.Moreover, reference can be made to many studies and programs which addressed the language for children and socialization.Likewise, in the available literature, references have been made to the studies that differentiated between restricted and elaborated language codes and addressed the consequences they hold for those who use them.Williams (1969) tried to determine whether statistically reliable social class differences could be found in the degrees and types of syntactic elaboration in the speech of selected Negro and White, male and female, fifth-and-sixth-grade children from whom language samples had been obtained in the Detroit Dialect study.The corpus of some 24,000 words represented the speech of children selected from relatively low and middle ranges of a socioeconomic scale used in the original study.A quantitative description of syntactic elaboration was obtained by using a modified immediate constituents procedure which provided coding of the structural divisions of English sentences.The results indicated that children from the higher-status sample tended to employ more, and more elaborated, syntactic patterns.Such status differences generally prevailed across the sexes, but did vary across the levels of a topical variable and the race variable.Lareau (2002) examined the effects of social class on the interaction inside the home upon ten-year-old black and white children.The results showed that middle-class parents emphasized concerted cultivation through efforts to foster children's talents via organized leisure activities and extensive reasoning.Working-class and poor parents appeared to accept the accomplishment of natural growth, providing conditions under which children can grow but leaving leisure activities to children themselves.These parents also used directives rather than reasoning.Middle-class children, both white and black, were gaining an emerging sense of entitlement from their family life.Working-class and poor children did not display the same sense of entitlement or advantages.Aarefi (2008) investigated the difference between linguistic-cognitive skills in Turkish and Kurdish students with Farsi as their mother tongue from different economical-social backgrounds, using Vygotsky's theory of general cognitive development and Bernstein's theory of social class and differences in speech quality.She found that the average number of words the middle socioeconomic children level used was far higher than the average number of words the children from low socioeconomic class used.The language skill in using words by the Turkish and Kurdish speaking children had no relationship with their cultural backgrounds.There was also a significant difference between the parents' level of education; children whose parents had a higher level of education used more words in writing.Aliakbari et al. 
(2012) conducted a research project on fifth graders in Tehran, Iran and analyzed both the language and the social class data.The results of the correlation analyses indicated a significant relationship between the total social class scores and certain grammatical categories.The relationships between the language data and the social class factors also displayed a similar trend.They, thus, concluded that their findings supported Bernstein's theory to a great extent.In spite of the fact that many studies confirmed Bernstein's ideas, there are also some critics in the literature.Rosen (1972) criticized Bernstein on the grounds that he had not looked closely enough at working-class life and language.Labove (1972) argued that one cannot reason from the kind of data presented by Bernstein that there is a qualitative difference between the two kinds of speech Bernstein describes, let alone a qualitative difference that would result in cognitive and intellectual differences.Cooper (1976) examined aspects of Basil Bernstein's sociolinguistic account of educational failure empirically.Two groups of students from the first year of an upper school in England, one with primarily non-manual backgrounds, the other with primarily manual backgrounds, were observed in math and science classrooms, through informal discussions with teachers, and through school records and reports, to determine which of Bernstein's two codes appeared to underlie the disciplinary and pedagogic technique of the teachers of the classes observed.The findings showed that in terms of indicators for both regulative and instructional content, the observed math and science curricula appeared to be predicated on a restricted rather than an elaborated code for both classes of students.He concluded that Bernstein's emphasis on certain pupils lacking an elaborated code accounting for working-class failure and middle-class success is misplaced.Thorlindsson (1987) also made an attempt to test Bernstein's sociolinguistic model empirically.The relationship was examined among all the major variables of the model including social class, family interaction, linguistic elaboration, IQ, and school performance.The correlations among social class, family interaction, IQ, and school performance were along the lines hypothesized by Bernstein, whereas linguistic elaborations did not play their predicted role.The empirical results indicated that an important revision of the model was needed.Findings, thus, suggested that a clear distinction should be made between cognitive and pragmatic aspects of the sociolinguistic codes, and between macro and micro elements of social structure.Bolander ( 2009), assessing the relevance of Bernstein's theory for German-speaking Switzerland, showed that the uptake of Bernstein's outlook was and continues to be minimal for the Swiss German context and explores reasons for this conclusion.Acknowledging that certain aspects of Bernstein's theoretical outlook are potentially relevant for the Swiss German context in light of the contemporary studies which highlight a connection between social background and differential school achievement, he concludes that they need to be reassessed in light of the awareness of the variety of interdependent factors which can and do influence the performance of children and adolescents at school.As posited earlier and is clearly understood from the literature reviewed, Bernstein's theory has attracted the attention of many researchers and sociolinguists.Yet, in spite of all these studies, one 
cannot determine with certainty how social class affects language use. Focus of the Study Bernstein claims that working-class students have access only to restricted codes and middle-class students to both restricted and elaborated codes, because middle-class members are geographically, socially, and culturally mobile. His theory has inspired a good number of studies. In order to take a different measure in this relation, the present study investigates the use of the grammatical categories of noun, pronoun, adjective, adverb, preposition, and conjunction among working-class and middle-class children. The results of this study are hoped to raise teachers' understanding of the effect of social class on students' language use and to determine whether they should consider it in their educational programs. Research Questions This study seeks answers to the following questions: 1. Does social class affect one's use of grammatical categories in L1 writing? 2. How different are middle- and working-class students in their social control with reference to their use of personal pronouns? Methodology Participants A total of 100 female students aged between 9 and 11 took part in the study. They were third- or fourth-grade elementary students in the city of Eivan in the province of Ilam, in western Iran. The reason for selecting students at these levels was that practicing writing tasks, which serves as the means of data collection in this study, is part of the educational program at these levels. Of these 100 participants, based on a social class questionnaire, 20 middle-class and 20 working-class students were selected. Instruments Two instruments were adopted to collect the data for the present study. To determine students' social class, an adapted version of Wilftang's (1990) questionnaire was administered. Different views on the factors to be included in determining one's social class were considered, and several open-ended questions were added to make it suitable for the context of the study. After translation and revision, it was piloted, re-examined, and finally administered as an 11-item social class questionnaire (a copy of which is provided in Appendix A), comprising 10 multiple-choice questions with a variable number of choices and one open-ended question, each choice being indicative of a different level of social class. The questionnaire was completed by the students themselves. Because some students avoided stating their fathers' jobs, the questionnaire was also completed by their parents to verify the answers. The other instrument was a set of picture sequences, which required the students to write a story within the same time limit so that differences in their language use could be examined. These were the same picture sequences used by Bernstein in his original study (a copy is provided in Appendix B), although Bernstein elicited verbal rather than written descriptions of the picture cards.
Data Collection The social class questionnaire was administered to the students, who were already familiar with writing tasks. They then received the selected pictures and wrote their stories within the same time limit. All grammatical categories of noun, pronoun, adjective, adverb, preposition, and conjunction were counted manually by the researchers. To ensure the reliability of the scoring, a correlation coefficient was computed for each category. The results, which ranged from .79 to .88, were evaluated as moderate reliability, in line with Farhady, Ja'farpur, and Birjandi (2006). To check whether the differences between the frequencies of the grammatical categories for the working-class and middle-class groups were significant, separate chi-square tests were run. Moreover, to determine the subjects' social control, the uses of personal pronouns by both groups were compared and their frequencies computed as well. Results Using SPSS software, descriptive statistics including the frequency, mean, and standard deviation of each category were computed for the two groups of participants. As can be seen in Table 1 below, the means and standard deviations of the two groups differed. In order to answer the first question of the study, all linguistic categories in the students' writings of both groups were first counted. Then, six chi-square tests were run to compare the differences between the frequencies of the grammatical categories. As is noticeable from the results in Table 2, for all six grammatical categories, the observed χ² is greater than the critical χ². Accordingly, it can be claimed that the participants' social class has influenced their language use. To determine students' social control and answer the second question, the use of personal pronouns by the two social classes was analyzed. As Table 3 indicates, the frequency of the use of personal pronouns by the middle-class subjects is higher than that of the working-class participants. The use of the third-person plural pronoun 'they' and the first-person singular 'I' had the highest frequencies among middle-class students. The second-person plural 'you' and the third-person singular 'he/she' had the lowest frequencies. For the working-class members, the most frequently used pronouns were 'they' and the first-person plural 'we'. In order to find out whether the differences between the uses of the personal pronouns were significant, six chi-square tests were run. The difference was significant only for the use of the first-person singular 'I'. These results corroborate Bernstein's theory to some extent, which maintains that users of the elaborated code make frequent use of the pronoun 'I' and are person-oriented, while users of the restricted code are position-oriented. The working-class participants gave more importance to the third-person plural and the first-person plural, which signifies that they paid more attention to group work and shared assumptions and were more position-oriented. The frequency of use of the first-person singular pronoun 'I' among the middle-class subjects indicated that they are more person-oriented.
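The per-category comparison described above can be reproduced with a standard chi-square test. The sketch below is a minimal illustration using scipy rather than SPSS; the observed counts are hypothetical placeholders, not the frequencies reported in Tables 1-3.

```python
# Minimal sketch of the per-category chi-square comparison described above.
# Counts are hypothetical, not the actual frequencies reported in the study.
from scipy.stats import chisquare

# Observed frequency of one grammatical category (e.g., conjunctions)
# in the writings of the two groups: [working-class, middle-class].
observed = [142, 201]  # placeholder counts

# Under the null hypothesis the category is used equally often in both groups,
# so the expected counts are uniform (the default for scipy.stats.chisquare).
stat, p_value = chisquare(observed)

print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference between the groups is significant at p < .05")
else:
    print("No significant difference at p < .05")
```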
Discussion and Conclusion This study took a fresh look at Bernstein's theory and the question of whether social class differences can produce different language use. To this aim, the frequencies of use of the grammatical categories of noun, pronoun, adjective, adverb, preposition, and conjunction by 20 working-class and 20 middle-class elementary students were compared. The chi-square results corroborated Bernstein's theory regarding the effect of social class on language use. The findings of the study can be explained by referring to Bernstein's Elaborated and Restricted codes: working-class students have access to the restricted codes, the ones they acquire in a socialization process whose values reinforce such codes, whereas middle-class students have access to both restricted and elaborated codes. Another question of this study was related to the social control of the middle- and working-class students based on their use of personal pronouns. The most outstanding result in the use of personal pronouns was the use of the first-person singular pronoun 'I' by middle-class students. The results again supported Bernstein's theory on the grounds that working-class members are more position-oriented and give more attention to group work and shared assumptions, and that middle-class students are far more person-oriented and tend towards personal autonomy. The results accordingly corroborated Bernstein's theory in that restricted and elaborated codes are indicative of different social classes. They also show how complex the educational issues are that teachers must consider. The findings imply that teachers and program developers should take learners' social-class differences into account, design curricula that help working-class students acquire the elaborated code, and look for ways to prevent the waste of talent among students from the lower social classes. Table 1. Descriptive statistics for the use of grammatical categories among the two social classes. Table 2. Chi-square results comparing the frequencies of grammatical categories between the groups. Table 3. Frequency of the use of personal pronouns among the groups.
4,914
2014-06-01T00:00:00.000
[ "Linguistics" ]
Microbiologically influenced corrosion—more than just microorganisms Abstract Microbiologically influenced corrosion (MIC) is a phenomenon of increasing concern that affects various materials and sectors of society. MIC describes the effects, often negative, that a material can experience due to the presence of microorganisms. Unfortunately, although several research groups and industrial actors worldwide have already addressed MIC, discussions are fragmented, while information sharing and willingness to reach out to other disciplines are limited. A truly interdisciplinary approach, which would be logical for this material/biology/chemistry-related challenge, is rarely taken. In this review, we highlight critical non-biological aspects of MIC that can sometimes be overlooked by microbiologists working on MIC but are highly relevant for an overall understanding of this phenomenon. Here, we identify gaps, methods, and approaches to help solve MIC-related challenges, with an emphasis on the MIC of metals. We also discuss the application of existing tools and approaches for managing MIC and propose ideas to promote an improved understanding of MIC. Furthermore, we highlight areas where the insights and expertise of microbiologists are needed to help progress this field. Introduction Engineered materials are essential to our society to ensure our current prosperity and sustainability in the future. A wide range of matters such as energy supply, food, transportation, housing, and various other of life's fundamentals rely upon engineered materials. To support this, a variety of materials are used that can have limited lifetimes; thus, repair or replacement is inevitable in the long run. Very often, however, the expected lifetimes of materials are not achieved. In many cases, damage occurs much earlier than expected, causing costly production downtime and repair expenses, and in some cases, even major environmental problems. More often than thought, microorganisms play a major role in this damage, and microbiologists are key to help improve our understanding of the associated problems as well as potential solutions. It is scientifically proven that many metallic and non-metallic materials (e.g. concrete, wood, and plastic) can be deteriorated by microorganisms, with detrimental (e.g. asset failure) or beneficial (e.g. biodegradation of plastic) consequences. However, biodeterioration of metals in our built environment can result in significant issues of production loss, environmental disasters, and/or asset safety (Jacobson 2007). While the topic of microbiologically influenced corrosion (MIC) has been known and studied for decades, its understanding is limited, as are the methods for prevention and monitoring, and it poses great challenges in many industrial settings. Fundamentally, a collaborative effort from various scientific and technical disciplines, including microbiology, material science, process and electrochemistry, biochemistry, corrosion engineering, and integrity management, is needed to help progress the understanding of MIC. This combination of knowledge is critical to determine the root causes of a failure associated with MIC and to develop effective long-term mitigation and monitoring strategies specifically adapted to the associated system (Silva et al. 2021, Eckert and Skovhus 2022).
In recent decades, our understanding about the microorganisms, the mechanisms, and the factors related to MIC has grown enormously, yet many questions remain unanswered. In general, development in this field has been slowed due to poor communication between academia and industry, as well as amongst different disciplines within academia working on corrosion (abiotic/biotic). Unjustifiably, MIC is still considered a questionable mechanism in many industrial sectors, and in some cases, its existence is even denied, hindering important knowledge transfer and the development of environmentally appropriate solutions for this problem. In academia, some microbiologists working on MIC can be hampered by limited access to, or knowledge of, important aspects of abiotic corrosion or methods used by engineers and materials scientists. The lack of information exchange between industry and academia means that scientists are not aware of the actual needs of the industry; thus academic research can lack practical relevance. A key historical problem with MIC studies is the often-siloed nature of the scientific disciplines working on the topic. While the number of MIC-related research articles has grown significantly in the past two decades, many of these articles only cover one or two aspects of this multidisciplinary topic. For example, Hashemi et al. (2017) demonstrated that a large proportion of the research published on MIC was siloed separately within the corrosion/materials science and microbiology areas, despite the multidisciplinary nature of MIC. With such siloing of knowledge, valuable information can become isolated within one particular discipline instead of spreading amongst the wider MIC community. This consequently delays progress and innovation, despite the huge economic and environmental impact of MIC. The focus of the current review is to directly tackle one aspect, the siloed nature of MIC research, by providing important background information on the non-microbiological aspects of MIC to microbiologists. While microbiologists are experts in their fields, they typically have limited understanding of the corrosion/metallurgical and/or chemical aspects of MIC. This review, which has an emphasis on the degradation of metals, will provide information to help avoid mistakes that can be made when experts in one field start working on a multidisciplinary topic such as MIC. By providing this information, we aim to encourage interdisciplinary and intersectoral collaborations to ease the entrance of biological scientists, including microbiologists, into the complex field of MIC and, ultimately, to shape the next era of MIC research and management. MIC mechanisms and clarification of terminology MIC has been defined by NACE and ASTM as "corrosion affected by the presence or activity, or both, of microorganisms" (ASTM G193 2022), as adapted from Little and Lee (2007). Several terms are used to describe this phenomenon (microbially influenced corrosion, MIC, biodeterioration, and biocorrosion), with the range of different terms often resulting in confusion. Here we aim to clarify some of the key terminology used in relation to the phenomenon itself or to its various mechanisms. MIC terminology A broader term that defines the microbial degradation of metallic and non-metallic materials is biodeterioration, i.e.
"any undesirable change in the properties of a material caused by the vital activities of or ganisms" (Huec k 1965 ).The fundamental cycling pr ocesses involved in the biodeterioration of stone and metal have recently been r e vie w ed b y Gaylar de and Little ( 2022 ).MIC is commonly associated with the biodeterioration of materials such as metals and concr ete.In Eur ope and in some international standards, the term corrosion is used only for metallic material (ISO 8044-2020 ), but the International Union of Pure and Applied Chemistry (IUPAC) ( 1997 ) pr ovides a br oader and widel y accepted definition, i.e. "corr osion is an irr e v ersible interfacial reaction of a material (metal, ceramic, or polymer) with its environment that results in consumption of the material or in the dissolution into the material of a component of the environment." The term biocorrosion is incr easingl y used as a synonym of MIC, although this term can create confusion as, in the US, it primaril y r efers to the corr osion of medical implants due to both biotic and abiotic processes (Little et al. 2020b ).It has been suggested that the term microbial corrosion hints that micr oor ganisms ar e the main cause of the corrosion (Gu 2012 ), while others use this term as a synonym for MIC.The ISO 8044 standard describes the term bacterial corrosion as a synonym for MIC if it is solely due to the activity of bacteria; ho w e v er, as arc haea or e v en fungi can be involved in the deterioration process, this term should only be used in unequivocal cases. MIC is pr obabl y the most widely used term to describe the many ways in which microorganisms can affect corrosion processes.While some use the w or d induced instead of influenced, the presence/activity of certain microorganisms has also been known to reduce the rates of corrosion (Videla andHerrera 2009 , Kip andV an V een 2015 ) and so, the term induced is not as br oadl y a pplicable .T he adjectiv e "micr obiologicall y" in the term of MIC is gr ammaticall y incorr ect as it refers to corrosion caused by microbiology instead of micr oor ganisms; nonetheless, at the CORROSION/90 conference in Las Vegas, Ne v ada, NACE's Publication Committee supported its use for future NACE documents (Brooke 1990 ).Since then, many other associations and standards have adopted the term MIC , e.g.(GRI 1990, AMPP 2018 ); thus, we prefer to use this term in this r e vie w to align with these standar ds.Ho w e v er, r eaders ar e fr ee to decide their pr efer ence, as MIC can be the acr on ym for both microbiologicall y and micr obiall y influenced corr osion as well as for micr obial corr osion, and all thr ee terms ar e suitable to name the phenomenon.If one prefers, biocorrosion can also be used as long as it is pr operl y defined what users mean under the term. Biofouling can lead to MIC, but the term cannot be used as a synonym for MIC.Biofouling is the accumulation and growth of various organisms, including microorganisms (microfouling), plants, and/or animals (e .g. algae , barnacles) (macrofouling), on a surface (AMPP 2023 ).While, in some cases, biofouling can be associated with corrosion, this is not always the case.Indeed, other problems associated with biofouling, including increased water resistance on ships, increased fuel consumption due to drag (Callo w and Callo w 2011 , Tulcidas et al. 2015 ), or the introduction of non-indigenous species (Li and Ning 2019 ), can be more of an issue .T hus , using biofouling as a dir ect synon ym for MIC is not recommended. 
MIC mechanisms of metals The term MIC does not describe a single mechanism for corrosion; it is rather a collective term for a variety of different mechanisms through which microorganisms alter the kinetics of corrosion reactions by their presence or activity (Lee et al. 2022). For MIC to occur, the specific interplay of the "three M's" is required: microorganisms, media, and metals (Little et al. 2020b). The combination of these interactions defines the various mechanisms that can change the rate of metal deterioration, either directly or indirectly. There have been many reviews specifically describing MIC mechanisms, but unfortunately, there are many inconsistencies in the terminologies used, making it difficult to navigate among them. Here we aim to clarify the terminology used for MIC mechanisms without going very deep into details, with the goal of providing a common and easily understood language. MIC due to surface deposition Microorganisms are involved in a range of processes that can lead to the formation of deposits on the surfaces of materials. They can form single or multispecies communities attached to a surface, known as biofilms, which are often embedded in a self-produced matrix of extracellular polymeric substances (EPS) (Flemming et al. 2016). Alternatively, metabolic processes due to some types of microorganisms, such as metal-oxidizing bacteria, can lead to metal products being deposited on a surface (Lee and Little 2019). As discussed below, these deposits can affect and, in some cases, accelerate corrosion. MIC mechanisms are often classified based on oxygen presence and/or availability in a given environment. However, in real-life conditions, oxygen may intermittently be available and/or consumed by microorganisms that can react directly with the metal surface. Thus, instead of a strictly aerobic or anaerobic environment, an oxygen gradient is often present that can vary over time. This clashes somewhat with many laboratory-based MIC experiments, which aim to operate under strictly aerobic or anaerobic conditions. Indeed, there is potential scope for more experiments to be performed that look at the effect of alternating oxygen in MIC tests. In one example, Lee et al. (2004) showed that SRB corrosion rates increased by a factor of three if oxygen was intermittently present, when compared to either strictly aerobic or anaerobic conditions. The growth of a biofilm itself can result in MIC in aerobic fluid environments when biofilms are formed in a patchy arrangement, creating oxygen concentration or differential aeration cells between the anodic and the cathodic areas of the surface. Fundamentally, the mechanism of corrosion involves electron flow through the metal from the anode to the cathode, where (under aerobic conditions) oxygen is the electron acceptor (Hamilton 2003). The biofilm, which defines the anode, prevents oxygen from reaching the metal surface while the metabolism of aerobic bacteria uses up the oxygen present in the biofilm. The cathodic site ends up being the area uncovered by the biofilm that is exposed to oxygen. Roe et al. (1996) have shown that cell-free EPS alone can initiate corrosion.
Other surface deposits, such as metal oxides, can form oxygen concentration cells, resulting in under-deposit corrosion or oxygen gradient corrosion. The two most studied groups of MIC-related aerobic metal-depositing bacteria are iron-oxidizing bacteria (FeOB) and manganese-oxidizing bacteria (MnOB), which have been reviewed by Lee and Little (2019). For example, FeOB oxidize Fe2+ into Fe3+ in an oxygen-rich environment, where the area underneath the accumulated iron oxides is depleted of oxygen and a small anode is formed relative to the surrounding large, oxygen-saturated cathode. The difference in dissolved oxygen concentration creates a potential difference resulting in oxygen concentration cells or, alternatively, can cause a form of galvanic corrosion. These deposits can lead to pitting corrosion ["localized corrosion resulting in pits, i.e. cavities extending from the surface into the metal" (ISO 8044) (Table 1)]. These pits, or the complex geometries of the deposited metals that form, can create areas shielded from the bulk fluid/electrolyte. Subsequent hydrolysis of metal ions creates an acidic medium and attracts charge-neutralizing ions such as chloride and sulfate, resulting in self-sustaining pitting that, if it occurs in crevices, is called crevice corrosion (Table 1). Some materials, such as corrosion-resistant steels, have enhanced resistance to oxygen corrosion, as their passivated surface (thin metal-oxide layer; see the Materials and MIC section below) provides protection. However, some biofilms can destroy this protective layer, resulting in pitting corrosion (Yuan and Pehkonen 2007, Li et al. 2016, Dong et al. 2018, Cui et al. 2022). In addition to the formation of a biofilm, the respiration of aerobic bacteria within a biofilm can reduce the oxygen content, creating an anaerobic environment that can support the growth of anaerobes such as sulfate-reducing prokaryotes (SRP) and nitrate-reducing prokaryotes (NRP). Electrical MIC (EMIC) EPS secreted by microbial cells have many components with redox properties and electrochemical (EC) activity that play crucial roles in microbial respiration as well as in corrosion. For example, sessile cells in a biofilm can use metal, such as elemental iron, as an electron donor if thermodynamically more favorable electron donors are lacking (Philips et al. 2018). In anoxic environments, the terminal electron acceptor is an oxidizing agent such as sulfate or nitrate. While the reduction of the electron acceptor takes place inside the cell, the oxidation of the electron donor happens outside the cell, so the extracellular electron from outside must enter the cell. This electron transport across the cell wall is called extracellular electron transfer (EET), and the overall mechanism through which the associated corrosion of metal is achieved is called EMIC (Enning and Garrelfs 2014); alternative names include type I MIC (Gu 2012) and EET-MIC (Jia et al. 2019). If the electron donor is an organic carbon source that can diffuse into the cell, there is no need for an electron transport mechanism, because the electrons released are already located in the cytoplasm. Insoluble metals, such as elemental iron, cannot pass through the cell membrane, and an extracellular electron transport is required for them to be used as an electron source. The electron can be transported into the cell in two ways, namely by direct or indirect mechanisms (Table 1).
In direct EMIC, direct extracellular electron transfer (DEET) (Lovley 2011) occurs; the cell can have direct contact with the metal surface and directly accept electrons from the metal. Until recently, this mechanism had only been inferred, but Tang et al. (2019) provided evidence for iron as a direct electron donor. Electrons are taken up by cell surface enzymes, structures, or membrane redox proteins, such as c-type cytochromes (Paquete et al. 2022), facilitating EMIC. Alternatively, the cell can attach to the metal by "nanowires," e.g. electrically conductive pili in bacteria (Lovley 2017) or archaella in archaea (Walker et al. 2019), by which it can transfer electrons. The exact mechanism for electron transfer through electrically conductive cell appendages is still debated, and Little et al. (2020a) argue that any electronic transport via pili is unlikely to significantly contribute to MIC. Thus, further research is needed to resolve the role of electrically conductive pili in MIC. From an EC standpoint, the term "direct electron transfer" is possibly not strictly correct, as each mechanism described above requires a redox mediator. Blackwood (2018) has contested that true direct electron transport does not and cannot happen, as direct electron transfer between aqueous species cannot occur over distances of > 2 nm. This is a typical example of different disciplines using different language to describe a phenomenon, increasing the chance of misunderstanding and confusion. This confusion is even further increased by alternative terms and their abbreviations for direct EMIC, including DET-MIC (direct electron transport MIC) (Lekbach et al. 2021) or DIMET (direct iron-to-microorganism electron transfer), if direct electron transport occurs from Fe0 (Lekbach et al. 2021), along with the other alternative terms for EMIC as indicated above. Table 1. Brief description of the main mechanisms associated with MIC of metals. Under-deposit corrosion, oxygen gradient corrosion: a type of "localized corrosion associated with, and taking place under, or immediately around, a deposit of corrosion products or other substance" (ISO 2020), e.g. a biofilm or a metal deposit formed in a patchy arrangement by metal-oxidizing bacteria. Crevice corrosion: a type of "localized corrosion associated with, and taking place in, or immediately around, a narrow aperture or clearance formed between the metal surface and another surface (metallic or non-metallic)" (ISO 8044); the accumulation of chloride and other aggressive anions in the pit accelerates corrosion. Electrical MIC (EMIC): corrosion caused by extracellular electron transfer by microorganisms. Direct EMIC: corrosion of metals achieved by extracellular electron transfer by microorganisms in direct contact with the metal surface; electrons are taken up by cell surface enzymes or membrane redox proteins. Indirect EMIC: corrosion of metals accelerated by soluble electron transfer mediators released from microorganisms that use the electrons gained from the metal for respiration. Metabolite MIC (MMIC): corrosion of metal achieved directly or indirectly by metabolites released by microorganisms under both aerobic and anaerobic conditions. Despite the controversies
and mixed terminologies on the different categories of direct MIC, we use the umbrella term "direct EMIC" here when referring to microorganisms capable of directly interacting with the metal surface by one of the various proposed mechanisms. In indirect EMIC, soluble electron transfer mediators (Huang et al. 2018, Tsurumaru et al. 2018) are released from the cell, oxidized at the anode, and return back to the cell to be used in respiration (Kato 2016). Alternative abbreviations also exist for this mechanism [MEET (mediated EET) (Little et al. 2020a); MET-MIC (mediated electron transport MIC) (Gu 2012); and SIMET (shuttle-mediated iron-to-microorganism electron transfer), if iron is the electron source (Lekbach et al. 2021)]. Metabolite MIC (MMIC) In metabolite-MIC, microorganisms influence corrosion through the creation of corrosive metabolites, such as protons, organic acids, or sulfur species. These metabolites are reduced on the metal surface, and a biocatalyst is not required for the process, as opposed to EMIC. At a sufficiently low pH, proton reduction can be coupled with metal oxidation. This type of corrosion mechanism is also an EC process. Alternative terms for MMIC include type II MIC (Gu 2012) and chemical MIC (CMIC) (Enning and Garrelfs 2014). Li et al. (2018) argued that the term "metabolite-MIC" is preferable over "chemical MIC," because chemical corrosion is the direct reaction of a metal with an oxidant, usually at high temperatures, with no separable oxidation and reduction reactions, as opposed to EC corrosion. The historical cathodic depolarization theory Historically, many MIC mechanistic studies in the absence of oxygen were reported in relation to sulfate-reducing bacteria (SRB), and the associated severe corrosion was often explained by the cathodic depolarization (CDP) theory first proposed by von Wolzogen Kühr and Van der Vlugt (1934) (translated into English in 1964). According to the CDP theory, the rate of iron corrosion by SRB is increased by the removal of H2 from the cathode by hydrogenase-containing SRB. To be precise, in the absence of oxygen, the electron acceptors for iron oxidation are protons derived from dissociated water, whereas in the cathodic reaction, the proton is reduced to H2. According to the theory, the H2 formed on the metal surface is consumed by SRB, thereby further accelerating iron oxidation. The CDP theory has been criticized for decades and has been discredited by many (Hardy 1983, Crolet 1992, Dinh et al. 2004, Mori et al. 2010), and reviewed in detail (Enning and Garrelfs 2014, Blackwood 2018). In short, the rate-limiting cathodic reaction in metal corrosion is the adsorption of protons to the metal, not the desorption (removal or dissolution) of H2 from the surface; i.e., in abiotic cultures, low corrosion rates are due to the limited availability of protons and, thus, slow H2 formation on iron. It has been shown that the consumption of cathodic hydrogen by SRB did not significantly increase iron corrosion in the presence of iron as the sole electron donor (Venzlaff et al.
2013). This does not rule out the possibility that the utilization of hydrogen by microorganisms may still play a role in MIC, but not as it was proposed/intended by the CDP theory. Overall, it is recommended that in the future, the CDP theory should only be mentioned if needed for historical purposes, and if it is mentioned, the controversy and criticisms of this explanation for MIC should be acknowledged. To summarize, MIC is not a single corrosion mechanism. Instead, several different mechanisms can contribute to MIC. However, the two main common features that are similar in all MIC cases are that (1) microorganisms play a role and (2) MIC is an EC process. Siloed scientific fields and the need for interdisciplinary dialogue While MIC by definition encompasses the fields of microbiology and corrosion, the involvement of other disciplines such as electrochemistry, production chemistry, metallurgy and materials science, process engineering, fluid mechanics, and others is essential to getting a clear picture of the environments and operating conditions that support MIC. As early as 1934, while leading a group of scientists in the field study of a sphagnum bog, Baas Becking (Baas Becking and Nicolai 1934) observed that simply classifying the microorganisms present would yield "less satisfaction to the investigator" than it could have with additional scientific insights from geologists, geneticists, and ecologists. Decades later, a review of the MIC state-of-the-art in 2005 by Videla and Herrera (2005) noted that until the late 1970s there was poor transfer of knowledge between disciplines, including metallurgy, electrochemistry, microbiology, and chemical engineering, which prevented the study of MIC from going much beyond a focus on SRB/SRP. Today, it is becoming more widely understood that any investigation of MIC requires a multidisciplinary focus on multiple lines of evidence (MLOE), as reflected in Sharma et al. (2022), where data from molecular methods were analyzed and conclusions drawn by a multidisciplinary team. Yet, when industries today are attempting to understand the impact of MIC on their assets, many do not have experts from multiple disciplines on hand to guide their sampling, testing, and data integration to help them solve complex MIC issues. Further, while there are numerous standards available to guide specific types of testing, there are none that identify a truly unified multidisciplinary approach for combining MLOE to ultimately direct MIC management activities. Diagnosing MIC requires MLOE The diagnosis of MIC requires MLOE for a number of reasons, but perhaps foremost is the fact that there exists no singular test or assay that can conclusively identify that MIC has occurred or is presently occurring, although some current works, e.g. Lahme et al. (2021), have suggested that [NiFe] hydrogenases in methanogenic archaea can be potential MIC biomarkers under specific conditions. Early MIC work in the oil and gas industry was heavily focused on the use of culturing methods (such as the most probable number, MPN, technique) and relating the likelihood of MIC to culturable cell counts in various types of media. Gradually, and with the increased application of molecular microbiological methods (MMM), asset owners discovered that while microorganisms were present nearly everywhere, the cell counts correlated poorly with actual MIC damage (Zintel et al. 2003).
Another reason why MLOE are used for MIC diagnosis is the lack of a unifying model or equation to calculate corrosion rates due to MIC, which is made only more difficult by the fact that, as explained in the previous section, microorganisms and biofilms can affect corrosion reactions in a variety of ways. In addition, MIC can at times be linked with corrosion caused by abiotic factors. The MLOE approach is common to many other scientific fields, including but not limited to the study of sediments, microbial fuel cells, and bioremediation. The approach is a "systems" view of microbial ecology, including the roles of the chemical environment and the role of the material being degraded or deteriorated. Figure 1 shows an example of the four MLOE categories often used for MIC investigations, including examples of parameters that are included in each category. The more pieces of the puzzle that can be provided, the better the picture/understanding of what is happening will be. The best diagnosis of MIC requires MLOE from as many of the four categories shown in the puzzle as possible. Evidence from more categories provides increased confidence. At present, this is still a work in progress, and there are no definitive guidelines on which tests or combinations of tests provide the best evidence. According to Lee and Little (2017), the goal is to collect independent types of measurements that are consistent with a MIC mechanism. To obtain MLOE, the investigation of corrosion in an asset would ideally be based upon proper characterization of (1) the conditions that are present in the "bulk" environment (e.g. soil, water, process fluid); (2) any biofilm or other material at the metal/environment interface, and the associated physicochemical conditions, most likely to be involved in MIC; and (3) the metal surface itself, both where corrosion has formed and where it has not. These three environments can be quite different from one another, even though they are present at the same time in the same place. Wrangham and Summer (2013) showed that the types and numbers of microorganisms present in a bulk fluid phase can be quite different from those located within a biofilm on a surface. Deposits on a metal surface can also vary in composition and physical properties throughout their thickness and laterally, as demonstrated by Larsen et al. (2010), who showed significant differences in both corrosion product composition and microbiology in thick deposits inside pipework on an offshore oil and gas production platform. Obtaining as much information as possible from a combination of the different environments present is important to get the most accurate understanding of the overall processes taking place. If only limited testing is available, it is recommended that the focus of testing should be on the metal interface, as this is where the key corrosion interactions are likely taking place. The roles of engineering design and operations in MIC assessment It is valuable for microbiologists to have an understanding of the overall operation of assets where MIC is being assessed. Engineers, operators, maintenance personnel, production chemists, and chemical vendors can provide valuable insights that reveal when and why environmental changes occur. A simple example is the operating temperature.
Operations may report that the crude oil production temperature is 80 °C, leading microbiologists to look for thermophiles as a possible cause of MIC; however, it would also be important to know that the process only runs once a month for a day, then cools down to ambient temperature. Any factor in the design, operation, or maintenance of an asset that can affect the chemical and microbiological environment should be an area of interest, and microbiologists may need to prompt other experts to obtain this type of information; it may not be volunteered otherwise. It is particularly important to understand the types and doses of various treatment chemicals that may be used, and production chemists and corrosion engineers can generally provide this information. Changes to asset design, the fluids being received or processed, the corrosion mitigation measures being applied, increases in newly found corrosion, etc. can all provide important insights to microbiologists working to solve a MIC issue. Leak and failure histories, particularly if root cause analysis has been performed, can also provide useful context when assessing MIC (Borenstein and Lindsay 2002, Eckert and Skovhus 2018, Gősi et al. 2022). Engineering and design information is also valuable when assessing the potential for MIC and abiotic corrosion mechanisms. Such information includes the type and grade of materials used for construction, fabrication, and testing history; circuits and systems identified on engineering drawings, process flow diagrams, and mass balance sheets; identification of dead legs (where flow infrequently occurs); clean-out capabilities for large vessels; flow controls; and utilities supporting various processes in the assets. Often, MIC is found in piping and assets with no flow or stratified flow, which allows solids and water to accumulate and promote the growth of biofilms (Sharma et al. 2022). A review of operation and design parameters can help to identify such areas to guide inspection and mitigation activities. Additionally, older assets may no longer conform to the operating conditions used as the basis for design, which affects the type and severity of likely corrosion threats (Wei et al. 2022). Table 2 provides some examples of operational and engineering information that can help support MIC threat assessments, such as initially identified corrosion mitigation measures to be applied; inspection and maintenance records and integrity assessment records; means of pre-commissioning testing, hydrostatic test records, procedures, and actual test media used; and records of process upsets and emergency shutdowns. Chemistry: assessing the chemical environment Non-microbiologists generally do not have the same perspective on the chemical environment as microbiologists, e.g. the significance of different electron acceptors, energy sources, pH, redox potential, salinity, and other factors that affect microbial ecology (Skovhus et al. 2017). The concepts of exponential growth, the widespread diversity of microbiomes, the essential inputs to (and end products from) microbial metabolism, and the important roles of biofilms and EPS are somewhat foreign to experts in other disciplines (Wade et al. 2023). Microbiologists may have the opportunity to help other disciplines view information related to MIC through the "lens" of microbial ecology. Likewise, a dialog with chemists, corrosion engineers, and operators can bring new insights to those focused on the microbiological environment (Hashemi et al. 2017). It is imperative that all parties in the multidisciplinary conversation have a clear understanding of the technical terms that are being used, as each discipline typically has its own technical vocabulary (Eckert and Skovhus 2018).
The chemical composition as well as the physical parameters of the environment in which MIC or abiotic corrosion occurs are very significant, in that the chemistry of the bulk phase environment and of surface films/deposits impacts both electrochemistry and microbiological processes. While sampling and chemical/physical analysis of the bulk phase (e.g. aqueous) is fairly straightforward, analysis of chemical conditions at the metal surface, particularly beneath solid particles and biofilms, is considerably more complex (Phull and Abdullahi 2017, Kromer et al. 2022). When analyzing chemical composition data, it is imperative to keep this distinction in mind; e.g. the pH in the bulk phase may be considerably different from the pH beneath a biofilm containing acid-producing microorganisms (Lee et al. 1993, Dexter and Chandrasekaran 2000, Phull and Abdullahi 2017). There are many commonly used analytical methods for water composition, dissolved gas, and headspace gas analysis, and some examples of these are detailed here.

pH

pH is an essential measurement parameter in the aqueous phase of a system affected by MIC (Ibrahim et al. 2018). Changes in the pH may indicate the growth of acid-producing microorganisms, such as acetogens, or partial pressure variations, such as dissolved O₂ and CO₂ concentrations (Lee et al. 1993, Mand et al. 2014, Kato 2016). In a given field environment, the local pH condition has a direct impact on the microbial community and activity; e.g. the corrosive acetogen Sporomusa sphaeroides thrives at pH between 6.4 and 7.6 and can tolerate up to 8.7 (Philips et al. 2019), whereas some SRB can grow up to pH 9.5 but with an optimal pH range around 7 (Ibrahim et al. 2018). One key challenge is understanding the actual pH in a given environment, as pH is affected by temperature and pressure (Phull and Abdullahi 2017). For example, a sudden shift in system pressure, such as liquid withdrawal from a pressurized system, will alter the pH measured (Ibrahim et al. 2018). According to Henry's law, gas solubility is directly proportional to the partial pressure, which is particularly important for CO₂ and H₂S. Changes in their solubility will also influence the pH of the aqueous environment, which subsequently impacts microbial growth and corrosion product formation (Ibrahim et al. 2018). Metal dissolution and corrosion product formation are intertwined with biofilm processes and are influenced by, among other factors, pH. For example, FeCO₃ (siderite) is more stable at higher pH, since the concentrations of HCO₃⁻ and CO₃²⁻ are higher relative to the respective iron ions, thus favoring the crystallization of siderite (Joshi 2016). Furthermore, pH can act as an indicator for MIC mitigation. The topic of MIC mitigation is discussed below in the Corrosion Management section.
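As a rough illustration of the Henry's-law point above, the following minimal sketch estimates how the CO₂ partial pressure over an aqueous phase shifts the pH of otherwise pure water. The Henry's constant and first dissociation constant used are approximate 25°C values chosen for illustration only; real produced waters and brines would require activity, temperature, and pressure corrections.

```python
import math

# Minimal sketch: effect of CO2 partial pressure on the pH of pure water
# (illustrative constants at ~25 degrees C; real brines need activity,
# temperature, and pressure corrections).
K_H_CO2 = 3.4e-2   # mol/(L*atm), approximate Henry's law constant for CO2
KA1 = 4.45e-7      # approximate first dissociation constant of carbonic acid

def ph_from_pco2(p_co2_atm: float) -> float:
    """Approximate pH of water equilibrated with a given CO2 partial pressure."""
    co2_aq = K_H_CO2 * p_co2_atm        # Henry's law: solubility ~ partial pressure
    h_plus = math.sqrt(KA1 * co2_aq)    # [H+] ~ sqrt(Ka1*[CO2(aq)]) for a weak acid
    return -math.log10(h_plus)

for p in (0.0004, 0.1, 1.0, 10.0):      # from atmospheric CO2 up to a pressurized system
    print(f"pCO2 = {p:7.4f} atm -> pH ~ {ph_from_pco2(p):.2f}")
```

With these illustrative constants, near-atmospheric CO₂ gives roughly pH 5.6, while water equilibrated with 1 atm of CO₂ falls to about pH 3.9, showing why pressure changes alone can shift the measured pH.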
Concentration of dissolved ions

Concentrations of cations and anions in the aqueous phase can also indicate possible microbial activity. For example, depletions in the concentrations of electron acceptors such as nitrate and sulfate indicate the activities of NRP and SRP, respectively. The mass balance between the cations and anions of a given environment provides useful evidence for evaluating the overall MIC process, including the metabolic process, corrosion product deposition, and metal dissolution. For example, the concentration of sulfur species in the aqueous phase, including S₂O₃²⁻, SO₄²⁻, HS⁻, SO₃²⁻, and S⁰, is closely related to the oxygen concentration in a system and to microorganisms such as sulfur-oxidizing bacteria and SRB (Ibrahim et al. 2018). Correct dosages of biocide and nitrate injection to combat MIC also require close monitoring of cations/anions (Gieg et al. 2011, Ibrahim et al. 2018). For example, in the oil and gas industry, for nitrate injection to be successful, the concentration of NO₂⁻ needs to remain stable in the system to inhibit the activities of SRB, as the further reduced compounds of NO₂⁻ in the metabolic pathway of NRB, namely N₂ and ammonia, are ineffective against SRB. Thus, close monitoring of the anions NO₃⁻, NO₂⁻, HS⁻, and SO₄²⁻ will provide a detailed overview of the efficacy of nitrate injection on the activities of SRB. One key challenge is the timely measurement of the associated ions; e.g. HS⁻ ions are highly volatile and can quickly escape into the atmosphere post-sampling (Tangerman 2009). It is important to ensure readily available on-site measurements when conducting analyses of key ions. In addition, the differences in the levels of cations and anions provide important evidence for the corrosion product formation process. For example, a decrease in the level of carbonate ion and Fe²⁺ in the aqueous solution may indicate the formation of FeCO₃, and the respective concentrations of the ions are used for calculating the supersaturation index (SS) (Joshi 2016):

SS = [Fe²⁺][CO₃²⁻] / Ksp,

where Ksp is the solubility product constant for FeCO₃ and an SS value above 1 indicates that the solution is saturated (Joshi 2016). Overall, the mass balance between the various cation and anion species is a strong indicator of the MIC process (a brief illustrative calculation is sketched below).

Gas production

Several known microorganisms associated with MIC produce biogenic gas. For example, corrosive methanogens produce methane using electrons from the metal surface (Beese-Vasbender et al. 2015, An et al. 2020, Tamisier et al. 2022), whereas SRB activities lead to the production of H₂S. Gas chromatographs (GC) equipped with a thermal conductivity detector or flame ionization detector are typically used for gas analyses (Grob and Kaiser 1982). In the field of MIC, it is noteworthy that multiple biogenic gases may need to be monitored, including CO₂, CH₄, H₂S, H₂, O₂, N₂, etc.
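Referring back to the dissolved-ion evidence described above, the following minimal sketch illustrates two of the simple checks mentioned there: a cation/anion (charge) balance and the siderite supersaturation index SS. All concentrations and the Ksp value are hypothetical and for illustration only; field data would additionally require activity corrections.

```python
# Minimal sketch: charge balance check and siderite supersaturation index
# SS = [Fe2+][CO3 2-]/Ksp. All values are hypothetical/illustrative.
KSP_FECO3 = 3.2e-11  # approximate solubility product of FeCO3 (illustrative)

# hypothetical water analysis: concentration (mol/L) and ionic charge
ions = {
    "Na+":    (1.0e-2, +1), "Fe2+":   (1.0e-5, +2),
    "Cl-":    (9.0e-3, -1), "SO4 2-": (5.0e-4, -2), "CO3 2-": (1.0e-5, -2),
}

charge_balance = sum(conc * z for conc, z in ions.values())
ss = ions["Fe2+"][0] * ions["CO3 2-"][0] / KSP_FECO3

print(f"charge balance (eq/L): {charge_balance:+.2e}  (large values suggest missing species)")
print(f"siderite SS = {ss:.1f} -> {'supersaturated' if ss > 1 else 'undersaturated'}")
```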
Hydrogen is one of the key gases of special importance to MIC, as several corrosive species are dependent on H₂ for their growth. In addition to GC, various handheld and in-line H₂ sensors are commercially available that allow field monitoring of H₂ and are particularly useful during field sampling (Boshagh and Rostami 2020). However, such devices can be limited in resolution, and their reliability can be affected by contamination from other gases or incorrect handling in the field. For extremely local environments, such as the metal-biofilm interface, monitoring of the H₂ gradient can be performed using microsensors (Cai et al. 2020b). While current H₂ microsensor technologies are still evolving to reduce interference from H₂S and other compounds (Nielsen et al. 2015), local monitoring of H₂ remains an important line of evidence during MIC investigations and monitoring.

Microbiology: assessing microbiological composition and activity

The microorganisms associated with corrosion are, of course, strongly linked to the chemical and physical environmental conditions present, but microbiological activities also affect the local environment in terms of organic or mineral acid production, sulfide production, or the formation of occluded areas and concentration cells on the metal surface. Little et al. (1996) demonstrated that microorganisms in biofilms can both create local anodic areas and be "attracted" to existing anodic sites previously unaffected by microorganisms. Non-microbiologists can easily become lost in the complexity of interactions that could be occurring between various microorganisms in biofilms, their metabolic capabilities, and the kinetics that are driving reactions in one direction or the other. This is compounded by a lack of comprehension of the strengths and limitations of different microbiological characterization methods/technologies, the issues of interference, primer coverage, biases, sensitivity, etc. It is often stated that microbiological conditions may be described in terms of diversity, enumeration, and activity. Engineers are generally not aware of the difficulty in determining the specific microbial activities that are occurring in a given environment, e.g. using RT-qPCR or metabolomics, and that these activities are dynamic, changing in parallel with environmental conditions. Microbiologists, with expertise on these and other associated topics, can provide essential insights on such matters to non-microbiologists.

Over the years, various techniques have evolved and been gradually replaced by more advanced technologies to investigate MIC, all the way down to the molecular level (as reviewed in Little et al. 2006, Beale et al. 2016, Trif et al. 2018, Kotu et al. 2019). In support of this observation, Puentes-Cala et al. (2022) overviewed the MIC literature published in the last 12 years, which showed that approximately three-quarters of the studies used molecular microbiological approaches to characterize microbial communities in field samples. Table 3 summarizes some of the traditional as well as more advanced methods that can be used to obtain microbiological data, highlighting their pros and cons to aid decision-making during MIC investigations. All of these techniques are suitable for both field and laboratory studies if handling is done properly, as described elsewhere [e.g. in Eckert et al. (2022) or AMPP Standard TM21465 (under preparation)].
It is imperative to emphasize that the limitations of each microbiological method should always be considered, as, e.g., the detection of microorganisms that have been associated with corrosion is not by itself diagnostic for MIC (Little et al. 2006). The choice of methods should also be carefully evaluated in this light.

Materials and MIC

There are a few high-level points for microbiologists to consider when thinking about corrosion mechanisms and electrochemistry. The first is that abiotic or non-biological corrosion reactions need to be considered in every MIC evaluation. Abiotic corrosion may be present separately from, or in conjunction with, MIC. Microorganisms, e.g., could be forming biofilms that simply create more crevices for differential aeration corrosion cells, leading to localized pitting on passive materials such as stainless steel (SS) (Table 4). The second consideration is that microbial activity in a biofilm may simply enhance the effects of existing and well-known abiotic metallurgical conditions that can promote localized corrosion, such as manganese sulfide inclusions forming microscopic anodic corrosion initiation sites, or galvanic corrosion occurring where metals having differing native potentials are joined (e.g. carbon steel to SS). It is important for microbiologists to understand these abiotic contributors to corrosion when examining the role of microorganisms in the corrosion of a given material, and metallurgists and materials scientists can readily explain these contributors. Finally, and probably one of the more elusive challenges, is developing an understanding of how microbiological metabolism facilitates or enhances the kinetics of the anodic and cathodic corrosion reactions that must be occurring for corrosion to take place. As one electrochemist recently stated at an MIC symposium, "I need to know where the electrons are going!". This is an area where there is significant room for improvement in our understanding of MIC; however, it is one that will require a serious collaborative effort between materials scientists and microbiologists to make significant progress.

In addition to the chemical, microbiological, and physical environment, the potential for MIC depends on the composition and metallurgical properties of the material being affected by these parameters. Carbon steel (CS) and concrete are two of the most predominant materials used in the construction of engineered assets, including pipelines, sewer and water lines, marine structures, ships, offshore energy generation, and infrastructure such as bridges and highways. Concrete and the CS reinforcing used within the concrete are often subject to corrosion, although the percentage of this corrosion resulting from MIC, other than in sewer lines (Wu et al. 2020), is not well understood. There is, however, a long history of research and information published about the interaction of metals with biofilms. The aim of this section is to provide a general introduction to materials and, specifically, metal properties that are relevant to MIC. Metals can generally be broken down into two categories, i.e. passive and active metals, depending on the metal and the environment to which it is exposed. Passive metals, e.g.
corrosion-resistant alloys (CRA) like SS, form a protective metal oxide when exposed to aqueous environments containing oxygen. Active metals, such as CS, do not form this protective layer when exposed to aerated water. Typically, passive metals perform better in relation to corrosion; however, this is not always the case. Many metals used in industrial applications are alloys (a combination of elements), where small changes in composition can make significant performance differences. The processes used for manufacturing metals (e.g. temperatures, mechanical processing) can also affect the microstructure of essentially the same alloy, which can affect corrosion. In addition, construction and fabrication processes such as welding can also adversely change the properties of metals to make them more susceptible to corrosion. Each of these factors can also affect the likelihood and magnitude of MIC that may occur. Table 4 shows some examples of alloy categories and typical applications (Roberge 2012), along with references where MIC case studies of these materials can be found.

Since MIC most often results in localized corrosion (pitting), a metal's resistance to pitting is important to engineers and designers seeking to prevent MIC. One indicator of pitting resistance in SSs is the pitting resistance equivalent number (PREN), which is based on a calculation using the amount of chromium, molybdenum, and nitrogen present in an alloy (a brief calculation sketch is given below). PREN is used to compare the relative resistance of alloys to pitting corrosion in chloride-containing aqueous environments. Alloys with a PREN of 32 or greater are generally considered to be resistant to pitting corrosion in ambient-temperature seawater. A material's PREN value may also provide some level of insight in determining its relative resistance to MIC (Eckert and Amend 2017); however, care needs to be taken not to overinterpret this value (Craig 2020). A general review of the literature in which MIC is cited as the cause of corrosion will show that as PREN increases, the frequency of MIC case studies decreases. MIC is frequently reported for CSs and somewhat less frequently for SS. For duplex (DSS) and super-DSS SSs, nickel-based alloys, and titanium alloys, the incidence of reported MIC is fairly rare.

In laboratory studies using Desulfovibrio desulfuricans, 2205 DSS was reported (Antony et al. 2007) to experience etching, pitting, and crevice attack after 40 days of exposure in a chloride-containing medium. Another study (Machuca et al. 2012) of DSS in natural seawater showed crevice corrosion only occurred in samples that were electrochemically polarized.
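As an illustration of the PREN mentioned above, the sketch below assumes the commonly used form PREN = %Cr + 3.3 × %Mo + 16 × %N; other variants exist (e.g. a higher nitrogen factor is sometimes applied to duplex grades), and the nominal compositions used are approximate, so the output is indicative only.

```python
# Minimal sketch of a PREN calculation (commonly used form assumed;
# alloy compositions below are approximate, for illustration only).
def pren(cr: float, mo: float, n: float, n_factor: float = 16.0) -> float:
    """Pitting resistance equivalent number from wt% Cr, Mo, and N."""
    return cr + 3.3 * mo + n_factor * n

alloys = {
    "304 SS (approx.)":   (18.0, 0.0, 0.05),
    "316 SS (approx.)":   (17.0, 2.5, 0.05),
    "2205 DSS (approx.)": (22.0, 3.0, 0.17),
}

for name, (cr, mo, n) in alloys.items():
    value = pren(cr, mo, n)
    note = "above" if value >= 32 else "below"
    print(f"{name}: PREN ~ {value:.1f} ({note} the ~32 seawater guideline)")
```

Consistent with the literature trend noted above, the lower-PREN austenitic grades fall well below the ~32 guideline, while a 2205 duplex composition sits above it.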
Nickel-chromium-molybdenum alloys and titanium have not been reported as being susceptible to MIC under field conditions, at least based on the literature review performed here. There is, however, one exception to this in environments containing oxygen. In surface waters and sediments containing oxygen, several microorganisms can oxidize dissolved manganese to form enriched mineral-biopolymer deposits. Deposits of manganese oxides, when formed on SS and CRA, are highly cathodic and result in localized potential differences that can drive severe corrosion (Lewandowski and Hamilton 2002). These deposits can be thin and brittle, resulting in fine cracks in the scale that act as crevices where corrosion is driven by the large corrosion potentials (Ecorr) between manganese oxides and the exposed metal. Although the corrosion in this example is not directly caused by microorganisms, the mineral scales resulting from their activity cause localized corrosion by shifting Ecorr.

Copper-nickel and nickel-based alloys have been used successfully in flowing, aerated seawater service, although MIC has been reported in some cases, particularly where flow is stopped for extended periods of time (Javed et al. 2016a). One study (Little et al. 1990) discussed severe corrosion of copper-nickel (88.5% copper, 10% nickel, and 1.5% iron) piping after 1 year of service and of a nickel alloy (66.5% nickel, 31.5% copper, and 1.25% iron) after six months of service in stagnant estuarine water from the Gulf of Mexico. In both cases, localized corrosion was found under biofilms containing SRB.

The susceptibility of different materials to MIC has been investigated by many researchers under laboratory conditions. Javed et al. (2020) reviewed 26 papers where MIC pitting was claimed to have been observed in laboratory tests on SS alloys, including 304, 316, 2205, and other alloys. The work concluded that the pits that formed as a result of the dissolution of inclusions (during cleaning) were comparable in shape, size, and depth to the pits that have been reported (possibly incorrectly) in the literature as indications that MIC attack had taken place on SSs. In another study, Javed et al. (2016b) demonstrated that the chemical composition and microstructure of different grades of CS influenced initial bacterial attachment and subsequent corrosion in the presence of E. coli. The work showed that the number of attached bacterial cells was different for different grades of CS and decreased with increasing pearlite phase content of the CS.

Another topic worth noting is the potential for metallurgical features such as inclusion content and surface roughness to affect biofilm establishment and corrosion rates. One industry study (Blythe and Gauger 2000) of welded CS found that:

- No correlations were found between surface finish and the severity of MIC, or between colonization and the inclusion content and composition of the steels tested.
- Steels with lower inclusion content and fewer sulfide inclusions consistently showed lower corrosion rates in the testing, even though colonization was similar to other steels.
- Microorganisms did NOT preferentially attack MnS inclusions in the test.
- SRB were not required to cause MIC, although they increased the severity of the attack.
Other work, however, has indicated a link between the location of manganese sulfide inclusions in CS and localized pitting attack when samples were exposed to SRB (Avci et al. 2013, Avci et al. 2018).

While the use of CRA with a high resistance to localized pitting is a possible approach to help avoid MIC, it is not economical in most cases. As a result, most oil and gas operations rely on CS as the primary material of construction. Some advantage can be gained, however, by selectively applying CRA where the threat of MIC is highest. It is fairly well established, e.g., that areas of dead legs in piping are more susceptible to MIC than pipeline sections that normally experience flow. A number of schemes for assessing and ranking the threat of MIC have been published (Wolodko et al. 2018). The threat of MIC in dead legs can be managed by material selection in the design stage, retrofitting CS with CRA, or eliminating the environment that promotes MIC. Produced water, seawater, and fire water systems are also highly susceptible to MIC. Non-metallic components (i.e. epoxy composite piping, etc.) can be considered where pressures, stresses, and fire resistance requirements allow alternatives to metals. Aboveground piping for saltwater disposal systems, e.g., is sometimes constructed using fiber-reinforced plastic (FRP).

The application of CRA in equipment or piping that is highly susceptible to MIC can be made more economical by using CRA-clad CS or by limiting the use of CRA to only the most susceptible locations that cannot be temporarily isolated, cleaned, and chemically treated. Limiting the extent of MIC-susceptible equipment that cannot be cleaned, flushed, and treated is another way to reduce the need for CRA.

Internal coatings and linings of CS equipment are other approaches that can be used to avoid contact between the environment and the material, at least for a finite period, i.e. the life of the coating. The use of high-density polyethylene liners in short sections of piping may also be a viable alternative. Potential issues with internal coatings and linings are damage from heat or rapid depressurization, mechanical damage during operation or maintenance, the absence of coating on tie-in welds, and a lack of insight for inspection site selection due to the presence of the coating.

Welds and MIC

One of the well-documented failure modes for MIC is the rapid attack of weld regions, with widespread reports of through-thickness pitting on the timescale of months. There are many examples of such failures, which often manifest as small pinholes on the surface with a large cavity in the weld region underneath, e.g. (Kearns and Borenstein 1991, Borenstein 1991a, 1991b, Jenkins and Doman 1993, Kobrin et al. 1997, Borenstein and Lindsay 2002). While some early reports suggested that the associated surface morphology may have been unique to MIC and hence a way of diagnosing the failure cause, other work has shown that similar surface pitting can be observed for non-biological corrosion (Thomas and Chung 1999). Problems have been reported with welds of different metal types, including SS, CS, and aluminum (Walsh 1999a). A number of causes have been attributed to the accelerated corrosion of welds, including the associated microstructure (Walsh et al. 1993, Sreekumari et al. 2001) and composition (Walsh 1999b, Shi et al. 2020), while there is some debate about how or whether surface roughness might be involved (Walsh 1999a, Sreekumari et al. 2001, Amaya et al.
2002, Liduino et al. 2018). The microorganisms most associated with weld MIC are metal-oxidizing bacteria and SRB (Licina and Cubicciotti 1989, Ray et al. 2010, Liduino et al. 2018, Lee and Little 2019). There have been some reports that weld post-treatment, including annealing and avoiding/removing heat-tinted scale (e.g. gas shielding during welding and pickling), can help to reduce these problems (Stein 1991, Borenstein 1991a, Pytlewski et al. 2001, Davis 2006, Ehrnstén et al. 2019). While these measures may help avoid MIC problems, it is important to note that there can be some practical difficulties in implementation (Hurh et al. 1999, Ehrnstén et al. 2019).

Lastly, there are a number of important points to remember in relation to metals when performing laboratory studies of MIC. As discussed above, there are numerous factors that can affect the likelihood and extent of MIC for a particular metal type. These include (but are not limited to) surface finish, specific chemical composition, and microstructure. Researchers should be conscious of these factors, design tests accordingly, and provide detailed information on these aspects so that the tests can be compared appropriately and are repeatable. A list of examples of techniques that can be used to provide important information on metallurgically relevant properties related to MIC studies is provided in Table 5.

As discussed earlier, MLOE (microbiological, metallurgical, and media chemistry) are required to be able to distinguish between MIC and abiotic corrosion. There are no specific rules about which exact analysis methods need to be used, and the choice will likely depend upon what methods/instruments are available, costs, and any specific information needed that might be related to particular corrosion processes of interest. In general, the use of multiple techniques to analyze each of the microbiological, metallurgical, and chemistry aspects can be beneficial; however, care and skill are needed to ensure that each test type is performed and analyzed correctly. Finally, control tests should be considered as a baseline comparison where possible. For example, it is critical to perform the same tests at a site with similar environmental conditions but no signs of MIC as at the location where MIC is suspected.

Silos: overcoming barriers to interdisciplinarity in MIC studies

There are a number of barriers that make achieving true interdisciplinarity in MIC studies a challenge. As described earlier, each discipline typically exists in a relatively siloed environment where other disciplines are acknowledged but with whom regular dialog is relatively limited. Each discipline has its own unique language and worldview, which complicates translation between different disciplines. Even different sectors within a discipline may exist in silos, e.g. microbiology in human health vs. microbiology in industrial settings. For example, there has been very little translation of learnings from the biodeterioration of medical implants to microbial corrosion under non-medical conditions. Different disciplines and sectors also have different motivators and available resources that drive research and collaboration. On the industrial side, e.g., oilfield microbiological research around souring and corrosion has historically received much greater financial support than microbiological issues in, e.g., the pulp and paper sector.
A case study by Dubilier et al. (2015) discussed a global effort by scientists studying the Earth's microbiome, where after ten years of work it was found that most of the data collected from different labs were not comparable because of differences in the test platforms used, the PCR primers selected, reporting formats, etc. This demonstrates that even for high-priority projects with a great deal of potential to improve human health, there is a great challenge in getting all the various participants on the same page to achieve a successful conclusion. Ledford (2015) discussed one cause of the general lack of interdisciplinarity as being organizations' "underestimating the depth of commitment and personal relationships needed for a successful interdisciplinary project." It is likely that anyone who has experienced research projects that were run successfully and collaboratively can identify a core group of leaders in the project who promoted open technical exchange and worked well together as a team because of their personal commitment and the value they placed on relationships. Advancing interdisciplinary collaboration in the area of MIC will be essential to future progress in managing this integrity threat and increasing the sustainability of assets, particularly as used in renewable energy production.

Laboratory models for microbial corrosion studies

MIC has been studied for over a century, with an explosion of publications emerging in the past 20 years (Lekbach et al. 2021). As microorganisms are essentially everywhere, including in association with man-made infrastructure, MIC has been studied across many sectors that include marine systems (e.g. shipping and marine infrastructure), energy systems (e.g. oil and gas), and both domestic and industrial water and wastewater systems. As such, different models have been used for studying MIC and its potential threat to infrastructure (Fig. 2). It must be noted that the outcome of tests with such models will be influenced by multiple factors related to the test set-up and the microorganisms used; as indicated in several sections above, the microorganisms, metal types, chemical environments, and operating conditions will affect whether MIC occurs. The effects of experimental conditions have been discussed by a number of authors previously (e.g. Wade et al. 2017, Salgar-Chaparro et al. 2020a,b). The focus of this section, however, is to review how the choice of microorganism(s) used may influence MIC tests.

By far, most laboratory-based MIC studies have used pure cultures of microorganisms (Lekbach et al. 2021), but more and more studies are emerging wherein defined mixed cultures and complex field samples are also being studied to help ground-truth pure culture studies (Salgar-Chaparro et al. 2020a, Puentes-Cala et al. 2022, Sharma et al. 2022). Whether a pure culture, a defined mixed culture, or a complex model system is used to study MIC depends largely on the goals of the study. It must be emphasized that all approaches can yield valuable information but are also associated with limitations that should always be kept in mind when making conclusions about MIC.

Single species models

A list of ∼50 different pure microorganisms associated with metal corrosion (primarily using CS or SS) was recently tabulated (Lekbach et al.
2021), and while many more are likely to be identified, it gives an indication of the diversity of taxa (both aerobic and anaerobic) that can be involved in MIC. For example, under aerobic conditions, Pseudomonas sp. has been studied the most frequently, while under anaerobic conditions, strains of sulfate-reducing microorganisms such as Desulfovibrio sp. have been the most widely used (Lekbach et al. 2021). While the major limitation in using pure cultures to study MIC is that they are not necessarily reflective of, nor participants in, real-world corrosion scenarios, studying MIC using pure organisms allows for highly controlled studies to better understand the behaviors and mechanisms of MIC. For example, experimental systems of any type (e.g. using EC techniques, bioreactors, weight loss experiments, etc.) can be established in the presence and absence of the pure culture of interest, and differences in metabolic indicators (such as electron donors and acceptors), EC signals, corrosion products, surface analyses, etc. can be determined between the live and control incubations (e.g. Tsurumaru et al. 2018, Tang et al. 2019, Lekbach et al. 2021 and references therein).

As discussed earlier, obtaining MLOE even in pure culture MIC studies helps to provide the strongest case of whether microorganisms contributed to a corrosion scenario. Notably, pure culture studies also allow for the simplest interpretations of any MMM that may be used to track microbial metabolism in a corrosion case, such as through transcriptomic, proteomic, or metabolomic approaches, again compared to a non-corrosion scenario. These types of approaches can potentially help to elucidate a target gene, protein, or metabolite that may be indicative of MIC. For example, if specific genes are upregulated during a corrosion versus a non-corrosion scenario, the expression of these genes may be important for MIC to occur. Ultimately, creating mutants wherein these genes are deleted and corrosion no longer occurs is a strategy that might be used for linking specific genes/gene expression to MIC (Lekbach et al. 2021). For example, a gene deletion approach was used to help provide evidence that a corrosive methanogen (Methanococcus maripaludis strain OS7) uses an extracellular [NiFe] hydrogenase in MIC (Tsurumaru et al. 2018). Subsequently, a qPCR assay was developed to quantify this gene (micH), which could be detected in corrosive but not in non-corrosive biofilms established from oil field samples (Lahme et al. 2021). A gene deletion approach was also used to help pinpoint that Geobacter sulfurreducens could corrode Fe⁰ by using it as a sole electron donor (Tang et al. 2019), as well as to suggest that Shewanella oneidensis strain MR-1 can corrode CS both directly and through hydrogen-mediated electron transfer (Hernández-Santana et al.).

Defined mixed species

In the real world, microorganisms exist in most environments in the form of complex multispecies consortia. MIC can occur due to both planktonic and surface-attached microorganisms and their metabolic by-products. The compositions of the microbial consortia in a particular location will be affected by a variety of biotic and abiotic parameters (e.g. temperature and other physicochemical properties, nutrient supply, fluid mixing, etc.) (Fuhrman et al.
2015, Dang and Lovell 2016). In relation to the attached/biofilm versions of microbial consortia, it is generally acknowledged that the microorganisms attach and form a biofilm in a sequence and that the creation of a biofilm can offer overall benefits to the community, such as enhanced resistance to stress and disinfectants (Bridier et al. 2011, Schwering et al. 2013, Burmølle et al. 2014). The presence of different microbial species in a consortium can lead to interspecies cooperation where, e.g., certain species may provide nutrients or create habitats that are essential for other species. In relation to MIC, aerobic biofilm formers may attach early and create anaerobic niches that are suitable for anaerobic species (such as Desulfovibrio sp.) that have been implicated in accelerated corrosion. Multispecies models have been developed to simulate environments such as oral biofilms (Kommerein et al. 2018); however, there are many challenges involved, such as determining which species/characteristics should be included, the order of inoculation and nutrients, and other environmental conditions (e.g. flow and redox poising) (Foster and Kolenbrander 2004, Røder et al. 2016, Tan et al. 2017, Olsen et al. 2019). There has been some work performed on defined multispecies models for MIC studies, but aside from a few cases, it has typically been limited to combinations of two bacterial species (Phan et al. 2021, and references therein). This is an understudied area with potential for much future research to better understand the fundamental processes involved in MIC when more than one microorganism is present and to produce multi-species models that better simulate the rates and types of accelerated microbial corrosion observed in the field.

Real-world consortia

The final type of model system that can be used to study MIC is one that uses samples taken, or continuously sampled, from the field as the test medium or as the inoculant for the test system. This approach can provide conditions most closely representing the real world. An example of this is the work of Lee et al. (2004), where natural seawater was used as the test medium for an MIC study. Changing the test conditions in this example system (creating a stagnant anaerobic solution) resulted in increased numbers of SRB present and led to more aggressive corrosion. In another example, Marty et al. (2014) reported a corrosion test reactor system that utilized natural marine microbial consortia, was capable of simulating tidal changes, and was able to supply a continuous flow of natural seawater through the test system. Changes in test conditions (e.g.
providing an initial pulse of organic matter) were shown to lead to increases in localized corrosion rates, and the identification of bacterial populations similar to those found in accelerated low-water corrosion suggests that the system can be used to simulate real-world marine conditions. In another example, Wade and Blackall (2018) used samples of accelerated low-water corrosion products as the microbial inoculum in corrosion tests and varied the testing conditions. The results obtained showed how changing the specific test conditions (e.g. by adding nutrients) can affect both the magnitude of corrosion that takes place and the microbial community that develops. A key issue with these types of studies is that taking the microbial samples out of the field changes the environmental conditions and hence affects the test outcome in some way. Salgar-Chaparro et al. (2020b) showed, for tests using microbial consortia sampled from floating production storage and offloading facilities, that changes in supplied nutrients affected biofilm properties and subsequent corrosion. Studies using real-world consortia with minimal alteration have the least control over the specific microbial species present and suffer from increased difficulties in terms of reproducibility. Additional studies that minimally alter the conditions of the samples being tested (e.g. by avoiding nutrient additions or changing the water-to-solids/biofilm ratio) are also needed to help better understand MIC under realistic, real-world conditions in multiple environments (Wade et al. 2017).

Laboratory models are an integral part of the overall efforts to tackle the challenges associated with MIC. They can provide key information on critical aspects such as the fundamental processes and microorganisms involved and the performance of materials and mitigation methods, as well as a means for MIC diagnosis. Microbiologists are well placed to offer leadership and guidance on many facets of future MIC laboratory model development.

Field (meta)data collection and standardization

To date, there has been limited success in predicting MIC problems and evaluating potential mitigation strategies. Significantly more work will be required to achieve effective and tailored anti-MIC measures. An example of one of the key challenges that remains to be addressed and overcome is the lack of readily available key data repositories, i.e. field (meta)data collections relevant to industrial applications, such as biobanks of biofilm samples and MIC samples (e.g. materials with MIC, environmental, and metallurgical data). This requires the development of standard data collection and assessment protocols to ensure consistency and allow appropriate analysis and comparisons to be made. Such repositories are critical for increasing knowledge and promoting new advances in the field, which potentially may be enhanced by integrating artificial intelligence and machine learning techniques (Goodswen et al. 2021). Such tools are essential for modeling and predicting MIC scenarios, discovering MIC markers and biosensors, and developing standards.
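To make the repository idea more concrete, the following is a minimal, purely illustrative sketch of what a standardized field (meta)data record might look like, grouped roughly by the MLOE categories used in this review. The field names and example values are assumptions for illustration, not an existing schema or standard.

```python
# Illustrative sketch only: a hypothetical record structure for a shared MIC
# field-(meta)data repository, grouped by the MLOE categories discussed here.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MICFieldRecord:
    asset_id: str
    sample_location: str                                        # e.g. "6 o'clock position, dead leg"
    operations: Dict[str, str] = field(default_factory=dict)    # temperature, flow regime, chemical treatment
    chemistry: Dict[str, float] = field(default_factory=dict)   # pH, sulfate, nitrate, dissolved gases
    microbiology: Dict[str, str] = field(default_factory=dict)  # assays, preservation method, sequencing run IDs
    metallurgy: Dict[str, str] = field(default_factory=dict)    # alloy grade, weld/parent metal, pit morphology
    corrosion_evidence: List[str] = field(default_factory=list)

# hypothetical example entry
record = MICFieldRecord(
    asset_id="example-pipeline-01",
    sample_location="low spot downstream of separator",
    operations={"temperature_C": "60", "flow": "intermittent", "biocide": "THPS, batch dosed"},
    chemistry={"pH": 6.2, "sulfate_mg_L": 450.0, "nitrate_mg_L": 0.0},
    microbiology={"preservation": "frozen on site", "assays": "16S rRNA sequencing; SRB qPCR"},
    metallurgy={"material": "carbon steel, API 5L X52", "feature": "weld root"},
    corrosion_evidence=["localized pitting, ~1.2 mm max depth", "black FeS-rich deposit"],
)
print(record.asset_id, "->", len(record.corrosion_evidence), "lines of corrosion evidence")
```

Whatever the exact format, the point is that each record would carry enough context from all four evidence categories for samples collected by different operators and laboratories to be compared.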
In this context, developing standardization of measurement procedures, relevant protocols (e.g. sample preservation), validation tests, and methodologies is an essential step towards improved MIC mitigation. Standardization helps to ensure that MIC-related assessment tests are accurately cataloged, allowing them to become comparable or able to be correlated, thus leading to a more comprehensive understanding of MIC and MIC control strategies. Unfortunately, gaps in the field continue to delay the development of universal standards. For example, there is still a significant lack of translation of small-scale research laboratory experiments to the field scale. Likewise, there have been only limited efforts to develop well-validated models (physical and theoretical) that simulate the complex real-world conditions. These efforts are essential tools for standardization and for the development and assessment of mitigation solutions, which could save time and resources before the final validation stage. Further work is also required to develop standards relevant to or adopted by legislation or regulatory assessment (e.g. standards to assess the efficiency and effectiveness of biocidal mitigation strategies) that more closely match real-world conditions. Efforts have been made in specific fields to overcome this gap (Skovhus 2014, Silva et al. 2021), particularly with the introduction of MMM (Skovhus 2014). Even so, most MIC researchers use protocols or methodologies adapted from inaccessible or expensive organizational standards (e.g. ISO, ASTM, and NACE) to evaluate their approaches or technologies, whereas industry uses the available organizational standards or develops its own (Skovhus et al. 2017, Silva et al. 2019, Wade et al. 2023).

Corrosion management

MIC is regarded as a difficult-to-treat industrial "cancer" (World Corrosion Organization (WCO) Shenyang Declaration, 2019), resulting in severe economic losses and underestimated long-term environmental and societal impacts (Usher et al. 2014, Conley et al. 2016, Di Pippo et al. 2018, Jia et al. 2019, Stamps et al. 2020, Little et al. 2020b, Lou et al. 2021). It has undoubtedly become vital not only to understand the MIC phenomenon but also to know how to control it effectively. To date, a range of methodologies and technologies have been designed, developed, and implemented to control microbial activity and thus reduce the threat of MIC (Fig. 3). The characteristics of each system and field environment will dictate the selection of a specific countermeasure, whether based on removal and/or preventive strategies.

In industry, the corrosion control process typically consists of three primary activities: (1) identifying the relevant corrosion threats; (2) identifying preventive and mitigative measures to address those threats; and (3) monitoring the effectiveness of the response. The cycle of activities is continuous, with each of the three activities providing input to the subsequent activity. Information about a system's microbiology is typically needed in each of the three corrosion control activities, and MMM is increasingly being used to provide that information. However, corrosion engineers also need a way to correlate microbiological information with other relevant information, such as data from corrosion monitoring (e.g.
coupons, probes, and inspection), operating conditions (e.g. pressure, temperature, and fluid velocity), fluid composition and chemistry, mitigation measures, etc. Such an approach is consistent with the use of MLOE, as described earlier in this review. The following section briefly describes each of the three corrosion management activities that are employed to manage internal corrosion on various types of assets in different sectors.

Threat assessment

During the corrosion threat assessment stage, the potential for each plausible corrosion threat mechanism is evaluated. Corrosion engineers typically review data about the asset design and overall process, operation, chemical treatment, corrosion monitoring data, and leak/failure history data to help identify corrosion threats. The potential damage rate of some threats, such as corrosion caused by acid gases, can be estimated using mathematical models; however, there are presently no widely accepted corrosion rate models for MIC since microorganisms can influence corrosion in many different ways. In assessing the potential for MIC, the corrosion engineer typically looks for a relationship between the microbiological and chemical conditions and any observed corrosion information. Data produced using MMM are used in this step to characterize baseline microbiological conditions in the asset and to look for associations between biofilm community distribution, chemical composition, and the frequency, distribution, and severity of localized corrosion. The threat assessment may also seek to relate biofilm and corrosion characteristics to operating conditions, such as changes in flow (e.g. periods of no flow), temperature, or fluid composition (e.g. increases in nutrients or electron acceptors). Significant operating condition changes may affect the initiation and/or propagation of MIC. A number of investigators, such as Skovhus et al. (2010), Eckert et al. (2012), and Larsen and Hilbert (2014), have demonstrated the utility of MMM in forensic corrosion investigation, where methods such as next-generation sequencing and metagenomics could provide insights.

Mitigation and prevention

Based on the threat assessment, the preventive and mitigative measures needed to manage the applicable corrosion threats are selected. Options for internal corrosion mitigation in pipelines include the use of biocides, corrosion inhibitors, or oxygen scavengers; velocity control; mechanical cleaning (e.g. pigging or flushing); ultraviolet radiation; fluid process vessels (e.g. filters, separators, etc.); or control of fluid quality (or sources) to the extent possible. Larsen et al. (2010) demonstrated how MMM were beneficial for evaluating the effectiveness of new chemical treatments when corrosion incidence rate and severity are linked with observations about the types, numbers, and activities of microorganisms after the treatment is applied. One of the most significant challenges to this process is the collection of biofilm samples from the asset being treated and the processing and analysis of the samples in a timely manner so that genetic information is not lost. Another significant challenge is the current lack of standards to assess the efficiency and effectiveness of MIC mitigation strategies based on microorganisms in biofilms, or of MIC diagnosis and monitoring methodologies, as the conditions promoting MIC may be quite different from system to system.
In terms of MIC mitigation, most conventional strategies comprise physical and/or chemical methods. Mechanical removal or cleaning of surfaces is the most straightforward physical approach, comprising any method able to remove the biofilm attached to a surface, including those using mechanical forces (e.g. pigging, flushing, ultrasonic treatment). However, this is not the optimal approach for MIC control, as it does not prevent further biofilm formation, demanding costly ongoing maintenance and retrofitting measures. For example, once a surface is in contact with seawater, a biofilm can form in minutes and progress to macrofouling in just a few days, which would require frequent maintenance, rendering it an unsustainable mitigation strategy (Omar et al. 2021, Silva et al. 2021, Yazdi et al. 2022).

The most effective countermeasures currently adopted to control biofilm development and minimize MIC on industrial surfaces rely on a chemical strategy that comprises the direct or controlled release of biocides onto the contaminated surface. Their use is promoted on the basis that disinfection, or killing microbial cells, will solve the problem. However, inefficient cleaning of organic matter remaining on the surface and inadequate monitoring strategies, allied to a lack of skilled MIC professionals, can actually promote an increase in MIC problems. Thus, chemical strategies are generally integrated with other methods, such as protective polymeric coatings, cathodic protection (CP), UV irradiation, mechanical cleaning, or ultrasonic treatment. Among those, antifouling coatings containing active agents, i.e. biocides and corrosion inhibitors, are one of the most well-established preventive measures (Abdolahi et al. 2014, Cai et al. 2020a, Chen et al. 2022, Lamin et al. 2022, Wen and Li 2022). A significant disadvantage of these coatings, however, is the continuous release of toxic and persistent chemicals, resulting in shorter protection periods and potential ecological problems (Rosenberg et al. 2019).

Other greener or less toxic alternatives with enhanced effects have also emerged. These range from coating strategies based on the development of polymer structures to create or improve properties such as hydrophilicity, amphiphilicity, surface topography, or non-biocide-release mechanisms, and/or the incorporation of bioactive nanoparticles to generate nanocomposite coatings (Selim et al. 2020; Gu et al. 2020; Kumar et al. 2021; Sousa-Cardoso et al. 2022), to the search for nature-inspired biomimetic and synthetic agents and natural bioactive compounds or extracts (e.g. metabolites from marine organisms, molecules of microbial origin, plants) (Vilas-Boas et al. 2021; Lavanya 2021). However, the full exploitation of these greener agents is limited by long synthesis processes, low yields, the scarce availability of some natural sources, the lack of proof of concept in real-world conditions, the absence of an environmental impact assessment, as well as the need for significant funding and time for approval by regulatory agencies (Qian et al. 2009, Brinch et al. 2016, Pai et al. 2022).
Another commonly discussed method for MIC mitigation, used for a range of structures such as buried and submerged pipelines, storage tanks, and sheet piling, is the application of CP (Wilson and Jack 2017, Ackland and Dylejko 2019, Angst 2019). This technique involves the application of a direct current (via a galvanic or impressed current system) to lower and maintain the potential of the metal sufficiently negative with respect to the environment. CP is a well-known and widely applied method for abiotic corrosion, and it is often discussed that a further lowering of the protection potential from that used for abiotic corrosion may provide protection against MIC. While field tests and anecdotal reports indicate CP may be capable of preventing accelerated corrosion due to microorganisms, the conclusions of laboratory studies are much less certain, and there is room for much more work on this topic to understand the mechanisms involved and how to optimize its use for avoiding/minimizing MIC (Thompson et al. 2022).

MIC mitigation challenges

Similar to MIC research in general, mitigation strategy development is also greatly affected, from the initial design stage to the final implementation, by the siloed nature of this field. Figure 4 summarizes and highlights some of the most important challenges that need to be overcome in order to allow successful MIC mitigation strategy development and implementation. To achieve this, the following key questions need to be answered: (i) What are the challenges/knowledge gaps to control MIC? (ii) What are the current needs for the development/implementation of anti-MIC strategies? (iii) What tests and metrics are appropriate to evaluate the effectiveness of an anti-MIC strategy?

Understanding the biofilm community interactions with the environment and surfaces

Biofilm formation and subsequent MIC are driven by environmental conditions, either natural or under industrial operating conditions, involving ecological and engineering factors. Understanding and identifying the role of microorganisms in MIC is a big challenge, as the composition of the biofilm matrix and its dynamic structure will vary depending on those conditions and on contact with different infrastructure materials, resulting in metabolic adaptations in response to long-term survival under external stress conditions (Jia et al. 2019). Cells incorporated within a biofilm, e.g., show a high tolerance to treatment compared to planktonic cells. In an extreme scenario, this can result in an increase in resistance to antimicrobial agents of 1000 times (Mah et al. 2003). It is also recognized that the response of the biofilm community to environmental conditions cannot be predicted by studying free-living bacteria or single-species biofilms alone (Flemming et al. 2016).

For the initial step of designing a mitigation strategy, such studies are nevertheless useful as a screening task, but multi-species studies are even more critical to improve the design. This fundamental understanding of the complex properties of biofilm communities and their interaction/development with different environments, including biota and condition fluctuations from static and quasi-static to dynamic flow conditions (Toyofuku et al.
2016), remains limited, and further advances in the design of effective mitigation countermeasures are desired. This progress is hampered further by a lack of understanding of surface-biofilm interactions and their heterogeneity, which promote localized gradients and microenvironments across the surface (Ren et al. 2018) and may involve multiple microbial mechanisms. Understanding how surface and biofilm structures and their physicochemical properties interact, e.g. which factors contribute to biofilm structure and composition and how the multi-species system interacts, is critical for developing a better strategy against MIC and for avoiding the implementation of mitigation actions when they are not needed (Skovhus et al. 2022).

Limited and fragmented knowledge on mitigation strategies

Problems due to the resistance of biofilms to treatment can hamper the effectiveness of mitigation strategies, particularly those involving the release of active agents such as corrosion inhibitors and bioactive agents (e.g. biocides and biocide-release coatings). Biofilm resistance is related to the complex three-dimensional functional structure of biofilms, which limits the penetration of bioactive agents and prevents them from interacting with other cells, particularly for mature biofilms (Bas et al. 2017, Merchel Piovesan Pereira et al. 2021). This is complicated by the complex processes by which bioactive agents interact with biofilms, involving biological and physicochemical factors, and the exact degree, frequency, and mechanisms that give rise to resistance are still unclear.

Bioactive agents are commonly selected based on the following criteria: the spectrum of action/efficacy, toxicity, biodegradability, cost-effectiveness, environmental safety, and compatibility with the system, i.e. allowing the maintenance of fluids and materials under operational conditions. Furthermore, the mode of action depends on the type and dose of bioactive agent used (Sharma et al. 2018, Capita et al. 2019). However, their long-term use can promote the resistance of microorganisms, leading to an ineffective inhibition effect.

The use of corrosion inhibitors is another simple and potentially efficient mitigation strategy. A diverse range of chemical molecules acting as corrosion inhibitors has been exploited, mainly including surfactants and heterocyclic organic compounds containing electron-rich heteroatoms (N, O, and S) or groups with π-shared electrons (Feng et al. 2022). These primarily inhibit corrosion by adsorbing on metal surfaces through physical adsorption (van der Waals force adsorption) or chemisorption (chemical bonding), creating a physical or chemical barrier between the surface and the corrosive media, hence inhibiting cell adhesion and subsequent biofilm formation (Migahed and Al-Sabagh 2009, Kokalj 2022, Ma et al. 2022).

Combining corrosion inhibitors with bioactive agents has also been a common strategy to find synergistic effects (Greene et al. 2006, Pinnock et al. 2018, Anandkumar et al. 2023). However, it can sometimes lead to interferences affecting the agents' performance, such as chemical incompatibility (e.g. chemical and physical interactions, pH range of action) and competitive function (e.g. adsorption to the same metal sites), reducing their primary function and resulting in inadequate control of MIC (Maruthamuthu et al. 2000, Xiong et al. 2015, Rahmani et al.
2016). To avoid interferences, these agents need to be carefully selected, considering not only standard criteria like the ability to oxidize the metal, the presence of a particular functional group, the capacity to cover a wide area, cost-effectiveness, solubility, and environmental safety (Lavanya 2021), but also the conditions present throughout the entire system.

Certain corrosion inhibitors can also interact with biofilms and impair their structure and functionality. For example, positively charged heterocyclic quaternary ammonium salt surfactants can selectively adsorb on the negatively charged SRB biofilm surface and penetrate the cell membrane, disrupting its selective permeability and genetic system, thus leading to the inhibition of SRB activity or to cell death (Feng et al. 2022). This shows the ability of synthetic corrosion inhibitors to also provide antimicrobial effects. This multifunctional ability has been reported particularly for cationic surfactants, including Gemini and poly(quaternary ammonium) salt surfactants (Badawi et al. 2010, Labena et al. 2020, Feng et al. 2022). The effectiveness of these mechanisms, however, depends on the specific system's conditions and the microorganisms involved. In some cases, they may even become ineffective (Dariva and Galio 2014, Mand and Enning 2021) or act as a source of nutrients for bacterial growth (Edwards and McNeill 2002, Fang et al. 2009). Therefore, it is crucial to fill the knowledge gap regarding the mechanisms of action of corrosion inhibitors and bioactive agents, as well as their effects on the development and resistance of biofilms (Bridier et al. 2011, Bas et al. 2017, Kimbell et al. 2020, Tuck et al. 2022). Despite progress over recent years, knowledge is still scarce and fragmented (Araújo et al. 2014, Huang et al. 2020, Silva et al. 2021, Lima et al. 2022).

Furthermore, similarly to bioactive agents, the long-term use and toxic characteristics of synthetic corrosion inhibitors call for more work on sustainable and environmentally friendly agents derived primarily from natural sources (Lavanya 2021, Verma et al. 2021, Al Jahdaly et al. 2022, Fazal et al. 2022, Wang et al. 2023a,b).

The increasing discovery of greener and natural agents, including corrosion inhibitors and bioactive agents, with new chemical structures and functionalities is likely to uncover additional modes of action (Lavanya 2021, Barba-Ostria et al. 2022). This is likely to further improve our understanding of the mode of action of bioactive agents and how they interact with biofilms, which is essential for developing more effective mitigation strategies. Artificial intelligence has been proposed as a potential tool to accelerate the identification of targets for novel active agents (Paul et al. 2021).
The growing cross-sectoral awareness of the economic importance of microbial biofilms has helped to accelerate the development of mitigation approaches and technologies, as well as our understanding of biofilm-bioactive agent interactions. However, some sectors possess more advanced knowledge than others. For example, the increasing problem of antibiotic resistance is well known in the healthcare sector, while the marine sector has been applying anti-biofouling approaches for some time. Regrettably, anti-biofouling, anti-MIC, and anti-corrosion approaches are rarely related, although a few recent publications have started to emphasize their similarities (Li and Ning 2019). This lack of sectoral and multidisciplinary knowledge sharing undoubtedly limits the potential for advances in mitigation strategies.

Finally, it is worth noting that the complete eradication of biofilms in most industrial situations is highly unlikely, as microorganisms will always be present. Thus, the economics and effort required to meet such a stringent target need to be carefully questioned. A more realistic goal is to learn how to coexist with and manage the presence of biofilms, minimizing unwanted interferences in the most efficient, benign, and long-term manner possible. Furthermore, MIC management extends beyond single solutions. Rather, integrated approaches should be used, leveraging multidisciplinary teams and cross-sectoral knowledge sharing.

Monitoring

The third core activity in the corrosion control process is measuring the system performance and the effectiveness of the methods used to reduce the likelihood and/or severity of corrosion. Various corrosion monitoring techniques and inspection methods can provide information about the rate of metal loss due to corrosion (Bardal 2004, Dawson et al. 2010, NACE 2012); however, many of these methods do not identify the mechanism of the corrosion or the effects of mitigation activities on the cause of the corrosion, i.e. biofilms in the case of MIC. Again, the MLOE approach is useful for evaluating the effectiveness of mitigation measures and optimizing these measures as system operating conditions and corrosion threats change over time (see Fig. 1). For MIC mitigation, monitoring measures ideally need to be able to identify both changes in corrosion rates and microbiological changes in associated biofilms (Fig. 5).
Corrosion engineers typically attempt to integrate MLOE, operating data, corrosion monitoring data, chemical/microbiological fluid and deposit analysis results, in-line inspection (ILI) and other inspection data, and flow/corrosion rate model outputs to ascertain the short-term and long-term effectiveness of mitigation measures. Short-term effectiveness (i.e. over hours, days, or weeks) may be evaluated through different parameters or measurements than those used to monitor long-term (i.e. monthly or annual) effectiveness. For example, short-term effectiveness monitoring could focus on controlling microbial populations in biofilms, whereas long-term effectiveness monitoring would focus more on controlling the corrosion damage that results from those biofilms. One significant MIC management challenge faced by industry is the lack of established, widely accepted processes (or standards) for integrating MLOE into monitoring programs. Often, microbiological data are incorporated into decision-making by inferring the activities and roles of microorganisms in corrosion mechanisms, and mitigation measures are adjusted based on empirical observations.

A key challenge in providing timely and effective anti-MIC measures is the establishment of early biofilm-specific detection systems suitable for in-situ and point-of-use industrial contexts (Xu et al. 2020). This could potentially include surface monitoring, regular chemical and microbiological analyses, and the use of probes, sensors, and MIC markers. Early prediction is critical during the initial and validation stages of mitigation strategy development (Fig. 4), as it allows the identification of specific locations with MIC threats as well as the evaluation, tailoring, and implementation of appropriate anti-MIC strategies.

Finally, one of the major problems with MIC prediction capacity is that the entire contextual story is not always reported or considered. For example, materials engineers (e.g. metallurgists) may tend to ignore biological data, whereas microbiologists may not appropriately consider materials/metallurgical aspects. Collecting relevant data covering all aspects (media conditions, fluids, microorganisms, and materials) requires multidisciplinary teams composed of operators, microbiologists, corrosion engineers, chemists, and materials engineers. Furthermore, for some, there has been a major stigma associated with revealing MIC cases. Hence, data on MIC failures in industry are largely inaccessible to the broader R&D community, while MIC mitigation business agents often protect customers through commercial confidentiality agreements, significantly limiting the availability of cases and the transfer of potential mitigation technologies between industry and academia. The development of forums for sharing MIC case histories with an appropriate level of detail, so as to protect the identities of those providing the information, is one way to improve knowledge sharing.
EC techniques used to study MIC

As noted earlier in this review, MIC is a multi-disciplinary field that requires expertise encompassing significantly different fields of study. The corrosion of metals (including MIC) is inherently an EC process in which one or more chemical species undergo changes in oxidation state. Numerous EC techniques have been developed to mechanistically study fundamental corrosion mechanisms in the laboratory, in addition to monitoring corrosion behavior in field conditions. As most of these methods are outside the scope of expertise of many microbiologists, we decided to dedicate a separate section to providing some basic background information about EC methods and the main techniques used in MIC assessment. Nonetheless, the authors highly encourage the collaboration of microbiologists with subject matter experts in corrosion and electrochemistry. All EC techniques have limitations in application and interpretation; thus, the choice of technique and the interpretation of results need to be carefully weighed.

In the following sections, we aim to provide a more detailed overview of specific EC techniques that are beneficial during MIC studies. The EC techniques have been grouped based on the amount of external signal (e.g. applied potential or current) required during measurement. In general, the larger the external signal, the more information about the system can be obtained (Fig. 6); however, applying a larger signal can also result in alterations to the attached biofilm or to the substrate surface chemistry. The traditional EC cell is a three-electrode system containing: (1) the metal of interest (working), (2) a stable (i.e. non-polarizable) electrode (reference), and (3) a corrosion-resistant metal used to complete the electrical circuit for external signal application (counter). Modifications to the number and types of electrodes depend on the EC technique being applied. The following is not an exhaustive review of EC techniques, but rather covers those most commonly used in the design and monitoring of MIC experiments.

Techniques requiring no external signal

The following techniques do not apply current or potential signals to the working electrode. These techniques are used to monitor corrosion behavior, but overinterpretation of the measurements is cautioned against.

Corrosion potential, E_corr

The simplest EC technique is the potential measurement across a two-electrode system immersed in an electrolyte, where one electrode is the material of interest and the other is a stable reference electrode. The potential is measured across a high-impedance voltmeter that prevents current flow between the electrodes. This potential is called E_corr but is also referred to as the open circuit potential. Regardless of nomenclature, E_corr measurement is a passive monitoring method that does not disturb an attached biofilm. Passive metals such as titanium and gold exhibit higher E_corr compared to more active metals such as zinc and aluminum. Biofilms can also affect E_corr and make the interpretation of results difficult (Little and Wagner 2001). The most common use of E_corr measurements has been the study of potential ennoblement. Ennoblement is the increase (i.e. shift to more electropositive values) of E_corr due to the formation of a biofilm on a metal surface (Little et al. 2013).
Ennoblement of passive alloys exposed in marine environments due to biofilm formation has been extensively documented (Mollica and Trevis 1976, Johnsen and Bardal 1985, Scotto et al. 1985). Theoretically, E_corr ennoblement should increase the probability of pitting and crevice corrosion initiation and propagation for those passive alloys whose E_corr lies within a few hundred millivolts of the pitting potential (E_pit) (Fig. 6). Little et al. (2008) reviewed mechanistic interpretations of ennoblement in marine waters. Ennoblement has also been shown to occur in fresh and estuarine waters through microbial manganese oxidation and deposition on the metal surface (Dickinson and Lewandowski 1996, Dickinson et al. 1996, Dexter et al. 2003). While the ennoblement phenomenon has been observed throughout the world under different water conditions, a unifying mechanistic explanation for all observations does not exist. The main drawback of E_corr measurement is the inability to determine whether an ennobled E_corr (or other changes in E_corr) is due to thermodynamic effects, kinetic effects, or both. In addition, E_corr measurement alone cannot be used to determine changes in corrosion rates over time. Unfortunately, however, overinterpretation of E_corr with respect to corrosion rates is commonly found throughout the literature.

Dual cell technique

The dual cell uses two similar EC cells that are separated by a semipermeable membrane. Each cell contains the same electrolyte and nominally similar working electrodes. The two working electrodes are connected electrically to a zero resistance ammeter (ZRA), and the semipermeable membrane provides ionic conduction to complete the circuit. One cell is maintained under sterile conditions. Microorganisms are added to the other, and the sign and magnitude of the resulting current through the ZRA are monitored to determine the details of the corrosive action of the bacteria. The dual cell technique does not provide a means to calculate corrosion rates, but rather reveals changes due to the presence of a biofilm. Dexter and LaFontaine (1998) used a dual cell configuration to monitor the corrosion of copper, steel, 3003 aluminum, and zinc samples coupled to panels of highly alloyed SS. Natural marine microbial biofilms were allowed to form on the SS surface. In the control tests, the action of the biofilm was prevented. Corrosion of the copper, steel, and aluminum anodes was significantly higher when connected to cathodes on which biofilms were allowed to grow naturally.
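Readers who want a feel for how dual-cell (ZRA) data are reduced may find a small numerical sketch helpful. The example below is illustrative only and is not taken from any of the studies cited above: it assumes a ZRA current trace sampled at fixed intervals, sums it to a total charge, and converts that charge to an equivalent mass loss via Faraday's law, assuming iron dissolving as Fe2+. The function name and the synthetic traces are hypothetical.

```python
import numpy as np

def mass_loss_from_zra(current_a, dt_s, molar_mass_g=55.85, n_electrons=2):
    """Integrate a ZRA current trace (amperes, sampled every dt_s seconds)
    to a total charge, then convert to metal mass loss via Faraday's law.
    Defaults assume iron dissolving as Fe2+; adjust for other metals."""
    faraday = 96485.0                                # C per mol of electrons
    charge_c = float(np.sum(current_a)) * dt_s
    mass_g = charge_c * molar_mass_g / (n_electrons * faraday)
    return charge_c, mass_g

# Synthetic 24 h traces for the inoculated cell and the sterile control,
# sampled once per minute.
t = np.arange(0, 24 * 3600, 60)
inoculated = 2.0e-6 + 5.0e-7 * np.sin(t / 5000.0)    # A
sterile = 5.0e-7 * np.ones_like(t)
for label, trace in (("inoculated", inoculated), ("sterile", sterile)):
    q, m = mass_loss_from_zra(trace, dt_s=60.0)
    print(f"{label}: charge {q:.2f} C, estimated mass loss {m * 1e3:.2f} mg")
```

As the text notes, such a comparison indicates the additional anodic activity attributable to the biofilm rather than an absolute corrosion rate.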
Electrochemical noise analysis (ENA)

EC noise has conventionally been applied to two electrodes of the same material. ENA data can be obtained with an applied signal (i.e. fluctuations of potential at an applied current, or vice versa). In addition, ENA can also be operated with no applied signal, where small fluctuations of E_corr are recorded as a function of time. For MIC studies, the no-signal mode provides a monitoring technique with a clear advantage over the applied-signal mode, which may influence biofilm properties. Under controlled laboratory conditions, it is possible to measure potential and current fluctuations simultaneously. Simultaneous collection of potential and current data allows analysis in the time and frequency domains. There are numerous parameters that can be determined through data analysis, with the EC noise resistance (R_n) being the most commonly interpreted. To this day, the interpretation of R_n as a quantifiable corrosion rate is debated. Bertocci et al. (1997a,b) described methods for data analysis. Little et al. (1999) showed an example of using ENA to examine the influence of marine bacteria on localized corrosion of a coated steel. Samples with intentional defects in the coatings exposing bare metal were immersed in artificial and natural seawater, with and without attached zinc coupons providing sacrificial CP to the exposed areas. R_n increased with time for all cathodically protected samples due to the formation of calcareous deposits in the defects. Surface analysis showed that very few bacteria were present in the defects of the cathodically protected samples, while large amounts of bacteria were found in the rust layers of the freely corroding samples.

Techniques requiring a small external signal

The following techniques require an external signal to be applied to the working electrode. There are, however, no standards regarding the magnitude of the applied signal, which is most commonly an applied potential. The majority of applications of these techniques in the literature apply between +/-5 and 10 mV to the working electrode.

Polarization resistance technique

The polarization resistance (R_p) technique is a direct current (i.e. no frequency dependence) method that can be used to continuously monitor the instantaneous corrosion rate of a metal, as detailed in ASTM G59-97 (2014) and reviewed by Scully (2000). Mansfeld (1976) described the use of the R_p technique for the measurement of corrosion currents. A simplification of the R_p technique is the linear polarization technique, in which it is assumed that the relationship between E and i is linear (i.e. the resistance is a scalar value) in a narrow range (+/-5 mV) around E_corr. The potential is scanned from -5 mV vs. E_corr to +5 mV vs. E_corr at specified intervals (e.g. 1 mV) and scan rate (e.g. 1 mV/s). The selection of these measurement parameters depends on the electrolyte/metal system being examined (Mansfeld 1976, Scully 2000). The slope of the curve provides R_p. The inverse of R_p (R_p^-1) is proportional to the corrosion rate i_corr. This approach is used in field tests and forms the basis of commercial corrosion rate monitors (ASTM G96-90, 2018).
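As a rough illustration of the calculation chain just described (a linear fit around E_corr followed by the Stern-Geary relation), the following sketch estimates a corrosion rate from polarization data. It is a minimal example, not a replacement for ASTM G59/G102; the Stern-Geary constant B = 26 mV, the equivalent weight and density (carbon steel values), and the function and variable names are assumptions made for illustration.

```python
import numpy as np

def polarization_resistance(e_v, i_a_cm2, e_corr_v, window_v=0.005):
    """Fit the slope dE/di (ohm*cm^2) using only data within +/- window_v
    of E_corr, where the E-i relationship is approximately linear."""
    mask = np.abs(e_v - e_corr_v) <= window_v
    slope, _intercept = np.polyfit(i_a_cm2[mask], e_v[mask], 1)
    return slope                                     # R_p in ohm*cm^2

def corrosion_rate_mm_per_year(r_p_ohm_cm2, b_v=0.026,
                               equiv_weight=27.92, density_g_cm3=7.87):
    """Stern-Geary: i_corr = B / R_p, then convert the current density to a
    uniform penetration rate (ASTM G102-style constant, assumed values)."""
    i_corr_ua_cm2 = (b_v / r_p_ohm_cm2) * 1e6        # A/cm^2 -> uA/cm^2
    return 3.27e-3 * i_corr_ua_cm2 * equiv_weight / density_g_cm3

# Example: R_p = 5000 ohm*cm^2 gives roughly 0.06 mm/year for these values.
print(corrosion_rate_mm_per_year(5000.0))
```

As the section cautions, this conversion assumes uniform, charge-transfer-controlled corrosion and should not be read as quantitative for localized attack or for the diffusion-controlled conditions often found in MIC.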
Applications of R_p techniques have been reported by King et al. (1986) in a study of the corrosion behavior of iron pipes in environments containing SRB. In a similar study, Kasahara and Kajiyama (1986) used R_p measurements with a compensation for R_s (the solution resistance) and reported results for active and inactive SRB. Lee et al. (2004) used linear polarization measurements to demonstrate that corrosion of CS was more aggressive in stagnant anaerobic seawater than in stagnant aerobic seawater over a 396-day exposure. In general, instantaneous corrosion rates for the anaerobic condition were two orders of magnitude higher than for the aerobic condition.

Significant errors in the calculation of corrosion rates can occur for electrolytes of low conductivity or for systems with very high corrosion rates (low R_p) if a correction for R_s is not applied. Corrosion rates will be underestimated in these cases. Additional problems can arise from the effects of the sweep rate used to determine R_p according to equation (1). If the sweep rate is too high, the experimental value of R_p will be too low, and the calculated corrosion rate will be too high. For localized corrosion, experimental R_p data should be used only as a qualitative indication that rapid corrosion is occurring. Large fluctuations of R_p with time are often observed for systems undergoing pitting or crevice corrosion. R_p data are meaningful for general or uniform corrosion but less so for localized corrosion, including MIC. Additionally, the use of Stern-Geary theory, where the corrosion rate is inversely proportional to R_p at potentials close to E_corr, is valid for conditions controlled by electron transfer but not for diffusion-controlled systems, as frequently found in MIC. R_p and E_corr techniques are often performed simultaneously during monitoring, as the two techniques provide complementary EC data. Measurement of the time-dependent R_p/E_corr trend is one of the most commonly used corrosion monitoring techniques in field conditions.

Electrochemical impedance spectroscopy (EIS)

EIS techniques record impedance data as a function of the frequency of an applied signal at a fixed potential. For comparison, EIS is an AC (alternating current) frequency-dependent technique, whereas the polarization technique described above is a DC (direct current) method. A large frequency range (mHz to kHz) must be investigated to obtain a complete impedance spectrum. Dowling et al. (1988) and Franklin et al. (1991) demonstrated that the small signals required for EIS do not adversely affect the numbers, viability, and activity of microorganisms within a biofilm. EIS data may be used to determine R_p, the inverse of which is proportional to the corrosion rate. EIS is commonly used for steady-state conditions (uniform corrosion); however, sophisticated models have been developed for localized corrosion (Mansfeld et al. 1982, Kendig et al. 1983).

Several reports have been published in which EIS has been used to study the role of SRB in the corrosion of buried pipes (Kasahara and Kajiyama 1986, King et al. 1986, 1991). The formation of biofilms and calcareous deposits on three SSs and titanium during exposure to natural seawater was monitored using EIS and surface analysis (Mansfeld et al. 1990). Dowling et al. (1988) used EIS to study the corrosion behavior of CSs affected by bacteria and attempted to determine R_p from the EIS data. EIS is also useful for studying the MIC of metals with protective coatings.
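To make the frequency dependence mentioned above concrete, the short sketch below evaluates the impedance of a simple Randles-type equivalent circuit (solution resistance in series with a polarization resistance and double-layer capacitance in parallel). The circuit choice and the parameter values are illustrative assumptions, not a model of any particular MIC system; real spectra usually require more elaborate equivalent circuits.

```python
import numpy as np

def randles_impedance(freq_hz, r_s=20.0, r_p=5.0e3, c_dl=40e-6):
    """Impedance (ohm*cm^2) of R_s in series with R_p parallel to C_dl."""
    omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    return r_s + r_p / (1.0 + 1j * omega * r_p * c_dl)

freqs = np.logspace(-3, 4, 50)          # 1 mHz to 10 kHz
z = randles_impedance(freqs)
# The low-frequency limit of |Z| approaches R_s + R_p and the high-frequency
# limit approaches R_s, so their difference recovers R_p (about 5000 here).
print(round(abs(z[0]) - abs(z[-1])))
```

Fitting measured spectra to an equivalent circuit of this kind is one common way of extracting R_p from EIS data in practice.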
Jones-Meehan et al. (1991) used EIS to determine the effects of several mixed microbiological communities on the protective properties of epoxy top coatings over zinc-primed steel. Spectra for the control remained capacitive, indicating intact coatings, while spectra for five of six samples exposed to mixed cultures of bacteria indicated corrosion and delamination.

While EIS can provide useful information for MIC studies, it potentially requires a greater level of understanding than some of the other EC methods in order to interpret the results correctly. One of the key issues is the determination of the equivalent electrical circuits used for modeling the solid/electrolyte interface.

Medium and large signal polarization

Medium and large signal polarization techniques require potential scans ranging from several tens of mV to several V (see Fig. 6), with the exact range depending on the information that the test aims to obtain. An external signal is applied to obtain potentiostatic or potentiodynamic polarization curves as well as pitting scans. Medium signal polarization is often capped at +/-200 mV vs. E_corr. Medium signal polarization curves can be used to determine i_corr by Tafel extrapolation, as well as to examine specific electron transfer reactions. With large signal polarization, mass transport-related phenomena can be evaluated based on the limiting current density (i_L). For metal/electrolyte systems in which an active-passive transition occurs, the passive properties can be evaluated based on the passive current density (i_pass). Pitting scans are used to determine the pitting potential (E_pit).

A disadvantage of large-signal polarization is its destructive nature, i.e. the irreversible changes of surface properties caused by the application of large anodic or cathodic potentials. The choice of scan rate is important in MIC studies to reduce effects on biofilm structure and character: the faster the scan rate, the smaller the impact on microbial activities. Recording polarization curves provides an overview of the reactions of a given corrosion system, e.g. charge transfer or diffusion-controlled reactions, passivity, transpassivity, and localized corrosion phenomena. Because of the irreversible changes to biological and chemical surface characteristics, large-signal polarization experiments should include enough samples for separate anodic and cathodic polarization scans, i.e. large-signal polarization should not be applied to one sample through the full cathodic and anodic potential ranges. Due to the destructive nature of large-signal polarization (e.g. inducing pitting), care also needs to be taken not to confuse such changes to the surface of a test sample with corrosion that took place prior to the EC testing.

Numerous investigators have used polarization curves to determine the effects of microorganisms on the EC properties of metal surfaces and the resulting corrosion behavior. In most of these studies, comparisons have been made between polarization curves obtained in sterile media and those obtained in the presence of bacteria and fungi.
Keresztes et al. (1998) used measurements of E_corr, R_p, and potentiostatic polarization to obtain corrosion rates. Culture media containing sulfide of both biogenic and chemical origin were used to determine the effects of metal-sulfide layers. Biocides were used to inhibit bacterial metabolic activity. An atomic force microscope was used to image the topography of the sulfide layers. They concluded that SRB produced continuous and localized sulfide, regenerating anodic sites and, in the case of iron, activating cathodic sites in the vicinity of the anodes.

In general, EC techniques are valuable methods for mechanistic investigations and monitoring of MIC. As with all specialized techniques, selection, application, and data interpretation for each EC technique are the key learning curve for any scientist wishing to utilize them. The most common mistake researchers make when using these techniques is overinterpretation of the data. In addition, all of the described techniques are highly dependent on the specific environment of application, as well as on changes in the environment (e.g. temperature, pressure, and flow rate) over time. For example, temperature fluctuations can directly affect EC behavior. Finally, as MIC is a broad interdisciplinary phenomenon, EC should be just one part of the overall experimental design, used alongside other chemical and biological measurement techniques.

Future perspectives

MIC research is a truly interdisciplinary field requiring varied expertise. In addition to basic laboratory skills, microbiologists have a wealth of advanced knowledge and expertise that could be applied to help solve some of the key issues in this important topic. For example, microbial ecology studies can provide the nutrient utilization rate data required for predictive model development (Okabe and Characklis 1992). Modeling ecological networks and the functional interactions of community members of the microbial consortia involved in corrosion may further our understanding of the key players in, and the factors influencing, MIC. Evolutionary studies can offer important information on the differences between field isolates and the culture collection species typically used in laboratory studies. Work could be performed to produce targeted detection that finds specific gene markers related to corrosion rather than just general phenotypes (e.g. sulfate reduction), or genetic or metabolite markers that may serve as leading indicators of MIC, helping to optimize treatments or develop new mitigation strategies. The transfer of knowledge on, and the application of, the latest analytical methods (e.g. microscopy, isolation, species identification/sequencing, metabolomics, transcriptomics, proteomics, etc.) to MIC research is likely to provide further useful insights. Microbiologists are also needed to provide insights and knowledge for the development of laboratory and field-testing standards and best-practice guides relevant to MIC (e.g. field testing, biocide selection, sample handling/preservation for genomics analysis, etc.).
As discussed throughout this review, reliable diagnosis of MIC requires MLOE, where, in general, an increased number of measurements of different types (e.g. microbiological, chemical, metallurgical) improves confidence in the conclusions that can be drawn. There are at present, however, no guidelines as to how many types of measurements are needed or whether specific measurements are better than others. While efforts on this are underway (e.g. in updated NACE/AMPP and other standards), it is an area that could definitely benefit from additional work and collaborations, with the potential for major impact.

Close collaboration among the disciplines involved in MIC research is truly the key to efficiently tackling current challenges. The Association for Materials Protection and Performance (AMPP), the International Biodeterioration and Biodegradation Society (IBBS), and EUROCORR, among others, aim to bring together representatives of various fields as well as to provide platforms for continuous communication between industry and academia. A frequent exchange between the various societies would also be beneficial for the understanding of MIC.

There are numerous challenges that can affect interdisciplinary collaboration in MIC research and its application in industry, such as differences in research priorities, communication barriers, funding constraints, and even the format of guidelines (Wade et al. 2023). However, as we have seen in the oil and gas industry (geno-MIC Project), new developments stem from close cooperation among industrial operators and stakeholders as well as researchers from academia. Such collaborations can greatly improve our understanding of MIC mechanisms and help develop new monitoring techniques and green mitigation measures. The Euro-MIC COST Action (euro-mic.org), launched in the fall of 2021, has similar aims, involving MIC researchers across the globe from various disciplines as well as stakeholders from various industrial sectors. Many of these groups and organizations are open to new members and are a great way to develop one's knowledge of other disciplines and make further contacts with experts in the field of MIC.

International networks encourage debate on urgent global challenges as well as international collaborations, particularly within the higher education sector. They allow graduates and early career researchers to acquire knowledge in a diverse and professional environment and to gain new perspectives on their research; they help prevent stigmas and paradigms; they promote unity of effort by bringing together stakeholders with specialized and complementary expertise to address critical industry-led scientific challenges; and they enhance educational and research outcomes.

Corrosion training and education programs available to date are often limited to a specific area without giving a broader overview of MIC. The cooperation of different panels and the exchange of knowledge between different disciplines, as well as between different generations of scientists, need to be enhanced, and MIC training and education programs should be developed. Dissemination is key to dealing with MIC issues quickly and efficiently.
Concluding remarks

The paper has provided an overview of some of the key aspects of MIC to give background to microbiologists, as well as to those new to the field, with a focus on the non-microbiological aspects of MIC. A major aim of the work is to help break down some of the siloed knowledge and research being undertaken on this topic and to encourage multidisciplinary collaborations. Some of the key messages from the paper include:

- MIC does not describe a single mechanism for corrosion, and there is still much more work to do to clarify some of the mechanisms involved.
- Correct diagnosis of MIC requires MLOE (microbiological, metallurgical, and chemical), as well as information on engineering design and operations.
- A wide range of materials can suffer degradation due to MIC, and various examples have been discussed with an emphasis on metal alloys, including methods for analyzing metallurgical aspects of MIC.
- Models for studying MIC vary from single strains through to real-world consortia, with each type allowing different aspects to be studied. There is significant potential for developing and testing models that more accurately mimic the 'real world'.
- Management of MIC involves the key aspects of threat assessment, mitigation/prevention, and monitoring. Again, many of these require MLOE to provide accurate and useful information.
- EC techniques can provide critical information for MIC studies in the laboratory and in the field; however, there are numerous limitations, so care needs to be taken when designing, performing, and interpreting results from these methods.

MIC is a field growing at an exponential rate, with huge potential for new scientific discoveries. MIC researchers and specialists with multidisciplinary backgrounds are critical to driving this field forward. Microbiologists with an interdisciplinary mindset will have an important role in shaping the future of MIC research.

Figure 1. MLOE used in the MIC assessment. Puzzle pieces represent the four main categories of evidence with typical types of measurement. To solve the puzzle, evidence from most or all four categories is needed.

Figure 2. Depiction of the potentially increasing complexity of different combinations of microorganisms in model systems that can be used to study MIC. Note that the depiction of the EPS and metal surface changes is not intended to indicate how the different model microbial systems affect corrosion outcomes.

Figure 3. Examples of different strategies used for MIC control.

(..., Mansor and Tay 2020, Machate et al. 2021, de Campos et al. 2022). Their use is now strictly regulated in certain areas [e.g. EU Biocides Regulation 528/2012 (98/8/EC)], prompting the search for effective and environmentally friendly long-term solutions (Loureiro et al. 2018, Ferreira et al. 2020, Vilas-Boas et al. 2020, Ferreira et al. 2021). Significant advances have been achieved in relevant coating technologies, with work conducted on a range of different properties such as the group of targeted organisms, the mechanism of action, the bioactive agents, the polymeric matrix, the surface structure, the environmental surroundings, or even the fundamental working principle (Ferreira et al. 2020, Bhoj et al. 2021, Silva et al. 2021).

Figure 4. The main steps involved in developing an MIC mitigation strategy, along with associated challenges to attaining effective solutions.
Figure 5. MIC monitoring requires integrative data analyses using a comprehensive range of tools.

Table 3. Examples of methods used to obtain microbiological data for MIC studies. A larger sample size is needed to perform analyses, as it is more difficult to isolate RNA than DNA; samples are more sensitive to degradation; the methods can be costly and a specialized laboratory is needed; and skilled personnel are needed for data interpretation (Krohn et al. 2021).

Table 3. Continued (Dockens et al. 2017). The choice of method depends on the test asset affected, the main questions to be answered, the availability of trained personnel to perform the sampling, and/or access to relevant laboratories. The methods presented above can be used individually or in various combinations to provide the best possible line of evidence for assessing the involvement of microorganisms in corrosion. Furthermore, these methods, whether used alone or in combination, are not by themselves sufficient to support the involvement of microorganisms in a corrosion process; collecting other lines of evidence, e.g. chemical, metallurgical, and operational information, is critical during MIC diagnosis and studies.

Table 4. Examples of engineering materials and reports of their susceptibility to MIC [Note: where possible, references showing examples of MIC of materials in the field, or reviews, have been included].

Table 5. Examples of analytical techniques for the study of metal surfaces and corrosion by-products.

... their mitigation programs. Likewise, academia's motivations for participating in research include the continued need for peer-reviewed publication, requirements for bringing funding to a department, and the support of work for graduate students. These are examples of only some of the silos that challenge multidisciplinary work. The need for multiple disciplines to understand MIC is where the roles of chemistry, microbiology, metallurgy, physics, electrochemistry, genetics, and other sciences are essential.
Research on a milk powder production and sales management system based on BeiDou positioning: Food safety is a major issue for people's livelihood. In recent years, food safety incidents have repeatedly emerged in China, and the problems in the field of food safety urgently need to be solved. Food safety traceability is an important means of ensuring food safety. At present, the BeiDou positioning system has not been widely used in food traceability, and the demand for dynamic food traceability has not been met. To this end, this paper applies BeiDou positioning technology to milk powder traceability supervision and, combined with QR code scanning technology, designs a system capable of dynamic supervision. The product is based on the BeiDou positioning system and mainly comprises a low-power BeiDou positioning device terminal and a cloud server. The BeiDou locator attached to the bottom of the lid is the main device through which all services are realized; while working, it obtains the current location of the device at regular intervals. Combined with QR code technology, which facilitates information entry and reading, a complete traceability supervision system is established, including an APP client through which users can query various information about the products. The traceability information is open and transparent, the data cannot be tampered with, and the records are traceable, so that consumers and food regulatory authorities can see all the information about the milk powder from production to sale. This makes it convenient to manage and monitor the whole process from production to sale, so that users can consume with confidence.

Project Background

China's economy is developing rapidly and per capita wages have risen considerably, producing substantial consumer purchasing power. The milk powder market has grown quickly, especially for mid-range products, but this rapid growth has also brought instability. The melamine incident involving infant milk powder is the most prominent example: it had a huge negative impact on trust in the safety of domestic infant formula, and although Chinese dairy enterprises, the government, and society have made continuous efforts, that trust has not yet been restored. According to data released by the State Administration for Market Regulation, the pass rate of infant formula sampling inspections in 2020 reached 99.89%, yet China still imported 335,000 tons of infant formula. Because of frequent safety problems, consumers rely heavily on imported milk powder, and online shopping, being more convenient and often cheaper than physical stores, has become a very popular way to buy it. In recent years, however, a large amount of domestic milk powder falsely posing as "foreign brands" has appeared frequently, and unscrupulous businesses selling counterfeit goods for profit create safety and quality problems whose source is difficult to locate. Moreover, traditional traceability technology is susceptible to human tampering and is centralized, so it cannot guarantee data security. The double guarantee of a BeiDou positioning module used together with an indexed traceability code for dynamic supervision ensures that each bucket of milk powder carries unique information and guarantees the authenticity of the traceability data.
Project Introduction

Using an ultra-low-power, miniaturized BeiDou positioning module, the distances between satellites of known position and the receiver are measured, and the specific location is calculated by computer using the spatial distance resection (rear rendezvous) method (a simplified positioning sketch is given at the end of this section). Together with a QR code serving as the unique indexed traceability code, users can directly browse, through the dedicated traceability channel of the APP client, the traceability information flow of the milk powder from the origin of the milk through the entire sales process, including the source of raw materials, production information, logistics information, and so on, from production to sale.

2. The traceability system industry and an analysis of applications within the market

Industry Analysis

The traditional traceability system has several problems. 1) Supply chain information standards are not uniform and data are not interconnected. 2) Food information is not open and transparent, leading to mutual trust issues between the two parties. 3) The core data of the traceability system can easily be tampered with. 4) The traditional traceability system is fragmented, making it difficult to identify the parties responsible for food problems. 5) The system is heavily centralized, and the authenticity of information cannot be verified.

A traceability system based on the BeiDou positioning system offers the following advantages: (1) The production and logistics information of commodities is made public through low-power BeiDou positioning equipment, which can dynamically supervise the products, so that both sides can obtain real commodity information in real time. (2) The dual guarantee of the BeiDou positioning system combined with QR code technology prevents counterfeit products and provides an anti-counterfeiting function.

Market prospect analysis

(1) The state should launch corresponding intra-industry guidelines and unify industry standards to increase the public credibility of traceability systems and build a true and efficient traceability system. (2) Traceability systems are increasingly oriented toward daily consumer goods and will focus more on the anti-counterfeiting of equipment and consumables and of upstream channels. (3) The app developed by an anti-counterfeiting platform needs to maintain sufficient interaction with customers, enhance its attractiveness, and build user stickiness. (4) Government platforms are affected by a variety of factors and progress slowly, while enterprise platforms are small in volume, relatively focused, and varied in mode; the complementary intermingling of the two types of platforms is an important way to promote the continuous and healthy development of traceability systems.

The BeiDou-3 global satellite navigation system marks the internationally leading level of China's satellite navigation and lays the material and technical foundation for the marketization, industrialization, and internationalization of the BeiDou system. BeiDou industrialization and the empowerment of industry by BeiDou, promoted by the state, are the focus of the implementation of the BeiDou strategy during the 14th Five-Year Plan period, from which traceability systems benefit. Industry certification of the BeiDou system can do justice to the security of the traceability system, integrate the technology with the market, and improve the security of the traceability system.
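The "spatial distance resection" described in the Project Introduction above amounts to solving for the receiver position that best matches the measured distances to several satellites of known position. The sketch below is a simplified, illustrative version of that calculation: it ignores the receiver clock bias and the other corrections a real BeiDou solution must handle, and the function name, coordinates, and satellite geometry are invented for the example.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_receiver(sat_positions_m, ranges_m, x0):
    """Estimate a receiver position (ECEF-like coordinates, metres) from
    distances to satellites of known position, via nonlinear least squares."""
    sats = np.asarray(sat_positions_m, dtype=float)
    rng = np.asarray(ranges_m, dtype=float)

    def residuals(x):
        # Difference between modelled and measured satellite-receiver ranges.
        return np.linalg.norm(sats - x, axis=1) - rng

    return least_squares(residuals, x0).x

# Synthetic check: four satellites, a known receiver, exact ranges.
receiver = np.array([1.2e6, -2.3e6, 5.1e6])
sats = np.array([[1.5e7, 1.0e7, 2.0e7],
                 [-1.0e7, 2.0e7, 1.8e7],
                 [2.0e7, -1.5e7, 1.6e7],
                 [0.5e7, 0.5e7, 2.4e7]])
ranges = np.linalg.norm(sats - receiver, axis=1)
print(locate_receiver(sats, ranges, x0=np.array([0.0, 0.0, 6.4e6])))
```

In the system described here, this computation happens on the positioning terminal itself; the cloud server only needs to store the resulting coordinates against the chip serial number.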
Milk powder sales management system function overview

The milk powder sales management system has five functions:

1) The product is based on the BeiDou positioning system and mainly comprises low-power BeiDou positioning equipment, a terminal, and a cloud server. The BeiDou locator attached to the bottom of the lid is the main device through which all services are realized. It obtains the current location of the device at regular intervals while working, and if the location remains unchanged for a long time it enters hibernation mode. When the quality inspection department detects a substandard product, or a consumer files a report, the product can be located immediately, the same batch of products can be found and immediately recalled or detained, and the specific production address and milk source address can be traced so that the production enterprise can be dealt with and supervised.

2) Combined with QR code technology, it is convenient to enter and read the complete information of the milk powder from production to sale, and counterfeit and shoddy products are prevented through the anti-counterfeiting function. After the milk powder leaves the production line, a QR code is printed on the can; by scanning it, consumers can promptly obtain the corresponding production address, production time, production line, quality inspector, the serial number of the corresponding BeiDou chip, and other information.

3) When the milk powder leaves the factory, positioning mode is switched on; the positioning information is sent every other day and stored in the background database. If the address information remains unchanged for three consecutive days, the chip enters sleep mode and starts positioning again once acceleration is detected.

4) The system database stores two kinds of information: the basic information of the milk powder and the location information of the milk powder, both linked by the serial number of the BeiDou chip (see the sketch after this list). Since the basic information of the milk powder carries the serial number of the corresponding BeiDou chip, retrieving the address information of a given can of milk powder only requires calling up the location records of the corresponding chip.

5) The complete traceability supervision system includes an APP client. Users can log into the APP and scan the code to check the information of the milk powder, and can also report problems with one key; the regulatory department will investigate immediately after receiving the report, making supervision more convenient and rapid and helping milk powder enterprises gain credibility.
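A minimal sketch of the two-table layout described in function 4 is shown below, using SQLite for brevity. The table and column names are hypothetical; the point is simply that the product record and the location log share the BeiDou chip serial number as the linking key, so a QR-code scan that yields the chip serial can retrieve both.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product_info (
    chip_serial     TEXT PRIMARY KEY,   -- BeiDou chip serial number
    batch_no        TEXT,
    production_site TEXT,
    produced_at     TEXT
);
CREATE TABLE location_log (
    chip_serial TEXT REFERENCES product_info(chip_serial),
    logged_at   TEXT,
    latitude    REAL,
    longitude   REAL
);
""")
conn.execute("INSERT INTO product_info VALUES ('BD-0001', 'B2023-05', 'Plant A', '2023-04-01')")
conn.executemany(
    "INSERT INTO location_log VALUES (?, ?, ?, ?)",
    [("BD-0001", "2023-04-03", 30.59, 114.31),
     ("BD-0001", "2023-04-05", 31.23, 121.47)],
)

# Given a chip serial read from the QR code, return the product record and
# its most recent known location.
row = conn.execute("""
    SELECT p.batch_no, p.production_site, l.logged_at, l.latitude, l.longitude
    FROM product_info p
    JOIN location_log l ON l.chip_serial = p.chip_serial
    WHERE p.chip_serial = ?
    ORDER BY l.logged_at DESC
    LIMIT 1
""", ("BD-0001",)).fetchone()
print(row)
```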
1) Target Management

To better control the BeiDou-positioning-based milk powder production and sales management system, we set clear sales targets and plans, including sales volume, sales revenue, and market share, as well as the specific methods and steps to achieve them. We also established a sound sales organization and management process, and clearly defined the responsibilities and tasks of the whole system and of the departments and personnel involved in APP production and operation. An effective communication and collaboration mechanism was also established to ensure the smooth development and later operation of the system.

2) Personnel training

We strengthened the training and management of promotion staff, covering sales skills, product knowledge, market analysis, communication, and team cooperation, and established incentive mechanisms and performance evaluation systems. A sales data analysis and decision support system was established to provide an in-depth understanding of the industry situation and application trends through data collection, analysis, and application, to provide decision support and business optimization suggestions, to optimize the promotion process and strategy, and to improve product profit and competitiveness. We also strengthened risk management and compliance management for promotion and operation, established risk assessment and prevention/control mechanisms, and enhanced training on relevant laws, regulations, and compliance awareness to ensure legal compliance as well as the safety and stability of the process.

3) Promotion work

To promote the BeiDou-positioning milk powder sales management system, we will set up an official website and social media accounts, post product information and promotional content, and improve the search ranking of the website through SEO optimization. We will participate in relevant industry exhibitions and trade shows to showcase the product and provide demonstrations to potential customers, attracting their attention and interest. We will promote the product to potential operators and businesses through search engine advertising and social media ad placements to increase awareness and sales. We can also cooperate with partners in the same industry to promote the product and its technology, for example by working with mom-and-pop stores to set up product display cabinets in their shops, increasing product exposure and publicity and improving word-of-mouth communication.

1) Marketing direction

The implementation of the BeiDou-positioning milk powder production and sales management system requires a multifunctional APP, so we discussed at length how the APP should be set up and operated later, so that users can really appreciate the added value of using the product.

Set clear goals and strategies: the first task in operating the APP is to set clear goals and strategies, such as improving user retention, increasing the number of users, and increasing revenue, and to determine the operation plan based on these goals and strategies.

Continuously optimize the user experience: optimizing the user experience helps improve user satisfaction and retention, so we simplified the registration process, provided better navigation and search functions, and added social features.

Implement effective marketing strategies: to attract more users, we implement effective marketing strategies, such as promoting through social media, attending trade shows, and offering coupons.

Analyze data and make decisions: to make decisions more informed and convincing, we analyze data to understand user behavior and feedback, such as user retention rate, usage frequency, and reasons for user churn, and then make corresponding decisions and adjustments based on the data.

Interact and communicate with users: interaction and communication with users can help increase user loyalty and word of mouth, for example by responding to user comments, providing online customer service, and posting update announcements.
Continuously update and improve: to stay competitive and attract more users, we need to constantly update and improve the APP's functionality and performance, such as adding new features, improving speed and stability, and optimizing the interface.

2) Marketing Strategy

For government supervision and the production safety of milk powder enterprises, this project designs a milk powder traceability supervision system combining BeiDou positioning and QR code technology. It realizes real-time supervision of milk powder production and sales as well as milk source traceability, so that consumers and food supervision departments can obtain all information about the milk powder from production to sale. This makes it easy to manage and monitor the whole process from production to sale and lets users consume with confidence. Therefore, while managing the positioning system, we also maintain our own safety management system.

A. Set system goals and targets: first, we need to clarify the goals and targets of the positioning system, such as real-time monitoring of logistics and transportation and improving positioning accuracy.

B. Determine the scope of use and authority of the system: determine the scope of use and permissions, such as which personnel can access the positioning system and what data they can see.

C. Establish a data collection and analysis mechanism: collecting and analyzing data is an important part of positioning system management; a scientific and reasonable data collection and analysis mechanism needs to be established so that data are analyzed regularly and problems are found in time.

D. Maintain system equipment and software: the positioning system equipment and software need to be regularly maintained and upgraded to ensure the stability and accuracy of system operation.

E. Establish an emergency handling mechanism: when an abnormal situation occurs in the positioning system, an emergency handling mechanism is needed to solve the problem in time and ensure the normal operation of the system.

F. Train and manage system users: positioning system users need to receive professional training and management to ensure correct and standardized use.

H. Regularly inspect and evaluate system effectiveness: regularly inspect and evaluate the positioning system, find problems and improve them in time, and improve the system's effectiveness and management level.

Market risk

Market risk is the entrepreneurial risk resulting from the possibility of loss and the uncertainty of profitability when a start-up conducts economic activities. The main market risk for the BeiDou location-based milk powder production and sales management system is that other traceability platform service providers in the market may offer better services, hold more inside market information, launch stronger sales strategies, or start price wars to capture market share.

Market risk prevention countermeasures: adhere to a market-oriented business philosophy and develop the correct marketing concept and marketing strategy; strengthen marketing management, do a good job in marketing and after-sales service, and enhance customer loyalty; pay close attention to market trends in the same industry to provide basic guidance for the company's market decisions.
Managing risk

The management risk of the milk powder production and sales management system based on BeiDou positioning arises mainly because the platform acts as an agent providing traceability services for milk powder products sold by other suppliers. Milk powder sales are an important area involving food safety and must comply with relevant regulations and policies, to ensure that the products sold meet the relevant standards and protect the health and safety of consumers. If the quality of the selected products is not up to standard, it may cause trouble for customers, undermining their trust in this milk powder production and sales management system and affecting the company's normal business operations.

Management risk prevention countermeasures: strengthen the screening of suppliers, ensure that the products of cooperating enterprises are legal and compliant, and carefully audit suppliers' production information to avoid falsification.

Legal risks

Legal risk is the risk of breaking the law due to a lack of legal knowledge or other reasons in the business activities of the enterprise. The sale of milk powder is governed by national food safety regulations, so system development requires strict compliance with relevant laws and regulations to ensure that the products meet the relevant standards; otherwise, the company may be held legally responsible.

Legal risk prevention countermeasures: carefully study and understand the relevant legal provisions to avoid the risk of violations.

Technical Risks

Technology risk is the uncertainty of gains and losses arising from the uncertainty of the technology (or collection of technologies) applied or proposed, and from the uncertainty in the process of combining that technology with other technological activities. The uncertainty of technology includes both the uncertainty of the function and growth of the technology itself and the uncertainty of changes in related (complementary and alternative) technologies, and thus includes both static and dynamic technology risks. Although high technology has many advantages, intense competition between technologies is inevitable, and the interaction between technology and its environment makes it easy to lose superiority, thereby also devaluing the benefits. Competitiveness and process should be reflected in the concept of technology risk, so technology risk analysis should consider technology within a complex, developing environment. [1]

The development of a BeiDou-positioning-based milk powder sales management system requires appropriate technical skills, including familiarity with BeiDou positioning technology, software development, database management, and other related technologies. If the team's technical skills are insufficient or it lacks relevant experience, technical problems may occur during development, resulting in the system not operating properly. If there is a problem with information transmission or with the cooperation with the BeiDou positioning service provider, customers will lack technical support and be adversely affected. The platform may also have certain technical problems in its initial operation, such as unreasonable program design or errors in program logic.

Technical risk prevention countermeasures: select reliable development teams and suppliers; choosing experienced development teams and suppliers can reduce the risk in the development process.
Adequate testing during the development process can help identify and solve problems. A quality assurance system should also be established to monitor and manage the operation of the software and ensure its stability and reliability. After the software goes online, continuous improvements and upgrades are required to perfect its functions and optimize the user experience. At the same time, vulnerabilities must be fixed and security measures strengthened in a timely manner to prevent security risks.

Data Security Risks

A data security risk is any threat or danger that could lead to unauthorized access, disclosure, destruction, or misuse of data, for example hacker attacks, viruses, and malware. The BeiDou-positioning-based milk powder sales management system needs to collect and store sensitive information such as users' personal information and transaction data. If the system's data security mechanisms are imperfect or contain loopholes, user data may be leaked or the system hacked, causing incalculable losses.

Data security risk prevention measures: during development, the security of user data needs to be protected; encryption technology, data backups, and restricted access can be used to protect data security.

Conclusion

This paper first introduces some major milk powder safety incidents and presents relevant data to emphasize the importance of milk powder safety. On this basis, it proposes establishing a formula milk powder traceability system based on BeiDou positioning, introduces the concept of a traceability system and its origin, reviews the research status and development of traceability systems at home and abroad, and then turns specifically to the research status and development of milk powder traceability at home and abroad. The research is then divided into three parts: 1) BeiDou positioning module: using an ultra-low-power, highly miniaturized locator, the BeiDou positioning system measures the distances between satellites of known position and the user receiver, integrates the satellite data to calculate the specific position of the receiver, and applies the spatial distance resection method, with the instantaneous positions of the fast-moving satellites as the known starting data, to determine the milk powder location information, such as latitude and longitude and the location name. 2) Traceability system: a QR code is used as the unique indexed traceability code. Consumers can log into the APP client and scan the code to obtain the milk powder production information, which covers the traceability information flow of every link of the whole process from the milk source to sale. 3) APP production: a dedicated traceability channel is set up in the app; by scanning the QR code, users can directly browse the milk powder's information from production to sale, including production and logistics information. The company's mini online store is also embedded, so users can purchase the company's other products through the app and check shipping and logistics information after placing orders.
When milk powder finally reaches consumers, it has passed through many links from production to sale, and a problem in any link may affect the quality and safety of the milk powder; it is therefore of great significance to establish a milk powder traceability and sales management system based on Beidou positioning.
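To make the traceability design summarized in the conclusion concrete, the sketch below models one possible shape of a traceability record keyed by the QR traceability code, with Beidou-derived coordinates attached to each link of the chain. It is only an illustration under assumptions: the field names, the in-memory dictionary standing in for the platform database, and the helper functions are hypothetical, not part of the actual system.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class TraceEvent:
    stage: str         # e.g. "milk source", "production", "outbound", "logistics", "sale"
    timestamp: str     # time of the event, ISO-8601 text
    latitude: float    # position reported by the Beidou locator
    longitude: float
    location_name: str = ""

@dataclass
class BatchTrace:
    qr_code: str                     # unique traceability index code printed on the tin
    batch_id: str
    events: List[TraceEvent] = field(default_factory=list)

# In-memory stand-in for the platform's traceability database (illustrative only).
_TRACE_DB: Dict[str, BatchTrace] = {}

def register_event(qr_code: str, batch_id: str, event: TraceEvent) -> None:
    """Record one link of the chain from milk source to sale for a given batch."""
    batch = _TRACE_DB.setdefault(qr_code, BatchTrace(qr_code=qr_code, batch_id=batch_id))
    batch.events.append(event)

def lookup_trace(qr_code: str) -> Optional[BatchTrace]:
    """What the APP client would request after a consumer scans the QR code."""
    return _TRACE_DB.get(qr_code)
```

In a real deployment the dictionary would be replaced by the platform's database, and the events would be written by the production, warehousing and logistics subsystems as each batch moves through the chain.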
5,063.8
2023-05-03T00:00:00.000
[ "Agricultural and Food Sciences", "Engineering" ]
Localizing Perturbations in Pressurized Water Reactors Using One-Dimensional Deep Convolutional Neural Networks This work outlines an approach for localizing anomalies in nuclear reactor cores during their steady state operation, employing deep, one-dimensional, convolutional neural networks. Anomalies are characterized by the application of perturbation diagnostic techniques, based on the analysis of the so-called “neutron-noise” signals: that is, fluctuations of the neutron flux around the mean value observed in a steady-state power level. The proposed methodology is comprised of three steps: initially, certain reactor core perturbations scenarios are simulated in software, creating the respective perturbation datasets, which are specific to a given reactor geometry; then, the said datasets are used to train deep learning models that learn to identify and locate the given perturbations within the nuclear reactor core; lastly, the models are tested on actual plant measurements. The overall methodology is validated on hexagonal, pre-Konvoi, pressurized water, and VVER-1000 type nuclear reactors. The simulated data are generated by the FEMFFUSION code, which is extended in order to deal with the hexagonal geometry in the time and frequency domains. The examined perturbations are absorbers of variable strength, and the trained models are tested on actual plant data acquired by the in-core detectors of the Temelín VVER-1000 Power Plant in the Czech Republic. The whole approach is realized in the framework of Euratom’s CORTEX project. Introduction Nuclear power plants (NPPs) are equipped with many sensors that provide data for the assurance of safety and plant operations, capturing the neutron flux within the core. When the plant is operating under normal conditions, these sensors report steady-state values. In addition to the static component, small fluctuations of the signal may appear, due to inherent fluctuations in the process, caused by a multitude of factors (mechanical vibrations of the fuel assemblies or the core barrel, disturbances in heat transfer fluid flow rates, temperature or density variations, etc). In this sense, an important parameter to observe is the so-called "neutron noise": that is, the fluctuation of the neutron flux around a mean value, observed in steady-state operating conditions. These fluctuations are important, as they may convey information related to in/out of the core phenomena that can occur as a consequence of initiators (such as temperature or density changes, displacements of core components, etc.), which, in turn, induce fluctuations in neutron cross sections. The aforementioned phenomena can be grouped in different scenarios [1], such as generic absorbers of variable strength, axially traveling perturbations at the velocity of the coolant flow (e.g., due to fluctuations of the coolant temperature at the inlet of the core), fuel assembly vibrations, control rod vibrations, core barrel vibrations and many others. Those scenarios can be subsequently simulated with computer codes. In this work, perturbation analysis proceeds in two phases. Initially, every perturbation is propagated in the frequency domain across the whole reactor core volume; this is known as the forward problem. Thus, there is a one-to-one relationship between every possible location where a perturbation is located and the position where the neutron noise is measured. 
In operating NPPs, however, the location of the perturbation has to be inferred from measurements originating from neutron detectors (present at specific locations within and out of the core); this is the backward problem, for which we have to be able to invert the reactor transfer function. Since inverting the nuclear transfer function is a non-trivial problem, we employ machine and deep learning techniques to this end. The intuition behind this research direction was driven by the authors' participation in Euratom's CORTEX project (core monitoring techniques and experimental validation and demonstration) [2], whose objective was to assess the feasibility of methods for exploiting neutron noise in nuclear reactors by employing machine learning models for the inversion of the transfer function and validating the techniques on simulated datasets. The rest of this paper is organized as follows. In Section 2, we review related work on this subject, while in Section 3, we present the plant and the neutron detector measurements. In Section 4, we give some information about the FEMMFUSION code extensions introduced to take into account the hexagonal geometry and the description of the generated simulated data. In Section 5, we describe the employed deep learning model that locates the perturbation sources distributed in the core. In Section 6, we discuss the performance of the trained model on real plant data, which exhibits promising localization results, when the perturbation occurs around the frequency value of 10 Hz. Finally, Section 7 concludes this work, discussing possible extensions in the direction of completing the simulated data and applying the deep learning model on a larger range of frequencies. Related Work Computational intelligence techniques have been broadly used in NPP operation for many years [3]. For example, artificial neural networks (ANNs) were tested successfully in locating a control rod perturbation from in-core self powered neutron detector (SPND) spectra recorded at Paks-2, a commercial VVER-type pressurized water reactor (PWR) in Hungary [4,5]. In this case, only three detectors were used to detect seven control rod locations. Analysis was performed in the frequency domain with only one perturbation frequency, and a standard three-layered, fully connected feed-forward ANN was used [5] with six nodes in the input layer (three for the auto-spectra and three for the cross-spectra), ten nodes in the hidden layer and seven for the output layer. Nevertheless, simple architectures, such as the one described above, are not adequate for the more general problem of locating complex perturbations that may cover the whole reactor core, calling for advanced approaches [6]. In recent years, the development of massive parallel processing computation systems at reduced cost (e.g., in the from of graphical or tensor processing units) has permitted the training of much larger ANN architectures on large volumes of data, leading to the introduction of deep learning approaches in NPP operation and safety. In this respect and in the framework of the CORTEX project, a three-dimensional (3D) convolutional neural network (CNN) model in the frequency domain was adapted to the localization problem [7]. Other works analyzed neutron noise signals using recurrent neural networks (RNNs) [8] and long short-term memory (LSTM) units [8,9]. 
On the other hand, certain time domain analysis approaches firstly compute the wavelet transformation of the neutron noise signals and construct the respective scale-ograms. Then, those scaleograms are treated as images by deep CNN architectures that perform the perturbation identification and localization tasks [10,11]. All of the aformentioned methodologies were applied to nuclear reactors in Cartesian geometry for which the transfer function can be calculated by the CORE SIM solver [12,13]. The comparison with real plant data is in progress for some pre-Konvoi nuclear reactors [14]. For the time domain, the simulated perturbation data are generated by the SIMULATE 3K code (S3K) [15]. However, none of these codes can model the hexagonal reactor geometry of the VVER-type reactors. Therefore, the FEMFFUSION code is used instead, after being modified to deal with neutron noise problems in hexagonal geometry, in both the time and frequency domains. In this study, the simulated perturbation data generated by the FEMMFUSION code [16] pertain to a scenario involving a distributed variable-strength neutron absorber over the whole VVER-1000 reactor core. These data are used to train a machine learning model, then this model is used to try to backtrack the source of the perturbation in the core. In a second part, we test the model with real plant measurements from the Temelín NPP in the Czech Republic [17]. The VVER-1000 Nuclear Reactor at Temelín In this section, the general characteristics of VVER-1000 reactor are summarized, together with the description of neutron flux measurements. Reactor Description The Temelín NPP is located near Temelín in the Czech Republic [17]. Its technological schema corresponds to a standard Gen III power plant. In the 1990s, alterations to the original design were made by the Westinghouse Electric Corporation, in conjunction with the State Office for Nuclear Safety of the Czech Republic (SÚJB) and the International Atomic Energy Agency (IAEA) in an effort to bring reliability and safety levels into conformance with Western European standards. The entire primary circuit (the nuclear reactor, four loops with steam-generators, circulation pumps, etc.) is in a fully pressurized containment facility, hermetically enclosed in a protection envelope from reinforced concrete. The reactor core contains 163 fuel assemblies, each with 312 fuel rods and 61 regulating rods. The PWR is fueled by uranium dioxide (UO 2 ) enriched to an average of 3.5% with the fission isotope 235 U. The characteristics of the VVER-1000/320 reactor are presented in Figure 1. Measurement Methodology Neutron noise data are measured and gathered with the mobile, in-house distributed measuring test system (DMTS), developed by UJV (nuclear power engineering in Czech Republic), from the standard diagnostic plant reactor vibration monitoring system (RVMS), together with records of technological data. The RVMS diagnostic sensors of each unit include 4 accelerometers on the reactor head flange, 12 ionization chambers placed in three vertical planes and at two horizontal levels, and more than 256 self power neutron detectors (SPNDs) across the whole core in four axial heights in non-uniform radial spreading. SPND signals are sequentially measured in groups of 16, controlled by means of 19 configuration sets. 
RVMS measuring chains contain conditioning with isolation and buffer amplifiers, high/low-pass 8-pole Butterworth filters with a minimum of 48 dB per octave roll-off to form anti-aliasing filters in several bandpass ranges. The sampling frequency is up to 1 kHz with 16/12 bits resolution for noise/DC signals. The diagnostic data acquired by DMTS are stored with basic fixed frequency ranges per channel (200 Hz and 300 Hz), with a typical 0.007 Hz lower cut-off frequency, sampled at 1 kHz with 24 bits resolution at the 5 V to 10 V output signal of standard NPP diagnostic measuring chains. Figure 2 displays the radial and vertical detector positions with marked fa1, fa3, fa4, and fa6 configuration sets and positions in the TVSA-T fuel assembly. All neutron noise data are shortened to the uniform length of 720,000 samples in order to avoid undesirable transients at the start of the plant measurements. Therefore, there exist uniform 12-min intervals for the processing steps described next. Direct current (DC) components of all neutron noise data in these sets are removed. All ionization chambers and SPND data are normalized to the DC part of the respective signal. Figure 3 exhibits the joint time-frequency spectrograms (JTFSs) of a SPND, selected from the configuration set fa1 as a typical view in the frequency-time domain. Data from the beginning of cycle (BOC) U1C09 were acquired in 19 configuration sets in October 2010, during physical tests of the neutron instrumentation and under strict operational conditions [17,18]. Detectors Used for Structural Health Monitoring The plant measurements used in this work were acquired in the framework of the study of the control rod insertion reliability. In this respect, the strategy of migrating one assembly in the core through four consecutive fuel cycles (U1C09, U1C10, U1C11 and U1C12) was followed. At the end of the last cycle, a problem related to incompatible rod insertion (IRI) occurred. Having gathered all appropriate data, it was made possible to investigate and to identify this phenomenon through the neutron noise signals acquired under the same operational conditions in every cycle (i.e., the cycles without trouble) and the cycle where the problem occurred. In order to follow the migration of an assembly in the core, only the neutron noise signals acquired on four detectors groups (fa1, fa2, fa3 and fa4), as shown in Figure 2a, were taken into account. For this reason, the deep learning architecture (Section 5) is trained four times, learning one model for each detector group (Table 1). Then the idea is to use only the results obtained from one of these groups to determine the location of the problem in the core. In this regard, only the data acquired at the beginning of the first cycle (U1C09) are used. The study of the three other cycles is not relevant in the context of this work and, therefore, the IRI problem is not further discussed (hoping, nevertheless, that the methodology initiated in this work serves as intuition for further analyses). The Simulated Data Prior to unfolding, the machine learning algorithms need to be fed with training data, i.e., data where a known perturbation is assumed and the corresponding induced neutron noise at the location of the detectors is estimated. This section deals with the generation of these training sets for hexagonal reactors (in a similar manner, data generation for rectangular geometry reactors is described in [19]). 
Generation of the training datasets is performed with the FEMFFUSION code in its frequency domain mode [20]. The FEMFFUSION Diffusion Code FEMFFUSION is an open source finite element code that solves the neutron diffusion approximation [16,21]. This code was extended for the CORTEX project to deal with neutron noise problems in the time domain [22] and in the frequency domain [20], especially for hexagonal reactors. Due to the quantity of simulations required for this work, the calculations are performed in the frequency domain. FEMFFUSION permits any kind of structured or unstructured mesh, as long as it is composed of quadrilaterals (2D) or hexahedra (3D). In this way, each hexagon of the hexagonal reactor is discretized into 3 quadrilateral cells (Figure 4). A simple extrusion of this geometry is performed so as to account for the height of the reactor. Neutron Noise Diffusion Equation The time-dependent diffusion approximation is written in compact matrix form as Equations (1) and (2) [23]. In the two-group theory without upscattering, the matrices v⁻¹, L, M, νΣf, χ and Φ are defined as in Equations (3) and (4). The main unknown of the neutron transport equation is the space- and time-dependent neutron flux in its usual separation into the fast and thermal energy groups, Φ = [φ1(r, t), φ2(r, t)]ᵀ, and the neutron precursor concentration Cp(r, t) for each neutron precursor group p. The total delayed neutron fraction is β = Σ_{p=1}^{Np} βp. Σa1 and Σa2 are the fast and thermal absorption cross sections that quantify the probability of a fast or a thermal neutron being absorbed by a nucleus per centimeter of neutron travel, respectively. In the same way, Σf1 and Σf2 are the probabilities of a fission reaction produced by a fast or a thermal neutron per centimeter of neutron travel. ν is the average number of neutrons released per fission, and Σ12 is the probability of a scattering interaction per centimeter of fast neutron travel. These cross sections are determined by the materials of the reactor. All other quantities have their usual meaning in the nuclear engineering area [23]. The first-order neutron noise theory [24] splits every time-dependent term of Equations (1) and (2), X(r, t), into its mean (or static) value X0(r) and its fluctuation around the mean, δX(r, t), i.e., X(r, t) = X0(r) + δX(r, t) (Equation (5)). This separation stands for (i) the neutron operators, L and M; (ii) the material cross sections Σa, Σf, Σ12; (iii) the concentration of delayed neutron precursors, Cp; and (iv) the neutron flux, Φ. Then, the following assumptions are made: the transient is assumed to be stationary, and second-order terms are neglected. Applying the neutron noise separation of Equation (5) to Equations (1) and (2), removing the second-order terms and performing a Fourier transformation, the frequency-domain neutron noise equation (Equation (6)) is obtained in the usual two-group approximation. It can be seen that the neutron noise Equation (6) is an inhomogeneous equation with complex-valued quantities. The application of the continuous Galerkin finite element discretization [25] leads to an algebraic system of equations with complex values and the block structure of Equation (7), where δφ1 and δφ2 are the algebraic vectors of weights associated with the fast and thermal neutron noise in the frequency domain. This complex-valued system has to be solved after the steady-state problem that calculates the algebraic vectors of weights associated with the fast and thermal static neutron flux, φ0,1 and φ0,2, respectively. 
The related static eigenvalue problem must be solved with the same spatial discretization as the frequency-domain neutron noise to obtain coherent results. This system is transformed into an equivalent system of equations with real values, and the sparse system of Equation (7) is solved using a biconjugate gradient stabilized method [26], with an incomplete LU decomposition [27] employed as preconditioner. Each simulation calculates the thermal relative neutron noise δφ2/φ0,2 at each detector position. The VVER-1000 reactor is modeled using 211 vertical assemblies discretized in 50 planes, summing to a total of 10,550 hexagonal cells. The hexagons are sorted using a left-to-right and top-to-bottom numbering (Figure 5). Successive planes use a correlative numbering. Generic Absorber of Variable Strength The scenario being considered is a Dirac-like perturbation at a hexagonal cell, directly expressed as a perturbation of the macroscopic absorption cross sections. This scenario is particularly important since it can be used to localize a generic type of perturbation that does not fit any special category. The perturbation inserted at each simulation is set to δΣa1(c, ω) = 0.1 and δΣa2(c, ω) = 0.1, where c is the hexagonal cell and ω is the angular frequency of the perturbation. Three perturbation frequencies are considered: (i) 0.1 Hz; (ii) 1 Hz; (iii) 10 Hz. In this way, 31,650 simulations for the VVER-1000 reactor are performed. Due to the volume of the simulations, all calculations are made with linear finite elements, and each simulation takes approximately 30 s using one processor. The Machine Learning Model Based on the available data (simulated and real), the objective of the machine learning model is to perform an identification task: that is, to identify whether any perturbation occurs with respect to each fuel assembly. For this purpose, the chosen model is first trained on all of the available simulated data, where it is known in advance at which fuel assembly the perturbation occurs, thereby forming a supervised classification problem. Once the training phase completes, model performance is validated on plant measurements in an effort to identify possible perturbation locations. Figure 6 outlines the optimal model architecture. It is a deep convolutional neural network whose input is comprised of 16 in-core signals. More precisely, and in order to align the simulated data with the plant measurements, four groups of 16 signals are created, as illustrated in Table 1, thereby training four distinct models. Model Architecture Each detector signal has a length of 200 timesteps. The simulated data files contain the calculated complex value, which gives the characteristics of the vibration arriving at the detector for each radial and axial location. The transformation from the frequency to the time domain is realized according to Equation (8): each detector trace is reconstructed as a sinusoid at the perturbation frequency, with amplitude and phase given by the modulus and argument of the calculated complex value, where ωp is the angular frequency of the perturbation and thus the dominant frequency of the neutron noise flux. Additionally, the simulated data are normalized (divided by the global standard deviation). The plant measurements, in contrast, are already in the time domain. These signals are decimated by a factor of 10 in order to speed up calculations for model training, while retaining the necessary frequency content for the physical analysis. Then, different sample sizes of 200 timesteps are chosen during training. Finally, two approaches are followed prior to providing the plant measurements to the machine learning model. The first is to provide the raw signal (after detrending and normalization), while the second is to perform an additional preprocessing step: the application of a band-pass filter on specific frequencies, namely those that were simulated and used for model training. 
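One way to realize the frequency-to-time conversion of the simulated data described above (Equation (8)) is sketched below: each simulated detector value is a complex number whose modulus and argument give the amplitude and phase of the noise at the perturbation frequency, so a 200-sample trace can be synthesized as a sinusoid at that frequency and then normalized by the global standard deviation. The sampling step, the cosine convention and the array layout are assumptions made for illustration, not values taken from the paper.

```python
import numpy as np

def complex_noise_to_time(delta_phi: np.ndarray, f_p: float,
                          n_samples: int = 200, fs: float = 100.0) -> np.ndarray:
    """Synthesize detector time traces from simulated complex neutron-noise values.

    delta_phi : complex array, shape (n_detectors,), relative thermal noise at the
                detector positions for the perturbation frequency f_p (in Hz).
    fs        : assumed sampling frequency of the synthetic trace.
    """
    t = np.arange(n_samples) / fs
    omega_p = 2.0 * np.pi * f_p
    amplitude = np.abs(delta_phi)[:, None]
    phase = np.angle(delta_phi)[:, None]
    traces = amplitude * np.cos(omega_p * t[None, :] + phase)
    # Normalize by the global standard deviation, as done for the training data.
    return traces / traces.std()
```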
As pictured in Figure 6, the input signals undergo a number of consecutive feature extraction steps comprised of one-dimensional convolutions, followed by normalization layers [28] that normalize the input to the next layer by computing its mean and variance. Then, after every two convolutions, the signal is averaged by a one-dimensional average pooling layer. The first two convolutions involve 128 feature maps produced by kernels of size 3, followed by two convolutions of 256 feature maps with equally sized kernels and then by two final convolutions of 512 feature maps. The output of the final feature extraction step is then flattened (green rectangle) and provided to the fully connected component at the end of the architecture (three yellow rectangles) that performs the localization task. This component consists of three dense layers, with the first two having 1000 neurons each, while the last one has 211, corresponding to each fuel assembly within the core. The overall architecture is trained on the labeled simulated data, using the Adam optimization technique [29] for 200 epochs, for each of the four detector groups discussed above. According to the available simulated data, it would be possible to locate every perturbation at 211 × 50 locations, because 50 axial levels in the core are calculated for each radial position. However, this would produce a 10,550-element output vector (one-hot encoded for every axial and radial combination). At this point of our research, we are primarily interested in identifying the perturbations at the fuel assembly level (Figure 5); therefore, the machine learning model returns the radial position at each given frequency, regardless of the axial position where the perturbation actually occurs. This observation is necessary in order to understand the results obtained by the machine learning models in Section 6.2. Figure 6. The machine learning model. 
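A minimal Keras sketch of an architecture matching the description above (two convolutions of 128, 256 and 512 feature maps with average pooling after each pair, a flattening step, two 1000-neuron dense layers and a 211-way output) is given below. Details the text does not fix, such as the 1D kernel length of 3, the use of layer normalization, the activation functions, the padding and the softmax output, are assumptions of this sketch rather than the published configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_localization_model(n_timesteps: int = 200, n_detectors: int = 16,
                             n_assemblies: int = 211) -> tf.keras.Model:
    """1D CNN mapping 16 in-core detector traces to a radial fuel-assembly position."""
    inputs = layers.Input(shape=(n_timesteps, n_detectors))
    x = inputs
    for n_filters in (128, 256, 512):
        for _ in range(2):                       # two convolutions per stage
            x = layers.Conv1D(n_filters, kernel_size=3, padding="same", activation="relu")(x)
            x = layers.LayerNormalization()(x)   # normalization after each convolution
        x = layers.AveragePooling1D(pool_size=2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(1000, activation="relu")(x)
    x = layers.Dense(1000, activation="relu")(x)
    outputs = layers.Dense(n_assemblies, activation="softmax")(x)  # one class per assembly
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```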
Results and Discussion Currently, the simulated data are created for three frequencies and only for the irradiation conditions related to the first cycle U1C09 (Sections 3.2 and 4.3). Validation scores regarding the capability of the trained models to localize the anomalies are reported, and subsequently the models are tested on actual plant measurements. In the discussion section (Section 6.2), it is demonstrated that the proposed methodology exhibits better results at 10 Hz, while performance at the very low frequencies (0.1 Hz and 1 Hz) is not as good. 6.1. Results 6.1.1. The Efficiency of the Trained Models Table 2 summarizes the training accuracy of the four different models trained on the respective detector subsets of Table 1 (Section 5) on the simulated data. The achieved performance means that the machine learning models are capable of identifying the fuel assembly where the perturbation occurs more than 9 times out of 10. 6.1.2. Prediction on the Plant Measurements Every model trained on the simulated data is validated on plant measurements from the Temelín reactor, at 0.1 Hz, 1 Hz and 10 Hz, for a limited number of detectors. Initially, the models are tested on raw measurements (that is, without filtering). Then, a band-pass filter is applied around each simulated frequency used during model training. Figures 7-9 outline system performance on the actual plant measurements through a visual representation, which gives an assessment of the localization probabilities for the perturbation. 6.2. Discussion 6.2.1. The Other Available Measurements in the Core As indicated in Section 3.3, data acquired at the beginning of the cycle U1C09 are used, while the focus is on only the three frequencies for which simulated data are available (Section 4.3). In order to be able to reason on the results shown in Figures 7-10, the presence or absence of the perturbation at the predicted frequency and location needs to be verified. This is achieved by examining the available measurements acquired by the neutron detectors in the core and, more specifically, for the closest assembly to the radial location predicted by the models. Afterwards, the spectrum analysis of the signals acquired by the detectors at all axial locations for this given radial location provides an indication for validating the prediction; it needs to be verified whether a peak effectively exists, at the perturbation frequency used by the model to realize the prediction, at all axial locations. Regarding the first cycle (U1C09), the available radial locations are displayed in Figure 11. Additionally, for every model, the signals acquired on the detectors not considered by that model may be used; e.g., if the performance of the fa1 model is evaluated, then the detector signals of groups fa3, fa4 and fa6 may be employed. Figure 11. Available measurements for the U1C09 cycle, used to verify the machine learning performance. 6.2.2. Some Considerations on the Frequency Resolution It is known that the signal length limits the frequency resolution, which can be expressed as Δf = Fs/n, where Fs is the sampling frequency and n the size of the signal. Plant measurements are acquired for a duration of 12 min, with a sampling frequency of 1 kHz (Section 3.2). Therefore, the frequency resolution is expected to be equal to 14 × 10⁻⁴ Hz. In practice, the detector signals are decimated by a factor of ten in order to speed up the calculation time, resulting in 72,000 samples per signal for model training. This choice is feasible since the range of frequencies of interest for neutron noise analysis does not exceed 50 Hz. In an effort to further reduce training time, only 200 samples from each signal are actually considered (Section 5.1). This proves to be a good tradeoff, but in this case the frequency resolution is limited to 0.5 Hz, excluding perturbations around 0.1 Hz. For the 1 Hz and 10 Hz cases, 200 points are sufficient to describe 2 and 20 periods of the sinusoid, respectively, which is good for the latter frequency and acceptable for the former. Spectra Observation of the Other Available Measurements to Validate the Prediction Prior to machine learning model training, a band-pass filter is applied to the simulated data around every frequency with a tolerance of 1 Hz (Section 6.2.2). Signal stationarity is also important, as the initial signals are reduced to 200 samples. This choice is verified for 10 Hz, as can be seen in the example of Figure 3 for the N205 detector. Figures 12-17 display the spectra calculated on the signal evolution of the available detectors not used in model training. 
In Figure 12, the radial position of detector N31 is marked, as the frequency value of 10 Hz is always present in the spectrum along all axial positions (N311, N313, N315, N317). Additionally, 1 Hz is highlighted around every frequency of the band-pass filters in order to determine its presence or absence. Radial position N31 is also close to a predicted perturbation location by the machine learning model ( Figure 10). Therefore, it could be said that the performance of machine learning models is satisfactory for the 10 Hz case. SPND Detectors For the radial position N62, a frequency value of 10 Hz is observed at the first axial position (N621, Figure 10), but not for the others. That is also the case for the radial locations N44, N52 and N53 (cf. Figures 14-16). For the other shapes, there is no ambiguity, for instance with the radial locations N55 (cf. Figure 17, bottom right side of the core) where there is absolutely no peak around 10 Hz or the radial location N16 (cf. Figure 13, upper right side) where the peaks around 10 Hz are very weak at two axial levels. The same kind of observation is also valid for the groups of detector signals not used during model training. For the 1 Hz case, interpretation is not so obvious since the shape of the spectra in the vicinity of 1 Hz are all identical and the machine learning models predict only one location on the left. At 2 Hz, the noise rapidly increases due to a so-called 1 f noise, which is characteristic of neutron noise measurements in power reactors because of the population of the delayed neutrons emitted after nuclear fission by the fission products. The shape of the power spectral density at low frequencies is the same for all detectors, except for N317, where there is a peak near 1 Hz, which can explain the prediction of the model in the vicinity of the N31 detector radial location ( Figure 9). However, previously, for the same kind of observation with the peak at 10 Hz in the spectrum of the N621 detector, the machine learning models did not exhibit the same classification results. This behavior needs to be further studied but it could be attributed to the fact that the frequency of 1 Hz is not very well represented by only 200 points over time (only two periods of the sinusoid, as discussed in Section 6.2.2). Finally, when no filter is applied to the simulated data (Figure 7), predictions are coherent in the sense that they are similar to predictions obtained in the case of the bandpass filter around the frequency value of 10 Hz. The reliability of the obtained results for the three band-pass filters is summarized on Table 3. Table 3. Reliability of the obtained results for the three band-pass filters. Model Characteristic 0.1 Hz 1 Hz 10 Hz Prediction accuracy of the radial location low medium high Spectral resolution not well described partially described well described Conclusions In this work, machine learning models were employed in order to identify and localize perturbations in plant measurements originating from the Temelín VVER-1000 NPP in the Czech Republic. The FEMFFUSION code allowed the generation of simulated data in hexagonal geometry, associated with a Dirac-type perturbation in the core. A onedimensional deep neural network was trained on these data in order to predict the radial position of the perturbations. It was demonstrated that is possible to reach some reliable conclusions on the radial localization, when the perturbation is around 10 Hz. 
Because of certain performance considerations during machine learning training, the frequency resolution of the spectral analysis had to be limited, which resulted in non-optimal performance for the 0.1 Hz and 1 Hz frequency values. This study can be extended in a number of ways. Firstly, new simulated data can be generated for frequencies greater than 5 Hz, that is to say, over the very low noise frequencies in 1 f . Then, machine learning models will be trained for every group of detectors, for the irradiation conditions specific to every fuel cycle until the incompatible rod insertion (IRI) phenomenon. If the said phenomenon can be related to a frequency in this frequency range, it will be possible to propose an online methodology that detects the radial location of its occurrence. Additionally, the part of the spectrum at very low frequencies (less than 2 Hz) needs to be better analyzed. This may be achieved by increasing the sample size considered for model training (currently at 200 samples). This would result in an increase of the frequency resolution and as a consequence, it would improve the spectrum description at very low frequencies. Finally, a possible extension of this work is to consider the axial location of the occurring perturbations, apart from the radial one. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to non-disclosure agreements signed in the framework of the CORTEX project. Conflicts of Interest: The authors declare no conflict of interest.
7,296.8
2021-12-24T00:00:00.000
[ "Computer Science" ]
Untargeted metabolomics of human keratinocytes reveals the impact of exposure to 2,6-dichloro-1,4-benzoquinone and 2,6-dichloro-3-hydroxy-1,4-benzoquinone as emerging disinfection by-products Introduction The 2,6-dichloro-1,4-benzoquinone (DCBQ) and its derivative 2,6-dichloro-3-hydroxy-1,4-benzoquinone (DCBQ-OH) are disinfection by-products (DBPs) and emerging pollutants in the environment. They are considered to be of particular importance as they have a high potential of toxicity and they are likely to be carcinogenic. Objectives In this study, human epidermal keratinocyte cells (HaCaT) were exposed to the DCBQ and its derivative DCBQ-OH, at concentrations equivalent to their IC20 and IC50, and a study of the metabolic phenotype of cells was performed. Methods The perturbations induced in cellular metabolites and their relative content were screened and evaluated through a metabolomic study, using 1H-NMR and MS spectroscopy. Results Changes in the metabolic pathways of HaCaT at concentrations corresponding to IC20 and IC50 of DCBQ-OH involved the activation of cell membrane α-linolenic acid, biotin, and glutathione and deactivation of glycolysis/gluconeogenesis at IC50. The changes in metabolic pathways at IC20 and IC50 of DCBQ were associated with the activation of inositol phosphate, pertaining to the transfer of messages from the receptors of the membrane to the interior as well as with riboflavin. Deactivation of biotin metabolism was recorded, among others. The cells exposed to DCBQ exhibited a concentration-dependent decrease in saccharide concentrations. The concentration of steroids increased when cells were exposed to IC20 and decreased at IC50. Although both chemical factors stressed the cells, DCBQ led to the activation of transporting messages through phosphorylated derivatives of inositol. Conclusion Our findings provided insights into the impact of the two DBPs on human keratinocytes. Both chemical factors induced energy production perturbations, oxidative stress, and membrane damage. Supplementary Information The online version contains supplementary material available at 10.1007/s11306-022-01935-2. In vitro studies are an invaluable source of information on human health, as the chemical damage and cytotoxicity can be evaluated with the use of cell cultures (Aragonès et al., 2017). Cells are handled easily, the testing of different compounds can be performed simultaneously and the chemical toxicity can be evaluated in a cost-effective way (Muñoz & Albores, 2010). DBPs produce adverse health effects, as it has already been demonstrated by many toxicological studies. Thus, it is necessary to thoroughly evaluate their effects after human exposure. Cultured human keratinocytes are employed to study the exposure of human skin to chemical factors . The HaCaT is a monoclonal cell line that does not produce tumors. It is long-lived, thus allowing the continuous and uninterrupted study of cells, it does not require costly growth factors for its survival, making it ideal for the study of keratinocyte functions (Colombo et al., 2017). Metabolomic screening is a promising tool to carry out the study of cell functions, allowing the identification and quantification of varying metabolites in biological samples. The identification of metabolites and the assessment of metabolic pathways provide valuable information about the toxicological effects and biological mechanisms of cells exposed to chemical factors (Oliveira et al., 2016). 
Nuclear Magnetic Resonance (NMR) spectroscopy has been playing a key role in decoding and understanding the metabolism and metabolic processes of exposed organisms and cells, for more than fifty years now (Williams et al., 1979;Winkel & Jans, 1990). It offers a variety of information, has the advantage of identifying multiple metabolites simultaneously and provides scientists with the possibility of quantifying them, through predictable and reproducible spectra (Wishart, 2019). Mass spectrometry (MS) has, also, been employed to identify unknown compounds and quantify known materials. Metabolomics based on MS has increased rapidly in the last decades due to advantages, such as high sensitivity and mass accuracy and reliable characterization of various biomolecules (Nagana Gowda & Djukovic, 2014). Taking into consideration that DCBQ and DCBQ-OH are predominant DBPs in the sanitization of swimming pool water (and not only), human epidermal keratinocytes were chosen in this study as the ideal in vitro model to examine the effect of the two chemical factors on the cellular viability and the metabolic phenotype of the exposed cells. The exposure concentrations for the metabolomic study were chosen based on the outcome of a cell viability study (Chatzimitakos et al., 2018). Then, the perturbations induced in cellular metabolites and their relative content were screened and evaluated through a metabolomic study using 1 H-NMR and MS spectroscopy. The metabolic pathways were annotated and attributed to certain alterations of cells. Instrumentation Instrumentation is described in detail in the Supplementary Information. Stability study of DCBQ The stability of DCBQ in DMEM was performed by monitoring the transformation of DCBQ to DCBQ-OH, at room temperature and darkness. The DCBQ solutions at concentrations of 0.05 and 0.075 mM were prepared in DMEM and adjusted to pH = 5.0. The transformation into DCBQ-OH (%) was estimated after a 30 min period of exposure to DCBQ. The absorbance was measured at 530 nm (Görner & Von Sonntag, 2008) every 10 min, up until its value had reached a plateau, indicating that DCBQ was transformed to DCBQ-OH. Synthesis and stability study of DCBQ-OH A quantity of 15.0 mg of DCBQ was dissolved in 1.0 mL of methanol and then DDW was added up to a final concentration of 3.0 mM. The solution was left to stand in the daylight for a period of 24 h. A DCBQ-OH solution of 1.0 mM was prepared, and the molecular absorbance (in the range of 400-600 nm) and NMR spectra were obtained. 2,2-Diphenyl-1-picryl-hydrazyl (DPPH) assay A stock solution of DPPH at 6 × 10 − 5 M was prepared in methanol. Solutions of DCBQ-OH in DDW and DCBQ in methanol were prepared at various concentrations (0.005-0.75 mM). An aliquot of 2.34 mL of DPPH and 0.66 mL of the solution of the compound tested were mixed and after stirring for 30 min the absorbance was measured at 517 nm. Control samples of DPPH were prepared with 2.34 mL of DPPH in DDW and methanol for DCBQ-OH and DCBQ, respectively. Blank samples containing 2.34 mL of DDW or methanol were prepared for DCBQ-OH or DCBQ, respectively. Spectrophotometric measurements were done and the % free radical scavenging activity was calculated (Chatzimitakos et al., 2016). Cultivation of HaCaT cells HaCaT cells were stored in liquid nitrogen. The cell pellet was resuspended in a small volume of the cell culture medium DMEM. Cells were cultivated in cell culture dishes containing high-glucose DMEM, 1% penicillin, 1% streptomycin, 1% L-glutamine, and 10% FBS. 
Cultures were maintained in an incubator in a 5% CO2 atmosphere, at 37 °C. Next, DMEM was discarded and cells were washed with PBS to remove dead cells. Cells were harvested by trypsinization and were put in cell culture multiwell plates for the cell viability assay and the metabolomic study. All experiments were done in a sterilized environment with laminar flow in a Euroclone fume hood. Cell viability assay For the cell viability assay of DCBQ-OH, HaCaT cells were exposed to concentrations of 0.01-1.25 mM. The incubation was performed at 37 °C in a humidified environment with 5% CO2, for 2, 6, 8, 12, and 24 h. The viability of cells at concentrations of 0.10 mM and 0.30 mM of DCBQ-OH was also evaluated for a 30-min exposure time. For the cell viability assay of DCBQ, HaCaT cells were exposed to concentrations of 0.01-0.30 mM for 30 min, in DMEM, at pH = 5.0. The cell viability assay is detailed in the Supplementary Information. Metabolomic assay HaCaT cells were cultured and exposed to HBQs for the metabolomic assay. Cells were exposed to DCBQ-OH, at concentrations of 0.10 and 0.30 mM, for 24 h and to DCBQ, at concentrations of 0.05 and 0.075 mM, for 30 min. In both cases, the concentrations corresponded to the IC20 and IC50 of cells, respectively. Non-exposed samples were used as controls. Cells were harvested and metabolites were extracted. The extraction of metabolites was based on the Bligh-Dyer method (Ramiz & Soumen, 2019). The cell pellet was resuspended in 0.66 mL of DDW and 0.80 mL of methanol sequentially, at 4 °C. Then, the pellet was subjected to three freeze-thaw cycles using liquid nitrogen and 1.6 mL of chloroform was added. The solution with the pellet was vortexed for 30 s and centrifuged at 5000 rpm for 5 min. The supernatant was retracted, divided into two equal parts, transferred to Eppendorf vials, and evaporated under a gentle stream of nitrogen. This procedure was repeated twice more, without the freeze-thaw step, and all the metabolites were collected in the Eppendorf vials. For 1H-NMR measurements, a portion of the residue was resuspended in 0.60 mL of deuterium oxide containing TSP (1 mM) as the internal standard. For the LC-HRMS study, the remaining portion of the residue was resuspended in 0.10 mL of acetonitrile. 1H-NMR spectra processing and relative quantification of metabolites After obtaining the 1H-NMR spectra, the spectra were aligned using TSP (0.0 ppm), and the signal positions, in ppm, were input to the 1D NMR search engine of HMDB (Wishart et al., 2007, 2009, 2013, 2018) to identify the metabolites. A list of metabolites for each sample was provided by the search engine. Verification of the metabolites was performed based on the obtained MS spectra. The exposure of HaCaT cells to DCBQ-OH and DCBQ led to alterations, which were evaluated by analyzing the metabolic pathways. MetaboAnalyst 5.0 was used for the pathway analysis, employing the Homo sapiens library for HaCaT cells (Pang et al., 2021). The variation in the metabolite relative contents was calculated by manually integrating the selected NMR signals using the TSP signal (δ = 0.00 ppm) as an internal standard. Every metabolite present in the samples was assigned to 1H-NMR peaks that were unique for the specific metabolite. The ratios of the integrals of metabolite NMR peaks to that of the TSP in the control sample were compared with the ratios of the same metabolites in the samples exposed to the chemical factors, and the percentage variations of the relative contents of the metabolites were calculated. The reproducibility of the whole process for metabolite identification and relative quantification was ensured by assessing the 1H-NMR spectra after performing the exposure experiments in triplicate. The relative standard deviation of the relative content did not exceed 7.5% for the metabolites studied. Statistical analysis First, the data obtained from the study were examined for normal distribution using the Shapiro-Wilk test. Equality of variances was examined with the F-test. Where necessary, the Mann-Whitney U test was applied to evaluate the statistical significance of the examined parameters, and differences were considered significant at p < 0.05 (n = 3). 
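The quantification and significance testing described in the two paragraphs above reduce to ratios of manually integrated peak areas against the TSP reference, followed by a normality check and a non-parametric comparison. The sketch below shows one way this could look; the numbers are placeholders, and the parametric branch is an assumption of the sketch (the study only states that the Mann-Whitney U test was applied where necessary).

```python
import numpy as np
from scipy import stats

def relative_content(peak_integral: float, tsp_integral: float) -> float:
    """Ratio of a metabolite's 1H-NMR peak integral to the TSP (0.00 ppm) integral."""
    return peak_integral / tsp_integral

def percent_variation(exposed_ratio: float, control_ratio: float) -> float:
    """Percentage change of the relative content in an exposed sample versus the control."""
    return 100.0 * (exposed_ratio - control_ratio) / control_ratio

def compare_groups(control: np.ndarray, exposed: np.ndarray, alpha: float = 0.05) -> dict:
    """Shapiro-Wilk normality check, F-test for equal variances, then the comparison test."""
    normal = (stats.shapiro(control).pvalue > alpha) and (stats.shapiro(exposed).pvalue > alpha)
    f_stat = np.var(control, ddof=1) / np.var(exposed, ddof=1)
    dfn, dfd = len(control) - 1, len(exposed) - 1
    p_var = 2.0 * min(stats.f.cdf(f_stat, dfn, dfd), stats.f.sf(f_stat, dfn, dfd))
    if normal and p_var > alpha:
        result = stats.ttest_ind(control, exposed)          # assumed parametric branch
    else:
        result = stats.mannwhitneyu(control, exposed, alternative="two-sided")
    return {"pvalue": float(result.pvalue), "significant": result.pvalue < alpha}

# Placeholder triplicate integrals relative to TSP (n = 3, as in the study design).
control_ratios = np.array([relative_content(v, 10.0) for v in (2.4, 2.5, 2.3)])
exposed_ratios = np.array([relative_content(v, 10.0) for v in (3.1, 3.0, 3.2)])
print(percent_variation(exposed_ratios.mean(), control_ratios.mean()))  # about +29%
print(compare_groups(control_ratios, exposed_ratios))
```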
DPPH assay The DCBQ-OH and DCBQ were evaluated for their potential to behave as free-radical scavengers, at concentrations of 0.005-0.75 mM. The experimental results showed that both chemical factors exhibited DPPH scavenging activity, which increased in a concentration-dependent manner. For both DCBQ-OH and DCBQ at 0.20 mM, the % scavenging activity towards the DPPH radical reached a plateau, signifying that they had both reached their full potential as scavengers (Fig. S6). On the other hand, the exposure of cells to DCBQ was performed in DMEM at pH = 5.0, for an exposure period of 30 min. These conditions were selected based on the stability study of DCBQ (Fig. S5). Under these conditions, only a limited transformation of DCBQ to DCBQ-OH took place. After a 30-min exposure of HaCaT cells to DCBQ, an abrupt decrease in their viability as a function of the concentration of DCBQ occurred. The IC50 and IC20 values of DCBQ were found to be 0.075 and 0.05 mM, respectively (Fig. S9). The viability of cells incubated in DMEM at pH = 5.0 and pH = 7.6 for 30 min was compared, and the cells proved to retain their viability; therefore, exposure to more acidic DMEM did not affect cell viability. Cell viability assay To determine whether the DCBQ-OH in samples affected the viability of cells, additional experiments with DCBQ-OH at 0.10 and 0.30 mM for a 30-min exposure of cells were performed. Cells exhibited viabilities greater than 95%, which suggests that DCBQ-OH did not contribute to death when cells were exposed to DCBQ. Therefore, it is reasonable to speculate that alterations in the viability of cells were attributed only to the presence of DCBQ. As metabolic processes are closely related to the survivability of cells, it is prudent to study the metabolome of cells exposed to DCBQ-OH and DCBQ and compare it with the metabolome of the non-exposed cells. Valuable information on the metabolic processes will explain the response of cells. Synthesis and stability of DCBQ-OH and DCBQ The DCBQ, at a concentration of 1.0 mM, was transformed into the hydroxylated product within 24 h, with the values of absorbance leveling off at 530 nm (Fig. S1). At the same time, the pH of the solution of DCBQ-OH formed was 5.0. The NMR spectrum of DCBQ-OH 1.0 mM showed two distinct peaks at 6.81 and 6.92 ppm, which correspond to the hydrogen on the benzoquinone ring and the hydroxyl group, respectively (Fig. S2). The peak at 7.04 ppm in the DCBQ spectrum corresponds to the hydrogens of the benzoquinone ring (Fig. S3) (Wiley & Sons, 2021). The differences in the NMR spectra are attributed to the newly formed hydroxyl group of DCBQ-OH. 
A stability study of DCBQ at concentrations of 0.05 and 0.075 mM, in DMEM at pH = 5.0, was realized by monitoring the transformation of DCBQ to DCBQ-OH. Taking into account that the pH of the solution is critical for the cell viability and metabolomic studies, the pH of DMEM was varied to determine the pH value at which the kinetics of transformation of DCBQ into its hydroxylated product are slow and cells remain viable. The conversion of DCBQ into DCBQ-OH was promoted at pH = 5.5 and 6.5 and was kinetically favored over lower pH values. Even though at pH = 4.5 the DCBQ was slowly converted into DCBQ-OH, a 30-min exposure of cells to DMEM at this pH induced 20% death (Fig. S4). At pH = 5.0, 54% and 40% of DCBQ at the low and high concentration, respectively, was transformed into DCBQ-OH in 30 min (Fig. S5). 1H NMR metabolic fingerprinting The HMDB spectral analysis yielded several metabolites, which were further verified by their mass spectra (all spectroscopic data for the identification of the metabolites are given in Table S1). A total of 98 metabolites were obtained for the cells exposed to both chemical factors. The metabolites identified in each sample are given in detail in Tables S2 and S3. When exposed to DCBQ-OH and DCBQ, 66 and 88 metabolites were obtained, respectively. Certain metabolites were common to cells exposed to both chemical factors. These include organic molecules belonging to organic acids, organoheterocyclic compounds, organic oxygen-containing and nitrogen-containing compounds, fatty acyls, glycerophospholipids, sphingolipids, steroids, nucleosides, and nucleotides (Supporting Information-Metabolites). Metabolic pathway analysis Pathway analysis took place to transform the results into biological information using the MetaboAnalyst 5.0 tool suite ("pathway analysis"). The metabolites identified in cells not exposed to the chemical factors were assigned to twenty-one different metabolic pathways (Table 1). In the case of DCBQ-OH, seventeen metabolic pathways were unaffected, as seen in Table 1, while other metabolic pathways were newly activated or were blocked despite being active in the control cells. Their activation signifies the attempt of cells to adapt to their environment and retain survivability. Exposure to the IC20 of DCBQ-OH yielded twenty-six metabolic pathways associated with the identified metabolites (Table 1). The following five metabolic pathways were activated: ascorbate and aldarate metabolism, biotin metabolism, cysteine and methionine metabolism, glutathione metabolism, and pyrimidine metabolism. Exposure of cells to the IC50 of DCBQ-OH resulted in twenty metabolic pathways (Table 1). The fructose and mannose metabolism, glycolysis/gluconeogenesis, lysine degradation, and starch and sucrose metabolism that appeared in the non-exposed cells and in those exposed to the IC20 concentration were downregulated in the cells exposed to IC50. Also, the metabolic pathways of α-linolenic acid metabolism, biosynthesis of unsaturated fatty acids, and cysteine and methionine metabolism were activated. 
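The pathway comparisons reported above (pathways shared with the control, newly activated, or downregulated at each exposure level) are essentially set differences between the pathway lists returned for each condition. A small sketch of that bookkeeping is shown below; the pathway names are examples taken from the text, and the helper is an illustration rather than part of the MetaboAnalyst workflow.

```python
from typing import Iterable, Dict, Set

def compare_pathways(control: Iterable[str], exposed: Iterable[str]) -> Dict[str, Set[str]]:
    """Classify pathways as unaffected, activated (new in exposed) or downregulated (lost)."""
    control_set, exposed_set = set(control), set(exposed)
    return {
        "unaffected": control_set & exposed_set,
        "activated": exposed_set - control_set,
        "downregulated": control_set - exposed_set,
    }

control_pathways = {"glycolysis/gluconeogenesis", "lysine degradation", "starch and sucrose metabolism"}
ic50_pathways = {"alpha-linolenic acid metabolism", "biosynthesis of unsaturated fatty acids",
                 "cysteine and methionine metabolism"}
print(compare_pathways(control_pathways, ic50_pathways))
```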
Metabolomic study The metabolome perturbations were studied after exposing HaCaT cells to concentrations corresponding to the IC20 and IC50 values: 0.10 mM and 0.30 mM for DCBQ-OH and 0.05 mM and 0.075 mM for DCBQ, respectively. Representative 1H-NMR spectra of the metabolomes from control and exposed cells can be seen in Fig. S10 and Fig. S11. Both concentrations of DCBQ-OH and DCBQ used in the metabolomic study are higher than those found in real-life aqueous samples. Although the concentration levels of the chemical factors employed in this study are not environmentally relevant, it is highly conceivable that they represent a pessimistic exposure scenario, while the use of concentrations higher than those considered environmentally important is common among (eco)toxicological assessments. Higher concentrations were unsuitable for the metabolomic study due to the high cell mortality, which means that many pathways would be deactivated and cells would malfunction. In the case of DCBQ, the metabolites arising from non-exposed cells were assigned to twenty-two metabolic pathways (Table 2). Preliminary experiments revealed that the presence of methanol did not cause any perturbations to metabolic pathways. Thus, the observed differences are ascribed solely to the DCBQ exposure. As mentioned above, exposure of the cells to DCBQ for 30 min partially converted it into DCBQ-OH, the concentration of which does not affect the viability, as demonstrated by the 30-min exposure of cells to DCBQ-OH concentrations of 0.10 and 0.30 mM. Considering that (i) the concentrations of the DCBQ-OH formed are even lower than 0.10 mM, where metabolic perturbations were observed earlier, and (ii) the metabolic alterations differ from those found when cells were exposed to DCBQ, it is quite likely that the metabolic alterations after exposure to DCBQ are attributed only to the parent compound. Nineteen metabolic pathways were not affected by the DCBQ and were present in all samples. Cell exposure to the IC20 of DCBQ yielded metabolites assigned to thirty-one metabolic pathways (Table 2). The pathways of biotin metabolism and valine, leucine and isoleucine biosynthesis, appearing in the non-exposed cells, were downregulated at both concentrations. Eleven metabolic pathways were activated in samples exposed to the IC20 of DCBQ, as seen below (Table 2). Cell exposure to the IC50 of DCBQ yielded several metabolites assigned to thirty metabolic pathways (Table 2). Even though lysine degradation appeared in the non-exposed cells and in those exposed to IC20, in the IC50 samples it was downregulated. The following seven metabolic pathways, not encountered in the control sample, were activated when cells were exposed to DCBQ at both concentrations: arginine biosynthesis, D-glutamine and D-glutamate metabolism, fatty acid degradation, glycerophospholipid metabolism, lipoic acid metabolism, the pathway of one carbon pool by folate, and riboflavin metabolism. The following four new metabolic pathways, not occurring in the control and IC20-exposed samples, were activated in the IC50 samples: glycine, serine, and threonine metabolism, inositol phosphate metabolism, the pentose phosphate pathway, and the phosphatidylinositol signaling system (Table 2). It is reasonable that the metabolic pathways of the control samples employed for DCBQ-OH and DCBQ exhibit distinct differences, because the cells for the study of DCBQ were incubated for 30 min in DMEM at pH = 5.0. The differences in the metabolic pathways mentioned above will be further discussed below. Relative quantification of metabolites All data regarding the relative quantification of the metabolites are given in detail in Tables S4 and S5. Thirty-two of the metabolites were found to be present in control cells and in cells after exposure to the IC20 and IC50 of DCBQ-OH. 
Specifically, β-sitosterol exhibited an increase of 38% and a decrease of 21%, D-xylose exhibited an increase of 19% and a decrease of 65%, and vitamin D3 exhibited an increase of 37% and a decrease of 61%. The relative content of 17α-hydroxypregnenolone, α-D-glucose, and β-D-fructose increased as the DCBQ concentration increased. The relative contents of 5α-androstane-3,17-dione, chenodeoxycholate, D-glucose 6-phosphate, L-arabinose, maltose, progesterone, prostaglandin E2, stachyose, and sucrose in cell samples significantly diminished as the DCBQ concentration increased. Chenodeoxycholate exhibited a decrease of 40% and 54%, and the relative content of prostaglandin E2 decreased by 8% and 64%, at the IC20 and IC50, respectively. The respective decrease in stachyose was 26%, until its NMR peak could no longer be integrated. In sucrose, the corresponding decrease was 30% and 71%. Discussion The DCBQ and its hydroxyl derivative DCBQ-OH belong to a class of DBPs of toxicological relevance and carcinogenic potency. The spontaneously immortalized human keratinocyte cell line HaCaT from adult skin is an ideal model for the study of keratinocyte functions and responses upon stimulation. This study represents the first attempt to study the metabolism of HaCaT cells after exposure to the two DBPs. Prior work investigating the two DBPs reported that the IC50 values of DCBQ and DCBQ-OH in Chinese hamster ovary cells (CHO-K1) after 24 h of exposure were 27.3 µM and 61.0 µM, respectively (Wang et al., 2014). For HEK-V kidney and HepG2 cells (Wang et al., 2018), the IC50 of DCBQ for a 24 h exposure period was found to be 44.0 µM and 72.0 µM, respectively. Herein, from the cell viability study, it was deduced that the IC50 values of DCBQ and DCBQ-OH after exposure of the more resilient HaCaT cells were 0.075 mM and 0.30 mM, respectively. The heatmap (Fig. 1) shows the quantitative alteration of significant metabolites after exposure to DCBQ-OH. The relative content of most of these metabolites increased when the cells were exposed to the low concentration of DCBQ-OH (i.e., IC20) and decreased at the high concentration (i.e., IC50), as compared with the control. When exposed to the IC20 of DCBQ-OH, a significant increase in the relative content was observed for 2-hydroxyestradiol, 7-dehydrocholesterol, androstenedione, D-xylose, and L-glutamate, reaching percentages of 125%, 145%, 283%, 215%, and 256%, respectively. After exposure to the IC50 of DCBQ-OH, a major decrease in the relative content was noticed for β-sitosterol and D-mannose, reaching a decrease of 62% in both. The relative content of cholesterol sulfate, folate, glycocholate, and lathosterol appeared to decrease in a dose-dependent manner when cells were exposed to DCBQ-OH. For cholesterol sulfate, the decrease was 42% and 70% at IC20 and IC50, respectively, while for folate the respective decrease was 40% and 82%. On the other hand, the relative content of etiocholanone appeared to increase by 32% and 41% at the low and high concentrations, respectively. In the case of DCBQ, thirty-seven metabolites were identified in both control and exposed (IC20 and IC50) cell samples. Most cellular metabolites exhibited an increase in their relative content at the low concentration of DCBQ (i.e., IC20) and a decrease when cells were treated with the high one (i.e., IC50), as seen in the heatmap of Fig. 2. 
The α-linolenic acid and unsaturated fatty acids are structural components of cell membranes and affect the flexibility and permeability of membranes and enzyme activity. The activation of the above pathways after exposure of cells to the high concentration of DCBQ-OH (equal to IC 50 ) underlines the perturbation of cells. Both pathways along with the activation of cysteine and methionine metabolism at the concentration of IC 50 suggest induced oxidative stress in cells. The deactivation of certain pathways appearing in both control and exposed to DCBQ-OH IC 20 samples, such as fructose, mannose, starch and sucrose metabolism, glycolysis/gluconeogenesis, and lysine degradation indicate that IC 50 affected cells adversely by hindering energy production. Fructose, mannose, starch, and sucrose are responses upon stimulation. This study represents the first attempt to study the metabolism of HaCaT cells after exposure to the two DBPs. Prior work investigating the two DBPs reported that IC 50 values of DCBQ and DCBQ-OH in Chinese hamster ovary cells (CHO-K1) after 24 h of exposure were 27.3 µM and 61.0 µM, respectively (Wang et al., 2014). For HEK-V kidney and HepG2 cells (Wang et al., 2018) the IC 50 of DCBQ for a 24 h exposure period was found to be 44.0 µM and 72.0 µM, respectively. Herein, from the cell viability study, it was deduced that the IC 50 values of DCBQ and DCBQ-OH after exposure to them of the more resilient HaCaT cells were 0.075 mM and 0.30 mM, respectively. In the case of cells, exposed to the low concentration of DCBQ-OH (equal to IC 20 ), certain metabolic pathways were activated. For instance, the activation of ascorbate and aldarate metabolism, glutathione, methionine and cysteine, and pyrimidine metabolism indicates that the exposure of cells induces oxidative stress. Specifically, ascorbate and aldarate, as antioxidants consume oxygen-free radicals and contribute to the preservation of α-tocopherol, an important antioxidant of the cell membrane (May, 1998). Glutathione protects cell membranes from free radicals and reactive oxygen intermediates (Meister, 1983) while homocysteine, a metabolite that takes part in the cysteine and methionine metabolism, can act as a precursor in the synthesis of 2012). The pentose phosphate pathway was activated when cells were exposed to the IC 50 of DCBQ. This pathway does not provide cells with ATP to meet their energy demands; instead, it yields NADPH and ribose-5-phosphate, which are vital for their survival (Ge et al., 2020). Furthermore, when cells were exposed to both low or high concentrations, perturbations on the cell membranes occurred as arginine biosynthesis, glycerophospholipid metabolism, and fatty acid degradation were activated. Through arginine biosynthesis, ornithine is produced, which leads to the production of collagen (Tong & Barbul, 2004). Glycerophospholipids are components of cell membranes (Hermansson et al., 2011) while fatty acid degradation is attributed to cell death and degraded components of membranes of the living cells. The DCBQ induced oxidative stress on cells as D-glutamine and D-glutamate, lipoic acid metabolism, and the pathway of one carbon pool by folate were activated when exposed to IC 20 and IC 50 of DCBQ. D-Glutamine, D-glutamate, and folate are precursors of glutathione and lipoic acid is a vitamin-like antioxidant that behaves like a free radical scavenger. 
Other pathways which support oxidative stress, are β-alanine, cysteine, methionine, glutathione, and histidine metabolism, which were activated only at the low concentration of DCBQ. β-Αlanine and histidine are precursors of carnosine, an antioxidant and free-radical scavenging factor (Vraneš et al., 2021), while cysteine and methionine are precursors of glutathione, as mentioned above. Cells exposed to IC 50 of DCBQ activated glycine, serine and threonine metabolism as a way of protecting themselves from oxidative stress and membrane damage. Glycine is a precursor of glutathione (Wang et al., 2013a, b) and serine acts as an antioxidant (Naderi Maralani et al., 2012). Both glycine and threonine are components of collagen, a key ingredient of cell membranes. Finally, cells exposed to IC 50 of DCBQ activated inositol phosphate metabolism and phosphatidylinositol signaling system. Inositol phosphates are important components of lipids, known as phosphatidylinositols, which are key components of cell membranes. These two pathways, also, were involved in the transport of messages from the receptors of cell membranes to the interior of cells (Berridge, 2009). The phosphorylated derivatives of inositol act as second messengers in signal transduction pathways for the adaptation to environmental stress and intercellular communication. The messages transferred involve the control of functions, such as secretion, metabolism, and growth (Berridge & Irvine, 1989). The IC 20 and IC 50 of DCBQ exhibited a DPPH radical scavenging activity of 50% and 64%, respectively. These findings are in contradiction to our expectations that DCBQ can form reactive species. In previous studies, DCBQ exhibited pro-oxidant activity (Du et al., 2013;Sun et al., saccharides and their metabolism provides cells with glucose. Lysine degradation yields two acetyl coenzymes A, which can be oxidized for energy production (Leandro & Houten, 2020). In living organisms, reactive oxygen species are generated as a product of normal metabolism (Nita & Grzybowski, 2016). The DCBQ-OH at concentrations of 0.10 mM and 0.30 mM had 82% and 95% scavenging activity on DPPH radical. These figures are in contrast to our assertions that DCBQ-OH forms reactive species. In a previous study, Hung et al. demonstrated that the water-transformed DCBQ led to the formation of intracellular reactive oxygen species, based on the 2,7-dichlorofluorescin diacetate assay, thus acting as a pro-oxidant (Hung et al., 2019). In our case, the treatment of HaCaT cells with DCBQ-OH brought out its role as a pro-oxidant rather than as an antioxidant, based on the activated metabolic pathways, indicating that oxidative stress occurred. The relative quantification of the metabolites in the samples exposed to DCBQ-OH and the control samples evidenced a disturbance of membranes. The relative content of membrane components, such as glycerol, sphingomyelin, and L-proline, increased at IC 20 in order to protect the membrane. This content decreased rapidly after exposure to IC 50 because of the induced perturbation by the DCBQ-OH, which was higher at the high concentration. These metabolites contribute to the enhancement and protection of cellular membranes. Moreover, lathosterol content decreased by 46% and reached non-detectable levels at IC 20 and IC 50 of DCBQ-OH, respectively, leading to the production of cholesterol. 
Under the above conditions, cholesterol sulfate decreased by 42% and 70%, respectively, in favor of the production of steroids and steroid hormones for enhancing membranes. In the case of exposure of cells to DCBQ, the differences in metabolic pathways were attributed, mainly, to DCBQ, considering that DCBQ-OH was present at so low concentrations that they did not affect the metabolism. When the cells were exposed to the IC 20 and IC 50 of DCBQ, the biotin metabolism and valine, leucine, and isoleucine biosynthesis were downregulated. As mentioned above, biotin is involved in gluconeogenesis, while isoleucine plays a prominent role in enhancing glucose consumption and utilization (Zhang et al., 2017). It is evident that DCBQ prevents cells from making use of glucose. In addition, when cells were exposed to the IC 50 of DCBQ, lysine degradation was downregulated contributing to energy production. As a result, the cells activated other pathways to satisfy their energy demands. Riboflavin metabolism was activated when cells were exposed to both concentrations. Riboflavin is a water-soluble micronutrient that supports energy production by assisting in the metabolism of fats, carbohydrates, and proteins (Powers, knowledge of the key molecules that can trigger the diverse metabolic pathways in cells exposed to the above chemical factors may allow us to know more about the regulatory mechanisms involved in metabolite generation. However, further studies would still be needed to elucidate further the implications of such exposures and bridge the remaining knowledge gaps in this area. Funding: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Open access funding provided by HEAL-Link Greece Data Availability Data are available by the corresponding author, upon request. Conflict of interest The authors declare no competing interests. Compliance with ethical standards This article does not contain any studies with human and/or animal participants performed by any of the authors. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/. 2019). Specifically, the 2,7-dichlorofluorescin diacetate assay showed that the production of reactive oxygen species increased when the concentration of DCBQ increased (Du et al., 2013) and malondialdehyde production increased as DCBQ increased (Sun et al., 2019). From the activated pathways, it is evident that DCBQ acts more as a pro-oxidant in in vitro cell experiments than as an antioxidant, just like DCBQ-OH. From the relative content of the metabolites present in the samples exposed to DCBQ, some findings deserve further discussion. 
The relative content of α-D-glucose increased, reaching 295% and 110% of the control values. Pathways involved in glucose utilization, such as biotin metabolism and valine, leucine, and isoleucine biosynthesis, were downregulated, signifying that glucose accumulated. Also, α-trehalose was activated when cells were exposed to DCBQ. It is worth mentioning that this saccharide, not found in cells exposed to DCBQ-OH, prevents cells from dehydrating and protects their internal organelles from disruption. Exposure of cells to the low concentration of DCBQ barely increased its relative content. By contrast, when the cells were exposed to the high concentration, the saccharide content was consumed, following the patterns of maltose, stachyose, and sucrose, which decreased as the DCBQ concentration increased. All these saccharides broke down to give glucose, as the energy demands of the cells were rapidly increasing. The content of 7-dehydrocholesterol, a steroid component of cell membranes, increased as cells produced it to reinforce and protect their membranes. Sphinganine and sphingosine are components of cell membranes; their content increased at the low concentration and decreased rapidly at the high concentration, where the perturbation of cell membranes was greater. It is evident that DCBQ disrupts cells to a considerable degree. Conclusion This study examined the effect of DCBQ and its hydroxyl analogue DCBQ-OH on the metabolism of HaCaT cells. Both compounds were toxic to HaCaT cells, indicating potential adverse effects on humans after exposure. Each of the two chemical factors was studied at two different concentrations, corresponding to the IC 20 and IC 50, in order to obtain valuable information on the cell metabolome. Both of them brought about multiple metabolomic alterations in the HaCaT cells, with the cells being more vulnerable to DCBQ than to its hydroxyl analogue. Certain metabolic pathways were downregulated, whereas others were activated to support cell survival. As DCBQ is more cytotoxic than DCBQ-OH and brought about more alterations in the metabolic pathways, it is reasonable to conclude that it may have a greater impact on living cells.
Adaptable Frequency Counter With Phase Filtering for Resonance Frequency Monitoring in Nanomechanical Sensing Nanomechanical sensors based on detecting and tracking resonance frequency shifts are to be used in many applications. Various open- and closed-loop tracking schemes, all offering a trade-off between speed and precision, have been studied both theoretically and experimentally. In this work, we advocate the use of a frequency counter (FC) as a frequency shift monitor in conjunction with a self-sustaining oscillator (SSO) nanoelectromechanical system (NEMS) configuration. We derive a theoretical model for characterizing the speed and precision of frequency measurements with state-of-the-art FCs. Based on the understanding provided by this model, we introduce novel enhancements to FCs that result in a trade-off characteristics which is on a par with the other tracking schemes. We describe a low-cost field-programmable-gate array (FPGA)-based implementation for the proposed FC and use it with the SSO-NEMS device in order to study its frequency tracking performance. We compare the proposed approach with the phase-locked-loop-based scheme both in theory and experimentally. Our results show that similar or better performance can be achieved at a substantially lower cost and improved ease of use. We obtain almost perfect correspondence between the theoretical model predictions and the experimental measurements. I. INTRODUCTION F REQUENCY counters are a standard equipment to char- acterize the frequency fluctuations of oscillators and clocks, especially in estimating the well-established timedomain measure of frequency stability, namely the Allan Deviation (AD) [1], [2].The averaging of instantaneous frequency over a certain observation (gate) time, which forms the basis for calculating AD, is naturally performed with a stan-dard frequency counter (FC).In this work, we propose using an improved FC with high resolution and accuracy [3] as a frequency shift monitor for an oscillatory signal that is generated by a self-sustaining oscillator (SSO)-nanoelectromechanical system (NEMS) device, as opposed to simply using it as a tool for characterizing its raw frequency stability in the presence of thermomechanical and detection noise.The goal is to detect small frequency shifts due to events of interest, arising, e.g., from the interaction of the nanomechanical resonator with a mass, temperature, or force stimulus, as fast and precise as possible.We develop a theoretical model for characterizing the FC measurements, and show that the averaging (gate) time of a standard counter can be used to balance the trade-off between the speed of detection and measurement precision.Based on the understanding provided by this model, we propose a novel counter architecture where the simple averaging of frequency over a gate time that spans across multiple signal cycles is replaced by a digital filter with adjustable bandwidth that operates on the resampled timestamps of the signal edges.The filtered timestamps are subsequently mapped to frequency measurements.We show that it is crucial to perform the filtering before the conversion of the timestamps to frequency values, especially in cases where transduction noise is dominant.While the proposed counter is not suitable for directly estimating the raw AD of the signal source anymore, it offers better trade-off characteristics as a frequency shift monitor.Conceptually, the output of the FC could be used to synthesize a cleaner oscillatory signal that tracks the 
frequency shifts of interest but with subdued unwanted frequency fluctuations.We characterize the precision of the counter output by computing the AD of this conceptually synthesized signal.Furthermore, we address an issue that relates to input signal dictated sampling rate in FCs.Our approach introduces a robust resampling technique that results in a consistent, fixed sampling frequency.This facilitates subsequent digital signal processing (DSP) on the output of the FC, enhancing its versatility and application scope as compared with standard FC designs.This method marks an improvement over conventional implementations as documented in [3], [4], and [5]. The standard and well-established technique for tracking the frequency changes of an oscillatory signal source is a phase-locked loop (PLL), where the signal generated by a clean controlled-oscillator (CO) is phase-and frequency locked to the noisy signal source with a closed-loop feedback system [6].The feedback loop is designed so that the CO tracks the frequency shifts of interest while suppressing rapid fluctuations due to noise, with the loop bandwidth serving as the control knob for trading off tracking speed versus precision.The precision of the PLL output is characterized by computing the AD of the CO signal.In a PLL implementation, in addition to the CO, a phase difference (between the signal source and the CO output) detector is needed to generate the error signal in the feedback loop.In the context of NEMSbased sensors, PLLs are usually realized using a lock-in amplifier-based setup [7], [8], [9].Instead of locking a CO to the sensor signal to track its frequency, we recommend a new design using an FC to directly measure the resonance frequency of the sensor.The sensor itself is excited by narrow pulses with low energy and oscillates freely [9].We use a reciprocal FC in a continuous measurement mode where the counter hardware is not reset between measurements.This technique was first used in the HP 5371 frequency analyser [4].It greatly increases the number of samples.The use of continuous time interval measurements makes it easier to study the dynamic frequency behavior of a signal. We compare the proposed self-sustaining oscillator (SSO) with FC scheme to the PLL approach both in theory and experiment, and show that similar or better performance can be achieved with respect to frequency resolution and stability of operation.We describe a low-cost field-programmablegate array (FPGA)-based implementation of the proposed FC.While the DSP in a lock-in amplifier can also be implemented on an FPGA, considerably more resources are needed to implement the CO, the phase demodulators, and the rest of the PLL functionality.Furthermore, only a low-Q bandpass filter is used to condition the signal for the FC.Thus, the proposed FC-based scheme offers similar or better performance but at a substantially lower-cost and improved ease-of-use. II. THEORY A. 
Interpolating Reciprocal FC With Continuous Timestamping We consider a state-of-the-art counter, namely an interpolating reciprocal counter with continuous timestamping [3], [5]. In order to understand the speed and precision properties of such a counter used as a frequency shift monitor, we develop a simple model that captures its characteristics. Let f_s(t) denote the instantaneous frequency (measured in units of Hz) of the signal source, which includes any fluctuations due to noise as well as shifts due to events of interest. We define φ(t) = ∫ f_s(t) dt as the signal phase (unitless, equal to phase in radians divided by 2π). In the FC, timestamps for the boundaries of full signal cycles, i.e., at the rising signal edges, are generated using a high-frequency, high-precision internal clock and an interpolator. That is, the time t_n where φ(t_n) = n (n is an integer) is measured with a clock counter for full clock cycles and an interpolator between the two clock edges that precede and succeed a signal edge [5]. In the typical setting where an FC is used to characterize the frequency stability of a high-quality signal source, the resolution of the timestamps may be limited by the clock frequency and the quality of the interpolating circuitry. In the application we consider here, the signal source exhibits relatively large frequency fluctuations, resulting in deviations in the timestamps that are much larger than this resolution limit. In the model, we thus assume that the timestamps t_n can be measured precisely. In a reciprocal counter, the frequency of the source is estimated from the timestamps for one signal cycle with f_c(t_n) = 1/(t_n − t_{n−1}) (1). Thus, f_c(t_n) represents the average of the instantaneous frequency f_s(t) over one signal cycle between t_{n−1} and t_n. The highest rate at which an FC can generate an output is limited by the signal frequency (or twice the signal frequency if falling signal edges are also used with a 50% duty cycle). With (1), the gate time of the counter is set to the cycle time of the signal source. A frequency estimate with a gate time of k cycles can be computed with f_c(t_n) = k/(t_n − t_{n−k}) (2). If a sudden frequency shift occurs in the signal source between t_{n−1} and t_n, its effect will be fully reflected in the frequency estimate in (1) at t_{n+1} (within two cycles), whereas it will be at t_{n+k} (within k + 1 cycles) for the one in (2). However, the precision of the estimate in (2) is higher since rapid frequency fluctuations are suppressed due to the inherent averaging over k cycles instead of just one. Thus, gate time can be used as a control knob for trading off response speed versus precision in an FC that is used as a frequency shift monitor. The averaging inherent in (2) corresponds to a simple moving average filter (MAVGF). Instead, any filter that may offer better response speed versus noise filtering characteristics can be used. In order to pursue this idea, we first need to better understand how frequency fluctuations affect the frequency estimates computed with an FC. For a constant nominal signal frequency f_o, we consider φ(t) = f_o [t + α(t)] (3), where α(t) represents the time (phase) noise of the source. Ideally, the instantaneous frequency f_s(t) and the fractional frequency y_s(t) can be computed from φ(t) with a time-derivative as follows: f_s(t) = dφ(t)/dt = f_o [1 + dα(t)/dt] and y_s(t) = f_s(t)/f_o = 1 + dα(t)/dt (4), where dα(t)/dt represents the fractional frequency noise.
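As a minimal numerical sketch of (1) and (2), the following Python snippet computes reciprocal-counter frequency estimates from an array of edge timestamps. The timestamps are generated synthetically (ideal, noise-free) for a 119 kHz source with a small frequency step, so the numbers are illustrative only.

```python
import numpy as np

def freq_estimates(timestamps, k=1):
    """Reciprocal-counter frequency estimate with a gate time of k cycles,
    i.e., f_c(t_n) = k / (t_n - t_{n-k}) as in (1) for k = 1 and (2) otherwise."""
    t = np.asarray(timestamps)
    return k / (t[k:] - t[:-k])

# Synthetic timestamps: a 119 kHz source whose frequency jumps by 5 Hz
# halfway through the record (cycle boundaries are where the phase hits
# integer values).
f0, df, n_cycles = 119e3, 5.0, 2000
inst_freq = np.full(n_cycles, f0)
inst_freq[n_cycles // 2:] += df
periods = 1.0 / inst_freq
timestamps = np.concatenate(([0.0], np.cumsum(periods)))

print(freq_estimates(timestamps, k=1)[:3])    # per-cycle estimates, eq. (1)
print(freq_estimates(timestamps, k=100)[:3])  # 100-cycle gate time, eq. (2)
```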
Let us now derive the fractional frequency estimate for a reciprocal counter. Based on (3), the timestamps t_n satisfy t_n = n T_o − α(t_n), where T_o = 1/f_o is the nominal cycle time of the signal. We substitute this expression for t_n in (1) to derive the following: y_c(t_n) = f_c(t_n)/f_o = 1/(1 − (α(t_n) − α(t_{n−1}))/T_o) (5) for the fractional frequency estimate computed in a reciprocal counter. The operation (α(t_n) − α(t_{n−1}))/T_o in (5) above corresponds to the (discrete-time) derivative of α(t) over one cycle. Ideally, as in (4), the conversion of phase to frequency is a linear transformation. However, in a reciprocal counter, this conversion involves a nonlinear operation, as seen in (5). If z (fractional frequency noise) denotes the time derivative of α(t), the (ideal) linear transformation to fractional frequency can be represented by 1 + z as in (4), whereas it is given by the nonlinear function 1/(1 − z) in (5) for a reciprocal counter. The power series expansion 1/(1 − z) = 1 + z + z² + ··· indicates that the two transformations are (approximately) equal only when z is small. Detection noise (generated in the transduction of mechanical motion into an electrical signal) in a NEMS device results in white phase noise [7], [8], [9]. This corresponds to a frequency fluctuation spectrum that increases with frequency. Thus, z may contain strong high-frequency components. In this case, the quadratic z² term in the power series expansion needs to be taken into account to accurately characterize the fluctuations in the frequency estimated by a reciprocal counter. The high-frequency spectral components in z mix with each other through z² to produce low-frequency fluctuations, known as intermodulation noise. In order to prevent or minimize the degrading impact of this nonlinear phenomenon on the accuracy of the frequency estimates computed by an FC, a low-pass digital filter can be applied to the timestamps t_n, and hence to the phase noise samples α(t_n), before the timestamps are converted to frequency estimates. Ideally, when phase-to-frequency conversion is linear, a (linear) filter may be applied to the phase or, equivalently, to the frequency data, since the ordering of linear transformations does not change the final outcome. In the case of a reciprocal FC, the low-frequency intermodulation noise generated inherently in the conversion of timestamps to frequency estimates cannot be removed with subsequent low-pass filtering. While timestamp-to-frequency conversion always generates intermodulation noise, its effect will be minimal if high-frequency fluctuations are suppressed first, before the conversion, with a digital filter. The bandwidth and the characteristics of this filter can be chosen to trade off speed versus precision when the proposed counter is used as a frequency shift monitor. B. Sampling Rate and Decimation In a reciprocal counter, the sampling rate at the output is determined directly by the frequency of the input signal, given by f_rate = f_s/k, where k is the number of cycles counted within one gate time. This input dependency results in problems when subsequent DSP is performed on the sampled FC output. For instance, if any sort of filtering is performed, a change in the input signal frequency will consequently alter the sampling rate, thereby affecting the dynamics of the filter. To mitigate this issue, the input signal-dependent sampling rate should be converted into a fixed one.
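The following toy simulation illustrates the point about the ordering of filtering and conversion: white time noise (deliberately exaggerated so the quadratic term is visible) is added to ideal timestamps, and the mean frequency error is compared when a simple first-order IIR low-pass filter is applied to the timestamps before conversion versus to the frequency estimates after conversion. The noise level and filter coefficient are arbitrary choices for the sketch, not values from the actual system.

```python
import numpy as np

rng = np.random.default_rng(0)
f0 = 119e3
T0 = 1.0 / f0
n = 100_000

# Ideal timestamps plus white time (phase) noise, as for dominant detection noise.
t_ideal = np.arange(1, n + 1) * T0
t_noisy = t_ideal + 0.5e-6 * rng.standard_normal(n)

def lowpass(x, a=0.02):
    """First-order IIR low-pass filter y[i] = a*x[i] + (1-a)*y[i-1]."""
    y = np.empty_like(x)
    acc = x[0]
    for i, v in enumerate(x):
        acc = a * v + (1.0 - a) * acc
        y[i] = acc
    return y

# (a) filter timestamps, then convert to frequency (proposed ordering)
f_ts_first = 1.0 / np.diff(lowpass(t_noisy))
# (b) convert to frequency, then filter (conventional ordering)
f_conv_first = lowpass(1.0 / np.diff(t_noisy))

# Mean error of each estimate (transient samples skipped); the z^2 term of
# the nonlinear conversion shows up as a systematic offset in case (b).
print(f"mean error, timestamps filtered first: {f_ts_first[5000:].mean() - f0:9.1f} Hz")
print(f"mean error, frequency filtered after : {f_conv_first[5000:].mean() - f0:9.1f} Hz")
```

Running the sketch should show a much larger systematic error when the filter is applied after the conversion, which is the low-frequency residue of the quadratic term discussed above.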
One approach to achieving a fixed sampling rate is to implement continuous event-triggered timestamp counting, as proposed in [5]. This involves using a dedicated counter that generates trigger events at regular intervals of T_int, which determines the sampling rate. However, the samples cannot be taken precisely at multiples of T_int. Instead, they are generated at the next rising edge of the input signal. As a result, there is an inherent uncertainty in the sampling time of up to one period of the input signal. The impact of this uncertainty on the overall measurement varies depending on the number of signal periods encompassed within one interval. When there are numerous signal periods, the uncertainty has a lesser effect. However, if there are only a few signal periods, the irregular sampling interval introduces errors that can affect the accuracy of the measurement. Although the sampling rate f_s/k is directly linked to the input frequency f_s, the sampling instants always align with the edges of the internal clock (with frequency f_CLK) of the FC. The sampling rate can be transformed up to f_CLK by simply interpolating the acquired data through the use of a zero-order hold. However, this introduces high-frequency harmonics due to the abrupt transitions between the samples. To address this, a low-pass filter (LPF) can be applied to attenuate the harmonics. The combined process of low-pass filtering and decimation (to a fixed fraction of f_CLK) after the zero-order hold can be efficiently achieved using a cascaded integrator-comb (CIC) filter, as described in [10]. Fig. 1(a) depicts the second-order CIC filter employed in this study, featuring two integrator sections and two comb sections. The comb sections introduce a delay of N = 2. The downsampler, with a value of R = 2^13, is positioned between the integrator and comb sections. Thus, the sampling rate of the final output is given by f_new = f_CLK/R, independent of the input frequency f_s. Fig. 1(b) shows the transfer function of the CIC filter after decimation. Notably, it does not exhibit a distinct separation between the passband and stopband, thus necessitating further filtering with a finite impulse response (FIR) or infinite impulse response (IIR) filter. The speed of the response to a frequency jump is intrinsically limited by the input signal frequency, as described in (1). When the data is resampled using the CIC filter, it results in additional filtering. This additional filtering could potentially slow down the response, particularly if the new sampling frequency (f_new) is less than the original sampling rate (f_rate). However, the range of frequency steps that this method can handle is theoretically unlimited. It is capable of tracking frequency steps of any magnitude, making it exceptionally suitable for gathering data from devices that require monitoring across a broad range of frequencies.
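A rough sketch of the resampling chain (zero-order hold onto the clock grid followed by a second-order CIC decimator) is given below. It assumes the FC output samples and their sample times are already available as arrays; the FPGA implementation details are not modeled, and the clock rate and decimation factor are scaled down from the actual design so that the example runs quickly.

```python
import numpy as np

def zoh_resample(sample_times, values, f_clk):
    """Zero-order hold of an irregularly sampled series onto the f_clk grid."""
    t_grid = np.arange(sample_times[0], sample_times[-1], 1.0 / f_clk)
    idx = np.searchsorted(sample_times, t_grid, side="right") - 1
    return values[idx]

def cic_decimate(x, R, N=2, order=2):
    """Second-order CIC decimator: 'order' integrators at the input rate,
    decimation by R, then 'order' comb sections with differential delay N."""
    y = x.astype(np.float64)
    for _ in range(order):                      # integrator sections
        y = np.cumsum(y)
    y = y[::R]                                  # downsample by R
    for _ in range(order):                      # comb sections
        y = y - np.concatenate((np.zeros(N), y[:-N]))
    return y / (R * N) ** order                 # normalize the DC gain (R*N)^order

# Usage sketch with scaled-down numbers (the real design uses
# f_clk = 76.92 MHz and R = 8192, impractical to simulate densely here).
f_clk, R = 1.0e6, 128
t = np.cumsum(np.full(5000, 1 / 119e3))         # timestamp-like series at ~119 kHz
v = 1 / np.diff(np.concatenate(([0.0], t)))     # per-cycle frequency estimates
resampled = cic_decimate(zoh_resample(t, v, f_clk), R)
print(resampled.shape, resampled[5:10])
```

In the actual design, the decimated series would then pass through the adjustable FIR or IIR low-pass filter before the conversion to frequency.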
C. Allan Deviation The AD σ_y(τ) is a widely used and well-established method for characterizing frequency fluctuations [7], [11], [12]. For the AD, the frequency values need to be normalized, resulting in the fractional frequency y(t) = (f(t) − f_o)/f_o. The AD is the square root of the Allan Variance, which can be computed from sampled frequency data using the following equation: σ_y²(τ) = (1/(2(M − 1))) Σ_{i=1}^{M−1} (y_{i+1} − y_i)² (8). Here, y_i represents the ith sample of the averaged frequency over the averaging time τ. It is calculated as y_i = (1/τ) ∫_{t_i}^{t_i+τ} y(t) dt (9). Thus, the averaging operation above has to be part of the frequency measurement and data acquisition process. An FC naturally performs this operation when it is used to characterize the raw frequency fluctuations of its input signal. On the other hand, (8) is routinely used in practice on measured frequency data, also in cases where an FC is not used and/or the averaging operation is not inherently included in the measurement process. Such use is justified only when the sampling rate is sufficiently large when compared with the system bandwidth, i.e., the smallest time τ that determines the sampling interval is small enough. In this case, the frequency that is being measured is (almost) constant over the sampling interval (smallest τ), resulting in y_i ≈ y(t_i) (10). Averaging when τ is equal to an integer m multiple of the sampling interval is performed by simply averaging m consecutive samples of the acquired raw data. When an FC is employed as a frequency shift monitor, as we propose in this work, in contrast with its use in characterizing the frequency fluctuations of its input signal, the AD should be computed for the conceptually synthesized oscillatory signal based on the counter output. Thus, the averaging in (9) needs to be performed in addition to the inherent averaging performed by the counter itself. While the inherent averaging of the counter is over a time interval determined by its gate time specification, the average in (9) is computed for all τ that are of interest in the AD characterization. The Allan Variance can alternatively be computed in the frequency domain if the power spectral density of fractional frequency fluctuations, denoted by S_y(ω), is known. The equation for this computation, σ_y²(τ) = (1/π) ∫_0^∞ S_y(ω) [sin⁴(ωτ/2)/(ωτ/2)²] dω (11), includes the frequency domain equivalent of the averaging operation in (9) and should thus be used on an S_y(ω) that was measured or computed without inherent averaging. For white frequency fluctuations, where S_y(ω) is constant, (11) simplifies to σ_y²(τ) = S_y/(2τ). Hence, in systems limited by thermal white noise, the resulting AD exhibits a σ_y ∝ 1/√τ dependence on the averaging time τ. This study focuses on two primary noise sources, thermomechanical noise and detection noise. Thermomechanical noise is regarded as the fundamental noise source in NEMS resonators, resulting from the random movement of resonator molecules. On the other hand, detection noise arises from the transduction of the resonator's mechanical motion into an electrical signal, as well as from electronic components involved in the detection process.
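A small sketch of the sampled-data computation in (8) and (9): fractional frequencies are averaged over each interval τ and the AD is formed from successive differences. This is a plain non-overlapping estimator on synthetic data; production code would typically use an overlapping variant or a dedicated library.

```python
import numpy as np

def allan_deviation(freq, f0, fs, taus):
    """Non-overlapping Allan deviation of a frequency record sampled at fs:
    average the fractional frequency over each interval tau (eq. (9)), then
    take half the mean squared difference of adjacent averages (eq. (8))."""
    y = (np.asarray(freq) - f0) / f0              # fractional frequency
    out = []
    for tau in taus:
        m = max(1, int(round(tau * fs)))          # samples per averaging interval
        n = len(y) // m
        if n < 2:
            out.append(np.nan)
            continue
        y_bar = y[: n * m].reshape(n, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(y_bar) ** 2)
        out.append(np.sqrt(avar))
    return np.array(out)

# Example: white frequency noise, for which sigma_y should fall off as 1/sqrt(tau).
rng = np.random.default_rng(1)
fs, f0 = 9.4e3, 119e3
freq = f0 + 0.5 * rng.standard_normal(int(10 * fs))   # 10 s synthetic record
taus = np.array([1e-3, 1e-2, 1e-1, 1.0])
print(allan_deviation(freq, f0, fs, taus))
```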
The power spectral density of frequency noise, S_ω(ω), can be computed as a superposition of the power spectral densities of thermomechanical and detection phase noise, multiplied by their corresponding transfer functions (magnitude squared), as derived in [7] and [9], i.e., S_ω(ω) = S_θ_th |H_θ_th(ω)|² + S_θ_d |H_θ_d(ω)|² (13). This yields the spectral density of fractional frequency noise required for computing the AD, where K = (S_θ_d/S_θ_th)^(1/2) is the ratio between the thermal and detection noise and H_θ_th and H_θ_d are the transfer functions of the thermomechanical and detection noise to the frequency output, respectively. For the SSO frequency tracking scheme, the transfer functions are given in (14), following [9]: H_r is a single-pole LPF with the time constant of the resonator, and H_L has low-pass characteristics and represents the bandwidth-limiting (noise filtering) mechanisms in the frequency detection device. The theoretical computation of the AD is performed with the frequency domain approach in (11), where the spectral density of fractional frequency noise is first computed using a frequency domain model of the SSO and the FC, as in (13) and (14), that includes the intermodulation noise generated in the timestamp-to-frequency conversion. III. METHODS We describe the experimental setup for the comparison of the two methods, the proposed FC and the PLL frequency detector (PLLFD), for frequency shift monitoring of an oscillator. Our measure of assessment for precision is the AD. Central to our investigation is the SSO, as detailed in [9], driven by narrow pulses and oscillating freely. The pulse duration and timing are automatically adjusted to attain the desired resonance frequency within a closed-loop setup involving the resonator and the pulse generation mechanisms. Notably, the frequency measurement and detection operate outside of this loop. Our study involves resonance frequency measurements using the proposed FC and the established PLLFD, as depicted in Fig. 2(b). We compute the AD in two experimental conditions, using the FC output and the PLLFD output. For the FC, we explore filter configurations, cut-off frequencies, and timestamp versus frequency filtering. A. NEMS Resonator Setup In this study, we utilized a NEMS resonator consisting of a square 50-nm-thick silicon nitride membrane measuring 1018 µm on each side, as introduced in [9] and shown in Fig. 2(a). To achieve electrical transduction, we incorporated two 5-µm-wide gold (Au) electrodes spanning the resonator. The membrane was placed within a static magnetic field of approximately 0.8 T, generated by a Halbach array composed of neodymium magnets. The orientation of the traces was perpendicular to the magnetic field. By capitalizing on the resulting Lorentz force, one metal trace serves to drive the resonator with an ac current, while the second electrode detects its motion through the magnetomotively induced voltage. To amplify the detected signal from the metal trace, we employed a custom-made, low-noise differential pre-amplifier with a gain factor of 10^4. The NEMS was operated in vacuum at a pressure of 8.2 × 10⁻⁶ mbar. The NEMS resonator had a resonance frequency of f_r = 119 kHz and a quality factor of Q = 57.5 k. Consequently, the response time of the NEMS resonator, calculated as τ_r = 2Q/ω_r, is 154 ms. B. Self-Sustaining Oscillator The resonator utilized in this study operates as an SSO (implemented in PHILL from Invisible-Light Labs GmbH), as described in [9] and depicted in Fig.
2(b).The transduced output of the NEMS resonator is connected to a preamplifier (PA).The amplified signal is then directed to a bandpass filter (BPF) (with gain 1 and bandwidth 5 or 20 kHz for different measurements).The BPF inside the SSO loop is not needed to improve the SSO performance, and in some cases, it is not necessary.However, it gives the system the ability of mode selection by suppressing the buildup of unwanted modes.On the other hand, the BPF bandwidth limits the system response speed.Therefore, it is important to ensure that the bandwidth of the BPF is large enough in order to prevent it from becoming a limiting factor in the system's overall response speed.In the context of this work, where the NEMS is employed as an infrared detector with a thermal response time constant (τ resp ) ranging from 50 to 200 ms, a BPF bandwidth exceeding 1 kHz is deemed adequate.The output of the bandpass filter is linked to a comparator (COMP) with a 50 mV hysteresis, which transforms the sinusoidal signal into a rectangular waveform, triggering the pulse generation mechanism that drives the NEMS resonator.The pulse generation mechanism is comprised of two components: one generates a pulse with a width of T w , while the other delays the pulse generated at the feedback output by a time of T d .Since the NEMS resonator can only tolerate low currents, the generated pulse needs to be attenuated (ATT) by a factor of 10 5 before being applied to the NEMS. Frequency detection is accomplished using two different methods, the PLLFD and our proposed FC.In principle, the PLLFD and FC can be connected anywhere in the SSO loop.Before the data is acquired by both frequency detectors, antialias filtering has to be performed.The anti-aliasing filter will limit the noise going into the frequency detector and will take part in the H L filtering term of (14).It will also prevent noise folding which would degrade the resulting AD as shown in [9].For the PLLFD, this anti-aliasing function is served by the LPF in its phase detector, and the PLLFD is out of convenience connected after the PA.On the other hand, to avoid aliasing in the FC, the signal must first pass through a bandpass filter with a bandwidth less than half of the input signal frequency, which also coincides with the maximum sampling rate of the FC.For devices similar to the one used in this work, BPF bandwidths below 50 kHz will fullfill the criteria.To eliminate the need for two bandpass filters (one in the SSO loop and one at the input of the FC), the FC can be connected at any position in between the bandpass filter and the resonator, and the in-loop BPF will act as an anti-aliasing filter.Thus, the FC is connected to the output of the bandpass filter. 
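As a toy illustration of the comparator and pulse-generation stage described above (not the actual PHILL firmware), the sketch below converts a noisy sinusoid into a rectangular waveform with hysteresis and emits a fixed-width, optionally delayed drive pulse at every rising edge; the amplitudes and timing values are arbitrary.

```python
import numpy as np

def comparator_with_hysteresis(x, hyst=0.05):
    """Schmitt-trigger comparator: output switches high above +hyst/2 and
    low below -hyst/2 (toy model of the COMP block with 50 mV hysteresis)."""
    out = np.zeros(x.size, dtype=bool)
    state = False
    for i, v in enumerate(x):
        if not state and v > hyst / 2:
            state = True
        elif state and v < -hyst / 2:
            state = False
        out[i] = state
    return out

def drive_pulses(square, fs, t_d=0.0, t_w=1e-6):
    """Emit a pulse of width t_w, delayed by t_d, at every rising edge."""
    pulses = np.zeros(square.size)
    rising = np.flatnonzero(np.diff(square.astype(int)) == 1) + 1
    d, w = int(round(t_d * fs)), max(1, int(round(t_w * fs)))
    for r in rising:
        pulses[r + d : r + d + w] = 1.0
    return pulses

fs, f_r = 10e6, 119e3
t = np.arange(0, 2e-3, 1 / fs)
signal = 0.2 * np.sin(2 * np.pi * f_r * t) \
         + 0.01 * np.random.default_rng(4).standard_normal(t.size)
square = comparator_with_hysteresis(signal, hyst=0.05)
pulses = drive_pulses(square, fs, t_d=0.0, t_w=1e-6)
print(square.mean(), pulses.sum() / fs)   # duty cycle and total drive-on time
```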
With the proposed technique, tracking the resonance frequency of a device does not require the construction of a complex control system. Knowledge of the readout method, a rough estimate of the resonance frequency, and the response speed of the device are sufficient to find an oscillation and track its frequency. The devices utilized in this study are characterized by resonant frequencies 105 kHz < f_r < 125 kHz and response times 50 ms < τ_resp < 200 ms. Notably, the magnetomotive readout employed measures the velocity, rather than the position, of the resonator. This method induces a (π/2) phase shift relative to the position. Consequently, the −(π/2) phase shift (from the drive to position) intrinsic to the resonator is neutralized by the readout's phase shift, resulting in a net in-loop phase of 0. This analysis assumes minimal electrical delays at lower frequencies, although such delays may warrant consideration at higher frequencies. Hence, in this context, the pulse delay can be set to T_d = 0. If an optical readout that does not cancel out the phase delay caused by the resonator is used, the pulse delay should be set to T_d = 3/(4 f_BPF), where f_BPF is the bandpass central frequency. The pulse delay element will then generate a −3(π/2) phase shift (a time delay corresponds to a negative phase shift) at f_BPF. The BPF exhibits a phase shift of 0 only at this central frequency. By sweeping the central frequency of the BPF, the loop phase condition will only be met if f_BPF = f_r, and an oscillation will be excited. C. Frequency Counter Fig. 3 illustrates the block diagram of the FC architecture. The FC employed in this study is an enhanced version of a state-of-the-art interpolating reciprocal counter with continuous trigger events (implemented in PHILL from Invisible-Light Labs GmbH). It has been modified to better suit the task of tracking resonance frequency changes with improved usability. The front-end of the FC is a frequency divider, which enables counting k cycles of the input signal, where k can be any integer and specifies the number of counted periods in one timestamp. To avoid aliasing and noise folding, it is recommended to keep k as small as possible, i.e., set it to k = 1, since the sampling rate of the counter, f_rate, is directly dependent on the input signal frequency, given by f_rate = f_s/k. The subsequent stage of the system is the main counter with an interpolator. It functions as a standard reciprocal counter, tallying the number of rising edges of the internal clock that occur in an interval of k periods of the input signal and generating the timestamps for the interval boundaries. However, due to the potential error of up to one clock cycle in such a counter, an interpolation mechanism is employed to enhance the precision of the timestamps up to 100 ps at 100 kHz. The output rate of the main counter is directly linked to the frequency of its input, necessitating resampling to achieve a fixed sampling rate. First, the timestamp series is interpolated onto the time grid of the internal clock transitions using a zero-order hold. Second, it is decimated to the desired sampling rate through a CIC decimator. Third, the resampled timestamp series is low-pass filtered, which defines the final system bandwidth as specified by H_L in (14). Finally, the filtered timestamp series is converted to the frequency output using (1) or (2).
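A toy model of the coarse-counter-plus-interpolator timestamping described above is sketched below: an edge time is quantized to whole cycles of the 76.92 MHz internal clock, and an interpolated fraction quantized to a 100 ps resolution is added. This only conveys the idea and does not model the TDC hardware.

```python
import numpy as np

def timestamp(edge_time, f_clk, interp_resolution=100e-12):
    """Toy interpolating timestamp: a coarse count of whole clock cycles plus
    an interpolated remainder quantized to the interpolator resolution.
    (Illustrative only; the FPGA/TDC implementation is not modeled here.)"""
    coarse = np.floor(edge_time * f_clk) / f_clk
    fine = np.round((edge_time - coarse) / interp_resolution) * interp_resolution
    return coarse + fine

f_clk = 76.92e6
edges = np.cumsum(np.full(5, 1 / 119e3))     # five cycle boundaries of a 119 kHz signal
print([f"{timestamp(t, f_clk):.10f}" for t in edges])
```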
Constructing a digital PLL entails three critical components: an analog-to-digital converter (ADC), a digital-to-analog converter (DAC), and field-programmable gate array (FPGA) circuitry.For minimizing time quantization errors, the ADC and DAC typically operate at high sampling rates, often exceeding 40 MHz, and generally require a resolution of 14 bits.It is important to note that both the ADC and the DAC, particularly those with such specifications, are expensive components.On the FPGA, implementing demodulators and proportional-integral-derivative (PID) controllers for the PLL involves multiplicative operations.This necessitates using FPGAs equipped with DSP slices for efficient execution.However, FPGAs with DSP slices are considerably more expensive than their non-DSP counterparts. Conversely, an FC's architecture includes a COMP, an FPGA, and an interpolator.COMPs are relatively inexpensive components and can even be omitted if the counter is connected post the SSO's COMP, as utilized in this study.The interpolator can be integrated internally via a time-todigital converter (TDC), as discussed in [13], or implemented using cost-effective integrated circuits like the TDC7200.As illustrated in Fig. 3, all components of the FC can be incorporated into an economical FPGA without DSP slices.This aspect renders the FC a more budget-friendly alternative compared to the digital PLL. IV. RESULTS AND DISCUSSION We conducted 10 s measurements (as shown in Fig. 4) to analyze the output frequency fluctuations due to thermomechanical and detection noise for various frequency shift detection scenarios and system parameter choices.The experimental setup, which incorporated magneto-motive readout, introduced significant detection noise.Consequently, the thermomechanical noise peak could not be resolved at the resonator output, resulting in a value of K > 1.As discussed in [9] and [11], this leads to an AD with a 1/τ dependency if the system time constant is shorter than the resonator time constant.When K < 1, the sensor is intrinsically limited by the thermomechanical noise of the resonator, not by the noise generated in the transduction and the frequency detection mechanisms.Also in such thermomechanically limited systems, filtering in the frequency detector as represented by H L in ( 14) can be used to trade off speed versus accuracy.However, the enhancement techniques proposed and that are to be demonstrated below, while beneficial for considerably alleviating detection noise influence, will not have the same level of impact on systems where thermomechanical noise is the dominant factor.In the following, we examine the FC under four different conditions. A. Filtering of Timestamp Data Versus Frequency Data In the first experiment, shown in Fig. 5, we compare two alternatives for an FC, with conversion of timestamp series to frequency data after or before low-pass filtering.As articulated in Section II, the expected 1/τ dependence of the AD is altered to 1/ √ τ for large τ .This is due to the mixing of the high-frequency noise components by the nonlinearity of the time-to-frequency conversion process, resulting in intermodulation noise at lower frequencies.This can be alleviated by removing or reducing high-frequency noise before time-tofrequency conversion by low-pass filtering the counter output in the form of a timestamp series.The ADs of these measurements are shown in Fig. 
5 showing that filtering the timestamp data first and then conversion to frequency yields better AD than conversion before filtering, as predicted by theory. B. Sweeping the Gate Time of the Counter In the second experiment, we investigate the impact of varying the number of counted periods k of the signal, i.e., the gate time of the counter, on the performance and the filtering process in the FC.As seen in Fig. 6, increasing k seemingly does not lead to an improvement in the AD without suppressing it (raw).In contrast, low-pass filtering the FC output results in improved ADs.The best AD is observed for k = 1 with the LPF. ADs for the unfiltered raw data at the counter output fall exactly on top of each other when k is varied, albeit starting at larger τ values for larger k due to the larger sampling interval of k periods.This result is puzzling at first thought, since theoretical considerations indicate that we should be able to use the gate time of the counter as a control knob for trading Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.Fig. 5. Effect of the position (before or after time-to-frequency conversion) of a first-order LPF with a cut-off frequency of 200 Hz on the AD.The "Raw" curve is for the raw unprocessed output of the FC without any subsequent filtering.The "LPF time" curve corresponds to the output of the proposed FC with low-pass filtering before time-to-frequency conversion.The "LPF frequency" curve represents the use-case of filtering the output of a conventional reciprocal counter where time-tofrequency conversion has already been performed.The SSO loop BPF with a 20-kHz bandwidth was used for all experiments.Fig. 6.Comparison between the ADs of raw and filtered (in the FC with a first-order LPF with a cut-off frequency of 200 Hz) data for varying gate time as set with k counted periods.The SSO loop bandpass filter (BPF) used in this experiment had a bandwidth of 5 kHz with the corresponding time constant τ BPF . off response speed versus precision.While the response time of the counter is definitely prolonged with increased gate time (sudden frequency shift at the input will be fully reflected in the counter output after k periods of the input, as discussed before), results in Fig. 6 suggest that there is no improvement in the precision of the output. The seemingly unexpected result in Fig. 6 can be resolved and deciphered as follows.The sampling rate at the output of the counter is inversely proportional to the gate time, with the counter producing a measurement for every k periods of the input signal.When AD is computed based on this sampled frequency data for the smallest τ of k periods, the only averaging in the sense of (9) reflected in the data is the one that is inherently performed by the counter front-end, with no additional averaging in the actual AD computation as discussed before.This means that the AD computed as such is actually a characterization of the frequency fluctuations of the counter input signal, as opposed to the conceptually synthesized signal based on the counter output.Thus, it makes perfect sense that computed ADs for varying k fall exactly on top of each other, since they all represent a characterization of the input signal, not the counter output.Another perspective on the seemingly puzzling results in Fig. 
6 is as follows.With increased gate time, the sampling rate at the output of the counter is not large enough to justify the use of the frequency data to compute the AD (for the counter output) with (8), since (10) does not hold in this case, and the additional averaging that needs to be part of the AD computation is missing. While we resolved the results in Fig. 6 based on theoretical arguments, it is desirable to experimentally observe the precision improvement one can obtain in an FC by increasing the gate time.This requires somehow increasing the sampling rate at the counter output.With a gate time of k input periods, one can attain a k-fold increased sampling rate by employing a sequence of k parallel counter front-ends, where each front-end counts over k periods of the input signal, but with one period delay relative to the previous one in an interleaved manner.However, this would increase the hardware cost and complexity considerably.Instead, we simply emulate k parallel counter front-ends by first producing output from the counter with k = 1 for every period of the signal, and then processing this output with an MAVGF with a window length of k.Apart from a larger latency at the output, this emulation is hardware-equivalent to having k parallel counter front-ends and does not involve any approximations.One can obtain the frequency data at the k-fold lower sampling rate (output of only one of the k front-ends, used to generate Fig. 6) by simply downsampling the MAVGF output with a downsampling ratio of k. The ADs computed from the raw counter output with k = 1, the MAVGF output with k = 121, the downsampled moving average output, as well as LPF processed versions of the moving average output and its downsampled version, are shown in Fig. 7.As expected, we observe the perfect coincidence of the AD curve for the downsampled moving average output with the curve for the raw counter output.We now also observe the theoretically claimed precision improvement at the counter output with increased gate time (emulated with the moving average output) even when there is no extra lowpass filtering.With increased sampling rate, computation of AD with ( 8) is justified since (10) now holds.The additional averaging required in AD computation for τ values larger than the sampling interval is performed using the data samples available at the higher rate. Further, low-pass filtering the downsampled moving average output results in a precision improvement as indicated by Fig. 
7, but not as much as the one we observe for the moving average output at the higher sampling rate and its filtered version.On the other hand, it seems strange that the moving average output has higher precision when compared with its downsampled version.In the end, specific frequency measurement values in the downsampled data are exactly equal to a subset of the values in the higher rate data, and therefore should have the same precision.In fact, use of AD in order to characterize the precision of the downsampled data is not appropriate since (10) does not hold.However, one can simply compute the standard deviation of the downsampled data, which is in fact equal to the standard deviation of the higher-rate date assuming statistical stationarity.Any DSP with memory, that involves dynamics over time (including AD computation), on the downsampled data is not meaningful, and introduces aliasing and noise folding due to the low sampling rate.When gate time is set to the lowest value, the input signal period with k = 1, the counter front-end averages over the signal period T s , which is also set to the sampling interval at the counter output.Therefore, a front-end bandpass filter with a maximum bandwidth set to half the signal frequency is needed to prevent aliasing and noise folding, even when k = 1. We thus conclude that, even though gate time can be used as a control knob for trading off precision versus response speed when an FC is used as a frequency shift monitor, it is better to set the gate time to the smallest value with ADs showing that resampling and filtering the signal at a fixed sampling rate achieve the same performance, whereas the event-triggered timestamp method results in a degradation.(Raw: no filtering; LPF: low-pass filter.)k = 1 and generate output at the largest rate possible, with an appropriate front-end bandpass filter to prevent aliasing.Then, instead of emulating a larger gate time with a simple MAVGF (a specific type of FIR filter), it is better to use an appropriate, bandwidth adjustable FIR or IIR digital filter that can offer a better precision versus speed trade-off.For the results we present in this work, we used a first-order IIR LPF. C. Resampling for a Fixed Sampling Rate In the third experiment, we consider resampling of the FC output, which has an input-dependent sampling rate, to a fixed sampling frequency.We compare filtering of the raw output and the resampled version.We also compare our proposed resampling technique with the continuous event-triggered timestamp method described in [5]. We consider and compute six versions of the frequency data as follows. 1) The raw main counter output (for k = 1) with an input-dependent sampling rate is generated.2) The raw output is passed through a first-order Butterworth LPF with a cut-off frequency of 200 Hz. 
3) The raw output is processed using the event-triggered timestamp method with T int = 100 µs.4) The output processed with the event-triggered timestamp method is also passed through the LPF.5) The raw output is first processed with a zero-order hold to increase the sampling frequency to the frequency f CLK = 76.92MHz of the internal clock.Subsequently, it is decimated using a CIC decimator with a decimation factor of R = 8192, resulting in a final sampling rate of 9.4 kHz.6) The resampled data is passed through the LPF.We note that the digital LPF is implemented at the respective Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.sampling rate of the data it is applied to, corresponding to the same cut-off frequency in each case.The ADs for the six versions of frequency data described above are shown in Fig. 8.The event-triggered timestamp method produces an uncertainty in the sampling instants up to one period of the input signal, which manifests itself in the form of additional frequency noise resulting in a larger AD, discernible in Fig. 8 right before thermal drift kicks in.The additional frequency noise produced by this technique, which cannot be suppressed, is even more noticeable after the data is processed with the LPF.On the other hand, the data produced by our proposed resampling technique exhibits a slightly improved AD compared to the raw main counter output.This improvement can be attributed to the inherent low-pass filtering performed by the CIC decimator.Furthermore, if the resampled signal is further processed with an LPF, the AD obtained is identical to that when the raw counter output is low-pass filtered with the same cut-off frequency. D. Proposed FC Versus PLLFD In the final experiment, we compare the proposed FC with a commercial, lock-in-based PLLFD.The PID coefficients of the PLLFD are generated by the software that comes with the equipment, targeting a loop bandwidth of 200 Hz, resulting in the values k p = 2.92 and k i = 13.34Hz/deg/sec.The LPF in the PLL demodulator is a first-order filter with a cut-off frequency of 1 kHz, and the PLL operates at a sampling rate of 27 kHz.The transduced NEMS output after the preamplifier is fed to the PLLFD, whereas it is processed with a bandpass filter (with a bandwidth of 5 kHz) to produce the input for the FC.The FC raw output (with k = 1) is processed with a CIC decimator with a decimation factor of R = 8192, resulting in a final sampling rate of 9.4 kHz.The output of the decimator is processed with a LPF with a cut-off frequency of 200 Hz.Fig. 9 illustrates the comparison between the state-of-theart PLLFD method for frequency tracking and the proposed FC-based technique.It can be observed that both methods exhibit almost the same performance in both measurement and theory. V. CONCLUSION In this study, we investigated various aspects of frequency shift monitoring mechanisms based on FCs for resonant sensors.We characterized their precision, both in theory and experimentally, in the presence of thermomechanical and detection noise.Through theoretical models, analyses, and articulate arguments, combined with a series of experiments and elucidated results, we have gained valuable insights into various scenarios, system architecture and parameter choices. 
We proposed a novel and cost-effective FC-based frequency shift monitoring scheme, which has been overlooked in the NEMS literature. Our FC-based architecture features wide bandpass filtering for signal conditioning combined with digital low-pass filtering in the sampled-data domain before the timestamp series for the signal transitions are converted to frequency data. This architecture not only alleviates the detrimental effect of intermodulation noise generated by the nonlinearity of the time-to-frequency conversion, but also provides a flexible and practical platform for real-world applications where all DSP is performed at a fixed, input-independent sampling rate. We investigated mechanisms for trading off response speed versus precision in our proposed FC-based scheme. Although we have shown that the gate time of a counter can be used as a control knob for this purpose, it is more effective and efficient to set the gate time to the smallest possible value, i.e., the cycle time of the input signal, and generate an output at the largest sampling rate possible. The speed versus precision trade-off can then be conveniently achieved by modifying the digital filtering characteristics in a more flexible manner. The output of a standard reciprocal counter, where the gate time is set to the cycle time of the input signal, has an input-dependent sampling rate. This is inconvenient for the subsequent DSP. We developed a resampling scheme that uses a zero-order hold and a CIC decimator to convert the output to a fixed, input-independent sampling rate. We have shown experimentally that the resampled and filtered output has the same (or slightly better) precision as the filtered raw output of the FC, as characterized by ADs. Finally, we compared the precision of the proposed FC-based scheme with a commercial implementation of the common and standard PLLFD technique in terms of ADs. Our results showed that both methods achieve the same performance, indicating that the proposed FC-based frequency tracking method can serve as a viable and cost-effective alternative to the state-of-the-art PLLFD method. Overall, our experimental results, theoretical models, analyses, and findings contribute to a better understanding of frequency detection mechanisms. This understanding, combined with our FPGA-based flexible and cost-effective platform, paves the way to developing new frequency shift monitoring schemes with enhanced precision, response speed, and/or reliability. Looking forward, the insights gained from our study open up several avenues for further research and development in the field of precision frequency measurement for sensing applications. There is potential for advancing the DSP techniques used in our architecture. Machine learning-aided algorithms, for instance, could be employed to dynamically optimize and adapt filtering parameters in real time, potentially leading to even more precise and responsive frequency tracking. Our research lays the groundwork for further developments, and we anticipate that the concepts and techniques presented in this work will contribute to innovations in sensors based on frequency shift monitoring in the near future. Fig. 1. (a) Block diagram of the second-order CIC decimator with the decimation factor R and the comb section delay N. (b) Transfer function of the decimated output. Fig. 2.
(a) Picture of the NEMS resonator used in the experiments and its connection to the setup. (b) Block representation of the SSO tracking scheme with the FC and the PLLFD. Fig. 3. Block diagram of the proposed FC architecture, where s_in is the input signal and f is the frequency output. Fig. 4. Time-domain steady-state measurements acquired by the main counter (with interpolator block), using two different bandpass filter bandwidths: 20 and 5 kHz. Fig. 7. Measurement-based model of the FC acquisition mechanism emulating a large gate time with k = 121 by applying an MAVGF to data acquired with k = 1 and downsampling, with comparison of low-pass filtering at different stages. (Raw: no filtering; MAVGF: moving average filter; LPF: low-pass filter.) Fig. 8. ADs showing that resampling and filtering the signal at a fixed sampling rate achieve the same performance, whereas the event-triggered timestamp method results in a degradation. (Raw: no filtering; LPF: low-pass filter.) Fig. 9. ADs of raw (unfiltered) and low-pass filtered FC output in comparison with a PLLFD with the same bandwidth. Results show that there is no difference in performance between the commercial PLLFD and the FC proposed in this work.
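As a loose numerical companion to the comparison summarized in Fig. 7, the sketch below computes a non-overlapping Allan deviation for a raw frequency record, for the same record averaged with a length-k MAVGF (emulating a larger gate time), and for a first-order IIR low-pass alternative, and reports the standard deviation of the decimated series instead of its AD. The white-noise signal model, the k = 1 output rate, and all parameter values are assumptions made for illustration, not the measured NEMS data.

import numpy as np

def allan_deviation(y, tau_samples):
    """Non-overlapping Allan deviation of fractional-frequency data y
    for an averaging window of tau_samples samples."""
    m = len(y) // tau_samples
    if m < 2:
        return np.nan
    block_means = y[:m * tau_samples].reshape(m, tau_samples).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(block_means) ** 2))

def moving_average(y, k):
    """Emulate a gate time of k input periods with a length-k MAVGF (FIR)."""
    return np.convolve(y, np.ones(k) / k, mode="valid")

def iir_lowpass(y, alpha):
    """First-order IIR LPF: y_f[n] = (1 - alpha) * y_f[n-1] + alpha * y[n]."""
    out = np.empty_like(y)
    acc = y[0]
    for n, v in enumerate(y):
        acc += alpha * (v - acc)
        out[n] = acc
    return out

# Illustrative frequency-noise record at the k = 1 output rate (assumed white noise).
rng = np.random.default_rng(0)
fs = 10_000.0                        # assumed k = 1 output sample rate, Hz
y = 1e-6 * rng.standard_normal(200_000)

k = 121                              # emulated larger gate time (in input periods)
y_mavg = moving_average(y, k)        # averaged, still at the high rate
y_dec = y_mavg[::k]                  # downsampled: one value per emulated gate
y_iir = iir_lowpass(y, alpha=0.01)   # flexible alternative to the MAVGF

for name, data in [("raw", y), ("MAVGF", y_mavg), ("IIR LPF", y_iir)]:
    print(f"{name:8s} AD @ ~10 ms: {allan_deviation(data, int(0.01 * fs)):.3e}")
# For the decimated series the AD recursion no longer applies; report its std instead.
print(f"decimated std: {y_dec.std():.3e}  vs  MAVGF std: {y_mavg.std():.3e}")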
10,914.4
2024-03-15T00:00:00.000
[ "Engineering", "Physics" ]
Discovering Weighted Patterns in Intron Sequences Using Self-Adaptive Harmony Search and Back-Propagation Algorithms A hybrid self-adaptive harmony search and back-propagation mining system was proposed to discover weighted patterns in human intron sequences. By testing the weights under a lazy nearest neighbor classifier, the numerical results revealed the significance of these weighted patterns. Comparing these weighted patterns with the popular intron consensus model, it is clear that the discovered weighted patterns make originally the ambiguous 5SS and 3SS header patterns more specific and concrete. Introduction Pre-mRNA splicing was a critical event in gene-expression pathways and mainly involved in intron removing [1]. Introns were noncoding segments in gene sequences conjoined with the protein-coding exons at splicing sites (see Figure 1). Identifying introns was the foundations for predicting the gene's structures and functions; therefore, predicting introns effectively and precisely would provide great helps in uncovering the secrets of genes [2]. Intron splicing accomplished by the spliceosome is closely related with four cis-acting elements, that is, the 5 splicing sites (5SS), the 3 splicing sites (3SS), the poly-pyrimidine tract (PPT), and the branch point (BP) [3]. Intron identification and qualification heavily depend on the four splicing signals, and, consequently, intronic sequence patterns are crucial in intron-related researches, especially in predicting the 5SS and 3SS. Some efforts have been devoted to specifying sequence features of introns, and conceptual information such as bimodal GC% distribution [4], statistical features [5], and motifs [6] were found, but these patterns lacked concrete and specific descriptions, thereby making them hard to be used as basis of computational predictions and analyses. One more thing should be noticed is that the above-discovered patterns were all statistically significant only, and prejudging weights without testing the effectiveness might take a lot of risks in biased decisions. If going one step further to make the patterns biologically significant, it would be very inspiring. The essentials comprising patterns were seriously explored and termed computational concerns. Three computational concerns were firstly identified as expressions, locations, and ranges. Expressions are the representations of patterns such as consensus, locations are start positions in sequences, and ranges are their possible lengths. Furthermore, for discovering biologically meaningful patterns, the weight concern was proposed for specifying the biological significance. In this paper, patterns with four concerns were termed the weighted patterns. A postjudged weights discovering the methodology using hybrid self-adaptive harmony search (SAHS) and back-propagation (BP) algorithms were devised and implemented to fulfill the idea of weighted patterns. The entire processes of discovering weighted patterns were fulfilled through a frame-relayed search method [7] together with a hybrid SAHS-BP and sensitivity analysis as depicted in Figure 2. SAHS-BP and Sensitive Analysis In [8], Liou and Huang divided the intronic sequence features (ISF) into two categories: the uniframe pattern (UFP) and the multiframe pattern (MFP), where UFPs are the intraframe patterns and MFPs are the interframe patterns. 
Based on their frequencies and distributions, the significant UFPs focus on vertical distributions of tandem repeats, and the significant MFPs focus on horizontal ones, as shown in Figure 3. For detailed discussions on intronic sequence features and framerelayed search method, see [7,8]. After obtaining the patterns by frame-relayed search method [7], their relative importance could be derived from a new hybrid SAHS-BP mining system. The basic idea is to extract the instinct relationships between the input attributes and the output responses from the trained network by means of a postsensitivity analysis. Subsequently, the relative importance of input attributes could be determined according to these relationships. Thus, the quality of the relative importance is highly dependent on the network. Hybrid SAHS-BP. Artificial neural networks (ANN) are robust and general methods for function approximation, prediction, and classification tasks. The superior performance and generalization capabilities of ANN have attracted much attention in the past thirty years. Back-propagation (BP) algorithm [9] (i.e., the most famous learning algorithm of MLP) has been successfully applied in many practical problems. However, the random initialization mechanism of ANN might cause the optimum search process (the learning problem can be though as search through hypotheses for the one best fit the training instances [10]) to fail and return an unsatisfied solution, since the back-propagation is a local search learning algorithm [11]. For example, once the random initialization of the synaptic weights led to the search process start from hillside 1 as shown in Figure 4, BP algorithm would update the synaptic weights and go along the gradient direction. Consequently, it seems hopeless to reach a better solution near the global optimum in valley 2. Therefore, lots of trials and errors were the general guideline in most practical usage. On the other hand, a new metaheuristic optimization algorithm-harmony search (HS) with continuous design variables was proposed recently [12]. This algorithm is conceptualized using the musical improvisation process of searching for a perfect state of harmony. Harmony search exhibits a nice global search property and seldom falls into a trap. Moreover, the HS has been successfully applied to several real-world optimization problems [13]. A recently developed variant of HS, called the self-adaptive harmony search (SAHS) [14], used the consciousness (i.e., harmony memory) to automatically adjust its parameter values. The self-adaptive mechanism not only alleviates difficulties of parameter setting but also enhances precision of solutions. According to these observations, we are motivated to combine the advantages of SAHS and BP together and complement their own weaknesses. SAHS is used as an initializer of the neural network, that is, the generator of initial synaptic weights of BP. In other words, the lowest valley in Figure 4 is first found by SAHS; then a gradient descentbased ANN would go down carefully to obtain a precise solution. Finally, a sensitivity analysis was conducted on the well-trained network to estimate the relative importance of input attributes. Sensitivity Analysis. Sensitivity analysis is a common technique to realize the relationships between input variables and output variables. It could be used to check the quality of a hypothesis model as well. 
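A minimal sketch of the hybrid idea described above: a plain harmony search explores the flattened synaptic weights of a small one-hidden-layer network to find a good starting point, after which ordinary back-propagation (gradient descent) refines it. This is not the authors' SAHS-BP system: the self-adaptive parameter mechanism of SAHS is omitted, the toy data, network size, and all hyper-parameters are assumptions, and the real system operates on the intron pattern attributes rather than random features.

import numpy as np

rng = np.random.default_rng(0)

def unpack(w, n_in, n_hid):
    """Split a flat weight vector into (W1, b1, W2, b2) for a 1-hidden-layer MLP."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid].reshape(n_hid, 1); i += n_hid
    b2 = w[i:i + 1]
    return W1, b1, W2, b2

def forward(w, X, n_in, n_hid):
    W1, b1, W2, b2 = unpack(w, n_in, n_hid)
    H = np.tanh(X @ W1 + b1)
    return H, 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))

def mse(w, X, y, n_in, n_hid):
    _, out = forward(w, X, n_in, n_hid)
    return float(np.mean((out.ravel() - y) ** 2))

def harmony_search(X, y, n_in, n_hid, hms=20, iters=2000, hmcr=0.9, par=0.3, bw=0.1):
    """Plain harmony search over the flattened MLP weights (global initializer)."""
    dim = n_in * n_hid + n_hid + n_hid + 1
    memory = rng.uniform(-1, 1, (hms, dim))
    costs = np.array([mse(w, X, y, n_in, n_hid) for w in memory])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                  # pick component from harmony memory
                new[d] = memory[rng.integers(hms), d]
                if rng.random() < par:               # pitch adjustment
                    new[d] += bw * rng.uniform(-1, 1)
            else:                                    # random improvisation
                new[d] = rng.uniform(-1, 1)
        c = mse(new, X, y, n_in, n_hid)
        worst = int(np.argmax(costs))
        if c < costs[worst]:
            memory[worst], costs[worst] = new, c
    return memory[int(np.argmin(costs))]

def backprop_refine(w, X, y, n_in, n_hid, lr=0.5, epochs=500):
    """Gradient-descent (BP) fine-tuning starting from the harmony-search solution."""
    W1, b1, W2, b2 = [a.copy() for a in unpack(w, n_in, n_hid)]
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        out = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))
        d_out = (out - y) * out * (1 - out) * 2 / len(X)   # dMSE / d(pre-activation)
        d_H = (d_out @ W2.T) * (1 - H ** 2)
        W2 -= lr * H.T @ d_out;  b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_H;    b1 -= lr * d_H.sum(0)
    return np.concatenate([W1.ravel(), b1, W2.ravel(), b2])

# Toy binary data standing in for the pattern/label attributes (illustrative only).
X = rng.normal(size=(200, 4)); y = (X[:, 0] * X[:, 1] > 0).astype(float)
n_in, n_hid = 4, 6
w0 = harmony_search(X, y, n_in, n_hid)
w1 = backprop_refine(w0, X, y, n_in, n_hid)
print("MSE after HS:", round(mse(w0, X, y, n_in, n_hid), 4),
      "after BP refinement:", round(mse(w1, X, y, n_in, n_hid), 4))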
The basic idea behind sensitivity analysis is to slightly alter the input variables, and then the corresponding responses with respect to the original ones would reveal the significance of the variables. Therefore, the most important part of sensitivity analysis is to determine the adequate measurements as disturbance of input variables. Although applying sensitivity analysis to neural networks had been studied in some works [15,16], their purposes were usually identifying important factors only, while we go one step further, in this work, not only significant input attributes will be recognized but also the relative important of them will be estimated. We proposed a new measurement, disturbance, for the relative sensitivity. Definition 1. The elements of disturbance instances used in the sensitivity analysis are defined as follows: where ↑ is the th instance in the training set, with the th attribute increased according to the disturbance ratio ; that is, the symbol ⊗ denotes a plus sign. In other words, except the th attribute, all other attributes of the th instance are fixed. Similarly, ↓ is with the th attribute decreased; that is, the symbol ⊗ denotes a minus sign. Definition 2. The relative sensitivity of th attribute is defined as follows: where function net is the trained network, and the relative sensitivity is normalized by the minimal sensitivity attribute among all attributes. Data Sets. Since the lengths of introns are varied violently, for determining an adequate sequence length for pattern discovery, a pilot study on sequence compositions of introns is performed (data not shown here). As a result, we found that introns are very different from random sequences around 97 bps in the flanking regions of 5SS and 3SS. Therefore, we defined position 97 as the start position of the last frame, and then the final sequence length in the data sets would be 101 bps. For the completeness of analysis, all introns in human chromosome 1 (NCBI human genome build 36.2) were extracted, and the final data set comprised 22,448 sequences. Weighted UFPs and MFPs. The weighted UFPs and MFPs discovered by the proposed SAHS-BP mining system and sensitivity analysis are listed in Tables 1 and 2, respectively. To verify the effectiveness of these weighted codons for qualifying human introns, a two-layer classifier was constructed to test the significance of these weights. Two-Layered Classifier. In order to reveal the strength of discovered weighted patterns, a simple two-layered lazy classifier was constructed. The well-known nearest neighbor classifier was adopted as the based classifier due to its simplicity and efficiency. In contrast to an eager classifier, the lazy nearest neighbor classifier only memorizes the entire training instances in the training phase and then classifies the testing instances based on the class labels of their neighbors in the testing phase. In other words, the basic idea behind the nearest neighbor classifier is well explained by the famous idiom "Birds of a feather flock together. " The Euclidean distance is the original proximity measure between a test instance and a training instance used in the nearest neighbor classifier. A weighted Euclidean distance could be extended as ( , ) = (∑ =1 ( − ) 2 ), where is the number of dimensions and , , and are the th attribute of weight vector , training instance , and test instance , respectively. The experiment was carried out with the 10-fold crossvalidation for each specific (i.e., the closest neighbor). 
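Because the displayed formulas of Definitions 1 and 2 above did not survive extraction, the aggregation rule sketched below (mean absolute change of the network output under a ±δ disturbance of one attribute, normalized by the least sensitive attribute) is only an assumption about their likely form, not the paper's exact definition. The function name relative_sensitivity and the delta value are illustrative.

import numpy as np

def relative_sensitivity(net, X, delta=0.05):
    """Disturbance-based sensitivity: perturb each attribute of every training
    instance up and down by the ratio delta, record the change in the network
    output, and normalize by the least sensitive attribute.
    net is any callable mapping an (n, d) array to n outputs."""
    base = np.asarray(net(X), dtype=float)
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        up, down = X.copy(), X.copy()
        up[:, j] *= (1.0 + delta)        # the "increased" disturbance instances
        down[:, j] *= (1.0 - delta)      # the "decreased" disturbance instances
        scores[j] = np.mean(np.abs(net(up) - base) + np.abs(net(down) - base))
    return scores / scores.min()         # relative importance; smallest attribute = 1

# Usage with the refined MLP from the previous sketch (names assumed from that sketch):
# net = lambda A: forward(w1, A, n_in, n_hid)[1].ravel()
# weights = relative_sensitivity(net, X)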
First, the whole sequence was randomly divided into 10 divisions with the equal size. The class in each division was represented in nearly the same proportion as that in the whole data set. Then, each division was held out in turn and the remaining nine-tenths were directly fed into the two-layered nearest neighbor classifier as the training instances. Since every sequence could be expressed as two parts (i.e., uniframe patterns and multiframe patterns), the first layered nearest neighbor classifier filtered out those nonintron candidates based on the weighted uniframe patterns. Finally, the prediction was made by the second layered nearest neighbor based on the weighted multiframe patterns. 4 The Scientific World Journal The flowchart of two-layered nearest neighbor classifier is shown in Figure 5. Numerical Results. In this subsection, the performance comparisons between the weighted -NN classifier and the conventional one are presented. Although no explicit weight vectors were used in the conventional -NN classifier, the Euclidean distance indirectly implied the same importance of all input attributes. Here, we used identity vectors (i.e., all elements in the vector are one) as its weight vectors and conducted the experiment in the same process as shown in Figure 5 for the performance comparisons. The reported values of performance evaluation measures here are the averages from the 10-fold cross-validation. As shown in Figures 6, 7, 8, and 9, the numerical results clearly indicate that the weighted -NN classifier performs much better than the conventional one in terms of error, Fmeasure, and the recall on different , except precision. In addition to error decreased from 25.21% to 16.88% on average, F-measure (or recall) is also increased 12.73% (or 14.21%), = 3. Furthermore, one might argue that both weighted and conventional -NN achieve such high scores in precision and relatively low scores in recall; that is, there are few predicted false positives and lots of predicted false negatives in both models. However, we believe that the reason for this circumstance is due to the inherent model bias and lazy characteristics of the nearest neighbor method. It lacks the ability to well describe the learning concept because the basic idea is merely distance comparisons. Nevertheless, such a simple weak classifier is appropriate to demonstrate the effectiveness of the weighted patterns. Besides, since a limited number of samples were used to compare the performances of two models, we want to know whether the better performance of the weighted -NN classifier is just as a result of the chance effects in the estimation process (i.e., the 10-fold cross-validation). More precisely, we should determine whether the observed difference of performance measures between two classifiers is really statistically significant (i.e., significantly better). Therefore, we used a paired -test [17] on the weighted -NN classifier and the conventional one with a 95% confidence coefficient. Table 3 reveals that the weight vectors not only significantly reduce the classification error of simple nearest neighbor classifiers but also significantly improve recall and F-measure. In other words, the predicted true positives are enhanced, and the false negatives are reduced as well. Thus, we could claim that some meaningful characteristics for intron identification are really enclosed in the weighted patterns. 
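A compact sketch of the evaluation just described: a nearest-neighbour classifier with the weighted Euclidean distance, per-fold error rates from 10-fold cross-validation, and a paired t-test against the unweighted (identity-vector) baseline. It simplifies the two-layered UFP/MFP classifier to a single layer, omits stratification, and uses toy data and made-up weights, so it illustrates the procedure rather than reproducing the reported numbers.

import numpy as np
from scipy.stats import ttest_rel

def weighted_nn_predict(X_train, y_train, X_test, w):
    """1-nearest-neighbour prediction with the weighted Euclidean distance
    d(x, y) = sqrt(sum_j w_j * (x_j - y_j)^2)."""
    preds = np.empty(len(X_test), dtype=y_train.dtype)
    for i, x in enumerate(X_test):
        d2 = ((X_train - x) ** 2 * w).sum(axis=1)
        preds[i] = y_train[np.argmin(d2)]
    return preds

def cv_errors(X, y, w, n_folds=10, seed=0):
    """Per-fold error rates from a (non-stratified) 10-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, n_folds)
    errs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        preds = weighted_nn_predict(X[train], y[train], X[test], w)
        errs.append(np.mean(preds != y[test]))
    return np.array(errs)

# Toy data standing in for the pattern attributes (illustrative only).
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + 0.2 * X[:, 1:].sum(axis=1) > 0).astype(int)

w_learned = np.array([5.0, 1.0, 1.0, 1.0, 1.0, 1.0])   # assumed sensitivity-derived weights
e_weighted = cv_errors(X, y, w_learned)
e_plain = cv_errors(X, y, np.ones(6))                   # identity weights = conventional k-NN
t, p = ttest_rel(e_weighted, e_plain)
print(f"weighted: {e_weighted.mean():.3f}  plain: {e_plain.mean():.3f}  paired t-test p = {p:.3f}")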
Discussions Intron identification plays a key role in gene-expression research, and pattern recognition is the basis for computationally predicting exon-intron junction sites. For discovering biologically meaningful patterns in introns, three computational concerns (pattern representation, position in sequences, and the spread range of patterns) were first identified by the frame-relayed search method [7]. After that, a hybrid self-adaptive harmony search (SAHS) and back-propagation (BP) mining system was devised and implemented to fulfill the idea of mining weighted patterns. The weighted patterns clearly provide more specific and concrete information about introns. Thus, they have the potential to advance gene analyses, to help discriminate authentic splicing sites from fictitious ones, and to support the in silico validation of intron candidates.
3,075.8
2013-05-08T00:00:00.000
[ "Computer Science" ]
CFD-based analysis of entropy generation in turbulent double diffusive natural convection flow in square cavity . The present study concerns the problem of natural and double diffusive natural convection inside differentially heated cavity filled with a binary mixture composed of air and carbon dioxide (CO 2 ). Temperature and CO 2 concentration gradients are imposed on both perpendicular left and right walls. Simulations have been performed using the CFD commercial code ANSYS Fluent by solving continuity, momentum, energy and species diffusion equations. Numerical results obtained have been compared to data from the literature for both natural convection thermosolutal cases under laminar and turbulent regimes. For turbulent runs the RNG k-ɛ model has been selected. A good agreement has been noted between the different types of data for both cases for Rayleigh number ranging between 10 3 and 10 10 and buoyancy ratio between -5 and +5. Entropy generation rates due to thermal, viscous and diffusive effects have been calculated in post processing for all cases. Introduction In the last decades great effort has been devoted to the study of natural convection [1][2][3]. Fluid motion due to pollutant concentration was infrequently taken into account by most of the previous studies. This phenomenon is known as double diffusive convection, also called thermosolutal convection. It has gained researchers interest only in the last few years, due to its importance and utility in multiple fields such as: oceanography, geology, biology, chemical processes, solar engineering equipment's, nuclear reactors, and electronics devices. Beghein et al. [4] were amongst the earliest researchers who took into account the effect of both thermal and solutal convection inside enclosures. An intensive study was performed in order to determine the influence of a wide range of non-dimensional parameters on double diffusive convection by means of numerical simulation. They concluded that the thermal and solutal Rayleigh number beside of Lewis number has an observable impact on thermosolutal convection. Koufi et al. [5] carried out CFD numerical simulation in order to investigate laminar double diffusive free convection in square enclosure subjected to uniform temperature and concentration gradients. They found that the thermoslutal flow depends strongly on the buoyancy ratio. Nazari et al. [6] carried out numerical study by means of lattice Boltzmann method in order to examine conjugate heat and mass transfer inside cavity occupied with hot square obstacle. The results have demonstrated that the Nusselt and Sherwood number decrease as the buoyancy ratio rises when N is lower than 1. It was also remarked that they increase as the buoyancy ratio increases in range of N > 1. Similar studies have been the subject of several authors [7][8]. Entropy minimization is the main branch in the direction of energy systems designing. It is the most effective way to quantify unavailable energy and work destruction. As a result, an extensive attention was drawn to the topic of entropy generation due to heat and mass transfer. Various studies have been found in the literature dealing with laminar double diffusive convection and entropy generation inside cavities for instance reference [9]. Buoyancy ratio effect on entropy evolution was the subject of multiple authors [10][11]. 
It was revealed that an increase in buoyancy ratio implies an increase in total entropy generation for thermal dominated flow, while a decrease was observed for solutal dominated flow. In contrast, Oueslati et al. [12] revealed that the total entropy generation decreases as the buoyancy ratio raises regardless the variation of aspect ratio. Most of the existent studies didn't pay attention to entropy generation under turbulent thermosolutal convection. Even though, Chen and Du [13] as well as Chen et al. [14] explored numerically the entropy generation due to the turbulent thermosolutal free convection. These authors have determined that likewise Nusselt and Sherwood numbers, the entropy generation was affected by Rayleigh number and considerably improved in turbulent regime and reach its lowest value when N = 1. It is clearly shown from the bibliographic research examined so far, entropy generation on double diffusive natural convection inside square cavity in turbulent regime was handled only by [13] and [14]. Consequently, the present work was conducted aiming mainly to explore entropy generation due to heat and mass transfer inside two dimensional cavity under turbulent flow regime. Mathematical modeling 2.1 Physical problem The geometric model considered in the present study was the topic of huge interest by numerous authors. It consists of two-dimensional square cavity filled with air or perfect binary gas mixture (air+CO 2 ). The variation of thermal Rayleigh number is defined as function of gravity. Governing equations Continuity, momentum, energy and species diffusion are the equations governing double diffusive natural convection inside enclosures. To simplify the mathematical model, the following approximations are made:  The fluid is Newtonian and incompressible.  2D laminar or turbulent flow.  Absence of radiation.  The fluid physical properties are constant except density which verifies the Boussinesq approximation: where thermal and concentration expansion coefficients can be calculated, respectively, as follows: Turbulence modelling Turbulence occurs in most of engineering applications, thus the choice of turbulence model is the key parameter in order to have accurate results. In the present study the RNG k-ɛ model was employed to close equations system. This model was highly recommended to handle indoor air flows [15]. Boundary conditions The left perpendicular wall of the enclosure is heated (T H ) while the opposite wall is cooled (T C ) with a constant temperature, however the remaining horizontal walls were supposed to be adiabatic with no slip condition imposed. On the other hand, as depicted by Figure 1, the same boundary conditions were set for both, the thermosolutal convection and the natural convection cases. The only exception is that the vertical walls were submitted to contaminant concentration gradients with different locations. The reason behind this is to ensure the variation of the buoyancy forces i.e. aiding or opposing flow. Entropy generation Entropy generation rate inside two dimensional square cavity filled with Newtonian perfect gas mixture with one species diffusion is generated due to friction, heat and diffusion [9]. It can be written as follows: . Bejan number is considered as a good tool in order to determine the dominant between thermal, diffusion, and friction effects on the total entropy. In this case, it is defined as the ratio of entropy due to thermal and diffusive effect over total entropy. 
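The entropy post-processing described above can be sketched as follows. The split into thermal, viscous and diffusive contributions and the Bejan number follow one common form from the literature; the exact expressions, the turbulence contributions, and the air/CO2 mixture property values used in the Fluent post-processing of this study are not reproduced here, so the constants, the synthetic fields, and the function name entropy_generation are illustrative assumptions.

import numpy as np

def entropy_generation(T, C, u, v, dx, dy, k=0.026, mu=1.85e-5,
                       RD=1e-4, T0=300.0, C0=1.0):
    """Local volumetric entropy generation for 2-D double-diffusive convection,
    split into thermal, viscous (friction) and diffusive parts, plus the Bejan
    number field. One common textbook form; property values are placeholders."""
    dTdy, dTdx = np.gradient(T, dy, dx)
    dCdy, dCdx = np.gradient(C, dy, dx)
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)

    s_th = k / T0**2 * (dTdx**2 + dTdy**2)
    s_fr = mu / T0 * (2.0 * (dudx**2 + dvdy**2) + (dudy + dvdx)**2)
    s_df = RD / C0 * (dCdx**2 + dCdy**2) + RD / T0 * (dTdx * dCdx + dTdy * dCdy)

    s_tot = s_th + s_fr + s_df
    bejan = (s_th + s_df) / np.where(s_tot > 0, s_tot, np.inf)
    return s_th, s_fr, s_df, bejan

# Smooth synthetic fields on a unit cavity, purely to exercise the routine.
n = 64
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x)
T = 300.0 + 10.0 * (1.0 - X)            # hot left wall, cold right wall
C = 1.0 + 0.1 * X                       # contaminant concentration gradient
u = 0.05 * np.sin(np.pi * X) * np.cos(np.pi * Y)
v = -0.05 * np.cos(np.pi * X) * np.sin(np.pi * Y)

s_th, s_fr, s_df, Be = entropy_generation(T, C, u, v, x[1] - x[0], x[1] - x[0])
print("mean Bejan number:", float(Be.mean()))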
Numerical procedure CFD commercial code ANSYS Fluent, based on the finite volume approach, has been used to solve the equations governing the thermosolutal flow inside the cavity (Continuity, momentum, energy and species diffusion equations). For the discretization of the momentum equations, QUICK scheme was employed. However, the discretization of species diffusion, turbulent kinetic energy, and dissipation rate equations terms was ensured by the central upwind second-order scheme. SIMPLEC was chosen, as the velocity-pressure coupling algorithm while the RNG k-ɛ turbulence model was selected to handle equations system closure in turbulent regime. The residual error was set to 10 -7 or less for all cases. Results and discusion Thermosolutal convection phenomena can be controlled through number of non-dimensionless parameters namely Ra T , Le, Pr, and Ra S or N. Each one has a noticeable effect on the flow pattern inside enclosures. The present section deals not only with double diffusive convection, but also with entropy generation due to conjugate heat and mass transfer. Mesh study It should be noted at this level, that a grid independence study is necessary in order to obtain an acceptable CFD numerical solution. For this purpose, various quadratic fine grids were tested to verify the mesh independence towards mean, maximum and minimum Nusselt numbers in addition to dimensionless horizontal and vertical velocities components (Numean, Numax, Numin, Umax and Wmax). Table 1 summarizes the main results: Validation The validation of the numerical results is of prime importance for every simulation. For this reason, the verification of our code was built around two different problems reported in the literature. The numerical simulations for the first case consist of two dimensional square enclosure filled with air to handle the issue of free convection in steady laminar and turbulent regime reported by several authors . The second validation correspond to the problem of double diffusive natural convection inside two dimensional square cavity reported by various authors. The computations were verified for different buoyancy ratio in laminar regime. Table 2 and Table 3 resume simulation results gathered in laminar and turbulent regimes respectively. A wide range of thermal Rayleigh number extending from 10 3 to 10 10 was explored. There is a good match between our results and numerous authors. Figure 2 illustrate the evolution of local Nusselt number alongside the left perpendicular wall versus Beghein et al. [4] numerical results. The outcomes show an excellent agreement. Table 4 shows the calculated mean Nusselt number as function of buoyancy ratio along left vertical wall in laminar regime (Ra T =10 7 ) .The obtained results agree well enough with various authors and the maximum difference was estimated by only 1% compared to [4]. Heat transfer results Figure 4 exhibits temperature distribution within square cavity under turbulent and laminar flow regime. It should be pointed out that the same comportment have been found for assisting flow and pure natural convection (N = 0). Right handed flow motion was mentioned. The effect of thermal Rayleigh number was significant and the convection was accelerated and amplified in turbulent regime. In contrast, when N=-1, thermal and solutal buoyancy forces are equal but in opposite sign, therefore, they cancelled each other out. This situation leads to convection non-existence and the flow is driven only by diffusion. 
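The validation against benchmark Nusselt numbers discussed above amounts to a small post-processing step on the computed temperature field. The sketch below evaluates the local and mean Nusselt number along the heated wall with a one-sided second-order difference and checks it against the pure-conduction solution; it is a generic illustration, not the ANSYS Fluent surface report used in the study.

import numpy as np

def wall_nusselt(theta, dX, dY):
    """Local and mean Nusselt number along the left (hot) wall of a square cavity
    for a dimensionless temperature field theta[y, x]. A one-sided second-order
    difference gives Nu_local(Y) = -dtheta/dX at X = 0; the mean value is the
    trapezoidal integral over the wall height."""
    nu_local = -(-3.0 * theta[:, 0] + 4.0 * theta[:, 1] - theta[:, 2]) / (2.0 * dX)
    nu_mean = dY * (nu_local[0] / 2 + nu_local[1:-1].sum() + nu_local[-1] / 2)
    return nu_local, float(nu_mean)

# Sanity check with the pure-conduction solution theta = 1 - X, for which Nu = 1.
n = 41
x = np.linspace(0.0, 1.0, n)
theta = np.tile(1.0 - x, (n, 1))          # theta[y, x], hot wall at X = 0
nu_loc, nu_mean = wall_nusselt(theta, x[1] - x[0], x[1] - x[0])
print("mean Nu (expected 1 for pure conduction):", round(nu_mean, 4))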
Entropy generation Figure 5 shows the evolution of total entropy generation as a function of the buoyancy ratio (N) in the laminar and turbulent regimes. For turbulent flow, the graphs show that, as the buoyancy ratio increases, the total entropy decreases in the range -5 < N < -1 until reaching its minimum at N = -1 as a result of the cancellation of the buoyancy forces. The obtained results are thus compatible with reference [11]. The entropy then tends to increase in the range -1 < N < -0.5 and to decrease again moderately when -0.5 < N < 0 as a consequence of the vanishing diffusive irreversibility. For N > 0, the total entropy generation decreases as the buoyancy ratio is reduced. Concerning laminar flow (Ra_T = 10^3), the outcomes show that N = 0 produces the lowest entropy amongst all cases due to the absence of mass diffusive irreversibility inside the system (almost zero) and moderate heat irreversibility. When N < 0, the total entropy drops as the buoyancy ratio rises; otherwise it increases. The results also reveal that the turbulent regime amplifies the entropy generation of the system. Figure 6 depicts a comparison of the Bejan number variation in terms of the buoyancy ratio and the Rayleigh number in the laminar and turbulent regimes. The study was conducted in order to highlight the dominant source of entropy, i.e., Be < 0.5 indicates dominance of fluid friction entropy, while Be > 0.5 implies dominance of heat and mass transfer entropy. It is shown that the Bejan number remains stable for N = -0.5, N = -1.0 and N = -1.5.
2,584.4
2020-01-01T00:00:00.000
[ "Engineering", "Physics", "Environmental Science" ]
“Management mechanism of agrarian economic system: composition, functions and factors of development in Ukraine” The current state of development of Ukraine’s national economy involves strengthening the processes of transition to social market economy, aimed at accelerating the rates of economic growth of the country. The presence of peculiarities of activity of agrarian producers and increase of the socioeconomic importance of the agrarian sector of the economy contribute to the need to develop and substantiate the methodological conditions aimed at solving multidimensional and diverse problems of ensuring the development of agrarian economic systems. In the work, the directions of the formation of the mechanism of management of agrarian economic systems, which consist in the transition to a new quality level of the use of information and intellectual resources of the management system on the basis of modern information technologies and modeling, which allowed to develop a structural scheme of the mechanism of management of the agrarian sector of the economy. The understanding of the mechanism of management of agrarian economic systems, factors and functions of this mechanism are determined. The mechanism of management of agrarian economic systems should be considered as a system of principles, rules, norms and procedures, within the framework of which the goals and objectives of the agrarian economy are developed in accordance with the current economic laws. This mechanism should be in line with the ownership of the subjects of state domination of the region, organizational structures of management, market social and economic relations, natural conditions of economic activity and state economic policy concerning the development of agrarian production, etc., and also take into account the main economic relationships that exist between the individual structural components of the agrarian economic system. Nataliia Vdovenko (Ukraine), Viktoriia Baidala (Ukraine), Nelya Burlaka (Ukraine), Anna Diuk (Ukraine) BUSINESS PERSPECTIVES LLC “СPС “Business Perspectives” Hryhorii Skovoroda lane, 10, Sumy, 40022, Ukraine www.businessperspectives.org Management mechanism of agrarian economic system: composition, functions and factors of development in Ukraine Received on: 22nd of February, 2018 Accepted on: 2nd of May, 2018 INTRODUCTION The current state of the Ukrainian national economy implies an intensification of the processes of transition to social market economy, aimed at accelerating the rates of economic growth of the country. The presence of specific features in the activities of agrarian producers and the growth of socio-economic importance of the agrarian sector of the economy encourage the development and substantiation of theoretical and practical provisions aimed at solving multifaceted and diverse problems of ensuring the development of the agrarian sector of Ukraine. In past decades, finding ways to improve management of the agrarian economic systems has increased significantly and will certainly continue to grow. This is confirmed by the experience of advanced economies. As the market relations develop, these problems are all the most acute in Ukraine. This situation is due to the fact that the agrarian sector is one of the leading sectors of the economy of Ukraine and its economic development depends to a large extent on the economic situation of the country. 
The efficiency of the agrarian sector is based on its integration with a number of other sectors of the economy, which include the branches of production of tractors and agricultural machinery, mineral fertilizers and pesticides, fuel oil materials, vehicles, energy carriers, etc. They should also be added to the system of material and technical supply, wholesale and retail enterprises, focused on meeting the needs of agricultural production, repair enterprises, agricultural buildings and other enterprises. The successful experience of the agrarian industry in the developed countries demonstrates the need for its economic integration with other branches of economy. This was reflected in the formation of the socalled agro-industrial complex (AІC), in which the main tasks of economic integration of the agrarian sector were realized. Practical demand and scientific relevance of the given problem and purposeful search of effective mechanisms of the sustainable development management by agrarian economic systems have determined the direction of scientific research. LITERATURE REVIEW Using the method of system analysis, let us analyze such subsystem (component) of the agro-industrial complex as an agrarian economic system (AES). According to Moldavan (2010), agrarian economic system is a set of independent economic units, ranging from peasant (farmer) farms to regional and interregional agro-industrial associations. This set is integral, its integrity is based on the unification of all economic entities on the principle of the commonality of economic relations in the presence of each of these subjects of a certain economic independence. Kozlovskyi (2017) determines the main purpose of the agrarian economic system as stable supply of quality agricultural products in sufficient quantities and in a wide range. The necessity of using modern information technologies in applying the method of a systematic approach to the analysis of agrarian economic systems is indicated by Moiseev (1981) who writes: "System analysis ... requires the analysis of complex information of different physical nature". This allows to use a wide range of mathematical methods, modelling techniques, techniques of information theory, etc. in system analysis. With full confi-dence, it can be argued about the possibility and, moreover, the need for the use of modern information technology using the method of system analysis for the study of AES, which corresponds to Kamenskyi (1984), that "... systematic explorer yes or otherwise, almost every specialist and leader". 
In the opinion of Kozlovskyi (2010), Vdovenko (2015), the following methodological approaches should be based on the formation, operation and improvement of the AES management mechanism: • the definition of the main objective of the system management and its unconditional priority over the local goals; • system synthesis, which should be aimed at ensuring the community of the strategic, tactical and operational goals reimplementation of the system, provided that the material, energy, labor and informational resources are effectively used; • consideration of the system as a whole and as a set of protocols of independent subsystems (constituents); • the system is a synergistic set of material, energy, information and financial flows, the system provides for its division into objects and subjects of management; • identification of all significant connections between the subsystems both inside the system and with the external se-field, avoiding unnecessary detail, hierarchical structuring of the system; • the optimal combination of centralization and decentralization, the priority of functions over the structure of the system; • accounting uncertainties as an integral attribute of a system; • accounting of the processes of system development, its variability, transformation and the ability to adapt while maintaining the stability of the system; • consideration of the system as a set of subsystems with the possibility of including new subsystems and the exclusion of existing ones that already do not correspond to the goals, tasks and functions of the system; • controlling and informational and analytical support for the maximum number of administrative and technological processes taking place in the system. Burlaka (2014) notes that the methodological approaches and tools of the system analysis of the AES are differentiated according to the level of their hierarchy, the type of these systems, their content and the state in the time and space aspects, taking into account the existing differences in external conditions. According to Baltremus (2016), the global goal of the agrarian economic system is to sustainably and reliably meet the public needs of the country in agricultural products in sufficient quantities and assortment, while maintaining high qualitative indicators. Within this goal, the main goals of each agrarian economic system are formulated, and on the basis of them -local goals. Also global and local criteria for the effectiveness of the agrarian economic system are determined. It is necessary to pay attention to the requirement to ensure a certain stability of the target and functional purpose of the studied systems. This is due, first of all, to the peculiarities of agricultural production, which often requires considerable time periods to change the direction of its main activity. Local goals and criteria for the effectiveness of the agrarian economic system should be as flexible as possible within the limits defined by the main goals and functional purpose of the given agrarian economic system. 
Kozlovskyi (2017) had proved that in connection with the growing complexity of modern agrarian economic systems and the diversity of interactions between their subsystems of different levels and with the external environment, as well as the level of achievements of scientific and technical progress and accumulation of the necessary information resource, the role of quantitative and qualitative economic and mathematical methods of modelling and forecasting, which are based on the use of wide opportunities of modern information technologies, will constantly grow. Maciejczak (2015) argues that from the point view of systematic approach, agrarian economy is a phenomenon, which has a positive impact on the environment and society and economy as a whole by applying the innovative technologies in traditional branches, for example, in food. The purpose of this work is to theoretically study the conceptual approaches to the management of the Ukrainian agrarian economic system, to define functions, development factors of the Ukrainian agrarian economic system. GOAL AND RESEARCH METHODOLOGY The methodological foundations of the study include the conceptual foundations of the theory of management, the concept of dialectical logistics, the study of Ukrainian and foreign scientists in the field of economic and agrarian management, the use of strategic management methods to describe the functions of the mechanism of management of agrarian economic systems. The research used the following methods: expert assessments -in assessing the factors of influence on the development of the agrarian sector; systemic -in studying the principles of development of agrarian economic systems; synthesis, analysis, grouping -in determining factors of influence and substantiation of the choice of factors influencing agrarian economic systems; abstraction -in determining the factors of the mechanism of management of agrarian economic systems; generalization -in developing a methodology for managing the development of the agrarian economic systems. MAIN RESULTS OF RESEARCH The main purpose of the agrarian economic system is to provide the population with sufficient quantity and the assortment of agricultural products with the necessary qualitative indicators. In general, the AES consists of eight subsystems: 1. A subsystem of the first order is a separate worker, or more specific, according to (Kravchenko, 1978), "a worker who has certain skills, qualifications, experience, the objects of labour he is working on; production algorithm". In the study of AES management problems, this subsystem can be understood as an integral element. However, in our opinion, as an "early date" of an indivisible element of the AES it is expedient to consider an employee of the agricultural production cycle. 2. Subsystems of the second level -these are labor collectives, which consist of several subsystems of the first level. These subsystems perform more complex functions within the complete technological stages of agrarian production. These include primary labor collectives of specialized agricultural producers, peasant (farmer) farms, in which labor processes are carried out by two or more persons specializing in the production of certain types of agricultural products, and so on. 3. Subsystems of the third level are peasant (farmer) farms, cooperatives, which include subsystems of the second level. 
Systems of this level should be similar to the types of products produced by them, as well as methods for the implementation of basic production functions. 4. Subsystems of the fourth level are structural elements with more complex functions than at the previous level and which, as a rule, are not limited to one type of activity: brigades, branches, cooperatives (Drabovskiy, 2011), as well as peasant (farmer) farms that employ hired labor, and so on. 5. Subsystems of the fifth level is an association of systems of the previous levels: agricultural of various organizational-legal forms (joint stock products, agricultural companies, etc.). 6. Subsystems of the sixth level are inter-branch and territorial associations of district and regional scale (agrarian holdings, associations of peasant (farmer) farms, etc.). 7. Subsystems of the seventh level -regional agrarian structures (or agro-structures of regional scale). 8. Subsystems of the light level are interregional (national) agrarian economic entities. The mandatory components of any system, in addition to its components (subsystems), must necessarily include links between these subsystems, the stability and effectiveness of which are equally important for the development of a recommendation to ensure the sustainability of the economic system. In modern conditions, the methodology of systematic research of agrarian economic systems cannot be limited to material, energy, financial relationships. A special role in the study of agrarian systems lies in the information links. And it's not just that information flows are a reflection of the material, energy and financial flows. It should be borne in mind that in connection with the active development of IT technologies, information flows increasingly focus on the research of various problems of modern society, while playing not only intermediate, but also often an independent role. Today, information communications play an integrative and generalizing role in management. They allow to group individual subsystems into a single organized system. Only with their use coordination and effective interaction of all independent subsystems (components) of any system are possible. Information communications should reflect the changing state of both the AES itself and the environment at the same time, thus ensuring that the subject of the management of the system is able to respond adequately and timely to these changes. The main indicator of the development of the agrarian economic system of the country is the indicator of gross agricultural production in Ukraine A necessary condition for ensuring the effective functioning of the agrarian economic system is the creation and steady operation of its management mechanism. "Mechanism of management" is a quite often mentioned and used category, the meaning of which is unambiguously not defined (Kaletnik, Zabolotnyi, & Kozlovskyi, 2011). The most widespread is the notion that the control mechanism is understood as the system of principles, rules, norms and procedures that determine the order and content of management activity. Particularly important is the perception of the management mechanism as a system that uses the system approach as the basis for its research. That is, the mechanism of management of the economic system is a complex hierarchical system that determines the internal structure, the procedure for the formation and functioning of the management system in accordance with the adopted methodology of management. 
The following methodological approaches should be based on the formation, operation and improvement of the AES management mechanism (Kozlovskyi, 2017): • definition of the main goal of the management of the system and its unconditional prioritization over local goals; • the synthesis of the system, which should be aimed at ensuring the coherence of the implementation of strategic, tactical and operational goals in ensuring the effective use of material, energy, labor and information resources; • consideration of the system as a whole and as a set of separate independent subsystems (constituents); • the system is a synergistic set of material, energy, information and financial flows; • the system provides for its division into objects and subjects of management; • identification of all significant connections between subsystems both inside the system and in the external environment; • avoiding excessive detail; • hierarchical structuring of the system; • an optimal combination of centralization and decentralization; • the priority of functions over the structure of the system; • accounting uncertainties as an integral attribute of a system; • accounting of the processes of system development, its variability, transformation and the ability to adapt while maintaining the stability of the system; • consideration of the system as a set of subsystems with the ability to include new modules and exclude the existing ones that already do not correspond to the objectives and functions of the system; • controlling and informational-analytical support for the maximum possible number of managerial and technological processes. The methodological approaches and tools of the system analysis of the AES are differentiated according to the level of their hierarchy, the type of these systems, their content and the state in the time and space aspects, taking into account the existing differences in external conditions. Let's consider formulated methodological approaches to system analysis of AES in detail from the standpoint of agrarian production. The global aim (Baltremus et al., 2016;Mescon, Albert, & Khedouri, 1988) for AES is to sustainably and reliably meet the public needs of agricultural products in sufficient quantities and assortment while maintaining high qualitative indicators. Within this goal, the main objectives of each AES are formulated and based on local goals, and the global and local criteria for AES efficiency are formulated. It is necessary to pay attention to the requirement to ensure a certain stability of the target and functional purpose of the studied systems. This is due, first of all, to the particular features of agricultural production, which often requires considerable time periods to change the direction of its core activity. Local same criteria and criteria should be as flexible as possible within the limits defined by the main-goals and functional purpose of the controlled AES. For the agrarian sector, the mechanism of management of the AES should meet both the general for economic systems of the laws and the specifics of agrarian production. In general, such a concept of the management mechanism of agrarian economic systems is shown in Figure 3. The mechanism of management of agrarian economic systems should be considered as a system of principles, rules, norms and procedures, within the framework of which the goals and objectives of the agrarian economic system are realized in accordance with economic laws that determine its existence and development. 
The mechanism of management of agrarian economic systems must be consistent with a systemically agreed set of factors -form of ownership, organizational structure of production, market social and economic relations, native conditions of management and state policy in relation to agriculture. The management mechanism is a system where components (elements) are first and foremost combined by certain economicrelationship. The realities of modern market economy and the level of social development emphasize the informational and intellectual components of the mechanism of management of agrarian economic systems to ensure its sustainable development. This is manifested in the fact that management is primarily a glass-bottom system of information processes. Such views on the place of information in management are followed by many scientists, in particular, Vikhanski and Naumov (2002) who argues: "At present, the role of the information-behavioral subsystem of the management system is increasing dramatically". According to Kuznetsov (2003), among the subject of the science of management, information is in the first place. The main factors that determine the mechanism of management of agrarian economic systems are shown in Figure 4. The current state of the economy of Ukraine and its agrarian sector is characterized by an ever-increasing dynamism of all socioeconomic processes, a complication of the system of economicrelationship, resource constraints, demographic and environmental problems, increased competition, etc., which urges the need for purposeful work with large volumes of information and leads to multivariateness and uncertainty in the development and realization of managerial influences. All this complicates the management of agrarian economic systems. The natural consequence of this is an increase in the share of the intellectual component in management activity due to the vital importance of the purposeful work with a variety of information, the volume of which is constantly increasing. Based on the foregoing, the following tasks should be attributed to tasks and functions solved within the information and analytical block of the mechanism of management of agrarian economic systems (see Figure 5) (World Bank, 1997). The separation of these functions, shown in Figure 5, is mainly due to methodological considerations and quite arbitrary in nature, since their effective use can only be carried out in an inseparable system of aggregates. Thus, to ensure the development of the agrarian sector (Kozlovskyi, Herasymenko, & Kozlovskyi, 2010), efficient management of the agrarian economic system is essential for the creation of management mechanism implemented in the form of an integrated automated control system oriented towards a new level of use of information, tools and methods based on the achievements of AES (see Figure 6). The complexity of solving these problems is that many indicators of the agrarian sector are of a qualitative nature, and the criteria of comparison are the vector with a large number of diverse combinations. Measuring quantitative indicators of the state of the agrarian economic system is not always easy enough in practical implementation. Therefore, in order to solve the above problems, we propose to use the theory of fuzzy logic (Kozlovskyi, 2017). 
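The reference to fuzzy logic can be illustrated with a minimal sketch in which qualitative indicators of an agrarian economic system are mapped to [0, 1] membership degrees through triangular membership functions and combined into a single development score by weighted aggregation. The indicator names, membership breakpoints, and weights below are invented for illustration and are not taken from the cited model.

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def development_score(memberships, weights):
    """Weighted aggregation of 'high development' membership degrees."""
    total = sum(weights.values())
    return sum(weights[k] * memberships[k] for k in memberships) / total

# Hypothetical indicators of an agrarian economic system on a 0..100 scale.
raw = {"gross_output_growth": 62.0, "resource_efficiency": 48.0, "info_support": 75.0}
memberships = {k: triangular(v, 20.0, 70.0, 100.0) for k, v in raw.items()}
weights = {"gross_output_growth": 0.5, "resource_efficiency": 0.3, "info_support": 0.2}
print("membership degrees:", {k: round(m, 2) for k, m in memberships.items()})
print("aggregate development score:", round(development_score(memberships, weights), 2))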
The block diagram of the mechanism of management of the agrarian sector, presented in the form of the agrarian economic system, which provides for the sustainability of its development, can be presented as follows (see Figure 7). Analyzing the structural diagram of management mechanism of the agrarian economic system, shown in Figure 7, it should be noted that such a function Figure 7. The structural diagram of the management mechanism of the agrarian economic system as prediction and research of seasonal and cyclic manifestations of agrarian production falls within the competence of the settlement and analytical unit of the management system of the agrarian economic system, although such control (lack of reliable information, complexity of control, etc.) is carried out not on all controlled indicators. An important point in developing the management mechanism of the agrarian economic system is that all significant stages of the implementation of the control function are fixed in the information block of the management system with a view to the possible use of the results of control in subsequent management activities, for example, to forecast the results of the agrarian economic system (the so-called prognostic control). The most important function in this management mechanism of the agrarian economic system is the planning function. Planning function is intended to systematically reduce and overcome uncertainties about the goals, functions, structure, properties and laws of the functioning and development of the agrarian economic system. Typically, the implementation of this function is divided into two main parts: a. statement and justification of goals -it is a promising, strategic planning; b. development of ways to achieve the chosen goals -it is tactical (long-term, operational, current) planning. CONCLUSION Since the agrarian economic systems are among the most complex systems, multicriterial nature of the management tasks is inevitable. In such cases, a generalized system quality indicator, represented in the form of a vector, where coordinates are indicators of the individual properties of the system, should be used. Such an approach has not yet been widespread. As a rule, managers use a certain set of criteria without putting or solving the problem of multi-criteria optimization. And this can lead to violation of the study system. Summarizing this study, the following conclusions can be drawn: 1. Agrarian economic systems include a wide range of economic entities from peasant (farmer) farms to interregional agrarian economic structures. Characteristic features of the NPP are the affiliation of the AIC and the fact that their integrity is based on the unification of subsystems (elements) of the common economic relations in the presence of certain economic autonomy. 2. The mechanism of management of agrarian economic systems should be considered as a system of principles, rules, norms and procedures, within the framework of which the goals and objectives of the agrarian economic system are realized in accordance with economic laws that determine its existence and development. The mechanism of management of agrarian economic systems must correspond to a systemically agreed set of factors -ownership, organizational structure of production, market social and economic relations, natural conditions of economic activity and state policy in relation to agriculture. 
It should be emphasized that the formation and functioning of the management mechanism of agrarian economic systems and their constituents affect the composition and correlation of management functions, their distribution, content, and methods of implementation. In other words, the construction of the management mechanism of agrarian economic systems is directly related to the development of its functional content. We therefore consider the management functions of agrarian economic systems taking into account the peculiarities of agricultural production and from the point of view of building the management mechanism on the basis of reorienting management to a new qualitative level, one that involves the use of intellectual and informational resources with the maximum possible application of modeling methods and modern information technologies. This, in turn, requires continuous monitoring and analysis of changes in the composition, proportion, and content of the management functions of agrarian economic systems.
5,700.8
2018-05-18T00:00:00.000
[ "Economics", "Agricultural and Food Sciences" ]
ON THE NORM CONTINUITY OF THE HK-FOURIER TRANSFORM. In this work we study the Cosine Transform operator and the Sine Transform operator in the setting of Henstock-Kurzweil integration theory. We show that these related transformation operators have a very different behavior in the context of Henstock-Kurzweil functions. In fact, while one of them is a bounded operator, the other one is not. This is a generalization of a result of E. Liflyand in the setting of Lebesgue integration.

Introduction If f belongs to the space of real valued Lebesgue integrable functions, L 1 (R), the Fourier transform is defined for every real number s by formula (1.1), where the integral is taken in the Lebesgue sense. When f is in L 2 (R), the Fourier transform of f can be defined as the limit F 2 f = lim n→∞ F 1 f n , where the limit is taken in the norm topology of L 2 (R) and (f n ) n≥1 is a sequence in L 1 (R) ∩ L 2 (R) such that ∥f n − f∥ 2 → 0 as n → ∞. While the operator F 1 has an integral representation on its domain, the operator F 2 shares this property only on a dense subspace of its domain. This happens also for the Fourier transform operator F p defined on L p (R) for 1 < p < 2. Recently, it was shown in [13] that having an integral representation implies additional properties for the Fourier transform operator. Pointwise continuity and the Riemann-Lebesgue lemma were shown to be valid on a larger subspace of the domain of each of the operators F p for 1 < p ≤ 2. The proof relies on the Henstock-Kurzweil integral, which has the remarkable property that every Lebesgue integrable function is also integrable in the setting of the Henstock-Kurzweil theory, with the values of both integrals coinciding. If f ∈ L 2 (R), then e −isx f (x) is not necessarily Lebesgue integrable. However, the HK-Fourier transform operator defined by the same formula (1.1) is well defined as a Henstock-Kurzweil integral for each s ≠ 0 and any function of bounded variation vanishing at infinity [11,10]. See the definition below. We say "HK-Fourier transform" in order to emphasize the use of the Henstock-Kurzweil integral [17]. Furthermore, it was shown in [13] that F p and the HK-Fourier transform operator coincide in the intersections of their domains. In this paper we look at norm continuity of the Henstock-Kurzweil Fourier transform operator. There is also a pending question concerning pointwise continuity of a HK-Fourier transform function F HK f (s) at the origin. We have not answered this question, but we show below that there is a type of smoothness even at s = 0 in the case of the "real part" of the HK-Fourier transform operator, namely the Cosine transform operator.

Henstock-Kurzweil Fourier transform The variation of a function f over an interval I (Definition 2.1) is the supremum of the sums Σ i |f (x i ) − f (x i−1 )|, where the supremum is taken over all (finite) partitions P of I. If I = R, then f is of bounded variation if and only if the limit of these variations over [−n, n], as n → ∞, exists in R. We will denote the set of bounded variation functions over an interval I ⊆ R as BV (I). If I ⊆ R is an unbounded interval, we define BV o (I) as the subspace of BV (I) consisting of the functions which have limit zero at ±∞. In [17] the Henstock-Kurzweil integral was employed to study the Fourier transform. Later, in [11,10], it was proved that (1.1) makes sense as a Henstock-Kurzweil integral over the space BV o (R). The norm in BV (I) is taken in the usual way, and over BV o (R) the resulting norms are equivalent. Definition 2.2. Let 0 < p < ∞ and X ⊂ R. For any Lebesgue measurable function f : X → R we define ∥f∥ p = (∫ X |f | p dm) 1/p . The real vector space of functions f such that ∥f∥ p < ∞ is denoted by L p (X), and W p denotes the subspace of functions on which ∥·∥ p vanishes.
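As a brief numerical aside on the bounded-variation condition of Definition 2.1, the sketch below approximates the total variation of f(x) = 1/(1 + |x|) over [−R, R] with progressively finer partitions; the choice of test function is an illustrative assumption. This f vanishes at ±∞ and has total variation 2, yet it is not Lebesgue integrable over R, which is exactly the kind of function for which the HK-Fourier transform of (1.1) is intended.

```python
# Approximate the total variation of f(x) = 1/(1 + |x|) over [-R, R] by
# summing |f(x_{i+1}) - f(x_i)| over uniform partitions of increasing size.
# f is in BV_o(R) (variation 2) but not in L^1(R).

def f(x):
    return 1.0 / (1.0 + abs(x))

def variation(func, a, b, n):
    """Variation sum over a uniform partition of [a, b] with n subintervals."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(abs(func(xs[i + 1]) - func(xs[i])) for i in range(n))

R = 1000.0
for n in (10, 100, 1000, 10000):
    print(n, round(variation(f, -R, R, n), 6))
# The sums stay bounded by, and approach, the total variation (close to 2),
# consistent with the supremum over all finite partitions in Definition 2.1.
```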
For real numbers p ≥ 1, ∥·∥ p is a seminorm on L p (X) and induces a norm in the quotient space L p (X)/W p . We will denote the completion of this space with respect to its norm by L p (X). Similarly, for p ≥ 1 we define L p (X, C) and L p (X, C) by considering functions f : X → C. For p = ∞ and f : X → R, we define ∥f∥ ∞ to be the essential supremum of |f |, and L ∞ (R) denotes the vector space of all Lebesgue measurable functions f for which ∥f∥ ∞ < ∞. If A ⊂ X is a Lebesgue measurable set and m denotes the Lebesgue measure, then given a Lebesgue measurable function f defined on A such that m(X \ A) = 0, we will denote by the same symbol f the trivial extension of f to a (measurable) function on X. That is, we extend the function as zero on X \ A. Furthermore, for a function f ∈ L p (X), or f ∈ L p (X, C), we will denote by the same symbol f the (unique) element that defines the function in L p (X) or in L p (X, C), respectively. In order to introduce the definition of the Henstock-Kurzweil integral, we consider the system of extended real numbers R := R ∪ {±∞}, and, for a given gauge γ, a tagged partition of [a, b] is called γ-fine according to the following cases: a ∈ R and b = ∞; a = −∞ and b ∈ R; a = −∞ and b = ∞. The number A is the integral of f over [a, b], denoted ∫ a b f. Using the convention 0 · (±∞) = 0, an extra condition for f is f (±∞) = 0 [1]. The space of Henstock-Kurzweil integrable functions defined on an interval I ⊆ R will be denoted by HK(I). Two fundamental theorems on the Henstock-Kurzweil integral are the following; the cases on [−∞, ∞] and [−∞, b] are analogous, see [1]. The second integral on the right side of the equation is a Riemann–Stieltjes integral. The space HK(I) is a seminormed space with the Alexiewicz seminorm, ∥f∥ A = sup x |∫ a x f |. The quotient space HK(I)/W(I) will be denoted by HK(I), where W(I) is the subspace of HK(I) on which the Alexiewicz seminorm vanishes [3]. By HK(R, C) we denote the corresponding space of complex valued functions. The completion of the spaces HK(R) and HK(R, C) with the respective given norms will be denoted by HK(R) and HK(R, C). Let us consider S(R), the Schwartz space of real valued functions defined on R. We know that the Fourier transform operators F 1 and F 2 are well defined on S(R) and L 1 (R) ∩ L 2 (R) and have an integral representation given by (1.1) valid for every s ∈ R. Because of their density in L 2 (R), both spaces are used to extend the Fourier transform over L 2 (R), see [4] and [16]. We also know that HK(R) ∩ BV (R) is a dense subspace of L 2 (R). An important point is that for f ∈ BV o (R) the integral in (1.1) is well defined as a Henstock-Kurzweil integral for each s ≠ 0. This means that on a dense subspace of L 2 (R), not contained in L 1 (R), the Fourier transform operator F 2 is represented by an integral. Furthermore, a similar assertion holds true for the Fourier transform operator with domain L 2 (R, C). See [11] and [13]. For any unbounded subset X ⊂ R, we denote by C ∞ (X) the space of complex valued continuous functions on X vanishing at infinity [14]. Definition 2.7. The HK-Fourier transform exists for every s ≠ 0 and is defined by formula (1.1), where the integral is a Henstock-Kurzweil integral. We define the corresponding norm. The next proposition is a corollary of [12, Theorem 1]. Proposition 1. The HK-Fourier Transform operator with domain HK(R) ∩ BV (R) and codomain L 2 (R, C) is a bounded operator. Proof.
From the Plancherel Theorem and [12, Theorem 1] we get the result. The Henstock-Kurzweil Fourier Sine Transform is given by the imaginary part of (1.1). We have that f ∈ HK(R) ∩ BV (R) and, for s > 0, it follows that the resulting function is not an element of HK(R). Therefore, the image of the space HK(R) ∩ BV (R) under the action of F s HK is not contained in HK(R). The previous example shows that the HK-Fourier Sine transform cannot be defined as a bounded operator from BV o (R) into HK(R). However, for the HK-Fourier Cosine transform a different situation occurs. The integrability of the Fourier Cosine and Sine transforms of functions in BV o (R) is a problem that has been attacked in different ways. The aim is to obtain a wide variety of subspaces of BV o (R) where the transforms are integrable. In [9], Liflyand studied the integrability of these transforms in the sense of Lebesgue. Among others, he showed that when f ∈ BV o (R) is locally absolutely continuous with its derivative in a space W, then the Fourier transform of f belongs to L 1 (R). Here W is the subspace of functions g ∈ L 1 (R) satisfying an additional integrability condition. In analogy with the above we take the space Λ, which is not empty because S(R) ⊂ Λ. Let AC loc (R) be the space of locally absolutely continuous functions on R. In this setting, we provide the next proposition. Proof. The proof for the HK-Fourier Sine transform is obtained from the Multiplier Theorem and the stated equality. A similar formula for (F c HK g)(s) is valid, which proves the proposition. This shows that, taking into account the Henstock-Kurzweil integration theory, the subspace of BV o (R) on which the HK-Fourier transform of each element is integrable is larger than the one considered by Liflyand. The defining integral might not exist for s = 0, so that (F c HK f )(s) is not well defined at that point. Without resorting to the condition over the space Λ, we prove in Theorem 1 below that F c HK can be extended to a bounded linear transformation from BV o (R) into HK(R). We will need some lemmas. We set R + = [0, ∞). Proof. This follows from elementary properties of the integral ∫ a b f (x) dx and consideration of the cases a · b ≥ 0 or a · b < 0. Remark 1. The Sine Integral function, see [6] and [15], is given by Si(x) = ∫ 0 x (sin y)/y dy. It has the following properties. Let us consider the set of functions Ω. For given 0 ≤ u ≤ v and 0 < t, we make the change of variable y = tx. It follows that, because h t is an even function, Ω is a bounded set in HK(R). Proof. This is a consequence of the Multiplier Theorem; applying (2.7) we obtain the proof. As we already mentioned, (F c HK f )(s) might not be well defined at the point s = 0. However, it does have certain regularity even there. This regularity implies the Bounded Linear Transformation theorem for the HK-Fourier Cosine Transform operator. Proof. By Fubini's Theorem and then by Lemma 2.9 we get the required estimates. Moreover, since F c HK (f ) is an even function, (2.9) with a = 0 and (2.10) yield that the norm of F c HK (f ) is finite. To show that F c HK (f ) belongs to HK[0, 1], we prove the existence of the limit (2.12). Given ε > 0, take R > 0 large enough such that (2.14) holds if b, b′ < δ for some positive δ. Here C is a constant depending only on f and R. By using (2.8) and the two previous estimates one proves the existence of (2.12). Similarly, we prove that F c HK f ∈ HK(R) by showing existence of the corresponding limit. We get as before the same estimate as in (2.13), for any b, b′ > 0 and R > 0 large enough.
Now to show that the integral in (2.14) is small, we estimate the integral on the right side of (2.16). The main argument is to show that each of these integrals can be viewed as a convergent alternating series. First we take f continuous and nonincreasing in [0, R] such that f (R) > 0. Note that, for a given y > 0 and ỹ = y + π, it follows that the first two integrals on the right side of (2.16) tend to zero for b ≥ b 1 large enough. The last integral on the right side of (2.16) can be written as a difference of two integrals. Note that |f (y/b) − f (0)| and |f (y/b′) − f (0)| are arbitrarily small whenever |y| ≤ bδ and δ > 0 is small enough, under the hypothesis that f is continuous. Because f is nonincreasing, these two integrals define alternating series with the respective initial terms; see [5], [18]. Summing up, the previous arguments applied to f 1 and f 2 give (2.15), which proves the theorem. This theorem has its implications for interpolation theory for the classical Fourier Transform on the spaces L p (R). Also, we consider the spaces L p (R) ∩ BV o (R) and L q (R) ∩ HK(R) with their given norms; similarly for ∥f∥ L q ∩HK . In [7, Theorem 6.3.1] it is proved that the space of bounded variation functions defined on a compact interval [a, b] is a Banach space. With a few changes to that proof it is possible to show that the space BV o (R) is a Banach space. Therefore, L p (R) ∩ BV o (R) is a Banach space of real valued functions defined on R, whereas elements in the Banach space L q (R) ∩ HK(R) are classes of functions. Proof. The density of the domain D in L 2 (R, C) follows since S(R, C) ⊂ D and it is a dense subspace of L 2 (R, C). Moreover, F 2 restricted to S(R, C) is a bijection onto S(R, C) ⊂ HK(R, C). In order to prove that the operator is closed, we take a sequence (f n ) in D such that f n → f in L 2 -norm and the corresponding transforms converge to Υ in HK-norm. Both together must imply f ∈ D and that its transform equals Υ. Note that Υ might belong to the completion of HK(R, C). Since F 2 is a unitary operator on L 2 (R, C), one has F 2 f n ∈ L 2 ([s, t], C) ∩ L 1 ([s, t], C) for every s, t ∈ R. One can then use the Cauchy-Bunyakovsky-Schwarz inequality to prove that the last equality holds true for every [s, t] ⊂ R. This shows that F 2 f = Υ ∈ HK(R, C), proving that f ∈ D.
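The lemmas above lean on the properties of the Sine Integral function recalled in Remark 1, namely that Si is bounded and tends to π/2; the following sketch checks this numerically with a simple trapezoidal rule (the quadrature scheme and step sizes are implementation choices, not taken from the paper).

```python
import math

def si(x, n=10000):
    """Composite-trapezoid approximation of Si(x) = integral_0^x sin(y)/y dy."""
    if x == 0.0:
        return 0.0
    h = x / n
    def g(y):
        return 1.0 if y == 0.0 else math.sin(y) / y
    return h * (0.5 * (g(0.0) + g(x)) + sum(g(k * h) for k in range(1, n)))

# Si is bounded (its maximum is Si(pi) ~ 1.8519) and tends to pi/2 ~ 1.5708,
# which is what makes the alternating-series bounds in the proof work.
for x in (math.pi, 10.0, 100.0, 1000.0):
    print(f"Si({x:g}) ~ {si(x):.4f}")
print("pi/2 =", round(math.pi / 2, 4))
```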
3,497.2
2018-05-30T00:00:00.000
[ "Mathematics" ]
Gaussian Basis Sets for Crystalline Solids: All-Purpose Basis Set Libraries vs System-Specific Optimizations It is customary in molecular quantum chemistry to adopt basis set libraries in which the basis sets are classified according to both their size (triple-ζ, quadruple-ζ, ...) and the method/property they are optimal for (correlation-consistent, linear-response, ...) but not according to the chemistry of the system to be studied. In fact the vast majority of molecules is quite homogeneous in terms of density (i.e., atomic distances) and types of bond involved (covalent or dispersive). The situation is not the same for solids, in which the same chemical element can be found having metallic, ionic, covalent, or dispersively bound character in different crystalline forms or compounds, with different packings. This situation calls for a different approach to the choice of basis sets, namely a system-specific optimization of the basis set, which requires a practical algorithm that can be used on a routine basis. In this work we develop a basis set optimization method based on an algorithm–similar to the direct inversion in the iterative subspace–that we name BDIIS. The total energy of the system is minimized together with the condition number of the overlap matrix as proposed by VandeVondele et al. [VandeVondele et al. J. Chem. Phys. 2007, 127, 114105]. The details of the method are presented here, and its performance in optimizing valence orbitals is shown. As demonstrative systems we consider simple prototypical solids such as diamond, graphene, sodium chloride, and LiH, and we show how basis set optimizations also have certain advantages for the use of large (quadruple-ζ) basis sets in solids, at both the DFT and Hartree–Fock levels.

INTRODUCTION When dealing with the quantum chemical modeling of crystalline solids, the existence of various types of chemical bonding is clearly evident. For instance, the polymorphism of carbon in the graphite (or graphene) and diamond allotropes is just one of many examples, in which profoundly different chemical behavior is manifested by the same chemical element in different crystal packings. Another exemplary case is that of rocksalt NaCl: sodium is by nature metallic as a bulk material, and chlorine is commonly found in the form of a molecular crystal Cl 2 . NaCl is a prototypical ionic salt. The chemical differences in those materials can be made evident by looking at their electron density (see Figure 1): the electrons involved in the metallic bond in Na are quite spread out over the whole space, while in Cl 2 the density is somewhat more localized on molecules, with empty space between them. Conversely, the wave function in an ionic system like NaCl is strongly confined in the vicinity of the ions and features nodes in the planes between neighboring atoms. NaCl is also considerably more densely packed. This variety of chemical bonding in the solid state is then reflected in the choice of the type and quality of the basis set adopted in the mathematical form of the wave function when solving the Schrödinger equation within periodic boundary conditions (i.e., Bloch functions). 1−3 The situation in the field of molecular modeling is somewhat simpler, as isolated molecules or molecular aggregates have nearly comparable atomic densities, and there are commonly no analogous extended systems featuring metallic, ionic, or covalent bonds.
Therefore, in molecular calculations, atom-centered basis sets such as Gaussian-type orbitals 4 are almost universally adopted, 5 although other basis sets can be, and occasionally are, used. On the other hand, for solid-state calculations, 2 plane waves, 6−8 atom-centered Gaussians 9 (or their combinations 10 ), and numerical basis sets 11,12 are all popular choices. The plane wave basis, which is naturally suited for nonlocal wave functions such as in the uniform electron gas or in a metal, has the undeniable advantage of a one-knob tuning of accuracy and cost through the kinetic energy cutoff parameter. However, the correct description of local orbitals, core states, or the void can result in a rather high computational cost. Similarly, the inclusion of exact HF exchange in hybrid HF/DFT calculations leads to a steep increase in computational time. Gaussian-type basis sets are less commonly adopted for the quantum chemical treatment of solids, with respect to plane waves. Gaussian functions have the great advantage of allowing one to transfer to the solid state a large part of the technology and knowledge that is the legacy of several decades of advances in molecular quantum chemistry and to retain chemical intuition when looking at the electronic charge distribution of the investigated system. The price to pay is the mandatory definition of a basis set for each atomic species, which is ultimately left in the hands of the end user. Nowadays, standardized basis set libraries are not commonly available for solids as they are for molecules, 13,14 despite recent attempts in that direction being carried out by Bredow and coworkers. 15−17 The reasons are not only to be ascribed to a lesser effort in the systematic construction of all-purpose basis sets but also, more specifically, to the wide difference in chemical bonding as outlined above. First attempts to understand the role of basis functions in solids were made by Hess and co-workers, 18 but more recently Jensen 19 also compared atomic, molecular, and solid-state basis sets for carbon and silicon to highlight the differences originating from the different chemical environments. Another aspect related to the adoption of Gaussian-type functions is the basis set incompleteness due to the use of a finite number of basis functions. Basis set incompleteness is an issue in all types of calculations, but most of all in calculations that employ atom-centered basis sets−Gaussians, Slater functions, or numerical orbitals. This is because the atomic basis sets can never be made complete enough in polyatomic systems, as the basis becomes overcomplete−necessitating the removal of variational degrees of freedom−before becoming complete. In molecules it is rather common to adopt a sequence of basis sets of increasing size (e.g., cc-pVXZ (X = D,T,Q,···) 20 and pc-X (X = 1,2,3, ...) 21 ), but this is not yet routinely applicable for solids. Therefore, reaching the basis set limit is not trivial−even for such simple systems as lithium hydride 22−26 −and is not just a matter of computational effort: as basis sets grow larger, exponents tend to become more diffuse, linear dependency problems arise, and the convergence of the infinite Coulomb and exchange series is jeopardized. The problem of linear dependencies with an extended basis set is a matter of active research not only for solids but also for average-sized molecules. 27
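To illustrate how linear dependence shows up in an atom-centered basis, the sketch below computes the overlap matrix of same-center, normalized s-type Gaussian primitives and its condition number; the exponents are made-up values, not def2 ones. When two exponents become nearly equal the condition number explodes, which is the situation the penalty function discussed in the next section is meant to avoid.

```python
import numpy as np

def overlap_s(a, b):
    """Overlap of two normalized, same-center s-type primitives exp(-a r^2), exp(-b r^2)."""
    return (2.0 * np.sqrt(a * b) / (a + b)) ** 1.5

def condition_number(exponents):
    """Ratio of largest to smallest eigenvalue of the primitive overlap matrix."""
    S = np.array([[overlap_s(a, b) for b in exponents] for a in exponents])
    eig = np.linalg.eigvalsh(S)          # ascending eigenvalues of a symmetric matrix
    return eig[-1] / eig[0]

# Illustrative exponent sets: replacing a well-separated exponent with one
# nearly equal to the most diffuse function makes the set close to linearly
# dependent, and the condition number of the overlap matrix grows sharply.
print("well-separated set:", round(condition_number([13.0, 1.96, 0.44]), 1))
print("near-duplicate set:", round(condition_number([13.0, 0.50, 0.45]), 1))
```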
While the important role of diffuse functions in solids has been recently highlighted by Kadek et al., 28 overly diffuse functions are often not needed for ground state calculations because of the packing of the atoms in the unit cell. Such very diffuse functions can also be added a posteriori through dual basis set techniques. 29 Seen from another viewpoint, the main conceptual difference in basis sets meant for the solid state as opposed to molecular electronic structure calculations is that the latter have to describe the asymptotic exponential decay of the electron density in a finite system, requiring somewhat diffuse functions, whereas diffuse basis functions are generally thought not to be necessary in solid-state calculations because the density is much more uniform throughout the cell. In this work our aim is to (i) show to what extent the basis sets are different in different chemical environments, by optimizing bases of def2-TZVP quality, 30−32 and (ii) attempt to use suitably optimized quadruple-ζ basis sets, also from the def2 family, to verify whether they can be adopted for solids without significant pruning, and to outline possible strategies for reaching such a goal. To this purpose we present a technique for the optimization of basis set exponents and contraction coefficients that is based on the Direct Inversion in the Iterative Subspace (DIIS) technique 33−35 and is actually quite similar to its geometry optimization variant, GDIIS. 36 The algorithm is implemented in the CRYSTAL code. 9 We show how such an optimization allows the full number of Gaussians to be retained, letting the algorithm decide about the diffuseness of the exponents.

THEORETICAL FRAMEWORK 2.1. Background. In the linear combinations of atomic orbitals (LCAO) framework, the crystalline orbitals ψ are treated as linear combinations of Bloch functions (BF) ϕ that are, in turn, defined in terms of local atom-centered functions, in which g is a direct space lattice vector, k is the lattice vector defining a point in the reciprocal lattice, A are the coordinates of the atom in the reference cell on which the AO φ is centered, and a are the variational coefficients. The sum over μ is limited to the number of basis functions in the unit cell. The sum over g is, in principle, extended to all the (infinite) lattice vectors of the direct lattice; therefore, suitable screening techniques have to be adopted. 1,38,39 As usual, the AOs can be written as a contraction of a number of primitive Gaussian-Type Functions (GTF) G centered on the same atom, in which d j are the contraction coefficients and α j are the exponents of the radial component of the function. The number, type, and contraction scheme of the Gaussian basis set define its quality. Gaussian functions are defined as a product in which R l (r) = Ne −α·r 2 is the radial part−N being a normalization constant−and Y lm (θ, φ) is a spherical harmonic. 2.2. The BDIIS Method. Our goal is to devise a suitable algorithm for a system-specific optimization of the exponents α j and contraction coefficients d j as in eq 3. Taking inspiration from the well-known Direct Inversion of the Iterative Subspace (DIIS) algorithm of Pulay, 33,34 we describe in the following our Basis-set DIIS (BDIIS) method.
The idea is that of an iterative procedure in which, at each step n, exponents and contraction coefficients are obtained as a linear combination of the trial vectors obtained in previous iterations (eqs 5 and 6). In the above, e i α and e i d are, respectively, the changes in exponents and contraction coefficients as predicted by a simple Newton−Raphson step. In fact the gradients e i are defined by eq 7, where Ω is a suitable functional to be minimized. Here we decide to minimize the system's total energy, to which we add a penalty function including the overlap matrix condition number, following the proposal of VandeVondele and Hutter: 40 Ω({α, d}) = E({α, d}) + γ ln κ({α, d}) (8). The value γ = 0.001 was adopted as suggested in ref 40. In eq 8, κ({α, d}) is the condition number, i.e., the ratio between the largest and the smallest eigenvalue of the overlap matrix at the center of the Brillouin zone (Γ-point). The purpose of such a penalty function is to prevent the onset of harmful linear dependence. Linear dependence issues can give rise to numerical instabilities and, as a consequence, the appearance of unphysical states. Such unphysical states generally lead to a catastrophic behavior of the total energy, which can drop to a value that is orders of magnitude larger, in absolute value, than the proper one. Although the first derivatives in (7) could in principle be computed analytically, 41,42 in the present work we evaluate both e i α and e i d by means of numerical derivatives (vide infra). The length of the estimated Newton steps represented by the e α and e d can assume the meaning of an estimated distance from the minimum of Ω and thus be utilized as a measure of the "error" at step n. The DIIS error matrix, which has the size of the iterative space considered, is built from the scalar products B ij = ⟨e i , e j ⟩. From it we can obtain the linear combination coefficients of the BDIIS method to be used in (5) and (6) by solving the constrained linear equation system B c − λ 1 = 0 with Σ i c i = 1, where λ is a Lagrange multiplier and 1 is the vector of ones. Such an approach is, in fact, similar to geometry-optimization DIIS (GDIIS 36 ) adopting an identity Hessian. 2.3. Details of the Implementation. The BDIIS procedure outlined above has been implemented in a development version of the CRYSTAL17 code. 9 As already mentioned, in this work we compute the derivatives in (7) by means of a two-sided numerical derivative; that is, for the exponents α, e i α ≈ [Ω(α i + Δᾱ) − Ω(α i − Δᾱ)] / (2Δᾱ) (11), and similarly for the coefficients d. We have also tried to compute a diagonal Hessian using the three points α i + Δᾱ, α i , and α i − Δᾱ, so as to improve the step (error) as defined in eqs 5 and 6 at the same computational cost. However, such a diagonal Hessian did not seem to improve the quality of the step, and the overall convergence pattern turned out to be similar or slower in all cases we tested. We surmise that the cause can reside in the insufficient accuracy of a three-point numerical estimate of the second derivative. Once a suitable step Δᾱ n = ᾱ n − α n−1 is obtained from eq 5, a line search is performed to tune the optimal scaling parameter f l in α n = α n−1 + f l Δᾱ n , by sampling f l from 0.1 to 1 on a suitable discrete point grid. The point with the minimum value of Ω is then retained. The convergence of the iterative optimization procedure is verified by checking the absolute value of the largest component of both the gradients and the penalty function.
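A minimal sketch of the constrained DIIS solve described above is given below; the "error" vectors are random placeholders standing in for the Newton steps e_i of eqs 5–7, so the resulting coefficients are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder error vectors standing in for the Newton steps e_i of eqs 5-7
# (in the real method these come from the gradients of Omega).
errors = [rng.normal(size=6) * 0.5 ** i for i in range(4)]   # shrinking "errors"
m = len(errors)

# DIIS error-overlap matrix B_ij = <e_i, e_j>.
B = np.array([[ei @ ej for ej in errors] for ei in errors])

# Augmented system enforcing sum_i c_i = 1 through a Lagrange multiplier:
#   B c - lambda * 1 = 0,   sum_i c_i = 1.
A = np.zeros((m + 1, m + 1))
A[:m, :m] = B
A[:m, m] = -1.0
A[m, :m] = -1.0
rhs = np.zeros(m + 1)
rhs[m] = -1.0

sol = np.linalg.solve(A, rhs)
c, lam = sol[:m], sol[m]
print("extrapolation coefficients:", np.round(c, 3), " sum =", round(c.sum(), 6))
# The new trial exponents/coefficients are then the c-weighted combination
# of the previous trial vectors, as in eqs 5 and 6.
```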
The iterative space used in the BDIIS procedure is set to at most the 14 previous cycles, and the BDIIS step is active from the second basis set optimization step onward. The optimization is complete when the absolute value of the difference in the penalty function is less than 1.0 · 10 −5 au and the absolute value of the largest component of the gradient falls below 3.0 · 10 −4 .

RESULTS In this section we first briefly describe the performance of the BDIIS method in minimizing the Ω energy functional as defined in eq 8. Then we focus on the effect of system-specific basis set optimizations, by showing the differences between optimized exponents of a typical triple-ζ basis in simple systems containing the same atoms but in different chemical bonding situations. Finally, we analyze how extended basis sets, such as those of molecular quadruple-ζ quality, can be optimized for dense solids without significant pruning. In the Supporting Information the reader can find full CRYSTAL17 9 inputs for all the calculations presented in the following, including the explicit definition of the atomic basis sets. All of the optimizations in this work have been carried out starting from molecular def2-TZVP or def2-QZVP basis sets. 30−32 Although the implemented algorithm is general, as described in the previous section, in the following we will focus on the optimization of valence and polarization functions only−the ones that change appreciably in a different chemical environment. Since they are usually uncontracted Gaussian functions, the optimization has been performed solely for the exponents. As a general strategy, we took the molecular basis sets as a starting point, upscaled the exponents of all outermost functions so as to avoid small values (<0.1) but without pruning the basis set, and finally optimized the corresponding values by minimizing the function Ω. In many cases, we adopted pure GGA functionals such as PBE 43 and PBEsol 44 in order to have a faster time to solution. In other cases we used PBE0 45 or Hartree−Fock. More generally, we do not regard our basis set optimizations as much dependent on the chosen method, 46 since we do not deal with the reoptimization of the core. As the focus of our work is on accuracy and numerical stability, we will not present timings. 3.1. Performance of the BDIIS Method. In Figure 2 we report the progress of the Ω functional minimization−cf. eq 8−along the BDIIS iterations, in two exemplary yet challenging cases for Gaussian-type basis sets: graphene and bulk metallic sodium. In graphene, the basis set optimizer, run with the PBEsol functional, leads to a stable result after a few iterations, which represents a significant energy gain with respect to the starting point and remains stable thereafter. If the optimization is allowed to continue for hundreds of cycles, a rise in the penalty function γ ln κ({α,d}) is observed, which evidently prevents the Gaussians from becoming too diffuse. A corresponding decrease of the electronic energy is observed. We remark that such changes are, however, minimal with respect to the effect of the first iterations, and the optimization is essentially converged after 50 cycles for all practical purposes. In the same figure we have also reported the curve obtained using the Broyden−Fletcher−Goldfarb−Shanno (BFGS) method. It is seen that such a method reaches the same value of the Ω functional, more slowly but also more stably.
We will discuss the differences in the solution in the following. The case of bulk metallic sodium (Figure 2(a)) is different: the electronic energy varies little (and even increases slightly with respect to the starting point); but the penalty function is much more relevant than in other cases, and about 100 iterations are required to reach a plateau. Notably, in this case the basis set optimization was carried out with a hybrid HF/DFT functional (i.e., PBE0). This level of theory is usually expected to be problematic for metallic systems, but the BDIIS algorithm runs smoothly to convergence. 3.2. Role of the Chemical Environment. We compare here two sets of systems, composed of the same elements: first crystalline diamond, graphene, and the carbyne chain, and then NaCl with bulk Na and Cl solids. We compare our system-dependent optimized basis sets with the pob-TZVP 15,17 ones. These were also derived from def2-TZVP but differently from ours: (i) the valence exponents were optimized for each system in a comprehensive set of solids with different chemical environments, (ii) for multiple optimizations of the same atomic species an averaged value of the exponent was considered, and (iii) most notably, many of the outermost functions were removed, thus reducing the consistent quality of the basis. We will refer to the basis sets optimized in this work as "dcm-TZVP". Since different basis sets of the same nominal quality are obtained through optimization on different systems, we will adopt the more detailed notation dcm[···]-TZVP, specifying in square brackets the system used for the optimization (e.g., dcm[NaCl]-TZVP). 3.2.1. Diamond, Graphene, and Carbyne Chain. Diamond and graphene are two allotropes of carbon. Both are covalently bound systems but differ by hybridization (sp 3 and sp 2 ), as well as crystalline (3D vs 2D) and electronic (insulator and conductor) structures. Carbyne is a model system with 1D periodicity (polymer), two atoms in the unit cell, and alternating bond lengths. In Table 1 we compare the exponents of the original def2-TZVP, the original pob-TZVP, the recently revised pob-rev2 basis set, and our dcm-TZVP basis specifically optimized for diamond, graphene, and carbyne with the PBE functional. For brevity, we will refer to the latter as dcm[C diam ]-TZVP, dcm[C graph ]-TZVP, and dcm[C cby ]-TZVP, respectively. Figure 3 shows a corresponding graphical representation of the radial component of some of the involved Gaussians. The first striking effect observed is the overall contraction of exponents with respect to the molecular basis. This is not unexpected 15,19 and is to be ascribed to the higher density of atoms in the solid-state phase. The outermost p-type function shows probably the most significant difference between diamond and graphene. Such a difference is due both to the different chemical bonding (sp hybridization) and to the atomic density−graphene is a 2D system surrounded by vacuum in the third dimension. This vacuum offers more space for the Gaussian functions to expand and at the same time requires more extended functions to cover that empty space. Such an interpretation is corroborated by the example of the 1D carbyne chain basis dcm[C cby ]-TZVP, which features an even more diffuse p-shell.
We take the opportunity here to recall that−conversely to plane waves−in an atom-centered Gaussian-based approach true 2D and 1D periodicity is possible, hence the vacuum in the nonperiodic directions is a true vacuum. The effect of the reduced dimensionality is, however, partly counterbalanced by a progressively shorter carbon−carbon distance, which is 2.92 Å in diamond, 2.69 Å in graphene, and 2.39/2.46 Å in carbyne, due to the different hybridization of the carbon atom in the three compounds. The more diffuse p-function is responsible for the failed convergence when using the graphene dcm[C graph ]-TZVP basis set in diamond (Table 2). Also d- and f-type functions have a somewhat different spread in the two systems, showing that quadrupole and octupole interactions act differently in the two allotropes. In Table 2 we report some total energies obtained at the DFT/PBE level: in addition to the dcm-TZVP and pob-TZVP bases, the dcm[C diam ]-TZVP basis was also tested in graphene and the dcm[C graph ]-TZVP in diamond. From Table 2, we see that the energies relative to the proper dcm bases are lower by about 0.014 E h than the pob ones. On the other hand, swapping the two dcm-TZVP bases led to an energy similar to (though still lower than) that of pob-TZVP[G], while the more diffuse dcm[C graph ]-TZVP turned out to be unusable in the denser diamond structure. Let us now compare the optimal basis sets obtained with the PBE0 functional for three bulk structures with very different chemical bonding, namely metallic Na, molecular Cl 2 , and ionic rocksalt NaCl, whose electronic charge densities are reported in Figure 1. As discussed in the Introduction, the significantly different features in the electronic structure expectedly require a different support and hence a specific basis set. The geometries adopted are fully reported in the Supporting Information and have been obtained from experimental references in the literature. 47−49 In Table 3 we see that for the Cl 2 molecular crystal, not unexpectedly, the original def2 basis set undergoes very little modification when optimized in the solid. Actually, it performs much better than the pob-TZVP basis (see Table 5), with the total energy being 0.1 E h lower. The removal of the outermost p-function in the pob basis sets leads to an overall decrease of the exponents of the remaining functions that partly compensates the contribution to the total energy of the missing function. If one includes the outermost p-function from the dcm basis set, a further energy lowering of 13 and 17 mE h is observed for the pob and pob-rev2 basis sets, respectively. However, this is not enough to reach the final energy of solid Cl 2 as obtained with the optimized dcm basis set, thus showing the crucial role of the outermost p-function. The dcm[NaCl] basis for Cl, optimized in the rocksalt structure, features significantly more contracted exponents as far as s- and p-functions are concerned, while the d exponent becomes more diffuse. As reported in Table 4, a stronger contraction is observed in the exponents of the s-type orbitals in going from the molecular def2 to the bulk metal and then to ionic NaCl. In this case we had to remove the most diffuse p-function (0.03 au) in order to ensure convergence, but unlike the pob-TZVP case, we were able to keep all the d-functions in. As shown in Table 5, in all cases the dcm energies are significantly lower than the pob ones, and, quite surprisingly, the dcm[Cl 2 ]-TZVP and dcm[Na]-TZVP basis sets seem to perform well also in the ionic case.
Such basis set effects are also reflected in geometry optimizations. In Table 6 we report the optimized lattice parameters obtained with the different basis sets. It is seen that the dcm basis sets lead in all cases to an expanded volume with respect to the pob ones and, in the case of Na and NaCl, also to better agreement with experiment at the PBE0 level. In the molecular crystal Cl 2 , dispersion effects play a key role, hence plain PBE0 leads to an excessively expanded volume when the dcm[Cl 2 ] basis is used, while the introduction of the -D3 dispersion correction restores a more correct description. It is reasonable to assume that the volume expansion associated with the dcm[Cl 2 ] basis is related to a mitigation of BSSE effects−which usually act as spurious dispersion. 3.3. Use of Large, Extended Basis Sets. Solid LiH is a rather standard benchmark for method assessment in the solid state. Recently 24,26 lithium hydride has been used as a benchmark for estimating the Hartree−Fock basis set limit compared with results from different approaches. 23,50,51 The case of LiH, similarly to NaCl, poses certain difficulties since standard molecular basis sets are designed for neutral atoms, not ions, and are hence inapplicable to bulk ionic crystals without modification. We optimized the basis set series def2-SVP/def2-TZVP/def2-QZVP with our BDIIS algorithm, obtaining the corresponding total energies. In Figure 4 we compare such energies with previous data from the literature, also obtained with the CRYSTAL code. It is seen that with the quadruple-ζ basis we reach a value that is very close to that of ref 26 (i.e., −8.06475 E h ), where a much larger basis set was used. This last result was already close to the CBS limit compared to methods employing different basis set types. 26 If the dcm-TZVP and dcm-QZVP total energies are used to estimate the HF complete basis set (CBS) limit by using a two-point extrapolation scheme based on an exponential formula, a value of −8.065089 E h is attained. Notably, this energy limit is even lower than the one reached by Usvyat and co-workers 26 by 0.3 mE h . When using the CBS energy for the atoms, 52 the cohesive energy is then −3.60 eV, in very good agreement with results from different theoretical approaches. 23,50,51 Similarly, we have optimized a quadruple-ζ basis for diamond and graphene. The original def2-QZVP basis does not allow convergence in either case, while with the basis sets as reported in Table 7 the energies of −76.165178 and −76.174386 E h are obtained for the two systems at the PBE level. The latter value we believe to be close to basis set completeness. Extrapolation to the CBS limit leads to values of −76.167396 and −76.177298 E h , respectively. In Table 7 we report the reoptimized exponents with respect to the def2-QZVP basis sets−all other functions are the same as in the molecular basis set. It is worth noting that g-type functions were also included in the basis set, as they were recently made available in the development version of the CRYSTAL code. 53 BSSE effects are reduced much more by increasing the basis set quality than by optimizing the exponents, so that the BSSE is quite similar for pob- or dcm-basis sets. For diamond, we have also calculated the Hartree−Fock CBS limit by using the dcm-TZVP and dcm-QZVP basis sets. 54 Interestingly, results show that the basis sets optimized with the PBE functional can also be used for HF even if a tighter setting of the computational parameters is required (see the Supporting Information).
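For orientation, a two-point exponential extrapolation of the kind mentioned above can be written as E(X) = E_CBS + A·exp(−αX), with cardinal number X (3 for TZ, 4 for QZ); with only two energies the decay constant has to be assumed, so the α value and the sample energies in this sketch are placeholders rather than the scheme actually used in the paper.

```python
import math

def cbs_two_point(e_tz, e_qz, alpha=1.63):
    """Two-point extrapolation assuming E(X) = E_CBS + A*exp(-alpha*X), X = 3, 4.
    The decay constant alpha is an assumed value, not taken from the paper."""
    r = math.exp(-alpha)
    return (e_qz - r * e_tz) / (1.0 - r)

# Placeholder HF energies in hartree (NOT the paper's LiH or diamond values).
e_tz, e_qz = -8.0640, -8.0647
print("estimated E_CBS =", round(cbs_two_point(e_tz, e_qz), 5))
```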
In Figure 5 we compare, for graphene, the electronic band structure computed with Gaussian basis sets with the bands for the same system as obtained from a plane wave code 55 using a considerably high cutoff. It is evident that the bands at the triple-ζ level are different from the reference ones, especially at the Γ and M points of the Brillouin zone. Nevertheless, the dcm[C graph ]-TZVP performs better than the pob-TZVP. A considerably better agreement is attained by using the dcm[C graph ]-QZVP basis (right panel of Figure 5). We believe this is strong evidence of the possibility of reaching converged results with Gaussian basis sets and of the effectiveness of a system-specific optimization scheme.

CONCLUSIONS In the present work, we have developed a basis set optimizer based on the DIIS algorithm that minimizes the total energy of the system while keeping the condition number of the overlap matrix as small as possible, in an approach similar to that proposed by VandeVondele et al. 40 The latter constraint acts as a pivot in the optimization of the basis set and prevents the lowest exponents of the basis set from decreasing too much, thus reducing the risk of linear dependency and numerical instability. This is particularly important in solid-state calculations, where the use of atom-centered diffuse functions is more delicate and sometimes even useless. We have then shown that the proposed method is quite effective for solid-state calculations and allows for an easy optimization of basis sets not only of triple-ζ quality but even of quadruple-ζ size. Furthermore, we have demonstrated that the BDIIS method can be used to obtain basis sets for solids of quality consistent with molecular ones, without pruning the original basis sets. Results for simple solids such as diamond and graphene, for which the definition of an appropriate and system-consistent basis set is particularly difficult, are very promising. Also, the possibility of employing basis sets specifically calibrated on a given system allowed us to easily reach the HF complete basis set limit for LiH, which has been a long-debated issue, and for diamond. While reasonable questions can be raised about the transferability of such optimized basis sets from one method to another, we have seen that a basis set optimized, say, with PBE is very close to convergence when inserted in HF or PBE0. For our diamond test case the energy with such a basis was only a few μE h away from the minimum when transferred from one method to another. The evidence of the excellent performance of the BDIIS method paves the way for a careful definition of system-specific basis sets, as a viable alternative to all-purpose basis sets. Nevertheless, it could also be employed in a more extensive effort to create all-purpose basis set families for a larger set of atomic species. Furthermore, the algorithm described here could be very useful for optimizing basis sets for post-HF correlation methods 20,56 as well as for response properties. 57 The authors declare no competing financial interest.
7,135.8
2020-03-26T00:00:00.000
[ "Chemistry" ]
A clinical solution for non-toxic 3D-printed photon blocks in external beam radiation therapy Abstract Purpose A well-known limitation of multi-leaf collimators is that they cannot easily form island blocks. This can be important in mantle region therapy. Cerrobend photon blocks, currently used for supplementary shielding, are labor-intensive and error-prone. To address this, an innovative, non-toxic, automatically manufactured photon block using 3D-printing technology is proposed, offering a patient-specific and accurate alternative. Methods and materials The study investigates the development of patient-specific photon shielding blocks using 3D-printing for three different patient cases. A 3D-printed photon block shell filled with tungsten ball bearings (BBs) was designed to have similar dosimetric properties to Cerrobend standards. The generation of the blocks was automated using the Eclipse Scripting API and Python. Quality assurance was performed by comparing the expected and actual weight of the tungsten BBs used for shielding. Dosimetric and field geometry comparisons were conducted between 3D-printed and Cerrobend blocks, utilizing ionization chambers, imaging, and field geometry analysis. Results The quality assurance assessment revealed a −1.3% average difference in the mass of tungsten ball bearings for different patients. Relative dose output measurements for three patient-specific blocks in the blocked region agreed within 2% of each other. Against the Treatment Planning System (TPS), both 3D-printed and Cerrobend blocks agreed within 2%. For each patient, 6 MV image profiles taken through the 3D-printed and Cerrobend blocks agreed within 1% outside high gradient regions. Jaccard distance analysis of the MV images against the TPS planned images found Cerrobend blocks to have 15.7% dissimilarity to the TPS, while that of the 3D-printed blocks was 6.7%. Conclusions This study validates a novel, efficient 3D-printing method for photon block creation in clinical settings. Despite potential limitations, the benefits include reduced manual labor, automated processes, and greater precision. It holds potential for widespread adoption in radiation therapy, furthering non-toxic radiation shielding.

INTRODUCTION Remarkable strides have been made in improving the physical attributes of external beam radiation therapy treatment delivery in recent years. 1 A critical driver of this progress lies in the burgeoning realm of additive manufacturing, or more generally 3D-printing, and its consequential applications in radiation oncology. 2,3,6−8 In radiation oncology, clinical plans frequently utilize a multi-leaf collimator (MLC) to modulate the radiation beam, thereby maximizing normal tissue sparing. 9 Nevertheless, clinical necessities can sometimes surpass the physical capacity of the MLC, owing to intrinsic inter-leaf leakage, intra-leaf leakage, and geometric constraints attributed to the unique shaping of the leaves. A prime example is mantle region therapy, where the MLCs fall short in fully safeguarding normal tissues within a single field.
Currently, the solution is for supplementary shielding to be manufactured and implemented in the treatment, predominantly in the form of Cerrobend photon blocks. However, this remedy is not without its share of obstacles. The process of fabricating a Cerrobend photon block is laborious and fraught with challenges, including the likelihood of the Cerrobend cracking if poured all at once. A workaround to this problem entails pouring the Cerrobend into the mold in successive layers, each succeeding layer poured after the preceding one has cooled. However, this method introduces the risk of air bubble formation between layers. Additionally, the blocks are often made larger than their in-field shape to facilitate the technician attaching two bolts per Cerrobend part to the mounting tray, with the bolts being placed at least 3 cm apart. The entire manufacturing process, due to its complexity and labor intensity, presents numerous opportunities to compromise dosimetric accuracy, including issues with the design's divergence and overall resolution. Consequently, there is a compelling need to devise an innovative, non-toxic, and automatically manufactured photon block. We aim to extend the use of 3D-printing in radiation oncology by offering a comprehensive overview of an automatically generated, tungsten ball bearing (BB)-filled 3D-printed photon block shell that is not only patient-specific but also non-toxic.

Patient plan selection Photon blocks are created by dosimetrists in the treatment planning system (TPS) Varian Eclipse v15.6 (Varian Medical Systems, Siemens Healthineers, Palo Alto, CA) by creating a margin around the specific anatomy to be blocked in a particular field. Three patient-specific photon shielding blocks were chosen for investigation: nearly circular mantle field lung shielding (patient A), pelvis field ovary shielding (patient B), and long, thin mantle field lung shielding (patient C) (Figure 1, Table 1). The dosimetric parameters of point output between the blocks, point output under one of the blocks, megavoltage (MV) image profiles, and field geometry were investigated for clinically used Cerrobend blocks and 3D-printed blocks.
3D-printed photon block design The complete 3D-printed photon block shell design features five general components: the baseplate, the tungsten BB-containing shell, the checkerboard hole pattern, the blocked complete irradiated area outline (CIAO), and aluminum nuts and bolts. The aluminum bolts are placed outside the CIAO. Although aluminum was chosen, any metal may be used for the bolts since they are outside the CIAO. Plastic bolts are not recommended due to low shear strength. These 3D-printed and reusable components, in conjunction with the tungsten BB infill, compose the overall 3D-printed photon block. The 3D-printed photon block is designed to have similar dosimetric properties to the currently used Cerrobend standard. The design is limited by the weight and the amount of material needed to equivalently attenuate the beam. It is also constrained by the clearance between a Varian TrueBeam applicator mount tray insert and the head of the gantry, which is approximately 8.2 cm. The 3D-printed photon block is attached to the provided Varian applicator tray by inserting the aluminum nuts and bolts through the baseplate and the tray. The 3D-printed block shells used to contain the tungsten BBs were designed so that the appropriate divergence from the beam source is calculated automatically (Figure 2). The tungsten BB-containing block shell has an interior thickness of 8 cm, whereas the Cerrobend blocks are made at 7.5 cm thickness. After initial testing with thicker walls, a final block shell wall thickness of 1.2 mm was chosen. The choice of wall thickness is a compromise between strength and dosimetric impact (see Sections 3.1 and 4.2 for structural integrity tests). The thickness of the tungsten ball bearing layer was adjusted to match the treatment planning system. The shell walls were not incorporated in the treatment planning system's dose calculation. The baseplate was designed to extend 1 cm beyond the jaws in all directions at a 100 cm source-to-surface distance (SSD). The tungsten BB-filled block shells were sealed by affixing a reusable 3D-printed cylindrical plug into the fill hole, which was then secured in place between the 3D-printed baseplate and the mounting tray. The final design of the automatically generated 3D-printed photon block shell emerged after an iterative review and redesign process involving physicists, dosimetrists, radiation therapists, and physicians, ensuring practical and clinical viability. 3D-printed photon block generation The scripted generation of the patient-specific photon shielding block shells for 3D-printing is completed via the Eclipse Scripting API (ESAPI) as well as the Python programming language, and is directly accessible within the clinical TPS. Patient-specific attributes, such as the spatial coordinates of the block's perimeters, are interfaced (Table 2). A stereolithography (STL) file is exported to a patient-specific file directory for 3D-printing. The automatic generation is completed in approximately 20 s.
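A minimal sketch of the divergence calculation is given below: the block outline defined at 100 cm SSD is rescaled by similar triangles onto the planes of the shell's top and bottom faces. The source-to-top-face distance and the example outline are assumed values for illustration; only the 8 cm interior thickness comes from the text.

```python
# Project a block outline defined at 100 cm SSD onto the top and bottom faces
# of a divergent shell by similar triangles (scaling about the beam source).
# The tray/top-face distance below is an assumed value for illustration only.

SSD_DEFINITION = 100.0   # cm, plane where the TPS block outline is defined
TOP_FACE_DIST = 58.0     # cm from source to the top face of the shell (assumed)
SHELL_THICKNESS = 8.0    # cm, interior thickness quoted in the text

def project(outline_100cm, distance_cm):
    """Scale (x, y) outline points from the 100 cm plane to another plane."""
    k = distance_cm / SSD_DEFINITION
    return [(k * x, k * y) for x, y in outline_100cm]

# Hypothetical lung-block perimeter at 100 cm SSD (cm).
outline = [(2.0, 1.0), (6.0, 1.5), (7.0, 5.0), (3.0, 6.0)]

top = project(outline, TOP_FACE_DIST)
bottom = project(outline, TOP_FACE_DIST + SHELL_THICKNESS)
print("top-face outline:   ", [(round(x, 2), round(y, 2)) for x, y in top])
print("bottom-face outline:", [(round(x, 2), round(y, 2)) for x, y in bottom])
```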
3D-printed photon block fabrication The 3D-printed photon block shell STL files are imported into Bambu Studio v1.7, an open source 3D-printing slicing software, and printed remotely with tough PLA (MatterHackers PRO Series PLA, MatterHackers, Lake Forest, CA, USA) via a fused deposition modeling (FDM) type Bambu Lab X1 Carbon (Bambu Lab, Austin, TX, USA). The FDM 3D-printer is configured with a 0.6 mm nozzle, ensuring at least two walls are deposited for each of the BB-containing shells of 1.2 mm thickness. The infill was set to 30%, with three top and bottom walls, which only affects the baseplate and top face of the 3D-printed photon block shell. The baseplate was oriented flat onto the print bed. The print time for each patient-specific photon shielding device was approximately 1.5 h. 3D-printed photon block quality assurance The shell was filled compactly with 1.5-2 mm tungsten BBs for radiation shielding. This has been shown to be adequate for clinical radiation shielding. 7,8 The expected weight of the tungsten BBs was calculated and compared to the actual value. The shape of the tungsten BB-containing shell is assumed to be a frustum for the purpose of calculating the mass of the tightly packed tungsten BBs. First, the areas of a block's bottom face, A bottom , and top face, A top , are calculated for each of the two blocks. The areas of the bases are calculated through the shoelace formula, with the input being the n clockwise-ordered spatial coordinates, (x i , y i ), of the perimeter of the blocks at 100 cm SSD, directly interfaced via ESAPI (Equation 1). Secondly, the volume of each block is calculated, where h is the height of the frustum, and the previously calculated areas of the top face, A top , and bottom face, A bottom , are inputted per block (Equation 2). The total mass, M, of the n total theoretical tungsten BB-containing shells is then calculated from the sphere packing fraction (approximately 0.6 from the theory of randomly packed spheres 10,11 and our prior work 7 ), the density of the tungsten alloy BBs (approximately 17.6 g/cc), and V i , the volume of each i-th BB-containing shell, utilizing the previously validated parameters (Equation 3). 7 The packing fraction and the BB density combine to give the average density of the BB-filled volume. This is approximately 10.5 g/cc from random sphere packing theory, but to obtain a more precise predicted mass for a given 3D-printing process and batch of BBs, one should obtain a calibrated value by weighing a set of fully filled representative blocks. The calculated mass is then compared to the difference in mass between the fully filled tungsten BB 3D-printed photon block shell and the empty 3D-printed photon block shell (the shell alone, without BBs). The BB-filled volumes (V i ) were calculated at the Python stage of block shell generation. The Python script converts the design to STL format in the last step. It is possible, though less convenient, to also calculate V i through Boolean and trimming operations of the final STL shell volumes.
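Putting Equations 1–3 together, a mass-prediction sketch might look as follows; the packing fraction of 0.6, BB density of 17.6 g/cc, and 8 cm shell height follow the values quoted in the text, while the polygon coordinates and source-to-top-face distance are invented for illustration.

```python
def shoelace_area(points):
    """Polygon area from clockwise-ordered (x, y) vertices (Equation 1)."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def frustum_volume(a_bottom, a_top, h):
    """Frustum volume from top/bottom face areas and height (Equation 2)."""
    return h / 3.0 * (a_bottom + a_top + (a_bottom * a_top) ** 0.5)

PACKING_FRACTION = 0.6   # randomly packed spheres, per the text
BB_DENSITY = 17.6        # g/cm^3, tungsten alloy BBs, per the text

def expected_mass(outline_100cm, top_dist, height):
    """Predicted tungsten BB mass for one divergent shell (Equation 3)."""
    k_top = top_dist / 100.0
    k_bot = (top_dist + height) / 100.0
    a_top = shoelace_area([(k_top * x, k_top * y) for x, y in outline_100cm])
    a_bot = shoelace_area([(k_bot * x, k_bot * y) for x, y in outline_100cm])
    return PACKING_FRACTION * BB_DENSITY * frustum_volume(a_bot, a_top, height)

# Hypothetical block perimeter at 100 cm SSD (cm) and assumed shell geometry.
outline = [(2.0, 1.0), (6.0, 1.5), (7.0, 5.0), (3.0, 6.0)]
print("expected BB mass: %.0f g" % expected_mass(outline, top_dist=58.0, height=8.0))
```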
Point dose measurement Point dose output measurements were taken in solid water using a pinpoint ionization chamber (PTW PinPoint chamber, PTW-Freiburg, Freiburg im Breisgau, Germany) with a Varian TrueBeam. The setups for the linac energies of 6, 10, and 15 MV used 100 cm SSD, 5 cm of solid water depth, and 5 cm of solid water backscatter. For all measurements, 400 monitor units (MUs) were delivered. For each patient's blocks, a baseline measurement was taken at the central axis (CAX) with the clinical jaw size. The ion chamber was then laterally and longitudinally shifted between each patient's photon blocks, the clinical field was then configured with MLC and jaws, and measurements were taken for the Cerrobend and then the 3D-printed photon blocks. The ion chamber was then laterally and longitudinally shifted under one of the photon blocks for each patient with their clinical field configured, for the Cerrobend and then the 3D-printed photon blocks. The dose for each photon energy was calculated in corresponding setup conditions at the CAX, the patient-specific between-block location, and the patient-specific under-block location, utilizing the Eclipse TPS and Acuros v15.6. The ratio of the CAX measurement to the between-block measurement is computed and compared between the 3D-printed and Cerrobend blocks, with the overall difference from the TPS also computed. The ratio of the CAX measurement to the under-block measurement is also computed and compared between the 3D-printed and Cerrobend blocks, where both are then compared to the TPS. MV image analysis The Electronic Portal Imaging Device (EPID) imager onboard the TrueBeam was used to capture MV images of both the 3D-printed and Cerrobend blocks for each patient. For each image, the clinical field was used, including gantry orientation, and 100 MUs were delivered. The MV images for the 3D-printed and Cerrobend designs are overlaid and profiles are taken through each of the blocks for each patient. Images at all four cardinal angles were taken to further evaluate any sag or shift in the blocks. Photon block field geometry A field geometry comparison between the 3D-printed and Cerrobend photon blocks was performed on 6 MV images taken at 100 cm SSD. The resulting Digital Imaging and Communications in Medicine (DICOM) images are thresholded and binarized using MATLAB R2023b (MathWorks, Natick, MA, USA) to maintain spatial accuracy. The TPS block perimeters used for the generation and fabrication of each patient's 3D-printed photon block are also masked and binarized. The patient field CIAO from the TPS was also imported with ESAPI, then masked over both MV images and the TPS blocks to account for the Cerrobend blocks' enlarged size outside of the treatment field. Utilizing the built-in "jaccard" function, the Jaccard distance, d j (A, B) = 1 − |A ∩ B| / |A ∪ B| (Equation 4), is applied to measure the dissimilarity of sample sets, where A and B in this context are the sets of binarized pixels of the blocked area inside the CIAO, which is defined by the MLC. This then excludes any extended area of the Cerrobend blocks outside the CIAO field, which may be added to allow for screwing to the baseplate. The set of pixels A is attributed either to the 3D-printed design (3D) or to the Cerrobend design (CB), and B is the TPS design; the distances are denoted, respectively, d j (3D, TPS) and d j (CB, TPS), where 0 ≤ d j (A, B) ≤ 1.
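The Jaccard distance of Equation 4 reduces to a few lines on binary masks; the sketch below mirrors the behavior of MATLAB's jaccard function on small toy arrays (the masks are invented, not derived from the patient images).

```python
import numpy as np

def jaccard_distance(mask_a, mask_b):
    """d_j(A, B) = 1 - |A intersect B| / |A union B| for boolean pixel masks (Equation 4)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    inter = np.logical_and(a, b).sum()
    return 1.0 - inter / union

# Toy masks standing in for the binarized in-field block images.
tps = np.zeros((8, 8), dtype=bool)
tps[2:6, 2:6] = True                    # planned (TPS) block
printed = np.zeros((8, 8), dtype=bool)
printed[2:6, 3:7] = True                # delivered block, shifted by one pixel
print("d_j(3D, TPS) =", round(jaccard_distance(printed, tps), 3))
```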
Quality assurance

The calculated mass, measured mass, and the percentage difference of the tungsten BBs are shown in Table 3.

Point dose measurement

The placement of each of the measurements is given in Figure 3. The relative dose output measurements for the three patient-specific blocks were compared (Figures 4 and 5). When positioned under one of the photon blocks, the outputs of the blocks agreed within 2% of each other. When compared to the TPS, both the 3D-printed and Cerrobend blocks were found to agree approximately within 2%. When positioned between the photon blocks, the output agreed within 2% when comparing both the 3D-printed and Cerrobend blocks to the TPS.

MV image analysis

For each patient, 6 MV image profiles were overlaid and measured (Figure 6). For all patients, the profiles taken through the 3D-printed and Cerrobend blocks agreed within 1% outside of high-gradient regions.

Photon block field geometry

An in-field block comparison at 100 cm SSD between the 3D-printed, Cerrobend, and TPS aperture for each patient is shown in Figure 7. The computed values of the Jaccard distance are described in Table 4. On average, the 3D-printed blocks had a Jaccard distance of 0.067 and the Cerrobend blocks had an average Jaccard distance of 0.157. Patient C's Cerrobend in-field block deviated the most from the TPS.

FIGURE 5 The percentage difference to the treatment planning system (TPS) ratio for three patients, using 3D-printed and Cerrobend blocks. Measurements were conducted between the photon blocks at linac energies of 6, 10, and 15 MV, using a PTW PinPoint ionization chamber within a 100 cm SSD, 5 cm solid water depth, and 5 cm solid water backscatter configuration. For patient B, the measurements between the blocks were shielded by the multi-leaf collimator.

Dosimetric commissioning

The quality assurance procedure for packing the tungsten BBs in the 3D-printed block shell has been demonstrated to be accurate within 1.5%. The calculation method for the mass, as described in Equation (3), could be fine-tuned as more data points are gathered.
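One way to fine-tune the Equation (3) prediction as more data points are gathered is to replace the theoretical fill density (packing fraction times BB density) with a calibrated value obtained by weighing fully filled reference blocks, as suggested in the fabrication description. The sketch below is illustrative only; the function name and the reference masses and volumes are placeholders.

```python
def calibrate_fill_density(measured_masses_g, shell_volumes_cc):
    """Effective BB-fill density (g/cc) estimated from weighed, fully filled reference shells.

    This calibrated value can stand in for the theoretical 0.6 * 17.6 g/cc product
    in the Equation (3) mass prediction once enough blocks have been weighed.
    """
    return sum(measured_masses_g) / sum(shell_volumes_cc)

# Placeholder data: three fully filled reference shells (mass in g, volume in cc).
rho_fill = calibrate_fill_density([812.0, 655.0, 949.0], [77.0, 62.5, 90.0])
print(f"Calibrated fill density: {rho_fill:.2f} g/cc")
```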
If the BB-filled block shell is overfilled, that is, the measured mass is higher than the expected mass, this indicates either a small variance from the calibrated density of the BB-filled volume or a deviation of the shell volume from the expected volume. Since slightly higher attenuation is preferable to under-attenuation, a looser tolerance on overweight may be applied compared to underweight, as long as the weight-check QA can still catch gross "wrong shell volume" errors. The precise tolerance value is a clinical choice that should be made at commissioning. The point dose measurements under the photon blocks followed a general trend where the 3D-printed and Cerrobend blocks differed from each other by approximately 1.5% on average. When compared to the TPS, both the 3D-printed and Cerrobend blocks were found to range within 2% of the TPS. The point dose measured between the blocks followed a general trend where, when measured in the open field, the 3D-printed measurement was found to be lower than the Cerrobend by approximately 0.8% on average. This can be accounted for by the 3D-printed photon block shell baseplate, which served as an interface to attach the tungsten BB-filled shells to the applicator tray, with an approximate thickness of 1.6 mm of PLA. This difference was found to align the 3D-printed blocks closer to the TPS value. The measurement for patient B had been taken beneath the MLC leaves between the blocks, accounting for the nearly zero difference in output. The notable differences exceeding 1% between the 3D-printed and Cerrobend blocks for both the under- and between-block measurements were presumed to be due to the disparity in the shape, placement, orientation, and density of the blocks. Overall, adopting institutions are able to tune their clinical TPS to reflect the 3D-printed photon blocks over the Cerrobend standard.

FIGURE 7 In the first column, overlaid in-field block (black) of the Cerrobend (pink) and treatment planning system (TPS, green) block at 100 cm SSD. In the second column, overlaid in-field block (black) of the 3D-printed (pink) and treatment planning system (green) block at 100 cm SSD. The multi-leaf collimator leaves (light blue, Bank A; dark blue, Bank B) from the TPS are masked over both Cerrobend and 3D-printed blocks.

The MV image profile analysis also emphasized the approximate 1% attenuation of the beam outside of the shielding regions. The notable differences exceeding 1% between the 3D-printed and Cerrobend blocks were presumed to be due to the disparity in the shape, placement, and orientation of the blocks.

The photon block field geometry underscored the manual error in manufacturing and positioning of the Cerrobend photon blocks. However, the 3D-printed photon blocks were still susceptible to potential placement error due to the manual attachment of the 3D-printed baseplate to the tray via aluminum nuts and bolts. This error was identified in Table 4, where for patient C, the 3D-printed photon blocks in-field were 10.8% dissimilar to the TPS. An inspection of Figure 7 for patient C provided insight that the 3D-printed blocks might have been attached to the plate with a 0.5 mm lateral offset and a slight rotation. This is because the holes in the Varian-supplied acrylic block tray are 7.75 mm in diameter and the ¼-20 screws are 6.15 mm in outer diameter in the threaded region. To eliminate this slack, future versions may utilize custom-designed screws or centering rings on the screws that fill this gap and eliminate manual alignment.
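The asymmetric weight tolerance discussed above can be encoded as a small QA check. The tolerance numbers in this sketch are placeholders for illustration; as noted, the actual values are a clinical choice made at commissioning.

```python
def check_bb_mass(measured_g, predicted_g, under_tol=0.02, over_tol=0.05):
    """Flag a filled block shell whose BB mass deviates from the prediction.

    The tolerances are illustrative: tighter on underweight (risk of
    under-attenuation) and looser on overweight, per the asymmetric rationale above.
    """
    deviation = (measured_g - predicted_g) / predicted_g
    if deviation < -under_tol:
        return f"FAIL: underfilled by {abs(deviation):.1%}"
    if deviation > over_tol:
        return f"FAIL: overweight by {deviation:.1%} (check shell volume)"
    return f"PASS: deviation {deviation:+.1%}"

print(check_bb_mass(measured_g=612.0, predicted_g=604.5))
```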
Advantages and limitations

The novel 3D-printed photon block presents a range of advantages and limitations. The fully tungsten BB-filled 3D-printed block shell was found to maintain structural rigidity within 0.3 mm at all four cardinal gantry angles. While block volumes of 90 cc or less were directly investigated, the minimal sag suggests sizes of at least double are acceptable, though specific commissioning should be performed. In our commissioning examples, each 3D-printed shell also holds two of these blocks of near-equal size. Another one of its major advantages lies in the maximum block fidelity for the edges and divergence. Being an entirely digital process, it allows for precision that is hard to achieve with manual processes. This eliminates the need for someone to be trained or to spend time creating photon blocks by hand, significantly reducing the manual labor associated with the manufacturing of the blocks. Furthermore, it utilizes infrastructure that is already used for either 3D-printed boluses or 3D-printed electron cutouts, streamlining the process and making it more cost-effective. Another notable advantage is the automation of several aspects of the process. The patient identifier, jaw orientation, field orientation (AP/PA), and block placement are all automated, reducing the chances of human error and enhancing the efficiency of the workflow. Despite these numerous benefits, the novel photon block also comes with a few limitations. There can be potential printing defects, an inherent risk with any 3D-printing process. Careful quality control measures are needed to ensure these defects are caught and corrected. Another limitation is that the 3D-printed baseplate blocks the light field during treatment. The placement, however, is commonly verified through image-guided radiation therapy prior to treatment delivery through MV images. Finally, the process requires the purchase of tungsten BBs, which is an added expense that clinics would need to consider, although the BBs could also be used for 3D-printed electron cutouts.
Proposed clinical workflow

After completion of thorough commissioning by a physicist, we recommend that MV images of the 3D-printed photon blocks be taken prior to a patient's treatment and compared to the TPS for dosimetric and geometrical alignment for the first dozen patients. We then suggest the routine clinical workflow as follows:
(1) The treatment planner, using the automated script, creates the STL file for 3D-printing.
(2) The physicist or physicist assistant then prints the block shell using an optimized printing profile with a 3D-printer.
(3) The physicist or physicist assistant then weighs the block shell, fills the block shell compactly with tungsten BBs, and reweighs the block, to obtain a tungsten BB mass within 2% of the calculated value.
(4) The block is then capped, aligned properly, and attached to the acrylic tray via aluminum nuts and bolts.
(5) A transparency is then printed at 55.4 cm SSD with the photon block from the TPS to verify the correct jaw orientation, shape of the printed block, patient identifier, and field orientation.
(6) The block is then given to the patient's treatment machine and the acting radiation therapists.
(7) After the completion of the patient's treatment, the 3D-printed photon block and tray would be returned to the physicist or physicist assistant to be disassembled.
(8) The tungsten BBs, aluminum nuts and bolts, and acrylic tray would be stored for future use, and the empty 3D-printed photon block shell would be kept for recycling into filament, disposed of, or provided to the patient.

CONCLUSIONS

This study has validated a novel method for 3D-printed photon block creation and use in clinical settings, showing accuracy and efficiency gains over traditional manual processes. While acknowledging potential printing defects and light field blockage, we argue that these limitations are outweighed by the benefits of reduced manual labor, automated processes, and greater precision. Following a carefully outlined clinical workflow, we envision the broader adoption of this technique in the field of radiation therapy, furthering the advancement of non-toxic photon shielding. We hope this approach can be easily adopted by care teams.

ACKNOWLEDGMENTS

We would like to thank the team of radiation therapists, dosimetrists, physicists, and physicians that aided in the clinical implementation.

CONFLICT OF INTEREST STATEMENT

Amy S. Yu and Lawrie Basil Skinner have a US patent on field shaping devices utilizing tungsten alloy ball bearings.

TABLE 1 Patient-specific case details.

FIGURE 1 (a) Top-down view of three patient photon blocks. The rows from top to bottom correspond to patients A, B, and C, respectively, while the left column is Cerrobend and the right is 3D-printed. Note the Cerrobend blocks A, C are extended inferiorly out of the field to allow for additional mounting screws. (b) Side view of the three patient photon blocks in the same row, column arrangement as (a).

FIGURE 2 (a) A top-down wire-frame view of the automatically generated computer-aided-design model of the 3D-printed photon block, with features noted. (b) A 6 megavolt image of the 3D-printed photon block, with features noted. The 3D-printed baseplate is 1 cm larger than the treatment field in all jaw directions.
FIGURE 3 Megavoltage (MV) images taken during measurement with a pinpoint ion chamber either underneath the photon blocks or between the photon blocks, at clinically delivered MV energies of 6, 15, and 6 MV for patients A, B, and C, respectively. Regions where either the Cerrobend or 3D-printed block extends outside of their intersection are represented, respectively, as green and pink.

FIGURE 4 Percentage difference to the Treatment Planning System (TPS) ratio for three distinct patients using both 3D-printed and Cerrobend blocks. Measurements were conducted under a photon block at linac energies of 6, 10, and 15 MV, using a PTW PinPoint ionization chamber within a 100 cm SSD, 5 cm solid water depth, and 5 cm solid water backscatter configuration.

FIGURE 6 Overlaid megavoltage image line profiles through 3D-printed and Cerrobend blocks for patients A, B, and C. Patients A, B, and C correspond to rows (a), (b), and (c), respectively.

TABLE 4 Jaccard distances for each patient case.

AUTHOR CONTRIBUTIONS

Joseph B. Schulz, Piotr Dubrowski, Clinton Gibson, Amy S. Yu, and Lawrie Basil Skinner contributed to the conception and design of the study. Joseph B. Schulz collected all data. Joseph B. Schulz and Piotr Dubrowski performed the data analysis. Joseph B. Schulz wrote the first draft of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.

TABLE 2 Data interfaced from the ESAPI to Python.
5,868.8
2024-01-11T00:00:00.000
[ "Medicine", "Engineering" ]
Investigation of SH-Wave Fundamental Modes in Piezoelectromagnetic Plate: Electrically Closed and Magnetically Closed Boundary Conditions

The propagation (first evidence) of new dispersive shear-horizontal (SH) acoustic waves in piezoelectromagnetic (magnetoelectroelastic) composite plates is considered theoretically. The studied two-phase composites (BaTiO3-CoFe2O4 and PZT-5H-Terfenol-D) possess the piezoelectric phase (BaTiO3, PZT-5H) and the piezomagnetic phase (CoFe2O4, Terfenol-D). The mechanical, electrical, and magnetic boundary conditions applied to both the upper and lower free surfaces of the plate are as follows: the mechanically free, electrically closed, and magnetically closed surfaces. As a result, the fundamental modes of two new dispersive SH-waves recently discovered in the book [Zakharenko, A.A. (2012) ISBN: 978-3-659-30943-4] were numerically calculated. It was found that for large values of the normalized plate thickness kd (k and d are the wavenumber and plate half-thickness, respectively), the velocities of both the new dispersive SH-waves can approach the nondispersive SH-SAW velocity of the piezoelectric exchange surface Melkumyan (PEESM) wave. It was also discussed that for small values of kd, the experimental study of the new dispersive SH-waves can be preferable in comparison with the nondispersive PEESM wave. The obtained results can be useful for the creation of various technical devices based on (non)dispersive SH-waves and two-phase smart materials. The new dispersive SH-waves propagating in the plates can also be employed for nondestructive testing and evaluation. Also, it is obvious that the plates can be used in technical devices instead of the corresponding bulk samples for further miniaturization.

Introduction

The function of piezoelectric and piezomagnetic materials in transducers is well known and was already discussed about half a century ago; for instance, see [1]. Today there are several ways to bond these two dissimilar materials together to form unique smart materials with desired or new properties. For instance, one three-dimensional material representing a piezoelectric phase can be used as a matrix with zero-dimensional inclusions (particles) of the piezomagnetic phase, or vice versa. This two-phase piezoelectromagnetic (PEM) composite has the (3-0) connectivity. An example of such a PEM composite is BaTiO3-CoFe2O4 [2] [3] of hexagonal class 6 mm, consisting of the BaTiO3 piezoelectric phase and the CoFe2O4 piezomagnetic (magnetostrictive) phase. References [2] [3] provide the measured material constants as a function of the percentage volume fraction (VF) of BaTiO3 in the BaTiO3-CoFe2O4 composites. These bulk composites, however, can also be formed as plates because the plate form can be preferable for further miniaturization of some smart-material technical devices.
One of the other possible connectivities between the piezoelectric and piezomagnetic phases to form PEM composites is a multi-layered (sandwich-like) structure. Such laminated composites have the (2-2) connectivity. These piezoelectromagnetic (magnetoelectroelastic) laminates can be composed of linear homogeneous piezoelectric and piezomagnetic layers with a perfect bonding at each interface. It is natural that averaged material properties of such laminated plates can be treated. Concerning the transversely isotropic PEM plates, the material parameters of the BaTiO3-CoFe2O4 and PZT-5H-Terfenol-D laminated composites [4]-[6] are well known, see also paper [7]. Reference [8] states that research on the behavior of PEM laminate composites is relatively recent. The PEM laminates can demonstrate significant interactions between the elastic, electric, and magnetic fields and have direct applications in sensing and actuating devices, for instance, damping and control of vibrations in structures.

The two-phase materials possessing both the piezoelectric and piezomagnetic effects can actually have the magnetoelectric effect. In magnetoelectric materials, the value of the electromagnetic constant α is used to evaluate the strength of the magnetoelectric interactions. The magnetoelectric materials can be formed as the composites discussed above and also exist in single-phase form such as monocrystals. In comparison with the other PEM composites and PEM single-phase materials, the laminated composites can possess a very strong magnetoelectric coupling [9]. The most famous PEM monocrystals are Cr2O3, LiCoPO4, and TbPO4 [9], and they can also be used in the form of bulk materials or monocrystal plates. The magnetoelectric effect in single-phase materials is usually very small, and none of them has combined large and robust electric and magnetic polarizations at room temperature. However, it is essential to mention the Sr3Co2Fe24O41 Z-type hexaferrite [10] with a hexagonal structure discovered in 2010. It is thought that the Sr3Co2Fe24O41 hexaferrite, with its realizable magnetoelectric effect, can already be sufficient for practical applications. A comprehensive list of review works on magnetoelectric materials and their applications can be found in [9]-[52].

The purpose of this short report is to calculate the wave characteristics for some piezoelectromagnetic composites. This was not carried out in [53] because that work did not study concrete composites. Using the corresponding dispersion relations given in the following section, these calculations can be performed only numerically. The first two composites which will be researched are BaTiO3-CoFe2O4 and PZT-5H-Terfenol-D. However, it is necessary to start with a brief review of the theory and then report the results of the calculations. This is the main purpose of the following section.
Theory and Results First of all, it is necessary to mention the high symmetry propagation directions for the transversely isotropic piezoelectromagnetic materials of class 6 mm.In these propagation directions, the propagation of the SH-waves possessing the anti-plane polarization must be coupled with both the electrical and magnetic potentials.This means that such propagation directions can also support the propagation of the purely mechanical Lamb waves possessing the in-plane polarization.As a result, this report has no interest in the propagation of purely mechanical Lamb waves.So, the wave propagation direction is along the plate surface and perpendicular to both the surface normal and the sixfold symmetry axis of the studied material of class 6 mm.The surface normal and the sixfold symmetry axis must be also perpendicular to each other [53]- [55].It is possible to state that many such propagation directions can be found.However, the SH-wave speed in such propagation directions must be the same because this is the transversely isotropic case.Note that the high symmetry propagation directions are well-known and can be also found in [56] [57].Also, it is worth noting that in the high symmetry propagation direction, the following independent nonzero material parameters exist: the stiffness constant C, piezomagnetic coefficient h, piezoelectric constant e, dielectric permittivity coefficient ε, magnetic permeability coefficient μ, and electromagnetic constant α, where C = C 44 = C 66 , e = e 16 = e 34 , h = h 16 = h 34 , ε = ε 11 = ε 33 , μ = μ 11 = μ 33 , and α = α 11 = α 33 [53]- [55], see also the famous books cited in [58] [59]. The boundary conditions in the case when the treated material simultaneously possesses the piezoelectric, piezomagnetic, and magnetoelectric effects are perfectly described in [60].To obtain the dispersion relations for the case of the mechanically free, electrically closed, and magnetically closed surfaces of the piezoelectromagnetic plate, the following points must be passed through: • consider thermodynamic variables and functions; • write constitutive relations; • thermodynamically define material constants; • compose equilibrium equations; • exploit the electrostatics and magnetostatics in the quasi-static approximation; • constitute coupled equations of motion in the differential forms; • represent the tensor form of the equations of motion; • treat the suitable high symmetry propagation directions for SH-waves; • find the eigenvalues and the corresponding eigenvectors; • employ the mechanical, electrical, and magnetic boundary conditions at the upper and lower surfaces of the piezoelectromagnetic plate.As a result, the corresponding dispersion relations can be obtained for this case of the mechanical, electrical, and magnetic boundary conditions.Following book [53], the following two dispersion relations for the determination of the velocities V new10 and V new11 of the tenth and eleventh new SH-waves propagating in the piezoelectromagnetic plate can be written: In Equations ( 1) and ( 2), the velocities V new10 and V new11 are numbered similar to those used in [53] to avoid any confusion.It is also central to state that dispersion relations (1) and ( 2) are valid for calculation of the velocities of the fundamental modes when the velocities V new10 and V new11 are smaller than the speed V tem of the shear-horizontal bulk acoustic wave (SH-BAW) coupled with both the electrical and magnetic potentials.The value of the SH-BAW velocity V tem can be 
evaluated with the following expression:

V_tem = [C(1 + K_em^2)/ρ]^(1/2),   (3)

where ρ is the mass density of the piezoelectromagnetic plate. In Expressions (1), (2), and (3), the following material parameter is also present:

K_em^2 = (μe^2 + εh^2 − 2αeh)/[C(εμ − α^2)].   (4)

The material parameter K_em^2 defined by Expression (4) is called the coefficient of the magnetoelectromechanical coupling (CMEMC). This dimensionless coefficient represents the material characteristic of the two-phase materials. However, in the dispersion relations written above, a second dimensionless coefficient denoted by K_m^2 can be found. This coefficient of the magnetomechanical coupling (CMMC) represents a material characteristic of a purely piezomagnetic material and is defined by the following expression:

K_m^2 = h^2/(Cμ).   (5)

It is transparent in dispersion relations (1) and (2) that for kd → ∞, both the velocities V_new10 and V_new11 of the new dispersive SH-waves in the piezoelectromagnetic plate will approach some nondispersive SH-SAW velocity recently discovered by Melkumyan [61]. This SH-SAW velocity is called the piezoelectric exchange surface Melkumyan (PEESM) wave [6] and is defined by Expression (6). Also, it is possible to discuss in this report that for the case of a very small value of the electromagnetic constant α, this constant can be neglected, namely α = 0. This discussion is missed in [53]. Therefore, Expression (4) for the CMEMC K_em^2 can be reduced to the following form:

K_em^2 = K_e^2 + K_m^2, with K_e^2 = e^2/(Cε),   (7)

where K_e^2 represents the coefficient of the electromechanical coupling (CEMC). It is a material characteristic of pure piezoelectrics. Using Expression (7) instead of (4), it is possible to write the corresponding definitions, Expressions (8) and (9), instead of the SH-BAW and SH-SAW velocities defined by Expressions (3) and (6), respectively.

So, it is now possible to report the obtained results concerning the calculation of the dispersion curves for concrete transversely isotropic composite materials, namely to investigate the dependencies of the velocities V_new10 and V_new11 on the normalized half-thickness kd of the piezoelectromagnetic plate, where k and d respectively stand for the wavenumber in the propagation direction and the plate half-thickness. This is the main purpose of this paper. It is possible to briefly discuss the piezoelectromagnetic composites given in Table 1 for comparison. The table lists the material parameters for two famous composites, BaTiO3-CoFe2O4 and PZT-5H-Terfenol-D. The material parameters of the composites were borrowed from papers [4]-[6]. In the table, one can find that the value of εμ for the BaTiO3-CoFe2O4 composite is an order of magnitude larger than that for the other composite. Also, both composites have the dominant piezoelectric phase because the CMMC K_m^2 is significantly smaller than the CEMC K_e^2. It is well known that the PZT-5H-Terfenol-D composite can possess a large value of the electromagnetic constant α, which is the characteristic of the magnetoelectric effect. Therefore, studying this composite is preferable in this short report.
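A small Python sketch can make these coupling definitions concrete. The formulas follow the standard expressions quoted above; the material constants below are placeholder values (not the Table 1 data), and α is set from the quoted relation α^2 = 0.01 εμ for the PZT-5H-Terfenol-D case.

```python
import math

def coupling_coefficients(C, e, h, eps, mu, alpha):
    """Return (K_e^2, K_m^2, K_em^2) from the definitions quoted in the text."""
    k_e_sq = e**2 / (C * eps)                       # CEMC
    k_m_sq = h**2 / (C * mu)                        # CMMC, Expression (5)
    k_em_sq = (mu * e**2 + eps * h**2 - 2 * alpha * e * h) / (C * (eps * mu - alpha**2))  # CMEMC, Expression (4)
    return k_e_sq, k_m_sq, k_em_sq

def sh_baw_velocity(C, rho, k_em_sq):
    """SH-BAW velocity V_tem coupled with both potentials, Expression (3)."""
    return math.sqrt(C * (1.0 + k_em_sq) / rho)

# Purely illustrative constants (SI units) for a transversely isotropic PEM composite.
C, rho = 4.4e10, 5.7e3        # N/m^2, kg/m^3
e, h = 5.8, 275.0             # C/m^2, N/(A*m)
eps, mu = 5.6e-9, 8.1e-5      # F/m, N/A^2
alpha = math.sqrt(0.01 * eps * mu)   # electromagnetic constant, alpha^2 = 0.01*eps*mu

k_e_sq, k_m_sq, k_em_sq = coupling_coefficients(C, e, h, eps, mu, alpha)
print(f"K_e^2={k_e_sq:.3f}, K_m^2={k_m_sq:.3f}, K_em^2={k_em_sq:.3f}, "
      f"V_tem={sh_baw_velocity(C, rho, k_em_sq):.0f} m/s")
```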
In the table, the value of the CMEMC for the PZT-5H-Terfenol-D composite is K_em^2 ~ 0.8. Therefore, the value of the CMEMC for a hypothetical (composite) material graphically studied in Figure 1 is chosen as K_em^2 = 0.8. This figure shows the dependence on the second parameter, K_m^2, that can be found in dispersion relations (1) and (2). To understand the influence of this parameter on the fundamental mode dispersion relations, the values of K_m^2 were chosen as follows: K_m^2 = 0.1, 0.4, and 0.7. In Figure 1, dispersion relations (1) and (2) are shown by the grey and black lines, respectively. It is natural that when the value of K_m^2 is significantly smaller than the value of K_em^2 (the case of K_m^2 = 0.1 in the figure), both the normalized velocities V_new10/V_tem and V_new11/V_tem can be situated well below the SH-BAW velocity V_tem coupled with both the electrical and magnetic potentials. In contrast, the large value of K_m^2 = 0.7, which is slightly below the value of K_em^2 = 0.8, leads to the case when the velocities V_new10/V_tem and V_new11/V_tem for the corresponding fundamental modes of the new dispersive SH-waves are positioned just below V_tem. This already looks like the dispersion relations shown in Figure 2 for the BaTiO3-CoFe2O4 composite possessing the significantly smaller value of the CMEMC.

Table 1. The material parameters of the piezoelectromagnetic composites. For these parameters there are the following values of the electromagnetic constant α: α^2 = 0.0001 εμ [s^2/m^2] and α^2 = 0.01 εμ [s^2/m^2] for BaTiO3-CoFe2O4 and PZT-5H-Terfenol-D, respectively.

Figure 2 shows dispersion relations (1) and (2) for the fundamental modes of the new dispersive SH-waves propagating in the piezoelectromagnetic plates. The two composite materials listed in the table, BaTiO3-CoFe2O4 and PZT-5H-Terfenol-D, are compared. It is clearly seen in Figure 2 that the BaTiO3-CoFe2O4 composite with the smaller value of the CMEMC K_em^2 exhibits the weakly dispersive behavior of the velocities V_new10/V_tem and V_new11/V_tem. However, the values of the velocity V_new10 at the plate half-thickness kd → 0 can be situated significantly below the value of the SH-BAW velocity V_tem. This is true because the BaTiO3-CoFe2O4 composite has a value of V_tem twice as large as that of the PZT-5H-Terfenol-D composite. This peculiarity shows that using plates instead of the corresponding bulk samples can be preferable for investigation of the SH-wave propagation. The plates are also used to further miniaturize various technical devices based on smart materials. It is well known that the sensitivity of some technical devices (for instance, biological and chemical sensors, delay lines) based on different dispersive and non-dispersive SH-waves can be more significant. It is also apparent that the piezoelectromagnetic SH-waves can be produced by electromagnetic acoustic transducers (EMATs) [62]. This non-contact method (EMAT) can offer a series of advantages in comparison with the traditional piezoelectric transducers [63] [64]. The results of this short report can also be useful for the constitution of technical devices with a higher level of integration such as lab-on-a-chip, etc.
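Because dispersion relations (1) and (2) can only be solved numerically, the velocity of a fundamental mode at a given kd is typically found by root finding below V_tem. The sketch below shows only that numerical procedure: the dispersion_lhs function is a stand-in placeholder, since the actual relations come from the cited book and are not reproduced in this report.

```python
import numpy as np
from scipy.optimize import brentq

def dispersion_lhs(v, kd, k_em_sq, k_m_sq):
    """Stand-in for the left-hand side of dispersion relation (1) or (2),
    written for the normalized velocity v = V/V_tem (0 < v < 1).
    The real expressions are not reproduced here."""
    # Illustrative structure only: a function that changes sign at the modal velocity.
    return np.tanh(kd * np.sqrt(1.0 - v**2)) - (k_m_sq / k_em_sq) * v

def fundamental_mode_velocity(kd, k_em_sq, k_m_sq, n_scan=2000):
    """Bracket and solve for the normalized fundamental-mode velocity below V_tem."""
    vs = np.linspace(1e-4, 1.0 - 1e-6, n_scan)
    f = [dispersion_lhs(v, kd, k_em_sq, k_m_sq) for v in vs]
    for i in range(len(vs) - 1):
        if f[i] * f[i + 1] < 0.0:
            return brentq(dispersion_lhs, vs[i], vs[i + 1], args=(kd, k_em_sq, k_m_sq))
    return None  # no root bracketed at this kd

for kd in (0.1, 1.0, 10.0):
    print(f"kd = {kd:5.1f}  ->  V/V_tem = {fundamental_mode_velocity(kd, 0.8, 0.1)}")
```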
Conclusion

The propagation of new dispersive acoustic SH-waves in the transversely isotropic PEM plates was considered. The two-phase PEM composites BaTiO3-CoFe2O4 and PZT-5H-Terfenol-D were studied. For the plates, the case of the mechanically free, electrically closed, and magnetically closed surfaces was treated. The fundamental modes of two new dispersive SH-waves were numerically calculated. It was found that for large values of kd, the velocities of both the new dispersive SH-waves can approach the nondispersive SH-SAW velocity of the PEESM wave. For small values of kd, the velocity of the corresponding new SH-wave in the plate can be situated significantly below the value of the SH-BAW velocity V_tem. This can be convenient for experimental studies of the new dispersive SH-waves propagating even in such PEM materials as BaTiO3-CoFe2O4, in comparison with the nondispersive PEESM wave. It is thought that such new dispersive SH-waves can also be exploited in nondestructive testing and evaluation. It is natural that various technical devices based on dispersive SH-waves and two-phase smart materials can be constructed, for instance, dispersive wave delay lines.

Figure 1. The dispersion relations for the fundamental modes of the new dispersive SH-waves propagating in the piezoelectromagnetic plates. For K_em^2 = 0.8, the following values of K_m^2 are used: K_m^2 = 0.1, 0.4, and 0.7. The grey and black lines are for dispersion relations (1) and (2), respectively.

Figure 2. The fundamental modes of the new dispersive SH-waves propagating in the piezoelectromagnetic plates. The black lines are for the BaTiO3-CoFe2O4 composite and the grey lines are for the PZT-5H-Terfenol-D composite.
3,768.8
2014-05-20T00:00:00.000
[ "Physics", "Engineering" ]
Introspecting knowledge

If we use "introspection" just as a label for that essentially first-person way we have of knowing about our own mental states, then it's pretty obvious that if there is such a thing as introspection, we know on that basis what we believe, and want, and intend, at least in many ordinary cases. I assume there is such a thing as introspection. So I think the hard question is how it works. But can you know that you know on the basis of introspection? Well, that all depends on how introspection works. I present one account of how introspection works and argue that on that account, you can know that you know ordinary empirical things on the basis of introspection. As far as how we know about them is concerned, there's no principled difference between the factive and non-factive mental states.

sense of lumping our knowledge of our own inner life in with our knowledge of easy math and basic logic, no matter how metaphysically and epistemologically distinct those things may otherwise have seemed. Immunity to a certain kind of skeptical scenario is one thing. Infallibility is quite another. But maybe what's behind the temptation for infallibility is the idea that there just isn't all that much to knowing that you believe that p over and above believing that p. And maybe, the thought continues, if there's a logical connection between the facts, the fact that you believe that p and the fact that you know you believe that p, this might explain the apparent lack of a procedure and the apparent ease with which self-knowledge comes when it does. The problem with the infallibilist's position, of course, is that it's just so implausible. For some of us, self-deception is a daily occurrence. But in our zeal to reform the Over-Enthusiasts, we shouldn't forget just how easy self-knowledge is. Usually, there's not that much you have to do to figure out whether you believe that p. And perhaps we should add that in a surprising number of cases, you are in a better position to know what you believe than I am to know what you believe. How's that work? There's another way of thinking about essentially first-personal knowledge that's not based on the idea of immunity to error. On the one hand, there's the idea of a distinctive route. You have a way of knowing about your own mental states that depends essentially on the fact that they're yours. What's thought and thought about depend on the same point of view. So this is one part of the first-person/third-person asymmetry. My way of forming beliefs about your mental states is different from yours. Maybe it's less direct or immediate. And on the other hand, there's the idea of a special status. The route, whatever it is, confers a high degree of justification, or warrant, or entitlement, or whatever you like. This is another part of the asymmetry. Your beliefs about your own mind have better epistemic credentials than my beliefs about your mind, at least in the ordinary, everyday case. So we start with the idea that when we're talking about self-knowledge, we're not just talking about any old knowledge you have about yourself. We're talking about your knowledge of your own mental states. But we shouldn't be absolutely certain that only facts of a specific metaphysical kind are available to us in a first-person way: only facts that are inner and present can be known this way. In the ordinary, everyday case, if you already have a plan, then you know what you'll be having for dinner.
And the fact that makes your belief true is not only all the way out there in your kitchen. It's all the way out there in the future. But if you know what's going to happen because you've made up your mind that it's going to happen, then your knowledge of what you will be doing might be essentially firstpersonal after all (Anscombe 1957;Gibbons 2010). If what we're after is something we couldn't possibly be wrong about no matter what, we should abandon all hope. But if what we're after is something essentially first personal, then maybe your knowledge of your intentional actions could be like that. We should wait and see. If we give up on infallibility and the idea that there's a logical connection between first-and second-order belief, there's no longer an obvious motivation for the retreat to the inner. Typically, often enough, and with surprising ease, we know what we want, and we know what we believe. And we should add to this list of standard examples. Typically, often enough, and with surprising ease, we know that we know. We know what we can see from where we are. And we know what we're doing on purpose. Our way of knowing these things about ourselves is very much like other people's ways of knowing the same things about themselves. And it's not that much like other people's way of knowing the same things about us. There's something essentially first personal about this way of knowing. I think that knowing that you know is an example of plain old regular selfknowledge. It's not the kind of thing you need to go to a therapist for. In the first part of this paper, I'll sketch a picture of how ordinary self-knowledge works and look at some objections to extending the picture to the case of knowing that you know. In the second part, I'll raise what I take to be the fundamental question about the picture and sketch an answer to that question. To the extent that the picture and the answer work, they work just as well for knowing that you know as they do for knowing that you believe. (Edgley 1969;Evans 1982;Moran 2001). Eventually, I'll give you my account of what it comes to. But for now, we start with the basic idea. (Transparency) It's okay, and probably more than just okay, to answer questions about the mind by thinking about the world. You answer the question of whether you believe that p by thinking about whether or not p. But you also answer questions about what you want for dinner by thinking about food. At least, I do. But the idea is not just that we do it this way. The idea is that it's epistemically okay to do it this way. And this can seem kind of puzzling. Why is it okay to answer questions about one thing by thinking about something you know to be different? But some people think it's more than just okay (Moran 2001). Even if you did it some other way, if you couldn't do it this way, that would show that there's something seriously wrong with you in the rationality department. And this can seem even more puzzling. We'll come back to transparency. What's the status? In the olden days, they used to think about this in terms of some sort of general principle: infallibility, indubitability, incorrigibility, etc. (Alston 1971). I assume we're all past this by now, or anyway, think we are. But infallibility and friends are just special cases of a more general strategy: counting kinds of possibility for error. So the standard view is that we have privileged access to our non-factive mental states like beliefs, desires, and experiences. 
But we don't have privileged access to our factive mental states and events like knowledge, perception, and intentional action. Here's one way of making that view seem plausible. Suppose you have a false belief that p. You may well correctly believe that you believe that p, but if you believe that you know that p, that would be a mistake. Now suppose you believe that you believe that p because you desperately want p to be true. You don't really believe that p. Now your belief that you believe that p is false, and so is your belief that you know that p. You're wrong about whether you believe only in the second case, but you're wrong about whether you know in both cases. So there are more kinds of error possibilities for first-person knowledge claims than there are for belief claims. So either we have more privileged access to our beliefs than to our knowledge, or, as the standard view would have it, we have privileged access to what we believe, but not to what we know. But if you're counting kinds of error possibilities, then the best-case scenario is zero possibilities for error. And that's just what the traditional definition of infallibility says.

(Infallibility) □(p ∈ R)(Bp → p)

Necessarily, for every proposition within a certain range, if you believe it then it's true. Now let R be the set of necessary truths. If you believe any one of them, you're guaranteed to be right. But this has no epistemic consequences whatsoever. Most importantly, it says nothing about why you believe that p. If you believe that arithmetic is incomplete because it just seems that way, you're guaranteed to be right. But this says nothing about the epistemic credentials of the belief because the epistemic credentials of a belief depend on what it's based on.

When justifiers are also truth makers

There's another way of thinking about the special status. Suppose that your belief that p is the reason for which you believe that you believe that p. I represent that like this.

Bp
----- (RFW)
BBp

This is not an argument. This is a description of a transition that may or may not occur. But if it does occur, the mental state on top is the reason for which you go into the state on the bottom. This is not an inference. If you infer that q from your beliefs that p and that if p then q, then the justification for the conclusion depends essentially on the justification for the premises. If your beliefs in the premises are not justified, then neither is your belief in the conclusion. But if those beliefs are justified, then so is your belief in the conclusion, and its justification is derived from the justification of the beliefs it's based on. But you could have an unjustified, false belief that p and still know that you believe that p. So the justification of the second-order belief is not derived from the justification for the first-order belief. We have a traditional model for this. Suppose it seems to you that p. On certain views, the experience itself justifies the belief that it seems to you that p. I represent that like this.

It seems to you that p
---------------------- (RFW)
You believe that it seems to you that p

And on certain views, its seeming to you that p, the experience itself, can justify you in believing that p, at least in the absence of reason to doubt.

It seems to you that p
---------------------- (RFW)
You believe that p

Neither of these is an inference in the traditional sense. Assuming that experiences themselves are not justified, or warranted, or knowledge, the justification of the bottom state is not derived from that of the top. Inference transmits knowledge. It's not a source of knowledge.
But introspection is a source of knowledge. If we just look at the first two, the ones that involve a second-order belief on the bottom, I'm tempted to go on in the same way. The desire that p is itself your reason to believe that you desire that p. RFW The idea that first-order states are reasons for second-order states about them appears in Peacocke (2000). But my view is dramatically less fancy than his. I think you can explain it all in terms of plain old regular rational causation. Ram Neta (2011) also presents a version of this view. We'll come back to him. So the basic idea isn't new. In fact, I've tried it out myself before (Gibbons 1996). You can read this idea into Byrne (2005), if you understand rule following in terms of reasons for which. But Byrne does insist that it is an inference. It's just not the kind of inference that transmits knowledge, ignorance, or anything else. So Byrne would only accept my first example. According to him, the belief that o is desirable justifies your belief that you desire o (Byrne 2011). On Byrne's picture of rule following, the only thing left to the idea of inference is that it's got to be a belief on top. I think my way of going on in the same way is better than Byrne's. In the language of reasons, the idea that it has to be a belief on top is the idea that the kinds of reasons that determine the rationality of the mental states they cause are always beliefs. So if you thought that the desire for the end, plus the belief that the means would achieve the end, can rationalize or make sense of taking the means, then you might, like Davidson (1963), call these things reasons for action. But if it's got to be a belief on top in order to be what Byrne calls inference, then either there's no such thing as practical inference, or desires are always and everywhere irrelevant to what it makes sense to do. In epistemology, the idea that it has to be a belief on top can be expressed with Davidson's motto that ''nothing can count as a reason for holding a belief except another belief' ' (1986). On this view, experiences themselves are epistemically irrelevant, and you get coherentism. Coherentists tried to use beliefs about experience to play the epistemic role that experiences themselves actually play (BonJour 1985), and this was one of the main reasons for the downfall of coherentism (Sosa 1980). What does justify those beliefs about experience that are supposed to play experience's role? Randomly made up beliefs about experience are just not as good, epistemically speaking, as beliefs that are based on what they're about. So I'll go on in my own way. Suppose that one of these transitions from the first order to the second order did occur. The first question to ask is if the resulting belief constitutes knowledge. Is it just an accident that your belief is true? No. The belief is justified on the basis of the fact that it's about. On one fairly common picture, the relation between justifiers and truth makers is usually not this close. According to the picture, in a case of perceptual knowledge, the fact that p causes it to seem to you that p, and its seeming to you that p justifies the belief that p. Assuming there's no funny business going on, the relation between the justifiers and the truth makers is close enough to constitute ordinary knowledge. But when the justifiers are themselves the truth makers, you get not only knowledge, but a special epistemic status. Now suppose that this could happen, and it did. 
You know that p
---------------- (RFW)
You believe that you know that p

Is it just an accident that your belief is true? No. In fact, given that the justifier is also the truth maker, it looks like privileged access. Of course, this isn't what always happens. It only happens in the good case. But suppose you believe that you believe that p because you desperately want p to be true, but you don't really believe that p. You confuse one mental state for another. Surely, no one in this century would ever say out loud that this sort of thing could never happen. But this possibility isn't supposed to call into question the idea that in the ordinary, normal case, we have privileged access to our beliefs. Now suppose you've got a false belief that p. This leads you to believe that you know that p. You've confused one mental state for another. Unless we simply assume the standard view, we have no reason to treat these two cases differently. But the standard view is exactly what's at issue. And if we don't treat the two cases differently, this possibility shouldn't call into question the idea that in the ordinary, normal case, we have privileged access to our knowledge.

Objections to extending

Ram Neta (2011) thinks there might be an objection to extending this picture to factive mental states. He doesn't quite argue for the standard view, but he thinks the best place to look for such an argument is in the following neighborhood. Maybe we can believe that we believe that p when and because we believe that p, but we can't believe that we know that p when and because we know that p. And we might think this is true because:

(A) You could easily know that p without believing that you know that p.

and

(B) You could easily believe that you know that p but not know that p.

It's not clear what work (A) does. It is easy to know that p without believing that you know that p. But it's just as easy to believe that p without believing that you believe that p. As far as (A) is concerned, there's no difference between belief and knowledge. So if there's anything to this, it must be in (B). So maybe the idea is that you could mistake a false belief for knowledge. But this is just confusing one mental state for another. If you say that this is impossible in the case of non-factive mental states but possible in the case of factive mental states, we'd have a principled difference between the factive and the non-factive. But I dare you to say that out loud. So maybe the idea is that it's just easier to confuse one mental state for another in the factive case than in the non-factive case. You think you're in love, but it's really the other thing. How easy is that? Your torturers tell you that they're going to put their cigarette out on the back of your neck. But they put an ice cube there instead. You think it feels hot, but it really feels cold. Everybody makes this mistake. You think you're hungry, but you're really just bored, or upset, or stressed. There's nothing particularly weird about this, and it's really not the least bit uncommon. If we think the line around our epistemic natural kind separates our knowledge of belief from our knowledge of knowledge, we need a principled difference between the non-factive and the factive. There is no principled difference.

Safety first

Over-Enthusiasts about self-knowledge are often chastised for underestimating how easy it is to get it wrong with respect to the inner life. But it's a different mistake to underestimate how easy it is to get it right when it comes to knowledge. It's plausible to suppose that safety is necessary for knowledge.
1 You know that p only if you couldn't easily be wrong in believing that p. Suppose you look at the door and come to believe that it's open. How easy is it for you to be wrong about that? It's not impossible. But there's a difference between easy and possible. And that's the difference between safety and skepticism. If this is an ordinary, everyday case, it's not that easy for you to be wrong about the door. The situations in which you're mistaken are not that close to the actual world. So in that kind of case, you could have ordinary empirical knowledge that the door is open. So suppose you know that the door is open, and suppose further that you believe that you know. How easy is it for you to be mistaken in your second-order belief? It's not impossible. But are there nearby situations in which you're wrong in thinking that you know that p? They're not situations in which p is false. If there were nearby worlds where you think the door is open when it's not, then you don't know the door is open, at least if safety is necessary for knowledge. But we're assuming you do know the door is open. The easiest kind of case to come up with where you're wrong in thinking that you know that p is the case where p is false. But first-order knowledge rules this case out not only as actual but also as nearby. So the second-order belief that you know is at least roughly as safe as the knowledge it's based on. Gaining second-order knowledge from first-order knowledge is easy. It's not guaranteed, and it's not impossible for the two to come apart. If the route up from each order to the next were guaranteed, we'd get the familiar regress. You'd have to know infinitely many things in order to know anything. But if the route up from first-order to second-order belief were guaranteed, then you'd get a similar regress. You'd have to believe infinitely many things in order to believe anything. So the lack of a guarantee doesn't point to a principled difference between the factive and the non-factive. Transmission failure Here's one more objection to extending the picture to our knowledge of factive mental states. According to me, you can know that you know that p on the basis of introspection. And you could know a priori that knowing that p entails that p is true. If we define reflective knowledge as knowledge that's based only on introspection and the a priori, doesn't my theory entail that you could have reflective knowledge that the door is open? You just infer it from your introspective knowledge that you know plus your a priori knowledge of the entailment. Whether or not an epistemic status will transfer over known entailment depends on what that status is (Wright 1985). Suppose that the status conferred by introspection is that you couldn't be wrong in believing that p. If your belief that p has this status, and you know that p entails q, then you couldn't be wrong in believing that q. In fact, this status would transfer over entailment whether known, or believed, or not. And this should make you wonder whether it's a distinctively epistemic status at all. Suppose instead that the status conferred by introspection is that your belief that p is justified on the basis of the fact that it's about. Even if you know that p entails q, there's no reason to think that your belief that q will be justified on the basis of what it's about. That belief may well be based on your beliefs that p and that if p then q. 
If we think of the special status in terms of what's based on what, we should not expect it to transfer over known entailment. And when you look at the picture of how things go when things go well, we should expect that it would not transfer in these cases. If I'm right, your introspective knowledge that you know is based on your knowledge that the door is open, and not the other way around. So the first-order belief is not based on introspection and the a priori.

Part II: the question

So there's a picture of how self-knowledge works when it does, and the picture applies equally to the factive and non-factive mental states. And the picture tells you what the special status is: it's when justifiers are also truth makers. Aren't they lovely? There are various things to be said in favor of these transitions. They're reliable. In fact, they're hyper-reliable. If you form the second-order belief in this way, that belief is guaranteed to be true. Furthermore, we philosophers can know a priori that the transitions are hyper-reliable. That's pretty good. But suppose someone believes that the sky is blue, and concludes on that basis that arithmetic is incomplete. We philosophers can know a priori that this transition is hyper-reliable. But we're not impressed with the epistemic credentials of this belief. Thinking about what's good about the transitions in terms of a guarantee of truth is just the same thing as thinking about the special status in terms of counting kinds of error possibilities. And this explanation leaves many of us feeling a little cold. But we all have an inner tortoise (Carroll 1895). We know that we shouldn't listen to our inner tortoise. But it's still in there. What our inner tortoise wants is not just that the transition is hyper-reliable or that somebody else knows that it's hyper-reliable. What your inner tortoise really wants is that the subject knows that the transition is hyper-reliable. The tortoise wants the subject to see the connection between the thing on top and the thing on the bottom. And seeing the connection must be a third mental state distinct from the things on top and the bottom. It makes no difference whether you call the third mental state accepting a premise or accepting a rule. It's a mental state whose epistemic credentials can be called into question. Suppose you arrive at the belief that q on the basis of inference from something else. And suppose it looks at first as though there are twelve necessary conditions for your belief that q to be justified. If you're not aware of one of these conditions, then it is for you as if it's not there. But it's a necessary condition on justification. So it is for you as if you're not justified. So your belief that q is not responsibly formed. So you need to be aware of them all. But that means that the awareness of the first twelve necessary conditions is itself a necessary condition on justification, and you need to be aware of it. If you like, you can say that you only need to be aware of the first three of the first twelve necessary conditions. 2 Now you have conditions on justification it is logically possible to satisfy, but you won't satisfy your inner tortoise. We don't really trust our inner tortoise. So even if we're left feeling a little cold by hyper-reliability, we're also left wondering if we should be. If the only options are the tortoise's impossible conditions and hyper-reliability, then maybe hyper-reliability is the best that we can do.
When questions are connected

But even if we ignore our inner tortoise, a serious question about our transitions remains. Why is the thing on top a reason for the thing on the bottom? This is the fundamental question about the picture. One problem with moving from the sky's being blue to the incompleteness of arithmetic is that the premise is about one thing, and the conclusion is about another. And something similar is going on with our transitions. So when we ask why the thing on top is a reason for the thing on the bottom, here's one thing that might be bothering us. How could it be okay to answer questions about one thing by thinking about something you know to be different? And this brings us back to transparency. Suppose someone asks you in a non-philosophical context if you know where the keys are. You know, but don't always notice, that this is a question about you. In the good case, you say, "Yeah," thereby answering the question of whether or not you know. But unless you've trained yourself to be annoying, you immediately follow this with, "They're on the dining room table." You don't always notice that this is a question about you because you answer the question about yourself by thinking about the world. There's no need to force knowing that you know into the mold of introspection. It's already in the mold. Yeah, but how's that work? It's okay to answer questions about one thing by thinking about something else when the questions are connected. And questions can be connected in different ways. Here's one kind of case where the questions are connected by way of a mental state that connects them. The questions are connected because you believe that the facts are connected. Suppose you know that if p then q, but you don't yet have a view about either p or q. Now the question of p is not independent of the question of q. And they're connected by way of your knowledge of the conditional. So you have to answer these questions together. Some cases are messier than our original case. Here you have no view about whether or not if p then q; no view about p; and no view about q. But the questions have come up, and you have to answer them together. Any evidence that p needs to be weighed against any evidence you might have for (if p then q) and not-q. And so on for all the relevant combinations. Why are these three questions connected? Is it by way of a linking mental state, perhaps your antecedent acceptance of the tortoise's first premise?

(T1) ((p & (p → q)) → q)

But what gives you the right to believe that? And calling it a rule changes nothing except how we ask the question. What gives you the right to accept the rule? And you need the tortoise's second premise, or the corresponding rule, to connect them. It's a very difficult question exactly how all this works. But one thing we know for sure is that the questions don't always have to be connected by way of a linking mental state. If you know that if p then q, you see the connection between p and q. If you also believe p, you need to put these two mental states together to get anything out of them. But putting them together approximately amounts to thinking about them at the same time. You don't need anything other than this, aside from being reasonable, to get something out of them. And being reasonable is not a third mental state whose epistemic credentials can be called into question.
It's not a matter of wanting to be reasonable; it's not a matter of having a second-order intention to comply with the demands of reason whatever those may be; and it's not a matter of knowing all of math and logic. It's a matter of responding to good reasons because they're good reasons. And this gives us some clue about what good reasons are. They're the kind of thing that can directly get you to avoid the bad combinations. Sometimes you revise. Sometimes you conclude. But the reasons themselves can keep you from believing p; believing if p then q; and believing not-q, at least when you're being reasonable. But good reasons don't just get you to avoid the bad. They can also get you to believe what you ought to believe and justify what they cause. It's tempting to represent certain sorts of inferences in the following way, call it (RFW): the justification for your belief that q is determined by the justification for your beliefs that p and that if p then q. And what makes this transition rational does the same thing for the transitions you make both in the original case and the messy case of answering questions together. What does make these transitions rational? One thing that ties these three together can be understood in terms of a wide-scope ''ought'' (Broome 1999). Suppose the questions of p, q, and if p then q have come up and they matter to you. If you do answer all these questions, the worst-case scenario is this. You believe that p; you believe that if p then q; and you believe that not-q. But the second-worst-case scenario is not that great. Here, you believe that p; you believe that if p then q; you're trying to figure out whether or not q; and you have no clue. Here's one way of putting the relevant wide-scope ''ought'' in English. If you're going to have views on these matters, your views ought to be like this: You believe that p and that if p then q only if you believe that q. Here's how that looks in shorthand; a rendering is set out below. You can satisfy this requirement of reason either by believing that q, or by failing to believe either p or if p then q. In the first instance, the wide-scope ''ought'' rules out the bad combinations. In effect, it tells you what to avoid. Don't believe this stuff unless you also believe that stuff. More than that, it doesn't on its own tell you what to do. But while (WSO) doesn't explicitly say anything about transitions like (MoPo), that is, the transition from the beliefs that p and that if p then q to the belief that q, we can use it to explain why the transitions are rational. We assume that the questions of p and so on have come up, and we think of (WSO) as ruling out as impermissible certain combinations of answers. From the mere fact that you do believe that p and that if p then q, the main thing that follows is that you ought to either revise or conclude. But now suppose that you ought to believe that p, and you ought to believe that if p then q. When it comes to the requirements of rationality, I think that what you ought to believe is determined by the evidence, broadly construed. (WSO), on its own, says that you ought to either revise or conclude. Your evidence for p and for if p then q says that you shouldn't revise. So your only permissible option left is to conclude. So that's what you ought to do. In this sort of case, a transition like (MoPo) could be rationally required. Given your evidence, it's the only reasonable move for the mind to make. If the evidence had been otherwise, (WSO) would license the transition from the beliefs that if p then q and that not-q to the belief that not-p.
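The shorthand and the two transitions just described can be rendered roughly as follows. The notation (with 'B' for belief and 'O' for the wide-scope rational ''ought'') is an illustrative reconstruction, not a quotation of the original display.

```latex
% Illustrative reconstruction of the requirement and transitions described in the text.
% Assumed notation: Bp = you believe that p, O[...] = the wide-scope rational "ought".
\begin{align*}
\text{(WSO)}\quad  & O\big[\,(Bp \wedge B(p \rightarrow q)) \rightarrow Bq\,\big]
   && \text{believe } q \text{ if you believe } p \text{ and } p \rightarrow q\\
\text{(MoPo)}\quad & Bp,\; B(p \rightarrow q) \;\Longrightarrow\; Bq
   && \text{the transition licensed when the evidence says not to revise}\\
                   & B(p \rightarrow q),\; B\neg q \;\Longrightarrow\; B\neg p
   && \text{the corresponding transition when the evidence had been otherwise}
\end{align*}
```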
At least sometimes, there's only one way to go to avoid the bad combinations. You can think of (WSO) as the idea that you are epistemically responsible for some, though certainly not all, of the logical facts themselves. You're responsible for the obvious ones. (WSO) doesn't just rule out the worst-case scenario where you believe that p, believe that if p then q, and believe that not-q. It also rules out the second-worst-case scenario where you believe that p, believe that if p then q, and have no clue about q. If the question has come up and you can't figure out the answer, this is a failure of rationality. If true, (WSO) is true regardless of whether you believe it, accept it, or represent it. In the language of reasons, if your justified beliefs that p and that if p then q are the reasons for which you believe that q, then as far as (WSO) is concerned, things have gone as well as they can, and you're justified in believing that q. But your acceptance of (WSO) need not ever be one of the reasons for which you believe anything. You can be reasonable in virtue of fulfilling this requirement without having to represent it. You don't need a linking mental state. One thing that matters from the point of view of rationality is how good your reasons really are, not just how good you think they are. Another thing that matters is that you avoid the combinations that really are bad, not just the ones that look bad to you. (WSO) is a rational requirement. You don't have to like it; you don't have to accept it; but you do have to comply. And when things go as well as they can, you get more than mere compliance, if that just means acting in accord with the rule. If your belief that arithmetic is incomplete is based on your belief that the sky is blue, you believe the right thing. But you don't believe it for the right reasons. If you believe that p, believe that if p then q, and also believe that q, we can suppose that in believing that q, you believe the right thing. But we don't know yet whether you believe it for the right reasons. But if your justified, first-order beliefs are the reasons for which you believe that q, then you can believe the right thing for the right reasons without having to represent the rule or requirement. Complaining about no-frills reliabilism, and even no-frills hyper-reliabilism is very tempting. But it's also risky business. You've got this reliable process. So your beliefs are either likely or guaranteed to be true. What more do you want? Maybe what you want is what your inner tortoise wants, not just the existence of a rational connection between premises and conclusion, but the knowledge that the transition is reliable. If that's what you want, you just can't have it. But if you're satisfied with the existence of the rational connection plus the ability to believe the right things for the right reasons, then it no longer looks as though the only options are the tortoise's impossible conditions and mere hyper-reliability. But other cases are messier. You haven't made up your mind about whether it's raining; whether to take an umbrella; or even whether you ought to take an umbrella if it's raining. Many different combinations of answers to these questions are just fine. But some combinations are no good from the point of view of rationality, like believing it's raining; forming the conditional intention; and deciding not to take your umbrella. Some practical cases Why are these three questions connected? 
It's not by way of an antecedent acceptance of the principle of instrumental reason whatever exactly that is. And it's not by way of a second-order intention to form the first-order intention if you have the belief. If the first-order intention to φ if p plus the belief that p can't get you to φ, then the second-order intention to intend to φ if you believe that p plus the belief that you believe that p won't be any better off. These questions are connected but not by way of a linking mental state. The reasons themselves get you to avoid the bad combinations, at least when you're being reasonable. One interesting thing happens when we turn to the practical case. Normative questions sneak in all by themselves. Did you hear it? It's about whether you ought to take your umbrella. Let's start with a two-person case. Suppose that for whatever reason, the question of what you're doing for dinner is not independent of the question of what your mate is doing for dinner. When it comes to speech acts, the following distinction is huge: there's starting the conversation about dinner, and there's inviting them to decide. Certain sentences are typically just as good for one as they are for the other: Factual: What are we doing for dinner? Mental: What do you want for dinner? and Normative: What should we do for dinner? In most ordinary circumstances, these sentences are pragmatically indistinguishable. Whichever sentence you pick, you need to make clear what you're doing. Any could be misinterpreted as doing the other thing. And when things go well, any could be used successfully to do either. But it's not just that you could start the conversation with any of these sentences. It's that in the normal case, people move seamlessly between talking about ''oughta,'' ''wanna,'' and ''gonna.'' And it doesn't seem to them as though they are in any way changing the subject. It's as if there's only one question here and three ways of putting it. And it is as if there's only one question here, and that's important. But it's only as if. It would be very, very bad to move from typical pragmatic indistinguishability to the conclusion that these questions all come to the same thing or that there's a single fact that determines the correct answer to them all. Literally identifying the factual question with the normative question commits you to this: (The Crazy Idea) □(φ)(we ought to φ iff we're going to) No one's that good. Let's turn to a one-person case. Now you have to make up your mind on your own what to do for dinner. You can ask yourself the following three questions: What am I doing for dinner? What do I want for dinner? And what should I do for dinner? It's as if there's only one question and three ways of putting it: the practical or deliberative question of what to do for dinner. But if we literally identified the questions, we'd be back to The Crazy Idea in its first-person form. The questions are different, but you have to answer them together because they're connected. And you answer these questions by thinking about food. There are three important things about this case. (1) The sentences are typically pragmatically indistinguishable because the questions (the things asked by use of the interrogatives on a particular occasion) are or ought to be answered together. You move seamlessly from one sentence to another in the ordinary case because you're trying to answer all three questions together. So the connection between the normative and the practical or deliberative question (whichever that turns out to be) is not just a surface feature of ordinary language.
It's a feature of practical thinking. (2) The questions don't have to be connected by way of a linking mental state. You answer the question of what to do for dinner by forming an intention. But in order to connect this question with the normative question, you don't need a second-order intention to intend to φ when you think that you should. And you don't need the antecedent acceptance of the anti-akratic principle in order to engage in practical reasoning. (3) The questions are not connected by way of a necessary connection between the facts. So of course it's not true that necessarily, if you believe you ought to φ then you intend to φ. The question of whether you ought to φ is not independent of the question of whether to φ. But the connection between the questions is normative and epistemic, not necessary. You ought to be able to answer these questions together. And you will answer these questions together when you're being reasonable. But nobody's reasonable all of the time. And if you can't answer these questions together, there's something wrong with you in the rationality department. So while certain combinations of answers to these questions are just fine, other combinations are ruled out by the requirements of rationality. I ought to φ; but I'm not going to. That's pretty much what it is to be akratic. It's to think that you have most good reason to do one thing while intending or doing something else. This is neither impossible nor uncommon. I do it all the time. But it involves you in some kind of irrationality. Again, we can think of the relevant rational requirement in terms of a wide-scope ''ought.'' Where φ-ing is something you're in a position to do for a reason, if the questions of whether you ought to φ and of whether to φ come up, then your answers to those questions ought to be like this: You believe you ought to φ only if you φ. O[BOφ → you φ] In the case of (WSO), there is a necessary connection between the facts that p and that if p then q on the one hand and the fact that q on the other. And we just can't help but think that the necessary connection between the facts must have something to do with the normative connection between the beliefs. But in the case of the anti-akratic principle, there's no necessary connection between believing that you ought to φ and φ-ing. So what is wrong with being akratic? In this case, the questions are connected not because of a necessary connection between the facts, but because of a connection between the reasons. Any reason to think you ought to φ is itself a reason to φ. And a good reason not to φ is a reason to think you shouldn't. So if your answer to the question of whether to φ comes apart from your answer to the question of whether you should, then you're being irrational somewhere. Either you're believing or acting for bad reasons. If the ''O''s in the principle express the requirements of rationality, then when you're akratic, you see yourself as irrational. What you do conflicts with your own sense of the reasons for doing it. But when you are being reasonable, and you do answer these questions together, the reasons themselves can get you to avoid the bad combinations. If you believe you ought to φ, you have two options. You can either revise the belief or φ. If all the evidence suggests that you ought to φ, then you shouldn't revise the belief, and the only permissible option left is to just do it.
If your belief that you ought to φ is the reason for which you φ, then you φ for the right reason. And if the things that get you to believe you ought to φ also get you to φ, then you φ for the right reason. But your acceptance of the anti-akratic principle need not ever be your reason for anything. A theoretical case Let's turn to a one-person theoretical case. You might ask yourself the following questions: (F) Is p true? (M) Do I believe that p? (J) Am I justified in believing that p? (N) Should I believe that p? (K) Do I know that p? You answer the questions (M), (J), (N), and (K) by thinking about the world, i.e., by trying to answer (F). When it comes to answering, (F) seems like the fundamental question. But it would be just crazy to identify the facts. Sometimes it's okay, and even more than just okay, to answer questions about one thing by thinking about something else. And this is okay when the questions are connected. The questions don't have to be connected by way of a linking mental state, and they don't have to be connected by way of a necessary connection between the facts. The questions can be connected by way of the reasons. It's okay to answer questions about one thing by thinking about something else when thinking about that something else gives you reasons to answer the questions one way or another. It's not that you always ought to give the same answers to all of these questions. Is there an even number of trees in Hyde Park? I don't know. Do I believe there's an even number? Absolutely not. Like Modus Ponens and akrasia, only certain combinations are ruled out. So which combinations are ruled out in our theoretical case? There are a lot, but here's a place to start. (O) It's raining, but I don't believe that. and (C) I believe it's raining, but it's not. It's not just that Moore-paradoxical beliefs involve giving different answers to (F) and (M). It's that there's a rational conflict between these two answers (Moore 1952). Moore-paradoxical beliefs are irrational or incoherent. That's supposed to be obvious. The hard question is why they're incoherent given that the proposition believed is contingent. If there's no necessary connection between the facts, what is wrong with believing Moore-paradoxical things? Here's one way of thinking about it (Gibbons 2013). In both cases, you see yourself as irrational. Suppose you believe (O). Either you have sufficient evidence for the first conjunct, or you don't. If you do have reason to believe that it's raining, that is reason to believe that the second conjunct shouldn't be true. If it's true when it shouldn't be, then you're irrational. If, on the other hand, you don't have evidence for the first conjunct but believe it anyway, then you're irrational. Now suppose you believe (C). Either you have sufficient evidence for the second conjunct, or you don't. If you do have reason to believe that it's not raining, that is reason to believe that the first conjunct shouldn't be true. If it's true when it shouldn't be, then you're irrational. If, on the other hand, you don't have evidence for the second conjunct but believe it anyway, then you're irrational. The idea is not that you could never be justified in believing both conjuncts. If your therapist convinces you that deep down, you believe your mom is out to get you, you may know full well that she's not out to get you. So you could know something like (C).
But you need your therapist to arrive at the second-order belief because your repressed first-order belief about your mom is not responding to reasons, and it's not behaving in the way that good reasons should. And that's why you can't know about it in the ordinary, first-person way. The first-order belief that p is itself a reason to believe that you believe that p. It's the very best reason to believe that you believe that p. If you form the second-order belief in the ordinary, normal way, you get not only knowledge but also a special status. The second-order belief is justified on the basis of the fact that it's about. There's nothing wrong with taking other routes. But if the ordinary route up is not available to you, there's something wrong with you in the rationality department. The real problem with the observation model of self-knowledge is that the model leaves out the demand for rational integration of the first and second orders (Shoemaker 1996;Burge 2000;Moran 2001). One of the most distinctive things about self-knowledge is that what's thought and thought about depend on the same point of view. The normative significance of this is that your first-and second-order beliefs should be responsive to the same set of reasons. The requirements of integration can be thought of in terms of wide-scope ''oughts.'' Here are some things to consider, one for (O) and one for (C). The conjunction of these two looks a little harsh. (WO) seems to be saying that you should leave no stone unturned. Form the second-order belief whenever you have the first-order belief. And (WC) seems to be saying that you should never get it wrong when it comes to beliefs about your own mind. The easiest way to think about all of the wide-scope ''oughts'' that we're considering is that they don't really require you to have a view. What this pair says, in effect, is that if you do answer the question of whether p is true and you also answer the question of whether you believe it, you should answer these questions together, and your answers should be related like this: you believe that p if and only if you believe that you believe that p. This is an expression of the idea that the ordinary route up ought to be available to you, not that it necessarily will be. Here's another combination that Moore thought was Moore-paradoxical (1962). It's raining, but I don't know that. At least on the surface of ordinary language, it's perfectly fine to move seamlessly between talking about knowledge and truth. I tell you that the store is closing early today, and you immediately ask me how I know. To ask where the keys are, you have to pick a sentence. You might ask, ''Where are the keys?'' Or you might ask, ''Do you know where the keys are?'' Your choice of sentence is based on considerations of style. It's as if there's only one question here and two ways of putting it. But it's only as if. Whether it's a one-person or two-person case, the question of whether it's true is not independent of the question of whether you're in a position to know. These questions are not connected by a linking mental state or a necessary connection between the facts. They're connected because the reasons are connected. Any reason to believe that you're not in a position to know that p is itself a reason to revise the belief that p (Gibbons 2013). This is how undermining defeaters work. 
If you find out that half of the things in your neighborhood that look like barns are really barn facades, you're no longer justified in believing that the thing in front of you is a barn. The thing about the facades doesn't show that your belief was false or unjustified. But it does show that there was something so seriously wrong with it from the epistemic point of view that you need to revise it. And what it shows is that you weren't in a position to know. If reason to believe you don't know whether or not p is reason to withhold, then it's not just an accident that we use the sentence we do to express the state of seriously withholding judgment: I don't know. There's no problem with my believing that it's raining but you don't know that. But there is a problem with your believing that it's raining but you don't know that. And the problem is fairly straightforward. Any reason for you to believe the second conjunct is reason for you not to believe the first. Your first-order and second-order beliefs ought to be responsive to the same set of reasons. And here's a way of thinking about one of the requirements on how these things ought to be connected. If this were a narrow-scope ''ought,'' we might get some kind of regress. But in any case, it would be another version of the idea that you should leave no stone unturned. But if we read (WK) on the model of (WO), it just tells you how your answers to these questions ought to be organized at the end of the day, if you're going to answer them. In the first instance, wide-scope ''oughts'' rule out the bad combinations. Many wide-scope ''oughts'' rule out the worst-case scenario. Some of them also rule out the second-worst-case scenario. But they also bring with them an implicit conception of the best-case scenario. One idea behind (WSO) is the idea that the beliefs that p and that if p then q are reasons to believe that q. And if they're good reasons, these reasons can be good enough on their own. You don't need to be justified in believing the tortoise's first premise or in accepting either Modus Ponens or (WSO). If your justified first-order beliefs are the reasons for which you believe that q, then from the point of view of rationality, this is as good as it needs to get. One idea behind the anti-akratic principle is the idea that the belief that you should is a reason to do it. And if this reason is good enough, it's good enough on its own. You don't also need to accept the principles, or the rules, or what have you. So you could intend, believe, and do the right things for the right reasons without having to meet the tortoise's impossible conditions. One idea behind (WK) is the idea that the knowledge that p is a good reason to believe that you know that p. And in fact, it's the very best reason to believe that you know. When the belief that you know is justified on the basis of the fact it's about, you get the kind of self-knowledge that comes with a special status. And in the ordinary, normal case, when you're being reasonable, you have available to you a distinctive route up from first-order to second-order knowledge. If this route is not available to you, there may well be something wrong with you in the rationality department. When we think about self-knowledge in terms of counting kinds of error possibilities, it looks like there's a huge difference between knowing what we believe and knowing that we know. But there's another picture of how selfknowledge works. Introspective knowledge is based on what it's about. 
And the demand for rational integration of the orders explains why the first-order state is a reason for the second-order state. And the picture works just as well for knowledge and belief. The picture explains not only why the route confers the status and all that. It also explains why the first-order question about the world is the fundamental question, at least when it comes to answering them. When things go well, what justifies your belief that you believe that p is your belief that p. And what justifies your belief that you know that p is your knowledge that p. And so on. So in these sorts of cases, in order to be justified in believing the second-order thing, you need to be in the firstorder mental state. And being in the first-order mental state just is thinking about the world. The key to knowing that you know is the key to good cooking. Start with good ingredients, and don't mess anything up. If you know that p, you're starting with good ingredients. You've got truth, justification, safety, and all that. It's not that necessarily, in every possible case if you can't figure out from here whether or not you know, it's automatically your fault. It's that there's a strong presumption in the ordinary case that if you can't get there from here, you've probably messed something up. When it comes to cooking, not messing anything up requires a certain amount of skill. But knowing that you know is so much easier. All you have to do is answer the question about the mind by thinking about the world. And that's what we do every day.
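The requirements labeled (WO), (WC), and (WK) above are given only in English glosses here. Collecting plausible wide-scope renderings of them in one place, on the model of (WSO), with 'B' for belief, 'K' for knowledge, and 'O' for the rational ''ought'' (the notation is an illustrative reconstruction, not the original display):

```latex
% Plausible renderings of the requirements glossed in the text above.
% Assumed notation: Bp = you believe that p, Kp = you know that p,
% O[...] = the wide-scope rational "ought". A reconstruction, not a quotation.
\begin{align*}
\text{(WO)}\quad & O\big[\, Bp \rightarrow BBp \,\big]
  && \text{if you believe that } p\text{, believe that you believe it}\\
\text{(WC)}\quad & O\big[\, BBp \rightarrow Bp \,\big]
  && \text{if you believe that you believe that } p\text{, believe it}\\
\text{(WK)}\quad & O\big[\, Kp \rightarrow BKp \,\big]
  && \text{if you know that } p\text{, believe that you know it}
\end{align*}
```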
13,385.4
2018-01-17T00:00:00.000
[ "Philosophy" ]
“Effect of market and corporate reforms on firm performance: evidence from Kuwait” Following the global financial crisis in 2008, many countries have introduced economic and corporate reforms to ensure fair markets and mitigate the risk of management misconduct. In this context, Kuwait has implemented two new major laws to restructure its capital markets and improve corporate governance. The two laws are the Capital Market Authority Law (CMAL) and the Kuwait Companies Law (KCL). In this paper, the authors sought answers to two questions: (1) has the performance of the listed companies changed in response to the enforcement of the laws? and (2) was there a direct influence of the laws on that change? The authors found some evidence of significant change in performance. Moreover, they provide evidence of KCL's viability as a determinant of better performance. Interestingly, CMAL was found to be inadequate for improving firm performance. Implications and recommendations for further research are provided. Amani Kh. Bouresli (Kuwait), Talla M. Aldeehani (Kuwait) INTRODUCTION It is widely accepted that value maximization is the ultimate goal of business firms. Owners of these firms usually hire professionals to manage the business. When these managers do not act in the best interest of the owners, the firm is said to suffer from agency problems. Corporate governance (CG) is the set of rules and regulations by which a firm is directed and controlled to protect owners' interests and avoid agency problems and managers' misconduct. Financial markets, and certainly business firms, operating under weak governance are more vulnerable to exploitation and abuse. Recent global market and corporate financial regulatory reforms were the results of the latest mega business scandals and global financial distresses. Ever since the pioneering work of La Porta et al. (1999a), extensive research has been conducted to explore how firm value can be influenced by the introduction of new CG rules and regulations. Evidence of the effect, however, is inconclusive. For example, La Porta et al. (1999b) found that stronger governance practices provide a positive signal to the market, leading to value appreciation. Wang et al. (2011) provide evidence of an association between governance reforms and better performance. Others believe that corporate governance (ownership concentration) or lack of it is irrelevant to firm value (Omran et al., 2008). Another argument against a very strict governance code and heavy market regulations was raised by Carney (2006). Others (Bruno & Claessens, 2010) argue that the level of corporate governance strength, at the country and company level, may have a different impact on performance. Indeed, negative effects are possible for some firms. Their results implied that stringent regulation can harm the performance of companies with a strong governance structure and has no effect on companies with a poor governance structure. A similar conclusion can be found in Brickley et al. (1997) and Jomini (2011). The key question here is: how many CG rules and regulations should firms apply before harming financial results and the desired goal of value maximization? 
In this paper, we attempt to contribute to the possible answers to this question with regard to less developed markets. These markets had less attention from researchers mainly because of the lack of adequate information on corporate governance factors. The scope of our research is limited to the Kuwait Stock Exchange (KSE) after the implementation of the new Capital Market Authority Law (CMAL) applied in 2010 and Kuwait Companies Law (KCL) introduced in 2012. Our aim is to explore the possible effects of applying the new laws on the performance of the firms listed in the KSE. In the section, we discuss the relevant literature with the goal of developing our research hypotheses. We, then, relate these hypotheses to the CG articles included in the CMAL and KCL while discussing the new regulations. In the following section, we provide a discussion of our data, test variables and methodology. We, then, discuss the results in the following section. Finally, we end with concluding remarks, implications and recommendations. LITERATURE REVIEW The literature on CG was initiated by La Porta et al. (1999a) who used data on ownership structures of large corporations in 27 wealthy countries excluding insignificant market of Kuwait, UAE and Saudi Arabia. The main finding was that "controlling shareholders typically have power over firms significantly in excess of their cash flow rights, primarily through the use of pyramids and participation in management". This result was later confirmed by Al-Deehani and Al-Saad (2007) for Kuwait. Using data of the same sample of La Porta et al. (1999a), La Porta et al. (1999b) explored investor protection and corporate valuation and found evidence of positive relations between higher valuation and better protection of minority shareholders. The question, however, is do CG rules and regulations, always, lead to better corporate values while preserving owners' interests? This question is addressed by Daines (2001). He examined the effect of Delaware corporate law on firm value using Tobin's Q as a proxy for firm value. The evidence supported the view that firms incorporated in Delaware worth significantly higher than firms incorporated elsewhere. Delaware law is considered one of the best corporate laws, as it attracts more than 50% of public firms incorporated in the US. Clear and well known rules, courts precedent and quick rules update are among the reasons for its attractiveness. Moreover, Delaware State has a specialized Chancery Court for resolving corporate disputes. Accordingly, the evidence indicates that corporate law quality, which fairly protect investors, create positive investment environment that promote firm value, hence, increasing investor return. Several CG factors were tested and their effects on firm value were assessed. One CG variable is associated with board size. There are two conflicting evidence regarding board size. The first argues that smaller board-size firms are generally associated with better performance (Yermack, 1996;Jensen, 1993;Eisenberg, 1988;Singh & Davidson, 2003), whereas the second argues that larger boards are associated with stronger firm performance (Zahra and Pearce, 1989;Kiel & Nicholson, 2003;Coles et al., 2008). Another variable is associated with the leadership structure of the CEO to avoid conflict of interest and, hence, lower agency cost. 
For better governance, regulators and institutional investors require firms to separate the positions of board member and CEO, as it is easier to abuse power and authority for self-interest when one person holds the two positions. In fact, the majority of empirical evidence supports the separation of the two positions. Dahya et al. (2009) showed that market regulators in 15 developed markets separated the positions of CEO and chairperson. Chen, Lin and Yi (2008) showed that many firms in the period from 1999 to 2003 altered their policies and bylaws to change the leadership structure from duality to non-duality. Jensen (1993) argued that duality would mitigate the monitoring role of the board and supervision of management and, hence, increase agency cost. Another key component of a governance framework is board independence and the presence of independent directors. Beasley (1996) examined the relation between board structure and financial scandals and found that the higher the percentage of independent directors, the fewer the cases of financial manipulation. Daily et al. (2003) argue that, during a financial crisis, firms with more independent directors have a lower probability of facing bankruptcy. Investigating the risk faced by investors, La Porta et al. (2002) found evidence of a positive relation between higher valuation and better protection for minorities. Risk facing investors was also addressed by Emil et al. (2014). Bhagat and Black (2001) explored the relation between the ratio of independent directors and short-term performance. They documented a positive relation between the presence of independent directors and performance. Wu, Lin, Lin and Lai (2010) examined the impact of corporate governance mechanisms on firm performance. They found that firm performance was positively associated with board independence, CEO/chairman position separation and with smaller boards. Duc and Thuy (2013) found that board compensation has a positive effect on performance measured by ROA and that board size has a negative effect. Globally, and specifically in smaller economies, applying a governance framework is a relatively new trend and further evidence is needed to assess its impact. La Porta et al. (2000) believe that expropriation of minority shareholders by controlling shareholders can take many forms. Controlling shareholders can steal the profits, divert business opportunities, appoint unqualified family members to key managerial positions and sell valuable assets of the firm they control to another firm they control at below fair price. Hence, the above forms of expropriation of minority shareholders are consistent with agency theory. Stronger market regulations that secure sound protection for investors signify developed markets. Following the 2008 global financial crisis, developing and less developed markets are working hard to introduce new CG rules and regulations to protect investors from power abuse by the controlling managing minority. Regulators believe that well protected investors reduce agency cost, induce market growth and enhance firm value, and that investors are willing to pay more for stocks of firms listed in such well-regulated, fair markets. They also believe that creditors are more willing to finance firms when their rights are well protected by the legal system. However, conclusive evidence for these beliefs is yet to be provided by scientific research. 
For Kuwait Stock Exchange (KSE), there were two major sets of regulations that were introduced lately. The first was the CMAL to regulate the stock market in 2010. The second was the 2012 KCL or the Ministry Law (MLaw) to regulate shareholding companies. The new laws imposed many CG articles and provisions that forced all listed companies to make necessary changes in their bylaws and internal policies. We present, in the following section, a discussion of some of the articles included in the new law relevant to CG. KSE and the reception of CAML KSE was officially established in 1983 following Almanakh stock market crisis, a major local financial crisis, which started in 1981 and was caused by severely inflated stock prices, unregulated market transactions and uncontrolled trading. Since official establishment, KSE has been regulated by a market committee headed by the Minister of Commerce with four representatives from the Chamber of Commerce and representatives from the Central Bank and Ministries of Commerce and Finance. A major structural change happened in 2010 when a new regulator took over market supervision from KSE. CMAL was issued in 2010 in an attempt to regulate Kuwait financial markets and to separate supervision from management roles. Up until the implementation of CMAL, Kuwait Stock Exchange played double role as a regulator and as an administrator of stock market trading, which caused conflict of interest. However, following the 2008 global financial crisis and after the institution of capital market authorities in the entire GCC region, the need for an independent regulator in Kuwait has increased. The new regulatory body aimed to discipline the market through higher transparency requirements, protection of shareholders, governance rules, defining responsibilities, etc. The new CMAL carried many new provisions with significant amount of legal burden on firms that were mostly recovering from the financial crisis. A major issue associated with the capital markets authority (CMA) is its budget and sources of operations finance. As mentioned in article 19, CMA shall finance its operations from market fees and violations fines. This provision increased the incentives for the regulator to increase costs, hence, the broad increase in market fees and accordingly an increase in market burden. This provision was lately amended to engage the government in financing CMA's budget in addition to market fees and fines. Another related issue associated with the CMAL was the separation of responsibilities between the stock exchange as a self-regulatory organization and the regulator. This separation was associated with a huge amount of overlapping in duties and ambiguity for market participants during the first years of CMA's launch. This element also increased the burden on market participants and listed firms, which led to a decrease of their activities in the market. Consequently, trading volume decreased significantly from an average Kuwaiti Dinar (KWD) 148.9 million and 147.4 million in 2007 and 2008, respectively, to KWD 24.3 million and 28.9 million in 2011 and 2012, respectively. The excessive fines and penalties and the number of legal cases filed against traders and market makers during the first 2 years of operations caused the market to freeze and all major players to stop trading. According to article 63 of CMAL, all market participants shall receive a formal license from the CMA to participate in market activities including dealers, brokers, investment funds, etc. 
Licensing requirements were very strict and, in some cases, hard to obtain or apply. Accordingly, article 66 imposed a set of requirements all related to governance codes, such as separating activities, risk management, avoiding conflict of interest and reports requirements. Furthermore, articles from 71 to 75 set out shareholders provision protection for minorities. The law dealt also with provisions related to transparency and disclosure requirements. The last chapter of the law imposed market violation provisions, which added strong enforcement factor to the market. KCL relevance to KSE The other law relevant to the operations of KSE is KCL. We counted 18 articles included in the new law that are related to issues of CG. Starting with article 181 and ending with article 216, these issues are summarized as follows: 1. Imposed minimum number of board members for public firms (article 181). 2. The positions of the chairman of the board and chief executive officer shall not be combined (article 183). 3. Regulatory bodies were given the right to impose the appropriate corporate governance code on firms under their jurisdiction, and thereby governance is mandated by law. Therefore, all public firms reacted to this article by changing their bylaws and internal policies (article 186). 4. Imposed presence of independent directors, at least one, and determined an upper cap of their number, surprisingly, not to exceed half of the board. Independent directors are exempted from the minimum ownership requirement (article 187). 5. Imposed a minimum number of 6 board meetings per year. This is in line with governance codes for having higher number of board meetings to keep the board well informed for an efficient decision making process (article 190). 6. A person, even if in the capacity of representative of a natural or legal person, may not be a member of the board of directors, of more than five public companies headquartered in Kuwait (article 194). 7. Board members are not to exploit information to benefit selves or others, nor can they dispose shares they own in the company during tenure (article 195). 8. Board members are not allowed to disclose confidential information except through general assembly meetings (article 196). 9. Board members of companies cannot serve in boards of two competing companies at the same time. This restriction is to prevent selfdealing, as well as to protect against conflict of interest; major elements in any proper corporate governance framework (article 197). 10. Remunerations for the board members shall not exceed 10% of net profit after dividends distribution of 5% for 5 years, otherwise, it should not exceed KD 6,000 annually for each member (article 198). 11. Board members, executive management and their families are banned from having interest in business deals with the company without the approval of the general assembly (article 199). 12. With the exception of banks and loan-extending companies, board members, CEO and families are not to receive loans from company without the approval of the general assembly (article 200). 13. Board members are legally responsible for fraud actions, misuse of authority and violation of this law. 14. Articles 206 and 208 call for fair general assembly meetings, sending invitations to all shareholders with proper agenda and complete set of information. 15. Articles 209, 212 and 216 provide minority shareholders the power to dismiss the board and the chairman when required. 
The Kuwaiti public companies listed in the KSE have been complying with this law for about 5 years. Therefore, it is logical to hypothesize a positive effect of applying this law on all performance indicators of these companies. To test for this effect, we discuss, in the following section, our data and methodology and measures to test specific hypotheses. DATA AND METHODOLOGY To study the effect of applying the new CG laws on the performance of the listed public companies, we need first to measure the significance of differences in performance indicators before and after the introduction of each law. If significant differences exist, then, we measure the effect of introducing each law on each indicator. As CMAL was introduced in 2010 and the KCL was enforced during 2012. We collected fundamental data for the years 2007 to 2014 sourced from the annual published reports of the Institute of Banking Studies in Kuwait. We elected the fundamental data of five sectors. We canceled out companies in other sectors, which were unrepresentative of the nature of the sector to which they belong. For example, health care, communication and educational companies were included in one sector called services. The companies of each of the five sectors we chose were of the same nature. Originally, there were data for 147 companies. However, because of missing data for some of the years, some were canceled out. The number of companies remaining are 102 with 816 observations. The data are organized in the form of long format of longitudinal data involving the dimensions of time and individual companies. The data are considered strongly balanced, as each individual company has the same number of years. Based on the reviewed literature, certain performance indicators were elected for investigation. These indicators represent profitability, valuation, assets management, debt and agency costs. The variables in question are profit multiplier, total assets turnover, debt ratio, return on equity and market to book ratio and equity to assets ratio. With these indicators, we presume to cover the most important performance aspects. The following is a brief description of these indicators and the specific relevant hypotheses: Total assets turnover is calculated as total revenue to total assets. This is an indicator of the company's efficiency in managing its assets. Higher numbers indicate better assets management efficiency. The hypothesis related to this indicator is that enforcing CMAL and KCL's CG rules will prevent managers from investing in unnecessary assets leading to better assets turnover. Debt ratio is the total debt to total assets. Although the ratio is important for measuring company financial distress, when it comes to cost efficiency, more debt leads to lower cost of capital and higher value. However, more increase of debt may lead to major financial distress or even bankruptcy. Our hypothesis, in relation to this indicator, is that enforcing CMAL and KCL's CG rules will encourage managers to raising new external funds to finance viable investment leading to a higher debt ratio and better value. Return on equity is a widely acceptable measure of profitability related to the owners' equity. It is calculated as the net profit to owners' equity. The logical hypothesis is that enforcing CMAL and KCL's CG rules will ensure the alignment of the management interests with owners' interest leading to better profitability for the owners. PE ratio is directly related to company valuation. 
We calculate it as closing price at the end of the year to earnings per share, which we estimate as net profit divided by number of shares outstanding. PE ratio is also called the profit multiplier. It indicates how much investors are willing to pay, profit multiples, to acquire the share. Higher PE ratio indicates higher value of the firm. Our hypothesis, in relation to this indicator, is that enforcing CMAL and KCL's CG rules will lead to a higher PE, hence, a higher firm value. MB ratio is also related to company valuation. It is calculated as the market stock price over book value per share (BVPS). BVPS divided is calculated as owners' equity over the number of shares outstanding. When MB is less than one, the company is seen as an opportunity for takeover. This is because owner's equity worth more than its market stock value. A buyer will be encouraged to sell it in pieces. On the other hand, a higher MB ratio indicates that investors are valuing the company higher than its equity. Our hypothesis, in relation to this indicator, is that enforcing CMAL and KCL's CG rules will lead to a higher MB, hence, a higher firm value. Agency cost is the money charged to the firm because of management misconduct. There are many proxies for agency costs measures. We choose the equity to total assets ratio for representation of agency costs as suggested by Berger and Patti (2006). They argue that higher leverage or lower equity to total assets is associated with lower agency costs. This is in line with our hypothesis on debt ratio. The hypothesis for this specific indicator is that enforcing CMAL and KCL's CG rules will lead to a lower equity to total assets ratio leading to lower agency cost. In this paper, we investigate: 1. the significance of differences in the performance indicators before and after the implementation of each law; 2. the effect of each law on each performance indicator for the different sectors. Here is a summary of our null against research hypotheses in relation to KCL: Hypothesis 1: H0: Total assets turnover before and after the enforcement of KCL's CG rules is the same. H1: Total assets turnover before and after the enforcement of KCL's CG rules is significantly different. Hypothesis 2: H0: Debt ratio before and after the enforcement of KCL's CG rules is the same. H1: Debt ratio before and after the enforcement of KCL's CG rules is significantly different. Hypothesis 3: H0: Return on equity before and after the enforcement of KCL's CG rules is the same. H1: Return on equity before and after the enforcement of KCL's CG rules is significantly different. Hypothesis 4: H0: PE ratio before and after the enforcement of KCL's CG rules is the same. H1: PE ratio before and after the enforcement of KCL's CG rules is significantly different. Hypothesis 5: H0: MB ratio before and after the enforcement of KCL's CG rules is the same. H1: MB ratio before and after the enforcement of KCL's CG rules is significantly different. Hypothesis 6: H0: Agency cost before and after the enforcement of KCL's CG rules is the same. H1: Agency cost before and after the enforcement of KCL's CG rules is significantly different. In addition, the following is a summary of our null against hypotheses in relation to CMAL: Hypothesis 7: H0: Total assets turnover before and after the enforcement of CMAL's CG rules is the same. H1: Total assets turnover before and after the enforcement of CMAL's CG rules is significantly different. 
Hypothesis 8: H0: Debt ratio before and after the enforcement of CMAL's CG rules is the same. H1: Debt ratio before and after the enforcement of CMAL's CG rules is significantly different. Hypothesis 9: H0: Return on equity before and after the enforcement of CMAL's CG rules is the same. H1: Return on equity before and after the enforcement of CMAL's CG rules is significantly different. Hypothesis 10: H0: PE ratio before and after the enforcement of CMAL's CG rules is the same. H1: PE ratio before and after the enforcement of CMAL's CG rules is significantly different. Hypothesis 11: H0: MB ratio before and after the enforcement of CMAL's CG rules is the same. H1: MB ratio before and after the enforcement of CMAL's CG rules is significantly different. Hypothesis 12: H0: Agency cost before and after the enforcement of CMAL's CG rules is the same. H1: Agency cost before and after the enforcement of CMAL's CG rules is significantly different. To test for significant differences in the performance indicators, we choose the nonparametric Mann-Whitney U test. This is a two-independent-sample test procedure to compare two groups of cases on one variable. This test does not assume normality. It is considered more robust and more efficient than the Student t-test, as it is less likely to show statistical significance in the presence of outliers. Given the limited sample of this research, the Mann-Whitney U test is our best choice. To investigate the effect of introducing the CG laws on performance, we use a generalized least squares (GLS) model with panel data. We use a random effects model, as we believe that the variation across companies and sectors is random, uncorrelated with the regressors, and has some influence on the performance indicator variable. Our GLS panel regression model is of the form Y_it = α + β X_it + u_it, where Y_it is the dependent variable representing the performance indicator, i is the entity and t is time, X_it is a binary variable distinguishing the periods before and after the introduction of the law, and u_it is the error term, which under the random effects specification includes a company-specific random component. Figure 1 plots the performance indicators before and after the introduction of CMAL, reflecting the figures in Table 1. Investment, real estate and industrial sectors indicated no noticeable change in the PE ratio. The insurance sector, however, exhibits a noticeable change after the introduction of the law. This is understandable, since insurance companies were expected to suffer more as a result of the crisis due to increased claims. Panel B of Figure 1 shows a decrease of value for all the sectors, as indicated by the MB ratio. This is also understandable, as equity decreased after 2008 across the board. The big losses of the banking sector as represented by the ROE ratio are evident in panel C. The same plot shows the big decrease of the ratio for the insurance sector after CMAL. Except for the banking and real estate sectors, the asset management, as represented by the ATO ratio, has improved after CMAL. Panels E and F exhibit no noticeable change in the debt ratio and agency cost ratio. In panel C, we can observe the big plunge of ROE for the insurance sector after applying the law. This huge drop in profitability can only be explained in the context of the 2008 global financial crisis. In general, the plots show a general drop in valuation and profitability and an increase in agency cost due to the same reason. The results indicate that for the banking sector PE and MB are significant at the 5% level and ROE and ATO are statistically significant at the 10% level (notes: * statistically significant at 5%; ** statistically significant at 10%). This means that the valuation, profitability and asset management performance indicators before and after the introduction of the law are significantly different. 
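As an illustration of the kind of before-and-after testing and panel estimation described above, the following sketch shows how one indicator could be compared across law regimes and regressed on a law dummy. The column names, file name, 2012 cut-off and the use of pandas, SciPy and the linearmodels package are assumptions made for the example; they are not the authors' actual data or code.

```python
# Illustrative sketch only: hypothetical column names and tooling, not the authors' code.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import mannwhitneyu
from linearmodels.panel import RandomEffects

df = pd.read_csv("kse_fundamentals.csv")             # long format: one row per company-year

# Performance indicators as defined in the text (hypothetical raw column names).
df["ato"] = df["revenue"] / df["total_assets"]        # total assets turnover
df["debt_ratio"] = df["total_debt"] / df["total_assets"]
df["roe"] = df["net_profit"] / df["equity"]
df["mb_ratio"] = df["price"] / (df["equity"] / df["shares_outstanding"])
df["agcost"] = df["equity"] / df["total_assets"]      # equity-to-assets proxy for agency cost

# (1) Mann-Whitney U test: compare an indicator before and after KCL (enforced in 2012).
before = df.loc[df["year"] < 2012, "mb_ratio"].dropna()
after = df.loc[df["year"] >= 2012, "mb_ratio"].dropna()
u_stat, p_value = mannwhitneyu(before, after, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")

# (2) Random effects GLS panel regression of the indicator on a law dummy,
#     with robust standard errors, in the spirit of the model described above.
df["kcl"] = (df["year"] >= 2012).astype(int)
panel = df.set_index(["company", "year"])             # entity-time index required
exog = sm.add_constant(panel[["kcl"]])
result = RandomEffects(panel["mb_ratio"], exog).fit(cov_type="robust")
print(result.summary)
```

In the paper's design, such a regression would be repeated for each of the six indicators and five sectors (thirty runs), with the dummy cut-off switched to 2010 for CMAL.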
For the real estate sector, only PE and ROE are statistically different at the 5% significance level, indicating differences in valuation and profitability before and after the introduction of the law. For the industrial sector, only MB is significant at the 5% level, which indicates differences in valuation of this sector before and after the introduction of the law. TESTING RESEARCH HYPOTHESES Tables 4 and 5 above summarize the results of hypotheses testing based on the introduction of KCL and CMAL. Estimating the GLS panel data regressions Autocorrelation, heteroskedasticity, stationarity and multicollinearity of the independent variables are all common problems with linear regressions. Given the nature of our panel data, autocorrelation and heteroskedasticity are not a concern, since we consider only a total of eight years for all the companies. This number is further split when grouping to compare performance indicators. We also use robust standard errors to eliminate these two problems. The problem of multicollinearity of explanatory variables is not a concern either, since we use a binary variable representing different time groupings. To test the stationarity of the dependent variable series, we use the Levin-Lin-Chu unit root test. The null hypothesis of this test is that the panels contain unit roots, against the alternative hypothesis that the panels are stationary. The results of this test are presented in Table 6 below. Table 6 indicates that all variables do not contain a unit root and are stationary. Therefore, we can conclude that a linear model can be estimated safely. Our GLS equation with panel data was estimated thirty times to cover the six performance indicators (dependent variables) for each of the five sectors. Table 7 illustrates the results of the model estimation. As indicated in Table 7, four performance indicators are found to be statistically significant either at the 5% level or at the 10% level of significance. These indicators are market to book value representing the value of the firm, debt to asset ratio representing financial leverage, AGCOST representing additional expenses as a result of agency problems and total assets turnover representing the efficiency of asset management. The market to book value indicator is affected negatively by the introduction of both laws, indicating a decrease in the valuation of banks. This result can be interpreted in light of the fact that inappropriate laws, a heavy legal burden and sometimes unneeded governance may lead to damaging outcomes. This argument is particularly true in the case of Kuwait. Major controversial and prolonged discussions and amendments to both laws took place before and after approval. One of the authors of this paper was a minister of trade at that time and was deeply involved in preparing the original draft of the laws. She witnessed immense resistance and pressure by external powers on the government and parliament to amend the laws to serve their interests. Many market participants believe that the corporate governance objectives of the two laws cannot be achieved. The financial leverage factor was found to be affected positively by the introduction of KCL only. The positive effect on leverage could mean that banks feel safe to increase their financial leverage/risk with the introduction of the corporate governance rules included in the new Companies Law. The agency cost variable, represented by the ratio of equity to total assets, is also found to be positively affected by KCL, indicating lower agency cost. 
This lower agency cost is in line with the resulting effect on financial leverage. KCL is also found to affect the asset turnover variable negatively. This means that the performance of the banking sector may be worse, in terms of asset management, with the introduction of both laws. The result confirms the argument we made with regard to the negative outcome of the value performance indicator. The results of estimating the GLS regressions for the investment sector are presented below in Table 8. It shows that all performance indicators were affected. Table 8 indicates that PE is affected positively by KCL. The PE ratio reflects, in particular, traders' market valuation of the firm's stock. Our interpretation of this result is that stock traders may have believed that the implementation of KCL would positively affect the performance of the investment sector following the 2008 crisis, which influenced their optimistic decisions. Contrary to the resulting positive effect on the PE ratio, the market to book value ratio is found to be negatively influenced by CMAL. This is another valuation indicator, reflecting value based on the firm's actual equity. This result tells us that the value of the firm, based on its equity, deteriorated as a direct result of implementing the capital market authority law. The MB ratio is also driven by traders' perception of the future of the firm. The negative effect may be interpreted as traders believing that CMAL is unable to improve firm valuation, especially as the investment sector was hit badly with huge provisions following the 2008 global financial crisis. The return on equity indicator is positively affected by KCL. An increase in ROE may be due to a decrease in the equity of the investment sector relative to the improvement in profit. The result tells us that the investment sector received the implementation of KCL as a driver of profitability. Furthermore, the leverage performance indicator is negatively influenced by KCL. This means decision makers in the investment sector may not have felt safe, with the implementation of KCL, about raising external funding, which is associated with financial risk. Again, the aftermath of the global financial crisis may add more weight to this feeling. Also, KCL has a positive influence on the AGCOST variable, indicating lowered agency cost. As with the same variable for the banking sector, this result means that the implementation of KCL does lead to an improvement in the agency cost of the investment sector. This is understandable, since it is the sector that suffered the most from the financial crisis. Assets turnover, representing the asset-management performance indicator of the sector, is also found to be positively influenced by the implementation of KCL. Table 9 depicts the resulting outcome of estimating our GLS model for the insurance sector. It shows a significant effect of CMAL on market to book value, financial leverage and assets turnover. KCL has no significant effect on any of the financial indicators. The effect on the market to book value ratio is negative, indicating a pessimistic market perception with regard to the effectiveness of CMAL in improving firm value within the insurance sector. The same applies to the asset management variable. On the other hand, financial leverage is positively affected, demonstrating an optimistic reception of the implementation of CMAL with regard to raising new external funds. The results of the regression model for the real estate sector are illustrated in Table 10 below.
It shows that, except for the PE ratio, all the variables are significantly influenced. CMAL has a negative effect on the MB of the real estate sector, indicating a lower valuation following the introduction of the law. The return on equity indicator, on the other hand, is positively affected by KCL. This is in line with the objectives of the law. Another objective is lowering agency costs. This is confirmed by the effect of KCL on the AGCOST variable, which is positive and significant. The leverage ratio, however, indicates a negative influence. Again, for decision makers in this sector, the implementation of the new KCL does not encourage external funding. Also, the assets turnover variable is positively affected by KCL and negatively affected by CMAL. This implies that the implementation of KCL leads to better asset management in the real estate sector, while the implementation of CMAL leads to worse asset management. The contradicting signs of the statistics may be explained by the different natures of the laws. The KCL is concerned mainly with factors related to the internal operation of the companies. The results for the industrial sector (Table 11) show that the effect of KCL on PE is positive. The effect on this valuation indicator means that the market gives more value to the industrial sector in response to the new corporate governance rules included in the law. Another valuation indicator, represented by the market to book value, was found to be affected negatively by CMAL. This indicates that the market is not encouraged by the new governance rules included in the CMAL law. KCL, on the other hand, was found to have a positive effect on the profitability performance of this sector. This shows that the corporate governance rules included in KCL lead to an improvement in profitability for industrial companies. Although the effect of KCL on the ATO variable is positive, the negative effect of CMAL is again evident on the assets turnover indicator. An important finding of this research is that, except for the D/A ratio, all performance indicators were negatively affected by CMAL. This is evident in Table 12, which presents a summary of the resulting signs of all significant effects. The other major finding is that most of the performance indicators that were significantly affected by KCL had positive coefficients. The only explanation of these two contradicting results is that, unlike KCL, CMAL has included corporate governance rules that are inappropriate or ineffective in improving the performance of Kuwaiti companies. Intolerably strict and heavy CG regulations are a common pitfall of incompetent regulators. This is in line with the conclusions made by Carney (2006) and Bruno and Claessens (2010).
CONCLUSION
Following the 2008 global financial crisis, many countries all over the world enforced new market reforms and stricter corporate governance regulations. Kuwait was not an exception. It enforced two major laws targeting market reforms and the improvement of the corporate governance of the companies listed on the Kuwait Stock Exchange. The Capital Market Authority Law (CMAL) was implemented in 2010 and the Kuwait Companies Law (KCL) was implemented in 2012. The feasibility of the two laws was controversial and was extensively debated among economic and political rivals. Eventually, the two laws were enforced. In this research, we sought answers to two questions: (1) has the performance of the listed companies changed in response to the enforcement of the two laws? And (2) if it has, was there a direct influence of the laws on that change?
To answer the questions, we reviewed the relevant literature with the objective of identifying the proper factors to measure and of developing our research hypotheses. Six factors were identified, representing valuation, profitability, asset management, debt and agency costs. For each factor we developed two hypotheses, for a total of twelve hypotheses. Each hypothesis is tested using the two-independent-sample Mann-Whitney U test to compare two groups of cases. For CMAL, all indicators for the banking sector except the agency cost indicator were found to be significantly different before and after the implementation of the law. For the other sectors, only the valuation factor represented by the market to book value was found to be significantly different. For KCL, the market to book value and the asset management factor were found to be significantly different for the banking sector. For the investment sector, all factors except the asset management factor were found to be significantly different. Performance indicators for the insurance sector exhibited no significant differences. The profitability indicator and the valuation indicator represented by the price earnings ratio were significantly different for the real estate sector before and after the implementation of KCL. For the industrial sector, the valuation indicator represented by the market to book value ratio was the only factor to exhibit a significant difference. These results are clearly inconclusive.
ECONOMIC IMPORTANCE OF USE OF PESTICIDES IN WHEAT PRODUCTION
Quality and productivity in wheat production are determined by genotype and by the application of science-based farming measures. Pesticide application contributes to achieving high wheat yields. The aim of this work is an economic analysis of pesticide application in wheat production. The analysis used data collected from 32 wheat producers in rural areas of the Republic of Serbia. The results for the farms included in this investigation showed that the average area under wheat was 1.6 ha, the average grain yield 3621 kg ha-1 and the average costs 563.15 € per hectare. The average use of pesticide active ingredients was 892.5 g. Wheat producers applied different amounts of pesticide active ingredients: 646 g (72.44%) of herbicides, 231.7 g (25.96%) of fungicides and 14.3 g (1.60%) of insecticides. The average plant protection cost of the pesticides used was 70.30 € ha-1, which was 12.48% of the cost of wheat production. The computed gain threshold was 319.54 kg ha-1. To achieve a high economic output in wheat production it is necessary to apply the right dose of pesticide, decrease production costs and continuously provide education for farmers.
Introduction
Wheat (Triticum aestivum L.) is one of the most important cereal crops, a staple food and a source of protein for about 70% of the human population in the world. Weeds, pests, diseases and insects are the major sources of crop damage and of yield and quality reduction worldwide. The economic production of wheat depends on science-based farming measures that contribute to the prevention of yield losses (Knezevic et al., 2015). In wheat production, the application of pesticides is one of the important plant protection measures against attack by pests and diseases that can cause yield loss. For the production of safe food it is very important to develop new technologies of pesticide application and new pesticides that are less hazardous to health (Delcour et al., 2015). The effect of pesticides in suppressing pathogens on plants contributes to higher yield and quality (Aktar et al., 2009). Beyond the use of pesticides, other factors that influence economic production are genotypes, fertilizers and machinery. Weed control helps prevent yield losses, which vary among crops from 10% to 50%. The economic impact of insect infestation can be significant and can cause serious damage to wheat yield and quality. The Sunn pest bug (Eurygaster spp.) damages the wheat grain endosperm through injected proteinases that disrupt protein structure and reduce flour quality and dough properties, giving low bread volume and poor texture (Torbica et al., 2007; Dizlek, 2017). Likewise, attack by cereal leaf beetles (Oulema spp.) reduces assimilation by between 10% (attack of a single larva) and 80% (massive attack of larvae), which defines the economic threshold of larvae per stem and the associated grain losses. The intensity of cereal leaf beetle attack differs by season and region (Tanaskovic et al., 2012). In Serbia, cereal leaf beetles have sporadically affected wheat without significant economic damage. However, in the period 1988-1992, they became economically the most important pest in cereals, and up to 28% of the cereal area was chemically treated.
However, during 1992-1998, cereal leaf beetle populations decreased, and only 2-2.5% of the wheat area was sprayed (Stamenković, 2000; Jevtić et al., 2002). The application of herbicides brings economic benefit through increased yield and reduced labour expenses. For sustainable rural agriculture it is necessary to develop crop production technology that achieves economic profitability, social and economic equity, and environmental and food security. In conventional farming, since the period of the Green Revolution, enormous amounts of chemicals have been used for the control of weeds, pests and diseases and the prevention of crop damage, which is connected with environmental pollution as well as unsafe food products. Sustainable agriculture, however, needs to be based on the use of pesticides with the least toxicity, on decreasing energy expenses and on increasing yield and profit (Sexton et al., 2007). Modern handling methods and clean technology can reduce the presence of contaminants and pest attack on seeds and plants. Very important is the choice of the right type of pesticide and its application at the recommended dose and at the right time, in order to prevent negative effects on production costs, pest resistance to pesticides, and ill effects on human and animal health, the environment and the sustainability of agricultural production (Khan et al., 2010). Another advantage of the right use of pesticides is the suppression and reduction of plant pests and diseases, which has a key role in increasing agricultural production as well as the income farmers derive from crop production (Nazarian et al., 2013). In Serbia, pesticides play an important role in food security due to the limited arable land and users' requirements for improving food security and protecting the environment. The aim of this investigation was an evaluation of the economics of pesticide use in wheat production, to determine the farm-level economic cost and the amount of pesticides used in wheat production for rural development.
Material and methods
In 2015, wheat production in Serbia was approximately 2,418,203 tons, realized on approximately 589,922 ha, the second largest area among cereal crops in Serbia (data of the Statistical Office of the Republic of Serbia). Our study included 32 wheat farmers at different locations in Serbia. The farms were chosen by a simple random sampling method. The data obtained from structured questionnaires submitted to the farmers were analyzed for farm size and structure, farmers' experience and education in agricultural production, area under wheat, quantity and type of pesticides applied, and grain yield. Farmers' characteristics are presented as frequencies. The toxicity of pesticides was determined according to the WHO classification (WHO, 2009). The economic cost of pesticides per hectare was computed by the formula EC = Q x P, where EC is the economic cost, Q is the quantity of pesticide active ingredient (g ha-1) and P is the price of the pesticide (€ l-1). The gain threshold was calculated as the plant protection cost per hectare divided by the price of wheat grain per kilogram.
Results
The analysis of the agricultural holdings showed variation in household size and structure, in the crop species produced and in the cultivation technology applied. Individual farmers mainly produce for their own consumption, with the surplus going to the market. In the analyzed individual farms, the average cultivated area was 5.8 ha, of which 27.8% (1.6 ha) was used for wheat production (Table 1). In wheat production, the farmers expressed interest in optimizing their growing practices with the aim of increasing grain yield and making a profit.
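For illustration, the cost formulas above can be applied to the averages reported in this study as in the sketch below. The assumed wheat price of 0.22 € kg-1 and the gain-threshold definition (protection cost divided by grain price) are our assumptions, chosen to be consistent with the figures quoted in the Results; none of this is the authors' code.

```python
# Farm-level cost arithmetic for pesticide use in wheat production.
pesticide_cost_per_ha = 70.30   # € ha-1, average plant-protection cost
total_cost_per_ha = 563.15      # € ha-1, average production cost
yield_kg_per_ha = 3621.0        # kg ha-1, average grain yield
wheat_price_per_kg = 0.22       # € kg-1 (assumed producer price)

pesticide_share = pesticide_cost_per_ha / total_cost_per_ha        # ≈ 0.125 (12.5 %)
gain_threshold_kg = pesticide_cost_per_ha / wheat_price_per_kg     # ≈ 320 kg ha-1
threshold_share_of_yield = gain_threshold_kg / yield_kg_per_ha     # ≈ 0.088 (8.8 %)

print(f"Pesticide share of production costs: {pesticide_share:.1%}")
print(f"Gain threshold: {gain_threshold_kg:.1f} kg ha-1 "
      f"({threshold_share_of_yield:.1%} of the average yield)")
```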
Farmers learn the knowledge and experience of how to use pesticides in different ways. For decisions on pesticide application, farmers have numerous sources, such as internet information, advice from extension services, input dealers and pesticide labels. Most farmers, 87.50% (28 farmers), watched special agricultural programs, while about 18.75% (six farmers) used the internet for agricultural needs. About 65.6% (21) of farmers participated in special meetings on wheat production, while 25% (eight farmers) participated in meetings on plant protection. The instructions on the labels for pesticide application were read by 81.25% of producers (Table 3). The information on the label is a very important source of knowledge for farmers on how to use and apply pesticides safely (Waichman et al., 2007). No segment of the population is completely protected against exposure to pesticides. The high-risk groups exposed to pesticides include production workers, formulators, sprayers, loaders and agricultural workers; in particular, the high-risk groups are people who are in direct contact with pesticides. Exposure to pesticides has been linked to negative effects on immune function, the liver, intelligence, cardiovascular and respiratory function, reproductive abnormalities and cancer (Sarwar, 2015). Farmers therefore belong to the risk group and need to take measures of protection against pesticide toxicity. For safety, the method of pesticide application and the use of protective equipment and clothing are very important. In this study, 71.87% of farmers applied pesticides by mechanical spraying and 15.62% applied them manually. Among them, about 46.87% of producers used protective equipment and 25.00% used protective clothing. Most farmers, 87.50%, prefer to use safer techniques to protect the environment during agricultural production (Table 4). The pesticides applied in wheat production contributed to the growth of crop productivity as well as to the food supply. The pesticides used by the farmers in wheat production are presented in Table 6. Pesticides were grouped by their toxicity classification and their chemical family (WHO, 2009). The wheat farmers in our study used nine different types of pesticides. Among the 32 farmers, four types of herbicides were used in wheat production. Most of the farmers used the herbicide Metmark WP, whose active ingredient is Metsulfuron methyl (56.25%). Some wheat farmers used the active ingredients Thiophanate-methyl (9.38%), Tribenuron (6.25%) and Aminopyralid + Florasulam (3.12%). According to the WHO classification, all applied herbicides are classified in toxicity group U (Table 6). Among the 32 wheat farmers, the fungicide commonly used was identified as Tebuconazole, which is classified as moderately hazardous (toxicity group II). Two trade marks of fungicides (Zantra and Acord) were used by 53.13% of the farmers as protection against fungal diseases in wheat production (Table 6). The insecticides commonly used by the farmers were identified as Deltamethrin (12.5%), Bifenthrin (3.12%) and Chlorpyrifos methyl + Cypermethrin, used by 3.12% of the farmers (Table 6). The differences in the amounts of pesticides used are affected by weather, season, pest pressure, pesticide prices and technical equipment. Thus, in Serbia, 28% of the wheat area was sprayed against the cereal leaf beetle in the period 1988-1992, while only 2-2.5% of the wheat area was sprayed in the period 1992-1998 (Stamenkovic, 2000).
Results
In Serbia, wheat is grown on about six hundred thousand hectares per annum, with total production over 2.0 million tons. According to official reports, Serbia realized an average wheat grain yield of 3400 kg ha-1, with expenses for the application of pesticides averaging 92.0 € ha-1. The amount of pesticides used in wheat production in Serbia is not significantly different from that in European Union countries. However, in Serbia, in the EU and all over the world there are concerns about the negative influence of pesticides on human health, food safety and the environment in some regions (Table 7). The analysis established that farmers apply herbicides and insecticides above, and fungicides below, the dosages recommended by extension services and pesticide label instructions. The application of inadequate amounts of pesticides (increased or decreased) can lead to inefficient protection, crop and economic losses and environmental hazards. In this investigation, the average cost of wheat production was established to be 563.15 euros per hectare, with a pesticide cost of 70.3 € per hectare, or 12.48% of the average production cost. The average yield of the farms included in this study was 3621.0 kg ha-1. In this study, the pesticide cost per kilogram of grain was computed to be 0.019 € and the production cost per kilogram 0.155 € (Table 8).
Benefits of use of pesticides
Pesticides are needed to ensure and improve the yield and quality of products and industrial processes in order to provide safe food and a high standard of health in society. Numerous pesticides provide protection against dangerous pests and diseases or their vectors. Some pesticides are used to limit the perishability of products during storage, i.e., to protect the usable life of goods, food and other products. Without the use of targeted pesticides, many products (coatings, sealants) could not be used by consumers, while their proper use enables placing products on the market with no or low pesticide content, protecting the environment. The use of pesticides requires assessment of economic feasibility and of safety for human health and the environment, social consciousness, and international cooperation and competitiveness (Sexton et al., 2007). In recent times, political measures and the demands of numerous professional and public associations have been directed at carefully examining the impact of pesticides on environmental and human health, as well as pesticide benefits, risks and their application in accordance with hygienic standards. Very important is knowledge about the benefits and risks of pesticides and their rational application, with the motto "as much as necessary, and as little as possible." This way of application delivers the benefits of pesticides by achieving optimal results and long-term efficacy of the treatment, reducing potential risks to health and the environment, and using them in a well-targeted manner in the intended fields. The European Union has developed an action framework for the sustainable use of pesticides as plant protection products with the aim of harmonizing social, environmental and economic impacts (Directive 2009/128/EC). The ecological basis needs to be put in balanced proportion to socio-economic aspects.
For the sustainable use of pesticides it is necessary to conduct education on pesticide application safety data, on poisoning incidents with provable health damage, on the control of tools and machinery and best practice in pesticide application, on monitoring the risks and benefits of appropriate pesticide use, and on the rules for disposal of pesticide products after their use phase and of their packaging.
Crop production and protection from losses
Significant attention in agriculture is given to the production of crops that are major components of human food. Among them, three crops (wheat, rice, maize) occupy about 40% of total cropland and are essential sources of proteins, carbohydrates, lipids, vitamins and microelements in human nutrition all over the world. Soybean, cotton, sunflower, barley, rye, oat and sorghum also take a significant place in agricultural production of food for humans and animals. The aims of agricultural production are to increase the yield and quality of crops and to reduce losses (Knezevic et al., 2017). Improved crop management based on the selection of high-yielding genotypes, improved soil fertility through the application of fertilizers, irrigation and pesticide application has contributed to increasing the yield of agricultural crops (Paunovic et al., 2009; Kondic et al., 2012). However, in diverse agro-ecosystems crop production is conducted under the pressure of biotic and abiotic limiting factors (pests, insects, rodents, drought, frost, high and low air temperatures, etc.), which cause reduced yield and quality. The worldwide loss potential due to pests varies among crops, from below 50% in barley to more than 80% in sugar beet and cotton. At the beginning of the 21st century, losses in wheat, barley, soybean, sugar beet and cotton were estimated at 26-30%, while losses in maize, potatoes and rice were estimated at 35%, 39% and 40%, respectively (Oerke and Dehne, 2004). Very important for wheat producers is how to recognize the economic ceiling, i.e., the maximum yields that make economic sense given the relative prices of inputs and outputs, risk and other factors (Sumberg, 2012). Similarly, the study of Loyce et al. (2012) found that the agronomic optimum can differ depending on the soil and weather conditions and crop management practices, but also on the degree of risk. The greater potential for cost optimization is in crop protection compared to the costs of crop nutrition. Pesticides have been a major contributor to the growth of crop productivity and food supply (Sexton et al., 2007). Weeds have the highest loss potential (32%), while animal pests (18%) and pathogens (15%) have smaller effects. In addition, viruses are estimated to cause serious problems in potatoes and sugar beet in some areas, on average 6-7%, and about 1-3% in other crops (Oerke and Dehne, 2004). The measures of protection showed the highest efficacy at 53-68% and a lower efficacy of 43-50% of protection in food crops. Protection depends on the agro-ecological region, and the highest coefficient of efficacy in wheat was 28%. Weed control can be conducted by mechanical removal and herbicides, and the efficiency of weed control (68%) is higher than the control of animal pests (39%) and diseases (32%) by using pesticides (Oerke and Dehne, 2004). Increasing the quantity of crop production and food is possible through increasing productivity per unit area, on the basis of intensified pest control in various crops. When the pest problem is managed at the proper time, it improves crop productivity.
Therefore, the use of pesticides at the appropriate dose and time contributes to improving crop productivity and quality (Khan et al., 2010). Using pesticides at other than the recommended dose can reduce the efficacy of protection. Considering the task of preventing negative effects on the environment, the prevention of losses in crop production can be achieved by integrated pest management.
Conclusion
The application of pesticides can prevent losses caused by pests in agricultural production and can improve the quantity and quality of the produce. This study showed that the average area under wheat was 1.6 hectares, with an average yield of 3621.0 kg ha-1 and an average cost of wheat production of 563.15 €. The average use of pesticide active ingredients was 892.5 g ha-1, with a cost of 70.30 €, which is 12.48% of the wheat production costs. The analysis of pesticide use in wheat production on the individual farms in Serbia showed that the gain threshold was 319.54 kg ha-1, which is 8.80% of wheat production per hectare and is economically justified. The study found that the farmers applied herbicides and insecticides above the recommended amounts and fungicides below the recommended amounts, which can lead to inefficient protection, economic losses and environmental hazards.
Biocompatible Phosphorescent O2 Sensors Based on Ir(III) Complexes for In Vivo Hypoxia Imaging
In this work, we obtained three new phosphorescent iridium complexes (Ir1–Ir3) of general stoichiometry [Ir(N^C)2(N^N)]Cl decorated with oligo(ethylene glycol) fragments to make them water-soluble and biocompatible, as well as to protect them from aggregation with biomolecules such as albumin. The major photophysical characteristics of these phosphorescent complexes are determined by the nature of the two cyclometallating ligands (N^C) based on 2-pyridine-benzothiophene, since quantum chemical calculations revealed that the electronic transitions responsible for the excitation and emission are localized mainly at these fragments. However, the use of various diimine ligands (N^N) proved to affect the quantum yield of phosphorescence and allowed for changing the complexes' sensitivity to oxygen, due to the variations in the steric accessibility of the chromophore center for O2 molecules. It was also found that the N^N ligands made it possible to tune the biocompatibility of the resulting compounds. The wavelengths of the Ir1–Ir3 emission maxima fell in the range of 630–650 nm, the quantum yields reached 17% (Ir1) in a deaerated solution, and sensitivity to molecular oxygen, estimated as the ratio of emission lifetimes in deaerated and aerated water solutions, displayed the highest value, 8.2, for Ir1. The obtained complexes featured low toxicity, good water solubility and the absence of a significant effect of biological environment components on the parameters of their emission. Of the studied compounds, Ir1 and Ir2 were chosen for in vitro and in vivo biological experiments to estimate oxygen concentration in cell lines and tumors. These sensors have demonstrated their effectiveness for mapping the distribution of oxygen and for monitoring hypoxia in the biological objects studied.
Introduction
The development and application of transition metal phosphorescent complexes as non-invasive molecular oxygen sensors is a topical area of research. The possibility of using such complexes in biomedical experiments is particularly important, since tracking changes in O2 concentration is a fundamental issue in the studies of metabolic processes, as well as for diagnosing various pathologies and evaluating the efficiency of the therapy used [1][2][3][4]. The sensory response of phosphorescent complexes to the presence of molecular oxygen is based on the effective energy transfer from the excited triplet state of the phosphors to the ground triplet state of O2 molecules, which gives phosphorescence quenching accompanied by a decrease in emission intensity and in the lifetime of the excited state [5][6][7]. In early studies, phosphorescence intensity was actively used as an analytical signal to determine oxygen concentration. In the ratiometric approach, the sensor response was quantified by comparison of the phosphorescence intensity with that of a certain oxygen-independent external or internal standard; fluorescence emission was commonly used as the latter. However, this approach makes analytical systems more complicated (at least two emitters have to be used in the measurements) and suffers from the dependence of the results on the optical properties of the samples under study, as well as the possible influence of different factors, other than oxygen concentration, on the luminescence of the reference emitter (pH, temperature, viscosity, etc.).
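For reference, the quenching just described follows the standard Stern–Volmer law; the relation below is a textbook expression for purely dynamic (collisional) quenching, not a result of this work:

\[
\frac{I_0}{I} = \frac{\tau_0}{\tau} = 1 + K_{SV}\,[\mathrm{O_2}] = 1 + k_q\,\tau_0\,[\mathrm{O_2}]
\]

Here \(I_0\) and \(\tau_0\) are the emission intensity and excited-state lifetime in the absence of oxygen, \(I\) and \(\tau\) the corresponding values at oxygen concentration \([\mathrm{O_2}]\), \(K_{SV}\) the Stern–Volmer constant and \(k_q\) the bimolecular quenching rate constant. The intensity and lifetime forms are equivalent for dynamic quenching, which is why the lifetime readout discussed next carries the same information without the drawbacks listed above.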
The lifetime response to oxygen concentration variations is free from the above drawbacks and does not depend on the sensor concentration, which makes phosphorescence lifetime measurements more reliable and has resulted in the wide use of phosphorescence lifetime imaging (PLIM) in different analytical and biomedical applications, including oxygen sensing [7][8][9][10]. For the successful application of phosphorescent oxygen sensors in biology, they should be soluble in physiological (aqueous) media, demonstrate low toxicity and high stability and exhibit good photophysical characteristics (high quantum yield, sensitivity to the presence of molecular oxygen, emission and excitation in a required wavelength range). It is also worth noting that, in order to verify the practicable applicability of oxygen sensors in biomedical experiments, it is necessary to test them in various model biological media, since different factors, such as variations in pH values, temperature, salinity, viscosity, ions and the presence of biomacromolecules (primarily albumin, which has in its structure so-called "hydrophobic pockets"), can significantly affect the photophysical characteristics of the sensor. Exploration of Ir(III) orthometallated complexes as oxygen sensors has a rather long and rich history, beginning with a key publication [39] demonstrating the applicability of this type of emitter for the detection of hypoxia in biomedical research. Later (see a mini-review [40] and recent review [41] for a complete list of references), in the works of Dr. S. Tobita et al. and several other research teams [36,[42][43][44][45][46][47], it has been demonstrated that Ir(III) molecular emitters can be successfully applied in in vitro and in vivo biomedical studies to qualitatively detect hypoxia in the tumor microenvironment [45,48] and to measure oxygen concentration in cell cultures by using a ratiometric approach [49] or time-resolved techniques, such as PLIM [50,51]. Moreover, in some cases, directed subcellular localization is possible; for example, in mitochondria [44,52], which makes it possible to track (especially with simultaneous FLIM imaging of NADH and other coenzymes) metabolic processes and their variations with changes in nutritional conditions, oxygenation, etc. The significantly shorter lifetimes of iridium complexes compared with the platinum and palladium porphyrins (by one or two orders of magnitude) allow for much faster signal accumulation in PLIM experiments, making it possible to increase the acquisition rate and/or real-time image resolution. It must be mentioned, however, that the iridium chromophores display lower brightness compared with the above-mentioned Pt and Pd porphyrins, which is determined not by their quantum yields (which are quite comparable), but by the lower extinction coefficients of these compounds. Nevertheless, the versatility of the photophysical characteristics of iridium complexes, achieved by variations in the donor properties of the ligands, and the simpler synthetic methods open the way for their further modification and vectorization, making Ir(III) complexes the compounds of choice for the preparation of oxygen sensors for particular biological studies. As for many other phosphorescent transition metal complexes, their hydrophobicity and insolubility in aqueous media remain issues to be solved. One of the useful approaches consists of chromophore conjugation with synthetic [45] or natural [53,54] water-soluble polymers.
Recently, we have designed and synthesized a number of oxygen sensors based on iridium complexes [52,[55][56][57][58][59][60][61][62][63][64][65], which exhibit good sensitivity to molecular oxygen. The chromophores in these compounds are shielded from interaction with the bio-environment by relatively short oligo(ethylene glycol) tails, which makes intracellular internalization possible while simultaneously imparting water solubility and increasing the biocompatibility of these molecules. In this article, we present the synthesis, characterization and photophysical study of three novel phosphorescent [Ir(N^C)2(N^N)]Cl complexes (Ir1-Ir3) containing the same N^C metallated and different N^N diimine ligands. The oxygen-sensing properties of the most effective emitters (Ir1 and Ir2) were investigated in model physiological media and in living cells, as well as through in vivo experiments in a mouse tumor model by using time-resolved phosphorescence lifetime imaging. Details of synthetic and separation procedures, photophysical experiments and computational studies, as well as data concerning the description of installations and methods for conducting biological experiments, are given in the Supplementary Materials.
Synthesis and Characterization
A derivative of pyridine-benzothiophene was chosen as the cyclometallating ligand, since it is known that iridium complexes based on it exhibit emission in the red region of the spectrum [68][69][70], which can be useful for in vivo studies, since such luminescence falls within the transparency window of biological tissues (≥600 nm). To impart water solubility, biocompatibility and low toxicity to the target compounds, as well as to protect them from nonspecific interactions with biomolecules, we introduced short-branched oligo(ethylene glycol) substituents into the structure of the cyclometallating and diimine ligands. The scheme of the corresponding modification and of obtaining the new N^C ligand is given below (Scheme 1), while the modification reactions of the corresponding N^N ligands are described in the previous literature [52,60,62].
In the next stage of the synthesis (Scheme 2, upper part), upon the reaction of this cyclometallating ligand NˆC and iridium(III) chloride, we isolated a new dimeric complex [Ir 2 (NˆC) 4 Cl 2 ], which we further used as a starting material in obtaining the target phosphorescent compounds, Ir1-Ir3. The synthesis of the target complexes (Scheme 2, bottom part) was carried out by replacing the labile chloride ligands in the [Ir 2 (NˆC) 4 Cl 2 ] dimer with diimine NˆN chelates, with simultaneous dissociation of the dimer, to obtain the [Ir(NˆC) 2 (NˆN#)]Cl complexes in good preparative yields (59-95%), where the NˆN# diimine ligands were also modified with oligo(ethylene glycol) residues. The compounds obtained (ligands, iridium(III) dimer and the target iridium complexes) were comprehensively characterized by NMR spectroscopy and mass spectrometry. It should be noted that, due to the presence of a large number of oligo(ethylene glycol) substituents in the structure of the target complexes, we failed to obtain these compounds either in the form of high-quality single crystals, suitable for X-ray diffraction analysis, or as a polycrystalline material, suitable for elemental analysis of the CHNS content. Nevertheless, the number, multiplicity, location and integral intensities of 1 H signals in 1D NMR spectra, as well as their cross-correlations in 2D 1 H-1 H COSY and NOESY NMR spectra, made it possible to reliably establish the structure and composition of these compounds (Figures S1-S17 in the Supplementary Materials). Additionally, these conclusions were confirmed by high-resolution ESI + mass spectrometry; see Figures S3-S18. In the obtained mass spectra, the main signals corresponded to the molecular ions of these complexes, both in pure form and with the addition of H + or Na + cations. The isotopic distribution patterns were also in excellent agreement with those calculated for these particles. We also carried out quantum chemical calculations, which included optimization of the ground state structure (as an example, the optimized structure of the Ir1-0 complex is shown in Figure 1). Note that for the sake of simplicity in the optimization procedure, we used a model structural pattern, which differs from the structures of the Ir1-Ir3 compounds in that oligo(ethylene glycol) groups were replaced by methyl substituents to reduce the calculation time. It should be noted that such substitution is reasonable, since neither oligo(ethylene glycol) nor methyl substituents noticeably affect the central core structure and photophysical characteristics, being remote from the chromophoric center and having a similar nature from the viewpoint of donor-acceptor properties. The optimized structures for the Ir2 and Ir3 complexes are shown in Figures S22 and S23 in the Supplementary Materials. The obtained characteristics of the optimized structures (ligand disposition in the coordination octahedron, bond lengths and angles) are not exceptional and fit the structures of closely analogous complexes well, synthesized and characterized previously [68][69][70]. The obtained structural parameters of the optimized patterns are summarized in Tables S9-S11; see Supplementary Materials. 
It is also important to note that the optimized structural motifs of Ir1-Ir3 are in complete agreement with the data of proton NMR spectroscopy in terms of the ligand environment symmetry and the intramolecular non-bonding proton contacts observed in the NOESY spectra, which additionally confirm the correctness of the complexes' composition and structure determination by this method.
Photophysical Study
All compounds obtained exhibited luminescence in the red region of the visible spectrum. The absorption and emission spectra of Ir1-Ir3 are shown in Figure 2, and numerical spectroscopic data, together with emission quantum yields and lifetimes in aqueous aerated and deaerated solutions, are summarized in Tables 1 and S1 (see Supplementary Materials). The absorption spectra of the studied iridium complexes exhibit strong high-energy bands in the range 250-350 nm and low-energy absorption at ca. 450 nm with a tail extending below 550 nm. DFT analysis of the absorption spectra (see Tables S3-S8) showed that the observed high-energy bands may be assigned to a combination of transitions between the aromatic systems of the N^C and N^N ligands with some admixture of metal-to-ligand charge transfer (1MLCT). The low-energy bands at ca. 450 nm may be associated with intraligand and ligand-to-ligand charge transfer (1LLCT and 1ILCT) localized at the two N^C ligands with a minor admixture of 1MLCT transitions to the N^C ligands.
The lowest calculated S0 → S1 transition, which has a very low oscillator strength, is located well below 500 nm and is associated with electron density transfer to the N^N ligand from the N^C ligands and the iridium ion. Ir1-Ir3 demonstrated luminescence in aqueous solution in the red region of the visible spectrum, showing slightly structured emission bands with maxima at 632, 638 and 655 nm, respectively; see Figure 2 and Table 1. These complexes displayed large Stokes shifts of ca. 17-200 nm, lifetimes in the microsecond domain and strong sensitivity to the presence of molecular oxygen, which is indicative of the triplet nature of the emissive excited state, i.e., phosphorescence. Ir1 and Ir2 exhibit rather intense luminescence with quantum yields of 17.3% and 8.5%, respectively, in a deaerated aqueous solution. On the contrary, Ir3 was a very weak emitter with an emission intensity two orders of magnitude lower compared to its congeners, which makes its application as an oxygen sensor in biological experiments impossible. The DFT and TD-DFT calculations gave emission wavelengths that were in very good agreement with the experimental data (Table S2). Analysis of the nature of the emissive transitions (T1 → S0) for the studied complexes showed that the character of Ir3 emission is essentially different from those revealed for Ir1 and Ir2; see Figure 3 and the interfragment charge transfer Tables S4, S6 and S8. In the complexes Ir1 and Ir2, the relaxation processes occur through 3ILCT and 3MLCT transitions associated with the N^C cyclometallating ligand. For Ir3, the excited-state relaxation is mainly associated with the diimine N^N ligand and, as a consequence, 3MLCT (N^N → Ir) and 3LLCT (N^N → N^C) transitions are observed. Such a significant difference in the nature of the phosphorescence processes is most probably responsible for the strong difference in the emission quantum yield (of the order of 0.1% in deaerated water) of Ir3 as compared to Ir1 and Ir2.
One of the possible explanations of this observation may be the higher contribution of rotational non-radiative channels to excited-state relaxation for Ir3, due to the presence of two {-C(O)NHR} substituents at the diimine ligand compared to only one substituent at the N^C ligands in Ir1 and Ir2. The noticeably higher phosphorescence quantum yields of Ir1 and Ir2, as well as their high sensitivity to variations in molecular oxygen concentration in solution, led us to choose these compounds for further studies of their sensor properties and their applicability as luminescent oxygen probes in biosystems. To calibrate the dependence of the Ir1 and Ir2 excited-state lifetimes on oxygen concentration, we carried out measurements in water and in model biological solutions containing typical components of intracellular media: PBS buffer with the addition of bovine serum albumin (BSA) and DMEM with the addition of 10% fetal bovine serum (FBS); see Table 1 and Figure 4. The experiments in the model solutions make it possible to reveal the effect of salinity, pH and the presence of the main protein component of plasma (albumin) on the sensors' characteristics. The obtained data (Table S1) indicate that the lifetime is independent of pH and media salinity. Aqueous solutions were studied for comparison. The obtained calibration curves (Figure 4) indicated that the growth medium (DMEM + 10% FBS) and the model biological medium (PBS with 70 mM BSA) had an almost identical effect on the behavior of these sensors. The slopes of the Stern-Volmer curves in these media for both complexes coincided within the limits of experimental uncertainty and obviously differed from that in pure water. This observation can be explained by an increase in solution viscosity in the presence of biological macromolecules, which reduces the rate of oxygen diffusion in the sample under study and causes a drop in the Stern-Volmer quenching constant.
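As an illustration of how such a lifetime calibration can be used in practice, the sketch below builds a Stern-Volmer calibration and inverts it to estimate oxygen concentration from a measured PLIM lifetime. It is a generic sketch based on the standard Stern-Volmer relation, not code or data from this work; all numerical values are invented placeholders.

```python
import numpy as np

# Stern-Volmer lifetime calibration: tau0 / tau = 1 + Ksv * [O2]
# tau0 (zero-oxygen lifetime) and the calibration data would be measured in the
# relevant medium (e.g., DMEM + 10% FBS); the values below are placeholders.
tau0_us = 9.0                                            # hypothetical zero-oxygen lifetime, us
o2_uM = np.array([0.0, 50.0, 100.0, 200.0, 280.0])       # dissolved O2, micromolar
tau_us = np.array([9.0, 6.4, 5.0, 3.5, 2.9])             # hypothetical measured lifetimes, us

# Linear fit of (tau0/tau - 1) against [O2] gives the Stern-Volmer constant Ksv.
ksv_per_uM = np.polyfit(o2_uM, tau0_us / tau_us - 1.0, 1)[0]

def oxygen_from_lifetime(tau_measured_us: float) -> float:
    """Invert the Stern-Volmer relation to estimate [O2] from a PLIM lifetime."""
    return (tau0_us / tau_measured_us - 1.0) / ksv_per_uM

print(f"Ksv = {ksv_per_uM:.4f} per uM")
print(f"tau = 7.4 us -> [O2] = {oxygen_from_lifetime(7.4):.0f} uM")
```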
It is also impossible to exclude a reversible non-covalent interaction of the chromophores with hydrophobic pockets of albumin, which may shield the complexes from collisions with oxygen molecules and thus increase the observed excited-state lifetime. The obtained data indicate that the oligo(ethylene glycol) moieties are unable to completely shield the chromophore from the effects of the bio-environment, which in turn implies the use of calibration curves obtained in model physiological media for the estimation of oxygen concentration in biological samples. It is worth noting that Ir1 and Ir2 demonstrated high dark stability and a low photobleaching rate under irradiation at 355 and 365 nm. Thus, the obtained data clearly indicate that the phosphorescent complexes Ir1 and Ir2 are suitable for application in functional bioimaging as promising oxygen sensors, since they exhibit appreciable quantum yields, high sensitivity to the presence of molecular oxygen and good solubility and stability in aqueous solutions, including model biological media.
Biological Experiments
Using the MTT assay, it was found that all the compounds tested were non-toxic for cultured cancer cells CT26 (murine colorectal carcinoma cell line) and HCT116 (human colorectal carcinoma cell line) in the concentration range studied. Upon incubation with the complexes for 24 h, more than 90% of tumor cells remained viable at concentrations of 150 µM and below (Figure 5 and Figure S24 in Supplementary Materials). Next, the ability of Ir1, Ir2 and Ir3 to accumulate in living cancer cells was investigated. As mentioned above, Ir3 featured a very low emission quantum yield that gave an extremely weak luminescence signal inside cells, thus making this complex unsuitable for further biological testing. Using laser scanning microscopy, it was shown that Ir1 and Ir2 successfully penetrated into cultured cancer cells (Figure 6); the phosphorescence intensity of both complexes increased stepwise over the time period from 1 to 24 h of incubation. Complex Ir2 displayed more intense luminescence compared to Ir1 under normoxic conditions, due to a higher extinction coefficient at 405 nm (the excitation wavelength in the cell internalization experiments). Inside the cells, the complexes were distributed heterogeneously as granules in the cytoplasm and also as a homogeneous fraction in the cytosol.
After 3 h of incubation, the subcellular localization of Ir1 and Ir2 was analyzed using co-staining with organelle-specific dyes (Figure 7). It was found that both complexes colocalized moderately with lysosomes (M1 = 0.584 for Ir1 and M1 = 0.769 for Ir2) and almost did not colocalize with mitochondria (M1 = 0.236 for Ir1, M1 = 0.316 for Ir2). In order to assess the applicability of the Ir1 and Ir2 sensors for determination of the oxygen concentration inside cells, we conducted PLIM experiments, both under conditions of normal oxygenation and under hypoxia simulation. In the course of the imaging experiments, we did not observe emission intensity degradation, which is indicative of the sensors' stability in physiological media. These experiments demonstrated that upon modeling hypoxia, the phosphorescence lifetimes of the Ir1 and Ir2 complexes increased by approximately two times, from 3.8 µs and 3.5 µs to 8.1 µs and 6.3 µs, respectively, indicating the high sensitivity of both complexes localized in the cell cytoplasm to variations in oxygen content (Figure 8). It should also be noted that the difference between normoxic and hypoxic phosphorescence lifetimes was slightly more pronounced for Ir1 (4.3 µs) than for Ir2 (2.8 µs), and therefore the Ir1 complex was used for further in vivo testing on mouse tumor models.
After 30 min of local administration of Ir1 into tumors in vivo at the concentration of 250 µM, the phosphorescence signal was detected primarily in the cytoplasm of tumor cells. Inside the cells, the complex was distributed more homogeneously in comparison with cultured cells in vitro. We also checked the intensity of phosphorescence after 3 h and 6 h of local sensor administration (Figure S26 in Supplementary Materials) and did not observe significant changes. Therefore, for greater convenience and reliability of the experiment, we used the smallest incubation interval of 30 min. In CT26 tumor cells in vivo, the phosphorescence lifetime of Ir1 was 7.4 ± 0.8 µs on average (Figure 9). In HCT116 tumor xenografts, the phosphorescence lifetime of Ir1 was 8.6 ± 0.5 µs, indicating their more hypoxic status (Figure S25 in Supplementary Materials). However, within each CT26 tumor, a high heterogeneity of phosphorescence lifetime, and consequently oxygen distribution, was observed at the cellular level. In the same tumors, the phosphorescence lifetimes varied from ~6.8 µs to ~9.7 µs, which, in general, corresponded with the values of the phosphorescence lifetimes typical of different degrees of hypoxia and the lifetime data measured for this complex in cuvette experiments (Table 1). It is important to note that the dose of the complex used did not induce any acute toxic effect on mice and did not change the typical histological structure of the tumor tissue, thus proving the potential biocompatibility of these phosphorescent O2 probes. Therefore, the conducted in vitro and in vivo experiments showed the high potential of the new Ir1 complex for assessing tissue oxygenation using PLIM.
Conclusions
We have obtained and comprehensively characterized three new target iridium complexes, [Ir(N^C)2(N^N#)]Cl, with various diimine N^N ligands in their composition. These complexes were decorated with short-branched oligo(ethylene glycol) groups to give them water solubility, biocompatibility and low toxicity. These compounds exhibited oxygen-dependent phosphorescence.
The study of their photophysical properties made it possible to determine the two most promising sensors for further biological testing. The quantum yields of these complexes were moderate and reached 17% in deaerated water. The emission wavelengths were in the transparency window of biological tissues. Biological studies showed that these compounds have low toxicity. Cellular in vitro experiments proved that these sensors exhibit a significant lifetime response to changes in the oxygen concentration in the sample. Simulation of hypoxia in cells led to a two- and threefold increase of these values. The effectiveness of these sensors allowed their application in in vivo experiments on living mouse tumor models. In these experiments, the sensors also made it possible to record the presence of significant hypoxia in tumors, as well as their heterogeneity. Thus, we have obtained and studied very promising new molecular oxygen sensors based on low-toxicity biocompatible phosphorescent iridium complexes.
7,868
2023-06-26T00:00:00.000
[ "Chemistry", "Biology" ]
Bioinspired flatfish detection using electrical impedance measurements Bottom trawling for flatfish by means of tickler chains has a high ecological impact due to the continuous seabed disturbance, low selectivity and high fuel costs. This issue could be significantly mitigated by using localized startle stimuli, triggered by a detection system that selectively targets flatfishes of landable size. Flatfish, however, constitute a significant challenge for remote detection, due to their low optical and acoustical signatures. Some species of predatory fish feeding on flatfish overcome this issue by using electroreception to localize they prey, even if it is buried in bottom sediments. We take this phenomenon as an inspiration in an attempt to develop a biomimetic remote fish detection technique based on electrical impedance measurements. We constructed a detection system including a set of electrodes and a low-cost analog front-end. The electrodes were mounted on a dedicated frame and dragged above a layer of sand inside a tank with sea water and several common sole (Solea solea). An underwater camera was used to acquire video recordings synchronized with impedance data for reference. We demonstrate that fish presence below the electrodes manifests itself by changes in the measured resistance and reactance values. This phenomenon occurs even if the fish is covered with a layer of sand. The results demonstrate the potential of bioinspired remote flatfish detection, which could be highly useful for monitoring or targeted stimulation. Background Many flatfish species are a common target for commercial fisheries (Rijnsdorp et al 2007).As demersal organisms, they are harvested using various types of bottom trawling techniques (Cashion et al 2018, Santos et al 2022)-including beam trawls with tickler chains (Boute 2022).Such means introduce constant disturbance of the seafloor and entail a significant negative environmental impact-related not only to the caused mechanical damage, but also, e.g. to increased fuel consumption of the fishing vessels and limited selectivity (Santos et al 2022).Electrical pulse fishing mitigates several of these issues, but it also raised concerns regarding potential injuries caused to different marine organisms (Miranda andKidwell 2010, Soetaert et al 2016).Despite demonstrated advantages over tickler chain trawls, this technique has been banned in many parts of the world (Boute 2022). The ecological impact of bottom trawling gear could be minimized if the continuous stimuli used to startle and harvest marine organisms from the seafloor would be replaced with a targeted one, triggered by an integrated remote detection system.Such a detection system would preferably target only marine organisms of interest.Flatfish detection, however, is challenging due to their stealth nature.Natural camouflage and the habit of burrowing in the sand make the optical identification with cameras extremely difficult.They also lack a swim bladder which results in low acoustic signatures and near invisibility to echosounders.Some species of predatory fish feeding on flatfish overcome this issue by sensing weak electric signals to localize their prey, even if it is buried in bottom sediments.Such phenomenon was demonstrated e.g. 
in sharks and rays (Kalmijn 1971).This shows that nonpropagating weak electric fields can be efficiently used to detect fish in highly conductive sea water and within seafloor layers.Some species of freshwater fish use active electrolocation to detect changes in electrical properties in the surrounding medium associated with presence of potential prey (von der Emde 1999).Active sensing by means of generating electric current flow in water and measuring induced voltage drop relies only on contrast between electrical properties of fish and ambient environment.It is independent of the amplitude of weak bioelectric signals emitted by the target, and thus can potentially ensure better detection performance than passive means.We take both of the described biological mechanisms as an inspiration in an attempt to test feasibility of electrical impedance measurements for flatfish detection. Fundamental mechanisms and phenomena underlying electrical impedance measurements for fish detection in freshwater were described in a recent study (Nowak and Lankheet 2022).The introduced approach extended the concept of the so-called resistivity fish counters used in stock estimation in flowing waters and in aquaculture (Appleby and Tipping 1991, Smith et al 1996, Li et al 2021).Flatfish detection in a marine environment, however, presents a broad range of different challenges.First, electrical resistivity of sea water is much lower than resistivity of fresh water, approaching a short-circuit (typically approximately 0.2 Ωm for sea water vs. between 1 and 100 Ωm for fresh water (Nowroozi et al 1999)).As a result, the relation between the effective conductivity of fish tissues and the ambient water is reversed and detectability would depend on decreased conductance due to the presence of a fish.Second, flatfish lie on a sediment layer, i.e. at the border of two media with different electrical characteristics.They also often cover themselves with a layer of sediment.Third, as the flatfish mostly rest motionless the measurement electrodes need to move above the bottom to scan the area.Such motion might be the source of noise and disturbances of various kinds.All these factors raise questions on the feasibility of flatfish detection by means of electrical impedance measurements, and on the potential electrical signatures of fish presence.The present study addresses these questions and paves the way for further investigations aimed at specific applications. 
Electrical impedance fundamentals
Electrical impedance is a quantity relating a harmonic voltage applied between a pair of electrodes to the resulting electrical current flow. It takes both amplitude and phase relations into account and is expressed as a complex number:

Z = Re(Z) + j·Im(Z)   (1)

The real part of the impedance, Re(Z), is called resistance and it describes the voltage to current amplitude ratio for the special case in which both signals are in phase (or the signal frequency equals 0). The imaginary part of the impedance, Im(Z), is called reactance and describes capacitive or inductive characteristics of the measured object. All biological samples, due to the electrical properties of cells and cell membranes, are characterized by purely capacitive behavior, and their reactance is always negative (Grimnes and Martinsen 2014). Impedance is a function of signal frequency, and it depends on the electrical properties of the tested sample, as well as on the overall system geometry, including the shapes and arrangement of the electrodes. Properties of the measurement equipment and the connecting cables also contribute to the eventual result. The challenge for fish detection using electrical impedance is to find the signature of changes in the signal that selectively relate to absence or presence of a fish.
Although impedance measurements can be conducted with just a single pair of electrodes, it is often beneficial to use separate current carrying (CC) electrodes for exciting electric current flow within the sample and separate voltage pickup (PU) electrodes for measuring the induced voltage drop. In this case the impedance can be expressed as:

Z(ω) = (U_PU / I_CC)·e^(jθ)   (2)

where I_CC is the amplitude of the applied harmonic current, U_PU denotes the measured amplitude of the resulting harmonic voltage, ω = 2πf is the angular frequency, where f is the frequency of the applied harmonic signal, and θ is the phase shift between the current and voltage signals.
Similarly, actively electrolocating fish use separate, specialized organs to generate electric current flow, and electroreceptors to sense the induced voltage drop (von der Emde 1999). In this way it is possible to mitigate the effects related to ionic double-layer formation at the electrodes' surfaces, which would introduce additional impedance components, distorting the recorded changes in impedance. This is especially important in marine environments, and we therefore use a four-electrode system. Impedance measurements utilize low-amplitude signals and are harmless and imperceptible to fish (Nowak and Lankheet 2022). They remain harmless even in an extreme case when electrodes are pressed directly against the body of a fish (Cox and Hartman 2005, Hartman et al 2015).
Methods
The laboratory setup used for the experimental investigations is presented in figure 1. The measurements were conducted inside a tank with dimensions 1200 mm × 1000 mm × 700 mm (length × width × height) filled with sea water (∼50 cm). A layer of sand, on average about 1 cm, was deposited on the bottom and three adult common sole (Solea solea) specimens were swimming freely in the tank. On top of the tank we put a pair of guides made of aluminum v-slot profiles, arranged perpendicularly along the width. The guides were used to slide a measurement module (figure 1, left) comprising sensors and measurement equipment at a constant height above the bottom. At the end of one of the profiles we attached a flat screen made of an acrylic plastic plate, which served as a reflector and a reference for a time-of-flight (ToF) distance sensor.
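In practice, a measurement front-end of the kind described in the following section reports an amplitude ratio and a phase shift, from which resistance and reactance follow directly via Equation (2). The snippet below is a generic illustration of that conversion, not code from this study; the example numbers are made up.

```python
import cmath

# Generic conversion from measured amplitude/phase to complex impedance,
# resistance and reactance (Equation (2)). Example values are made up.

def impedance_from_measurement(u_pu_volts: float, i_cc_amps: float, theta_rad: float) -> complex:
    """Z = (U_PU / I_CC) * exp(j*theta): voltage-to-current ratio with phase."""
    return (u_pu_volts / i_cc_amps) * cmath.exp(1j * theta_rad)

z = impedance_from_measurement(u_pu_volts=0.40, i_cc_amps=0.008, theta_rad=-0.05)
resistance = z.real          # Re(Z), in ohms
reactance = z.imag           # Im(Z), negative for capacitive (biological) samples
print(f"R = {resistance:.2f} ohm, X = {reactance:.2f} ohm")
```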
At the bottom of the measurement module were four electrodes made of 0.1 mm thick phosphor bronze plates: two outer CC electrodes with dimensions 20 × 20 mm, and two inner PU electrodes with dimensions 10 × 20 mm. The distance between the adjacent CC and PU electrodes was 2 mm, and the distance between the inner PU electrodes was 200 mm. The electrodes were attached to a transparent beam made of plexiglass and connected via shielded coaxial cables to the impedance measurement electronic circuit. The impedance measurements were conducted using an AD5940 analog front-end (AFE) controlled by an ADICUP3029 microcontroller (both Analog Devices, USA). In-house embedded software for the microcontroller ensured streaming of impedance values via a serial port at a rate of approximately 10 Hz. The measurement circuit was connected to a PC with a USB cable. The frequency of the harmonic current/voltage signals used for the measurements was set to 50 kHz, and the output voltage amplitude was limited to 800 mV peak-to-peak.
Video recordings of the measurement area were obtained with an underwater GoPro camera (Hero 10, GoPro, USA) inside a waterproof case with a sealed USB connection. The camera was mounted above the dielectric, transparent plexiglass frame, looking downwards at the electrodes and the 'seabed'. The camera was synchronized to the impedance measurements via the USB connection by means of an in-house Python script. The position of the measurement module was determined using a distance measurement device constructed using a Raspberry Pi Pico board (Raspberry Pi Ltd, UK) and two ToF distance sensors (VL53L1X, STMicroelectronics, Switzerland). One of the sensors was mounted on the top part of the measurement module, above the water surface, and was looking forward, measuring the distance to the screen at the end of the tank. The second sensor was sealed and attached close to, and at the same height as, one of the electrodes, at the bottom part of the module. It was looking downwards, measuring the distance from the bottom. Data acquisition and processing on the host computer were conducted using in-house Python scripts, ensuring synchronization between impedance measurements, video recordings and distance measurements. During measurements the electrodes were located approximately 70 mm above the bottom. We moved the measurement module across the tank, over the positions of flatfish, which mainly remained still on the sand. In some cases we recorded the signal with stationary electrodes and one of the fish swimming below them.
Figure 2 presents resistance and reactance values determined while moving the measurement module back and forth across the tank, passing over a single flatfish buried in the sand. Fish passes are clearly visible as significant positive resistance peaks and negative reactance peaks. Extracted video frames illustrate an example of such an event, as well as an empty part in between, with only sand. Fish positions are indicated in the figure with color markings. The fish was approximately 35 cm in length. The same results, plotted as functions of distance measured along the tank, are presented in figure 3, further systematizing the results depicted in figure 2.
Lines show averages and corresponding standard deviations over eight subsequent passes. The approximate location of the fish is indicated with a corresponding icon below the plot. Flatfish presence is indicated by highly reliable increments of resistance and decrements of reactance. Highly similar results were obtained for other fish at different locations, either on top of or in the sediment. Figure 4 illustrates results of impedance measurements with electrodes resting still close to the middle of the tank with a fish passing below. The appearance of a fish is indicated by a strong positive peak in the resistance plot and a strong negative peak in the reactance plot. The time locations of the peaks correspond to the fish position just below the electrodes, as captured in the synchronized video frames. Video footage with resistance and reactance values in overlay is included in the supplementary data. The datasets used to generate the presented figures are openly available in a repository (Nowak 2023).
Discussion
The resistance and reactance values measured inside the sea water tank remain at steady levels when the electrodes are held still above the bottom or when they are moved across the tank and no fish is present. Steady-state fluctuations of the measured values are below 100 mΩ for resistance and less than approximately 40 mΩ for reactance, which includes the influence of all the noise sources. Those include both internal noise of the measurement board and external interferences originating from e.g. water pumps, electronic systems, and their power supplies. Also, in the case of the moving electrodes, vibrations of the setup and movement of the connecting wires could contribute to the overall noise level; however, in the conducted experiments this influence was not significant. Against this background, the changes in the measured impedance components corresponding to the appearance of a fish below the electrodes are at least an order of magnitude higher. In this regard, electrical impedance measurements proved to be valid and feasible for flatfish detection in the simulated marine environment. The described results were obtained using electrodes dragged approximately 70 mm above the bottom. Such a close-range detection setup should be suitable for bottom fishing, in which sensors integrated with the fishing gear are dragged just above or directly on the seafloor (Boute 2022). The exact relation between the achievable detection performance and the distance from a fish is complex, depending on electrode geometry and configuration, fish size and orientation, and the properties of the ambient medium. It can be assumed, as a rule of thumb, that the effective detection distance will not exceed the electrode spacing (Nowak and Lankheet 2022). Those estimations seem to be in line with the observations on achievable detection ranges in actively electroreceptive fish (Moller 1980).
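Given the signature just described (a resistance increase paired with a reactance decrease, roughly an order of magnitude above the noise floor), a simple threshold detector is enough to flag candidate fish events in recorded traces. The sketch below is a hedged illustration rather than the processing actually used in the study; the window length and thresholds are assumptions loosely based on the noise levels quoted above.

```python
import numpy as np

# Simple threshold detector for fish events in resistance/reactance traces.
# Thresholds are assumptions: several times the steady-state fluctuations
# reported above (<100 mOhm for resistance, <40 mOhm for reactance).

def detect_fish(resistance, reactance, r_thresh=0.5, x_thresh=-0.2, win=501):
    """Return a boolean mask of samples flagged as a fish event.

    The slowly varying baseline is removed with a long running median, then a
    sample is flagged when the resistance rises above r_thresh (ohms) while
    the reactance simultaneously drops below x_thresh (ohms).
    """
    r = np.asarray(resistance, dtype=float)
    x = np.asarray(reactance, dtype=float)

    def running_median(sig):
        pad = win // 2
        padded = np.pad(sig, pad, mode="edge")
        return np.array([np.median(padded[i:i + win]) for i in range(len(sig))])

    dr = r - running_median(r)      # baseline-corrected resistance
    dx = x - running_median(x)      # baseline-corrected reactance
    return (dr > r_thresh) & (dx < x_thresh)

# Example: synthetic trace with one fish-like excursion.
r = 50 + 0.05 * np.random.randn(2000)
x = -5 + 0.02 * np.random.randn(2000)
r[900:1000] += 1.5                  # resistance peak while passing the fish
x[900:1000] -= 0.6                  # accompanying reactance dip
mask = detect_fish(r, x)
print("flagged samples:", mask.sum())
```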
Bioinspired flatfish detection based on the changes in impedance values is possible thanks to differences in electrical properties between fish tissues and the ambient sea water and bottom sediments.Electrical conductivity of sea water is higher than the effective conductivity of flatfish, and thus the measured resistance value increases when a fish appears in the close vicinity of the electrodes.Although the sediment has a higher resistance than the water, this was also true for a fish in the sediment.As expected for animal tissues, the increase in resistance was accompanied by a clear decrease in reactance.Together, these changes constitute a clear signature for the presence of a fish, either for a stationary fish when electrodes are moved, or for a swimming fish underneath stationary electrodes. All the results were obtained using a low-cost AFE, which constitutes an important step closer towards practical applications including a more selective, triggerable bottom fishing gear with a minimum ecological impact.Such a gear would generate stimuli only when the detection system would signal the presence of a fish.In this way, disturbance of the seafloor would be limited to the absolute minimum, for any type of startle stimulus that could be triggered based on detection.An obvious candidate for such a stimulus, given the availability of electrical electrodes in the system, would be a combination of detection and electrical pulsing.This would allow for a minimum of bottom disturbance as both detection and stimulation would operate remotely.Since pulsing would be required only at locations where a fish is detected, and the electrodes could be constructed in a way to only affect a limited area underneath, any potential side effects of pulsing would also be substantially reduced.Other possible applications could consist of continuous monitoring of fish densities, and only start periods of stimulation in areas of high fish densities.Yet another application could be to install measurement electrodes as counters of fish entering the net, which would allow to optimize stimulation techniques while towing, identify favorable fishing grounds and determine optimal tow durations. In accordance with the operating principle, the impedance-based fish detection system will also react to other objects with electrical properties contrasting to the ambient medium.A detailed discussion on this topic, including underlying physical phenomena and discrimination methods would require separate, extensive studies and falls beyond the scope of the current study.Still, some important, general remarks on this issue can be briefly made.First, from the point of view of the considered bottom trawling applications, system sensitivity is much more important than selectivity.A significant reduction in negative environmental impact can be achieved even for relatively low selectivity rates, compared to non-triggered bottom fishing techniques.Second, impedance-based techniques can be integrated with other detection modalities, such as optical means, to increase performance beyond individual limitations of each.Finally, the approach introduced here offers innumerable possibilities for adjustments and discovering new ways for improvements.For instance, exploiting specific capacitive properties of living tissues (Grimnes and Martinsen 2014, Nowak and Lankheet 2022) might enable to differentiate living from non-living objects by analyzing the reactance component of the detection signal. 
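As a toy illustration of that last suggestion, the reactance channel could be used to separate living tissue, which always contributes a capacitive (negative) reactance component, from inanimate objects that mainly perturb the resistive part of the signal. The rule and the numeric threshold below are assumptions, not a validated discrimination method.

```python
# Toy discrimination rule based on the sign/size of the reactance change.
# The threshold ratio is an assumption used purely for illustration.

def classify_event(delta_r_ohm: float, delta_x_ohm: float, ratio_threshold: float = 0.1):
    """Classify a detection event from its baseline-corrected impedance changes.

    Living tissue is expected to combine a resistance increase with a clear
    capacitive (negative) reactance change; objects with little capacitive
    behaviour mostly change the resistance alone.
    """
    if delta_r_ohm <= 0:
        return "no event"
    if delta_x_ohm < 0 and abs(delta_x_ohm) / delta_r_ohm > ratio_threshold:
        return "likely living (capacitive signature)"
    return "likely non-living (mostly resistive)"

print(classify_event(1.4, -0.55))   # fish-like event
print(classify_event(1.2, -0.02))   # e.g. a stone or debris
```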
The present study demonstrates proof of principle for bioinspired flatfish detection using electrical impedance measurements. Obviously, the topic is rather broad and further studies and optimizations may be required for a specific type of application. Topics for further study may include:
• Determining an optimal geometry and configuration of measurement electrodes for achieving a specified spatial detection profile.
• Performance of the detection system with a broad range of seafloor sediment types and detectability of flatfish as a function of depth in the sediment.
• Influence of other marine organisms on the operation of the detection system and methods to make it more selective (in terms of both electrode/hardware configuration, as well as signal processing techniques and discrimination algorithms).
• Methods of integrating the detection system with bottom fishing gear.
• Determining optimal frequencies, or combinations of frequencies, for selective detection of flatfish.
• Further fundamental studies on mechanisms and phenomena exploited by electroreceptive fish species.
We hope to address these and other related issues in our future studies. We also hope that further investigations on the capabilities, behavior, and phenomena utilized by electrolocating fish can provide important cues in this regard.
Summary
Electrical impedance measurements allow flatfish on or within a layer of sand to be detected remotely. The presence of a fish manifests itself by significant increases in the measured resistance values and decreases in the reactance values. The observed changes are at least an order of magnitude higher than the total noise levels of the system. The experiments were conducted using a low-cost measurement setup, suitable for practical, large-scale applications. The introduced detection technique could be integrated with bottom fishing gear to provide a trigger for stimulus pulses. Such an approach would enable increased sensitivity and selectivity of the gear, while minimizing the negative ecological impact.
Figure 2. Resistance and reactance values as functions of time, determined while sweeping electrodes approximately 70 mm above the bottom (top plot). Images below present video frames captured by the camera at the time moments indicated by the vertical dashed lines. Green ellipses indicate approximate locations of the flatfish covered with a layer of sand.
Figure 3. Resistance and reactance values as functions of distance along the tank, determined while sweeping electrodes four times back and forth approximately 70 mm above the bottom. Solid lines indicate mean values, while shaded areas indicate standard deviations. The flatfish icon indicates the approximate location of the fish on the bottom.
Figure 4. Resistance and reactance values as functions of time, determined with electrodes held still approximately 70 mm above the bottom (top plot). Images below present video frames captured by the camera at the time moments indicated by the vertical dashed lines. During the measurement the flatfish swam below the electrodes (visible in the middle and right frames).
4,576.6
2023-10-18T00:00:00.000
[ "Biology", "Environmental Science" ]
A semi-empirical solution for estimating the elastic stresses around inclined mine stopes for the Mathews-Potvin stability analysis The Mathews-Potvin stability method is widely used in the Canadian mining industry as a starting point to determine the maximum dimensions of mine stopes. However, it cannot be applied to inclined (more frequently encountered) mine stopes without conducting numerical modelling to obtain the stress factor A, defined as a function of the ratio of unconfined compressive strength of intact rock to the induced principal stress on the exposed stope walls. The need to conduct numerical modelling significantly limits the application of the Mathews-Potvin method. In addition, given its empirical nature and main application for preliminary design, it is deemed undesirable to conduct numerical modelling, especially elaborate modelling. Alternatively, theoretical methods can provide a much simpler and quicker way to estimate stresses around stopes and the corresponding stress factors. Over the years, a large number of studies have been conducted to estimate stresses around openings excavated with various crosssections. However, theoretical or graphical solutions remain unavailable for mine stopes that typically consist of horizontal floor and roof, and two parallel inclined walls (hangingwall and footwall). To remedy this situation, a series of numerical simulations is first performed for openings with vertical and inclined walls, including typical stopes commonly encountered in underground mines. A group of empirical solutions is then formulated to estimate the induced principal stresses at the roof centre and mid-height of the stope walls. The validity and predictability of the proposed solution have been verified using additional numerical simulations. The proposed solution can thus be used to calculate stresses and the resultant stress factors A around typical mine stopes with any inclination angle and height to width ratio, under any in-situ stress state, without conducting numerical modelling. Introduction Ground stability is a challenging issue frequently faced by rock engineers. The trend towards larger and more powerful equipment to improve productivity requires larger underground openings. However, the dimensions of stable underground excavations are finite, limited by field stresses and rock mass conditions. The correct design of underground openings is thus of paramount importance. The Mathews-Potvin method is a simple and useful tool for mining engineers. It is commonly used as a starting point to determine the dimensions of stopes or design the required support (e.g., Mathews et al. 1981;Potvin 1988;Hutchinson and Diederichs 1996;Li and Ouellet 2009). The Mathews-Potvin method is also used to estimate the unplanned dilution due to the slough that can take place around the hangingwall and footwall during blasting or muck-out of blasted ore (Scoble and Moss 1994;Clark and Pakalnis 1997;Kaiser et al. 1997;Kaiser 1999, Diederichs, Kaiser, andEberhardt, 2004;Papaioanou and Suorineni 2016). Another application of the Mathews-Potvin method is to estimate the minimum span exposures to ensure the cavability (self-collapse) of ore in caving mining methods (Sunwoo, Jung, and Karanam, 2006). 
For mine stope design, a major limitation associated with the Mathews-Potvin method is the need to conduct numerical simulations to obtain a key parameter, called the stress factor (A), which depends on the ratio of the unconfined compressive strength of intact rock to the induced principal stress (σ1) on the walls of the opening. When the geometry of the openings is simple, such as a circular cross-section, analytical solutions exist for estimating the stress around such openings (Kirsch, 1898; Hiramatsu, 1962; Logie and van Tonder, 1967; Hiramatsu and Oka, 1968; Li, 1997). More sophisticated analytical solutions are equally available for estimating the elastic stresses around tunnels with a vertical axis of symmetry, including openings with elliptical, rectangular, and arched walls (Logie and van Tonder 1967; Hoek and Brown 1980; Gerçek 1997; Exadaktylos and Stavropoulou 2002; Brady and Brown 2013). For vertical mine stopes, graphical solutions used to estimate the induced stresses have been elaborated in two dimensions (2D plane strain; Potvin 1988; Stewart and Forsythe 1995) and three dimensions (Mawdesley, Trueman, and Whiten, 2001; Vallejos, Delonca, and Perez, 2017). In practice, however, orebodies are always inclined to a greater or lesser degree. Few theoretical or graphical solutions are available to assess the induced stresses around inclined stopes with one horizontal floor, one horizontal roof, and two parallel and inclined walls (hangingwall and footwall). Numerical modelling has to be performed to obtain the induced stresses for each specific mining project (Li and Ouellet, 2009). This requires not only the availability of pertinent software and hardware, but also qualified numerical modellers who have a good understanding of field conditions and the behaviour of the rock mass, and know in particular how to obtain reliable numerical outcomes. It is thus desirable to have theoretical solutions that can be used to estimate the stresses around inclined stopes. In this paper, the Mathews-Potvin method is first briefly recalled for the sake of completeness. Numerical simulation results are then presented by considering inclined mine stopes surrounded by a homogenous, isotropic and linearly elastic rock mass. A large range of wall inclination angles, height to width ratios, and in-situ stresses are considered. Semi-empirical solutions are proposed to estimate the induced principal stresses on stope walls by applying the principle of superposition of linear elasticity theory through a curve-fitting technique applied to the numerical results. The prediction capability of the proposed semi-empirical solutions is verified with additional numerical simulations. A typical example is also given to illustrate the application of the proposed solution.
The Mathews-Potvin method
The Mathews-Potvin method is an empirical method based on numerous field observations. This method relates the stability of an exposed wall to two factors: the hydraulic radius (HR) and the stability number (N'). The former is defined as (Potvin 1988):

HR = (area of the exposed wall) / (perimeter of the exposed wall)   [1]

The stability number (N') of the exposed wall is defined by the following equation:

N' = Q' × A × B × C   [2]

where Q' is a modified rock tunnelling quality index, A is the rock stress factor, B is the joint orientation adjustment factor, and C is the gravity adjustment factor.
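For a rectangular exposed wall, Equations [1] and [2] reduce to a few lines of arithmetic, as sketched below; the strike length and factor values in the example are assumptions used purely for illustration, and the following paragraphs describe how Q', A, B and C are actually obtained.

```python
# Hydraulic radius and stability number for a rectangular exposed wall,
# following Equations [1] and [2]. Example inputs are assumed values.

def hydraulic_radius(width_m: float, height_m: float) -> float:
    """HR = wall area / wall perimeter, in metres (Equation [1])."""
    return (width_m * height_m) / (2.0 * (width_m + height_m))

def stability_number(q_prime: float, a: float, b: float, c: float) -> float:
    """N' = Q' * A * B * C (Equation [2])."""
    return q_prime * a * b * c

# Assumed example: a 20 m long, 30 m high hangingwall with Q' = 6,
# A = 0.4, B = 0.3 and C = 6.
hr = hydraulic_radius(20.0, 30.0)
n_prime = stability_number(6.0, 0.4, 0.3, 6.0)
print(f"HR = {hr:.1f} m, N' = {n_prime:.2f}")
```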
The parameter Q', which results from a modification of the Rock Tunnelling Quality Index (Q) of Barton, Lien, and Lunde (1974), is defined as follows:

Q' = (RQD / Jn) × (Jr / Ja)   [3]

where RQD is the rock quality designation, Jn is the joint set number, Jr is the joint roughness number, and Ja is the joint alteration number. The rock stress factor A is a function of the ratio between the unconfined compressive strength of the intact rock (σc) and the induced major principal stress (σ1) on the exposed walls of a stope. The stress factor A can be expressed as follows (Potvin, 1988), which is further illustrated in Figure 1a:

A = 0.1 for σc/σ1 < 2; A = 0.1125(σc/σ1) - 0.125 for 2 ≤ σc/σ1 ≤ 10; A = 1.0 for σc/σ1 > 10   [4]

Factor B considers the influence of joints on the stability of the studied exposed wall (Potvin 1988). It represents the effect of the angle between the most critical joints and the wall, as shown in Figure 1b. The gravity factor C depends on the individual influence of the inclination of the exposed wall and the inclination of the critical joints, as illustrated in Figure 1c. Once the stability number N' and hydraulic radius HR are determined, the stability of the exposed wall can be evaluated using the Mathews-Potvin chart, as shown in Figure 1d. From Equation [4], one notes that the rock stress factor A proposed by Potvin (1988) has some limitations when the rock is submitted to a tensional stress. In this case, the induced principal stress σ1 is zero (for a 2D model) or non-zero in the third dimension (for a 3D model). Factor A can thus reach its maximum value of 1.0 independent of the tensile stress and tensile strength of the rock, which is unrealistic. Therefore, Equation [4] is not entirely adequate to describe the stability or failure of rock by tension. To overcome this limitation, Li and Ouellet (2009) proposed two approaches. The first is to neglect the tensile strength of the rock, and to fix A = 0.1 for σ3 ≤ 0 (where σ3 is the induced tensile stress around the excavation). The second approach is to compare the tensile stress with the tensile strength of the rock, so that A = 0.1125|σt/σ3| - 0.125 (same form as Equation [4]; where σt is the tensile strength of the intact rock). Zhang, Hughes, and Mitri (2011) adopted a similar approach to that of Li and Ouellet (2009) when the rock is submitted to tension. Suorineni (2012) concluded that the stress factor for tension (and other factors) needs to be calibrated. Further discussion on the definition of this factor is beyond the scope of the paper, but it is seen that the determination of factor A depends on knowledge of the induced principal stresses on the exposed stope walls.
Numerical modelling of the elastic stresses around inclined stope walls
Figure 2 shows the typical mine stope geometry considered in this study, which consists of a horizontal roof, a horizontal base and two parallel and inclined walls. In the figure, W and H are the width and height of the stope, respectively; b is the inclination angle of the stope walls; σv and σh on the stress block represent the vertical and horizontal natural in-situ principal stresses, respectively; the out-of-plane stress is another in-situ principal stress. The mid-points have been denoted as U and V on the surfaces of the roof and sidewall, respectively. The numerical code Plaxis 2D (Brinkgreve and Vermeer 1999), based on the finite element method and commonly adapted for rock mechanics and geotechnical engineering, is used here to evaluate the stresses around mine stopes. The sign convention used by Plaxis 2D considers compression negative (-) and tension positive (+). However, the results presented in this study follow the sign convention commonly used in rock mechanics analysis, where compression is positive (+) and tension is negative (-).
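The piecewise definition of the stress factor A in Equation [4], together with the tension-handling options of Li and Ouellet (2009) described above, translates directly into a short routine. This is a sketch of those rules as read from the text, with the capping of the tension formula to the range 0.1-1.0 added as an assumption.

```python
# Stress factor A after Potvin (1988), Equation [4], with the two
# tension-handling options of Li and Ouellet (2009) described above.

def stress_factor_a(sigma_c: float, sigma_1: float) -> float:
    """Factor A from the ratio sigma_c / sigma_1 (both in MPa, compression positive)."""
    ratio = sigma_c / sigma_1
    if ratio < 2.0:
        return 0.1
    if ratio > 10.0:
        return 1.0
    return 0.1125 * ratio - 0.125

def stress_factor_a_tension(sigma_t: float, sigma_3: float, approach: int = 1) -> float:
    """Factor A when the exposed wall is in tension (sigma_3 < 0).

    approach 1: neglect the tensile strength, A = 0.1.
    approach 2: A = 0.1125 * |sigma_t / sigma_3| - 0.125, here capped to [0.1, 1.0]
                (the capping is an assumption, not stated in the text).
    """
    if approach == 1:
        return 0.1
    a = 0.1125 * abs(sigma_t / sigma_3) - 0.125
    return min(max(a, 0.1), 1.0)

print(stress_factor_a(sigma_c=150.0, sigma_1=37.4))              # compressive wall stress
print(stress_factor_a_tension(sigma_t=10.0, sigma_3=-23.6, approach=2))
```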
The linearly elastic model of Plaxis 2D was first validated by comparing the simulated stresses against the analytical solutions for a circular opening (Kirsch, 1898; Hiramatsu, 1962; Hiramatsu and Oka, 1968; Li, 1997). Additional validations were made against the graphical and analytical solutions of Hoek and Brown (1980) in the cases of elliptical and square openings. More details are presented in Pagé (2018). Table I presents the program of numerical simulations. Forty-eight stope geometries were considered by combining the stope width (W), height (H), and wall inclination angle (b). Two regimes of natural in-situ stresses were considered: Case 1 with σv = 30 MPa and σh = 0; Case 2 with σv = 0 and σh = 30 MPa. It should be noted that the consideration of a zero horizontal in-situ stress σh in Case 1 and a zero vertical in-situ stress σv in Case 2 is necessary for applying the principle of superposition. This does not mean that the numerical models only correspond to zero vertical or horizontal in-situ stress, although the models remain valid for such extreme cases. The principal stresses tangential to the exposed faces at points U and V (on the wall surfaces) are calculated (see Figure 2). Figure 3 shows a numerical model constructed with Plaxis 2D. An enlarged view of the stope with refined meshes around the stope before excavation is presented. The natural in-situ stresses were first initiated over the entire model. The four outer boundaries were then fixed in all directions. Finally, the excavation of the stope was simulated. For each numerical model with a new stope geometry, domain and mesh sensitivity analyses were performed to ensure that the outer boundaries are far enough from the stope and the meshes around the stope are fine enough. A sufficiently large domain is necessary to avoid the boundary effect, while finer meshes around the stope are required to ensure stable numerical results (see more details presented in Pagé, 2018). Figure 4 presents the minor (σ3, Figure 4a) and major (σ1, Figure 4b) principal stress contours around a stope with H/W = 2 and b = 75°, obtained from numerical modelling using an in-situ stress state of σv = 30 MPa and σh = 0 MPa (with σv = -30 MPa and σh = 0 MPa as inputs to Plaxis 2D). Note that the major and minor in-plane principal stresses in Plaxis 2D are represented by σ1 and σ3, respectively, while the out-of-plane stress is the third principal stress. Figure 4a shows that the critical tangential stresses on the roof are under tension (positive in Plaxis 2D), while Figure 4b indicates that the critical tangential stresses on the walls undergo compression (negative in Plaxis 2D). The minor principal stress at the roof centre is -26.8 MPa (in tension), while the major principal stress at the mid-height of the hangingwall and footwall is 37.4 MPa (in compression). Figure 5 shows the major (σ1, Figure 5a) and minor (σ3, Figure 5b) principal stress contours around the stope with H/W = 2 and b = 75°, obtained by numerical modelling with a natural in-situ stress state of σv = 0 MPa and σh = 30 MPa (with σv = 0 MPa and σh = -30 MPa as inputs to Plaxis 2D). In this case, the critical tangential stress on the roof is under compression (negative in Plaxis 2D) based on the major principal stress (σ1, Figure 5a), while the critical tangential stresses on the walls are under tension (positive in Plaxis 2D) based on the minor principal stress (σ3, Figure 5b).
The major principal (compressive) stress at the roof centre is 61.1 MPa, while the minor principal (tensile) stress at the mid-height of the hangingwall and footwall is -23.6 MPa.
Formulation
To formulate a semi-empirical solution for evaluating the elastic stresses around mine stopes, one uses the principle of superposition, valid in elasticity theory for a homogenous, isotropic, and linearly elastic material. For a given stope geometry, the stresses around the opening are investigated by applying a horizontal natural in-situ stress. The induced stresses at the point of interest on the stope wall are then normalized by the applied horizontal natural in-situ stress. By changing the stope height to width ratio (H/W) and wall inclination angle (b), a relationship based on curve fitting can then be established between the studied stresses at the point of interest on the stope wall and the horizontal natural in-situ stress, stope height to width ratio, and stope wall inclination angle. The same process is repeated for the vertical natural in-situ stress with different stope height to width ratios and stope wall inclination angles. Applying the curve-fitting technique leads to another equation, which describes the induced stress around the stope opening as a function of the vertical natural in-situ stress, stope height to width ratio, and stope wall inclination angle. By adding the two equations, one obtains an equation that describes the studied stresses at the point of interest on the wall or roof as a function of the horizontal and vertical natural in-situ stresses, stope geometry, and wall inclination. The procedure can be summarized as follows:

σroof = f1(H/W, b) σv + f2(H/W, b) σh   [5]
σwall = g1(H/W, b) σv + g2(H/W, b) σh   [6]

where f1 and f2 are the geometric functions on the critical tangential stress at the roof centre, associated with the vertical and horizontal natural in-situ stresses, respectively; g1 and g2 are the geometric functions on the critical tangential stress at the mid-height of the hangingwall and footwall, associated with the vertical and horizontal natural in-situ stresses, respectively. To obtain the four geometric functions f1, f2, g1, and g2, a second-degree polynomial regression fit (for both f1 and f2), and a combination of a power regression fit and a second-degree polynomial regression fit (for g1 and g2, respectively), were applied to the numerical results of the critical induced stresses at the roof centre and at the mid-height of the wall as a function of the H/W ratio, separately for b = 90°, 75°, 60°, and 45°. A second calibration of these four geometric functions by considering the wall inclination angle leads to Equations [7] to [10]. Equations [5] to [10] constitute the proposed solution for estimating the elastic stresses at the roof centre and mid-height of the hangingwall and footwall around typical mine stopes. These equations are independent of the stope depth and rock mass strength.
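The regression step itself is easy to reproduce with standard tools. The sketch below fits a second-degree polynomial in H/W to normalized roof stresses, in the spirit of the fits used for f1 and f2; the data points are made-up placeholders, not the simulation results of Table I.

```python
import numpy as np

# Sketch of the regression step used to build the geometric functions:
# a second-degree polynomial in H/W fitted to normalized induced stresses.
# The (H/W, sigma/sigma_v) pairs below are placeholder values, NOT the
# numerical results reported in the paper.

hw = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
normalized_roof_stress = np.array([-0.70, -0.82, -0.90, -0.95, -0.97, -0.98, -0.99])

coeffs = np.polyfit(hw, normalized_roof_stress, deg=2)   # c2*x^2 + c1*x + c0
f1_fit = np.poly1d(coeffs)

print("fitted coefficients:", np.round(coeffs, 4))
print("f1(2.0) =", round(float(f1_fit(2.0)), 3))
```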
Figure 5-Isocontours of σ1 (a) and σ3 (b) principal stresses around the stope with H/W = 2 and β = 75°, calculated by applying a natural in-situ stress state of σv = 0 MPa and σh = 30 MPa in Plaxis 2D (with σv = 0 MPa and σh = -30 MPa as inputs to Plaxis 2D)
When the natural in-situ stress state is σv > 0 (in compression) and σh = 0, the solution predicts tension (σroof < 0) acting on the roof and compression (σwall > 0) at the mid-height of the walls. Conversely, when the natural in-situ stress state is σv = 0 and σh > 0 (in compression), the solution leads to compression (σroof > 0) on the roof and tension (σwall < 0) at the mid-height of the walls. Figure 6 shows the critical induced stresses at the roof centre and at the mid-height of the hangingwall and footwall, normalized by the applied horizontal (σh) and vertical (σv) in-situ stresses, as a function of the H/W ratio for stope inclination angles (β) of (a) 90°; (b) 75°; (c) 60°; and (d) 45°. The stresses calculated by the proposed solution (Equations [5] to [10]), represented by the full lines, correspond well to those obtained by the numerical modelling. This type of comparison between a proposed solution and numerical (or experimental) results, which is used in the calibration or curve fitting to obtain the required parameters, is usually considered as validation or prediction. This is, however, not true. The validity and predictability of the calibrated model (obtained by calibration or curve fitting) should be tested against additional and different numerical (or experimental) results.
Validation and predictability
To test the validity and predictability of the proposed solution, additional numerical simulations were performed by considering more stope geometries and virgin in-situ stress states. Figure 7 shows the variation of the induced tangential stresses, obtained by numerical modelling and predicted by the proposed semi-empirical solution, at the roof centre and at mid-height of the walls, for an isotropic natural in-situ stress state of 30 MPa (compression) and stopes having wall inclination angles b = 90°, 75°, 60°, and 45° and H/W ratios varying from 0.1 to 10. It is seen that the agreement between these two different approaches is excellent. Figure 8 presents another validation and test of predictability of the proposed semi-empirical solution using additional numerical simulations conducted with anisotropic in-situ stresses (Figure 8f). The proposed solution can thus be considered as validated. It can then be used to calculate the stresses and the stress factor A for the case of typical mine stopes with any inclination angle and height to width ratio under any in-situ stress state.
Sample application
In the following, a sample calculation is presented to further illustrate the application of the proposed solution (Equations [5] to [10]). It is planned to mine out a 6 m (W) wide ore vein inclined at 67° (b), with a 30 m (H) high stope located 500 m below the ground surface. The vertical in-situ stress can be estimated based on the overburden depth, while the horizontal in-situ stress is 1.67 times the vertical in-situ stress.
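To make the workflow of this sample application explicit, the sketch below assembles Equations [5] and [6] with placeholder geometric functions and an assumed overburden unit weight of 27 kN/m3 for estimating the vertical in-situ stress. The stub functions stand in for the fitted expressions of Equations [7] to [10], so the printed induced stresses are illustrative only.

```python
# Sketch of applying the superposition solution (Equations [5] and [6]).
# f1, f2, g1, g2 below are crude placeholder stubs: the real fitted
# expressions are Equations [7] to [10] of the paper. The unit weight of
# 27 kN/m^3 used for the vertical in-situ stress is also an assumption.

def f1(hw, beta_deg): return -0.9          # roof response to sigma_v (placeholder)
def f2(hw, beta_deg): return 2.0           # roof response to sigma_h (placeholder)
def g1(hw, beta_deg): return 1.2           # wall response to sigma_v (placeholder)
def g2(hw, beta_deg): return -0.8          # wall response to sigma_h (placeholder)

def induced_stresses(sigma_v, sigma_h, height, width, beta_deg):
    """Equations [5] and [6]: superpose the vertical and horizontal contributions."""
    hw = height / width
    sigma_roof = f1(hw, beta_deg) * sigma_v + f2(hw, beta_deg) * sigma_h
    sigma_wall = g1(hw, beta_deg) * sigma_v + g2(hw, beta_deg) * sigma_h
    return sigma_roof, sigma_wall

# Sample application inputs: W = 6 m, H = 30 m, beta = 67 deg, depth 500 m.
depth_m, unit_weight = 500.0, 0.027        # MN/m^3, assumed
sigma_v = unit_weight * depth_m            # ~13.5 MPa
sigma_h = 1.67 * sigma_v                   # ~22.5 MPa
roof, wall = induced_stresses(sigma_v, sigma_h, height=30.0, width=6.0, beta_deg=67.0)
print(f"sigma_v = {sigma_v:.1f} MPa, sigma_h = {sigma_h:.1f} MPa")
print(f"illustrative sigma_roof = {roof:.1f} MPa, sigma_wall = {wall:.1f} MPa")
```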
These parameters give the following in-situ stress state and stope geometry: Discussion Numerical modelling requires the availability of pertinent software and hardware and qualified modellers who know how to correctly conduct numerical modelling. Currently, the availability of computation resources in terms of hardware and software is no longer an issue, and numerical modelling has become a common practice for various research and design projects. However, knowing how to use a numerical code is often considered equivalent of knowing how to correctly perform numerical modelling. This can partly explain the crisis of confidence in numerical modelling and why many modellers do not believe in even their own numerical results. In fact, knowing how to use a numerical code is totally different from knowing how to conduct numerical modelling. The former needs only short training (a couple of hours) while the latter requires much more advanced training and rich experience in order to obtain stable and reliable numerical outcomes (Chapuis et al., 2001; Barbour and Krahn 2004;Cheng, Lansivaara, and Wei, 2007;Diederichs et al., 2007;Krahn 2007;Chapuis, 2012aChapuis, , 2012bDuncan 2013). This work is partly motivated by a perception that the Mathews-Potvin method was considered useless for stope analysis, and that the stability and the maximum dimensions of the stopes can be directly analysed using numerical models, instead of determining the stress factor A and applying the Mathews-Potvin method. It should be recalled that the empirical Mathews-Potvin method was based on many case study observations. The numerical models performed to determine the stress factor A are very simple, considering only an isolated opening around a homogenous, isotropic, and linearly elastic rock. The effectiveness of the method has been proven, especially when it is used as a starting point for the determination of stope dimensions in the preliminary stage of mining projects. When numerical modelling is conducted for stope stability analysis, the models are usually much more complex in terms of stope geometry, mining sequence, and material parameters. Calibrations using field data/observations can be necessary to find the required (but unknown) parameters. In the preliminary stage of a mining project, little field data and information are available to allow the construction and calibration of such sophisticated numerical models. All of these considerations indicate that the Mathews-Potvin method is very useful at the beginning of a mining project, where it can provide a quick and preliminary estimation of the dimensions of stopes. The necessity for more sophisticated numerical modelling at the advanced stage of a mining project does not invalidate the Mathews-Potvin method. Rather, the Mathews-Potvin method can be more appealing if theoretical or graphical solutions are available for estimating the induced stresses around inclined mine stopes. To this end, a semiempirical solution has been proposed in which curve-fitting techniques are applied against numerical modelling and the principle of superposition of linearly elasticity theory. The results show that the proposed semi-empirical solution can be used to evaluate the induced principal stresses at the roof centre and mid-height of the wall around typical mine stopes. However, one should keep in mind that the numerical models presented in this study contain several assumptions. First, a limitation of the numerical models is associated with the 2D plane strain conditions. 
The numerical results and the proposed semi-empirical solution are valid only when the stope is very long in one horizontal direction. In an actual mine, this is not always the case. Graphical solutions have been presented by Mawdesley, Treman, and Whiten (2001) and Vallejos, Delonca, and Perex (2017) for 3D vertical stopes. Further work is necessary to consider three-dimensional inclined stopes. The assumption of a linearly elastic rock mass can be held true at relatively shallow depths. At greater depths (deep mines), the behaviour of rocks and rock masses may change to a nonlinear and non-elastic behaviour. Consequently, the validity of the empirical relationships proposed here may be limited to a certain depth. Additional studies could be conducted to formulate similar empirical relationships in nonlinear rock masses. Another limitation of the proposed semi-empirical relationships is related to the stope geometry. The stopes considered here have a parallel hangingwall and footwall as well as parallel roof and floor. In practice, stopes with nonparallel walls are commonly encountered. More work is needed to propose solutions for estimating the stresses around stopes with nonparallel walls. In this study, the vertical and horizontal in-situ stresses were considered to be two principal stresses, implicitly assuming that the out-plane in-situ stress is another principal stress. In practice, the vertical in-situ stress and the two horizontal in-situ stresses could form three normal stresses. Further work is thus necessary to develop a more general solution. Finally, it is very important to point out that the stress factor A defined in the Mathews-Potvin method corresponds to the maximum induced principal stresses on the exposed faces. However, as shown in Figures 4 and 5, the maximum compressive stresses are close to the four corners rather than at the roof centre. We believe that an accurate estimation of the maximum principal compressive stress at stope corners is difficult and unnecessary due to stress concentration -therefore the critical locations in terms of compression should be at the centre, not the stope corners. For tension, as Figure 5b shows, the largest tensile stresses are located near (but somehow distant from) the stope corners, which correspond to the critical locations (rather than the roof or wall centre). In this study, the maximum tensile stress is not considered as its location varies when the stope geometry or natural in-situ stresses change. This renders the formulation very difficult. More work is needed on this aspect. Nonetheless, given the empirical nature of the Mathews-Potvin method and the still limited considerations of the tensile stresses in applying the method, the proposed solutions can provide useful estimation of stresses for application of the Mathews-Potvin method. Conclusions The well-known Mathews-Potvin method is an important design tool for mining engineers. However, the application of this method requires the determination of the induced stresses (and stress factor A) around inclined mine stopes using numerical modelling, as few graphical or theoretical solutions are available for such purposes. To overcome this limitation, a semi-empirical solution has been proposed to estimate the induced principal stresses at the roof centre and mid-height of the walls around mine stopes, by applying the superposition principle of linearly elasticity theory and curve-fitting techniques to numerical results. 
The validity and the predictive capability of the proposed solution have been verified by additional numerical simulations. The proposed semi-empirical solution can thus be used to evaluate the induced tangential stresses at the roof centre and mid-height of the walls around mine stopes with any inclination angles, height to width ratios, and in-situ stress states, as long as the values of H/W are in the range from 0.1 to 10 and b in the range from 45° to 90°. With these empirical expressions, the stress factor A, a key parameter used in the Mathews-Potvin method, can be determined without conducting numerical modelling.
6,199.2
2021-10-13T00:00:00.000
[ "Geology" ]
Health Index Analysis of Transmission System Components Optimizing maintenance strategies for power transmission components is crucial to prevent failure in the system while maintaining and enhancing the overall economic efficiency. Condition-based maintenance can fulfill this and in the present paper health index-based efficient condition assessment is applied as an essential input to this type of maintenance. A method for the calculation of the health index of primary transmission system components is presented and discussed. Condition-based assessment is utilized for developing a calculation method for evaluating the health index at different levels in the system. The health index of each component is calculated by using the Weighting and Scoring Method. Several factors affecting the condition of different equipment are considered in the algorithm including environmental effects, mechanical stress, and accessibility issues. The special focus is on assigning a numeric value to the health of individual sub-components of the equipment to plan the condition-based predictive maintenance of power system components. The data indicate that in contrast to considering the health index of the entire system, making decisions considering the individual component health indices can result in a more reliable operation. Introduction Reliable operation of transmission lines under optimized investment and operational costs requires a suitable maintenance strategy. Optimizing maintenance strategies for power components requires an effective evaluation of system conditions to generate an assessment-based datadriven solution. This could then be realized by calculating the overall condition of the system. One common method to determine the service condition of the power system infrastructure is to calculate the health index (HI) [1]. HI represents a practical numerical assessment to quantify and integrate the different types of condition information like results of visual inspections, operating condition observations, and data from on-site and laboratory testing. The results are converted into an objective and quantitative index, providing the overall health of the assets. Several studies have determined the HI, such as HI assessment of overhead line by image processing and HI assessment of underground cable by historical failure data [2]. While the existing methods of HI calculations have only be limited to studying one kind of system at a time, there is a need to extend such a study for getting an overall picture of complex systems like combined overhead line (OHL) and underground cable (UGC) configuration. In this work, an attempt is made to extend the HI evaluation to assess the health of integrated transmission lines (OHL and UGC) and improve the reliability of the overall analysis. To achieve this, the proposed method includes the calculation of the health index at subcomponent, component and system levels. Another aspect studied in this work is the inclusion of external factors that crucially impact maintenance strategies, such as environmental effects, mechanical stress, and accessibility issues to get a more practical HI value. Section 2 of this paper describes in detail the methodology applied for calculating the health index of the transmission system at different levels. The method includes the HI calculation of the sub-components and the components of the OHL & UGC. This also includes the scores, and weights of the different component conditions, and the corresponding HI values. 
In section 3 the calculation results for two case studies are shown. A brief discussion of the HI results is given in section 4, which highlights the applicability of the HI for transmission system maintenance. Mathematical equations for calculating HI The health index is calculated by condition-based assessment of the system, components, and sub-components of the transmission system. An example of a sub-component is a transformer bushing, where the transformer is the component. The overhead line or cable system with all its components forms the system level. The HI at each level is calculated based on the indicators from the subsequent level by using the weighting and scoring method (WSM) [3]. The results obtained from the designated condition indicators are converted into scores based on a standard evaluation datasheet [1,[3][4][5]. The scores range from 0 to 5 and the outcome is categorized into three groups: good (5), moderate (3) and poor (0). The scores S_i and assigned weights W_i of the i condition indicators are applied as shown in equation (1) to calculate the health index of the sub-components of the OHL and UGC. HI is the short representation of health index, and %HI is the HI expressed on a scale of 0-100:

%HI_sub = (Σ_i S_i · W_i) / (Σ_i S_max · W_i) × 100    (1)

where S_max is the maximum score of the individual condition indicator. In the case of a component with j sub-components, the values %HI_sub,j are used to calculate the initial component HI as their weighted average:

%HI_comp,initial = (Σ_j %HI_sub,j · W_j) / (Σ_j W_j)    (2)

Then additional component information is added to the calculation as a condition factor CF to obtain the actual component HI (%HI_comp):

%HI_comp = %HI_comp,initial × CF    (3)

CF represents a value that is calculated from the assessment of the component data and operational data. The component HIs are combined in the same weighted manner to give the line HIs and the system HI:

%HI_line = (Σ_k %HI_comp,k · W_k) / (Σ_k W_k)    (4)

%HI_system = (%HI_ohl · W_ohl + %HI_ugc · W_ugc) / (W_ohl + W_ugc)    (5)

In the following, the HI of the transmission system is called the system-HI or %HI_system. The HIs of an overhead line and an underground cable are %HI_ohl and %HI_ugc, respectively. Fig. 1 gives a schematic overview of the HI calculation process through the different levels. Various component information is added at different levels as indicated. It is worth mentioning that the different levels are color-coded; for example, the condition indicators (extreme left) are all indicated in green. The condition of these indicators is converted into scores and weights and the HI of the sub-component is calculated (yellow box). The HIs of the different sub-components are then used to calculate the HI at the component level (blue box). Different component information, such as operating temperature and environmental conditions, is added at this stage to get more practical HI values. The component HI is used to calculate the final system HI (pink box). Health Index Ranges The values of the system-HI proposed in this report are divided into four levels: healthy, minor defects, major defects and critical defects. The ranges of the HI and the corresponding conditions with explanations are given in Table 1, with the four levels illustrated by different colors. When the HI result is zero, it indicates that the transmission system is not repairable. Application of the WSM Method In this paper, the evaluation of the HI is done for a transmission system consisting of both OHL and UGC. 81-100% Healthy: The system is healthy and capable of performing the required functions without any delay within a specified period of time. Maintenance can be delayed. 60-80% Minor defect: The system is capable of performing the required functions within a specified period of time; however, partial degradation has occurred. An advance maintenance plan is needed. 
31-59% Major defect: The system is still capable of performing the required functions within a specified period of time, but some serious degradation of performance has occurred. Maintenance is required. 0-30% Critical defect: The system is unable to deliver the required performance. Maintenance is needed immediately. As mentioned in the previous section, the data of the condition indicators at each level are collected and converted into scores and weights. The scores and weights are assigned according to the corresponding standards [1,2,[4][5][6][7]. The sub-components and their condition indicators, with the scores and weights used to calculate the %HI_sub of the OHL and UGC, are shown in Table 2 and Table 3, respectively. Next, the CF value needs to be determined in order to calculate %HI_comp. To calculate CF, the component data and operational data, including environmental data, accessibility and mechanical stress data, are assessed carefully. These are then converted into scores and weights for both the OHL and UGC. The results can be seen in Tables 5 & 6. Results Based on the methodology presented in the previous section, the results for a case study on a transmission system consisting of both OHLs and UGCs are calculated as shown in Table 5. The HI values are calculated using (4) and (5) and the results are given in Fig. 2, showing both lines and the system to be in healthy condition. Fig. 2 - HI values of the transmission system and its components. To understand the applicability of the HI for maintenance, a second case study was carried out on the same system, but with an assumed bad condition of the insulators. A post insulator was exposed to pollution and therefore suffered flashover, as shown in Fig. 3. With this bad condition of a sub-component, the HI of the sub-components of the OHL, the HI of the OHL and the system-HI are recalculated. The results can be seen in the bar diagram in Fig. 4. Due to the damage, the HI of the insulator is 38.57%, which indicates an insulator with a major defect. On the other hand, the HI of the overhead line only indicates a minor defect and the HI of the system indicates a healthy system. The system HI in the normal condition is 85.3% (Fig. 2), indicating a healthy system, whereas with the bad condition of the insulator the system HI is 81%, which still indicates a healthy system. The difference between these two system HI results is only 4.3%. The HI of the overhead line in the normal condition is 85.29%, indicating a healthy overhead line. With the bad condition of the insulator, the HI of the overhead line is 76.09%, which indicates minor defects of the overhead line. The effect of the bad condition of the insulator is thus visible in the overhead line HI, which drops by 9.2% and moves the overhead line from healthy to defective. This is a serious concern because, in practice, maintenance decisions are mainly made by checking the HI at the system level. From the current case study, it is evident that the HI of the sub-components is a considerably more reliable parameter for fault identification than the HI of the component or the system. Discussion The present study shows an effective way of conducting health index analysis of transmission systems containing both overhead lines and underground cables. The proposed methodology takes into account HI calculations based on the condition indicators at the individual sub-component level, followed by the analysis of the information at the component level. 
It also suggests a method to separately add OHL and UGC component information to the calculation as condition factors in order to determine the overall condition of the system. Such a study can be extended to include more transmission line components, such as transmission towers, reactors and transformers, as a single system in the HI assessment. The second case study of the overhead line HI and system HI with a bad condition of an insulator indicates that changes in the insulator condition are more visible in the overhead line HI than in the system HI. If the maintenance decisions are taken based on the system HI, the bad condition of the insulator could be overlooked and might cause serious interruptions of the power supply. Therefore, it is suggested to make the maintenance decision based on the component HI instead of the system HI. When the maintenance decision is made based on the component HI, it is also important to identify the fault location within the component. In this case, the HI of the sub-components can be used as an indicator of the fault location. The case study also shows that HI calculation at the sub-component level is a more reliable factor for identification of faulty sections in the system. When the insulator is in a bad condition, its HI is significantly reduced, from 88.57% to 38.57%. This 50% decrease of the HI is an indicator of a serious fault in the insulator. Conclusion In this paper, a method for assessing the health index (HI) of a transmission system including both OHL and UGC by the weighting and scoring method (WSM) is presented. The HI is calculated based on the condition indicators of individual sub-components, supplemented with various component information to get a more suitable HI value. Analysis of the sub-component HI, as compared to higher-level HIs, shows its significance in identifying critical conditions in a system. In general, the component-level HI approach appears to be suitable for conclusive condition assessment as a basis for optimized condition-based maintenance.
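To make the weighting and scoring roll-up concrete, the short Python sketch below walks the calculation through the sub-component, component and system levels described above. It is only an illustration: all scores, weights, the condition factor and the assumed cable HI are invented example values rather than the paper's datasheet entries, and the weighted-average form of the component and system combinations follows the reading of equations (1)-(5) given earlier.

# Illustrative sketch of the weighting-and-scoring health index roll-up.
# Scores, weights and the condition factor are made-up example values.

def weighted_hi(scores, weights, max_score=5.0):
    # Percent HI: weighted score divided by the weighted maximum score.
    num = sum(s * w for s, w in zip(scores, weights))
    den = sum(max_score * w for w in weights)
    return 100.0 * num / den

# Sub-component level: condition-indicator scores (0, 3 or 5) and weights.
insulator_hi = weighted_hi(scores=[5, 3, 5], weights=[3, 2, 1])
conductor_hi = weighted_hi(scores=[5, 5, 3], weights=[2, 2, 2])

# Component level: weighted average of sub-component HIs, scaled by a
# condition factor covering environment, mechanical stress and accessibility.
def component_hi(sub_his, sub_weights, condition_factor):
    base = sum(h * w for h, w in zip(sub_his, sub_weights)) / sum(sub_weights)
    return base * condition_factor

ohl_hi = component_hi([insulator_hi, conductor_hi], [1, 1], condition_factor=0.95)

# System level: weighted combination of the line HIs (OHL and UGC here).
def system_hi(component_his, weights):
    return sum(h * w for h, w in zip(component_his, weights)) / sum(weights)

ugc_hi = 88.0  # assumed underground-cable HI, for illustration only
print(f"OHL HI = {ohl_hi:.1f}%, system HI = {system_hi([ohl_hi, ugc_hi], [1, 1]):.1f}%")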
2,842
2022-07-05T00:00:00.000
[ "Engineering" ]
Genome-Wide Association Study and Fine Mapping Reveals Candidate Genes for Birth Weight of Yorkshire and Landrace Pigs Birth weight of pigs is an important economic factor in the livestock industry. The identification of the genes and variants that underlie birth weight is of great importance. In this study, we integrated two genotyping methods, single nucleotide polymorphism (SNP) chip analysis and restriction site associated DNA sequencing (RAD-seq), to genotype genome-wide SNPs. In total, 45,175 and 139,634 SNPs were detected with the SNP chip and RAD-seq, respectively. The genome-wide association study (GWAS) of the combined SNP panels identified two significant loci, located at chr1: 97,745,041 and chr4: 112,031,589, that explained 6.36% and 4.25% of the phenotypic variance, respectively. To narrow the intervals containing causal variants, we imputed sequence-level SNPs in the GWAS-identified regions and fine-mapped the causative variants into two narrower genomic intervals: a ∼100 kb interval containing 71 SNPs and a broader ∼870 kb interval with 432 SNPs. This fine-mapping highlighted four promising candidate genes, SKOR2, SMAD2, VAV3, and NTNG1. Additionally, the functional genes SLC25A24, PRMT6 and STXBP3 are also located near the fine-mapping region. These results suggest that these candidate genes may contribute substantially to the birth weight of pigs. INTRODUCTION The birth weight of pigs is an important economic trait in the livestock industry. It is closely associated with early survival, weaning weight, and growth rate after weaning (Quiniou et al., 2002; Smith et al., 2007). Pigs have been selectively bred to produce larger litters; however, with this increase in litter size, the average birth weight has decreased (Bergstrom et al., 2009; De Almeida et al., 2014). Birth weight reflects the intrauterine growth of piglets, which is affected by both the maternal supply of nutrition and genetic factors (Roehe, 1999; Zohdi et al., 2012). Estimates of birth weight heritability have ranged from 0.08 to 0.36 (Roehe, 1999; Roehe et al., 2010; Dufrasne et al., 2013), suggesting that it is substantially affected by the piglet's own (fetal) genetic factors as well as by maternal genetic effects. Therefore, it is a worthwhile endeavor to determine which genes or variants underlie this variation in birth weight. A few birth weight-related markers have been identified by the study of candidate genes such as MYOG, MSTN and DBH (Te Pas et al., 1999; Jiang et al., 2002; Tomás et al., 2006). With the widespread use of customized single nucleotide polymorphism (SNP) arrays, an increasing number of potential markers have been identified by genome-wide association study (GWAS). Wang X. et al. (2016) found over two hundred SNPs associated with birth weight by using first-parity sows whose offspring had extreme birth weights; Zhang et al. (2018) identified 17 genomic regions associated with birth weight; Wang et al. (2017) found 12 SNPs that were significantly associated with piglet uniformity; and 27 differentially selected regions associated with the birth weight of piglets were detected by Zhang et al. (2014). However, a birth weight GWAS of Large White pigs by Wang et al. (2018) was unable to identify any significant loci. The identification of birth weight-associated markers has thus proven difficult to reproduce. 
With the rapid development of next-generation sequencing technology, a number of techniques have been widely adopted for genotyping, including whole genome resequencing and reduced-representation sequencing (RRS) techniques such as genotyping-by-sequencing and restriction site-associated DNA sequencing (RAD-seq) (Baird et al., 2008; Huang et al., 2009; Elshire et al., 2011). Compared to SNP chip analysis, RRS approaches are based on restriction site-associated fragments and have great advantages in both the number of SNPs acquired and the ability to identify novel SNPs. Currently, RRS approaches are widely employed in combination with GWAS (Bhatia et al., 2013). As SNP chip analyses only share a small subset of SNPs with RRS (Brouard et al., 2017), the combination of the two methods in one population may improve the repeatability of GWAS findings. Trait-related loci can be identified with GWAS; however, the elucidation of the causative variant rather than the locus is the ultimate goal. The determination of the causative variant requires a high density of SNPs in the particular GWAS region. If the region is not genotyped at the sequence level, imputation can be used to fill in missing SNPs from available reference panels. Due to linkage disequilibrium between SNPs, the GWAS signal extends across a large region. Although it is not always possible to directly identify the causative variant, the region containing the causative variant can be narrowed down by sophisticated methods (Fang and Georges, 2016; Huang et al., 2017). The key feature of these methods is determining a set of SNPs that has a 95% probability of containing the causative variants, as calculated from posterior probabilities. In this study, we used the DNA variants from two different genotyping approaches, SNP chip and RAD-seq, to perform GWAS for birth weight. To finely map causative genes, we built a reference panel for the region of interest by deep resequencing of 28 boars, with which the merged SNPs of RAD-seq and SNP chip were imputed at the sequence level. Finally, we detected the potential causative genes within or close to the finely mapped region. Animals and Phenotypes Pedigree and phenotype records used for this study were provided by our lab. The pedigree contains 26,539 animals from 7 generations, including 14,226 Yorkshire and 12,313 Landrace animals. There were 12,661 and 10,635 records of birth weight for Yorkshire and Landrace piglets, respectively. After excluding disqualified records (missing birth date or abnormal records), 10,267 and 8,919 records for Yorkshire and Landrace piglets were included, respectively. A total of 674 purebred sows (453 Yorkshire, 221 Landrace) born between 2014 and 2016 were selected for RAD-seq. After eliminating abnormal values (values deviating from the third quartile), 668 high-quality records were analyzed. RAD-seq With BGI-seq500 Genomic DNA was isolated from the ear tissue of pigs; the double-digest restriction enzyme-associated DNA sequencing method (RAD-seq) was performed using the methods of Andolfatto et al. (2011) with appropriate modifications. Briefly, the DNA concentration of all samples was normalized to 50 ng/µL in 96-well plates, and the DNA was digested with FastDigest TaqI-MspI (Thermo Fisher Scientific) in a 30 µL volume containing 20 µL DNA (1 µg). An annealed adapter (10 µM) was ligated to the digestion products by T4 DNA ligase. Then, 24 ligation products were pooled together to form one library, with 15 µL per sample. 
Agencourt AMPure XP Reagent was used for library size selection. The PCR reaction contained 50 ng of the size-selected products, 25 µL KAPA HiFi HotStart ReadyMix (KAPA Biosystems), and 10 pmol of primers. PCR products were purified with Agencourt AMPure XP Reagent. The final library quality (concentration and fragment size distribution) was determined with a Qubit 2.0 Fluorometer (Thermo Fisher Scientific) and a Bioptic Qsep100 DNA Fragment Analyzer (Bioptic), respectively. Every four library products (96 different barcodes) were mixed together in equal parts to a total of 170 ng. The cyclization reaction contained 48 µL library mix, 1× T4 DNA ligase buffer, 0.5 µL T4 DNA ligase (600 U/µL), and 100 pmol Splint Oligo, and was incubated at 37 °C; the concentration and fragment size distribution were again determined with the Qubit 2.0 Fluorometer and the Bioptic Qsep100 DNA Fragment Analyzer, and the products were purified with Agencourt AMPure XP Reagent. Finally, the purified cyclized libraries were sequenced on a BGI-seq500 platform (PE100). Sequenced paired-end reads for each sow were identified by barcode and aligned against the Sscrofa reference genome (version Sscrofa 11.1) using the Burrows-Wheeler Aligner (version 0.7.12) software (Li and Durbin, 2009). SAMtools (version 0.1.19) was used to generate the consensus sequence for each sow and prepare input data for SNP calling with the Genome Analysis ToolKit (version 3.4) (McKenna et al., 2010). Raw SNPs with a sequencing depth greater than 2,500 or less than 50 were removed, as SNPs with extreme sequencing depth are most likely caused by repeat DNA sequences or alignment errors. The SNPs underwent quality control (QC) in which those with a call rate > 0.5, minor allele frequency (MAF) > 0.05, and p-value > 10^-6 for the Hardy-Weinberg equilibrium test were kept, resulting in 140,948 SNPs. The missing genotypes were imputed with Beagle software (Browning and Browning, 2007), and the SNPs were filtered again with the above QC criteria. Finally, 139,634 high-quality SNPs were retained for subsequent analysis. SNP Chip Genotyping These individuals were also genotyped with a GeneSeek Porcine 50K SNP Chip (Neogen, Lincoln, NE, United States), which contains 50,697 SNPs across the autosomes and sex chromosomes. QC of the SNPs was conducted using PLINK (version 1.07) (Purcell et al., 2007). SNPs with MAF > 0.05, call rate > 0.97, and individual call rate > 0.95 were retained. Furthermore, we removed SNPs that were not mapped to the Sscrofa 11.1 genome, leaving 45,180 SNPs. The missing genotypes were imputed with Beagle software and underwent QC with the above criteria. Finally, 45,175 high-quality SNPs were included. Whole Genome Sequencing We sequenced the whole genomes of 28 boars, the ancestors of the 453 Yorkshire sows (unpublished), to an average sequence depth of ∼19× (ranging from 17.06× to 22.24×). After genome alignment with the Burrows-Wheeler Aligner and SNP calling with the Genome Analysis Toolkit, 17,017,067 raw SNPs were detected. These SNPs were filtered using the Genome Analysis Toolkit with the parameters "QUAL < 30 || QD < 2.0 || FS > 60.0 || MQ < 40.0 || MQRankSum < -12.5 || ReadPosRankSum < -8.0", and using PLINK with MAF < 0.05 and p-value < 10^-6 for the Hardy-Weinberg equilibrium test. We removed 761,590 additional SNPs with missing genotypes across the 28 boars, leaving 11,668,346 high-quality SNPs, which were taken as the reference panel for imputation. 
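The depth, call-rate, MAF and Hardy-Weinberg filters described above can be expressed compactly once per-SNP summary statistics are available. The following pandas sketch only illustrates those thresholds with a handful of hypothetical rows; the column names and values are assumptions, not the study's actual pipeline, which used GATK, PLINK and Beagle.

import pandas as pd

# Hypothetical per-SNP summary table; columns and values are invented for illustration.
snps = pd.DataFrame({
    "depth":     [1200,   30, 3100,  800],
    "call_rate": [0.92, 0.60, 0.95, 0.40],
    "maf":       [0.21, 0.10, 0.33, 0.02],
    "hwe_p":     [0.40, 1e-8, 0.70, 0.20],
})

kept = snps[
    (snps["depth"] >= 50) & (snps["depth"] <= 2500)  # drop extreme sequencing depth
    & (snps["call_rate"] > 0.5)                      # call-rate filter
    & (snps["maf"] > 0.05)                           # minor allele frequency filter
    & (snps["hwe_p"] > 1e-6)                         # Hardy-Weinberg equilibrium test
]
print(f"{len(kept)} of {len(snps)} SNPs pass QC")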
Sequence Level Imputation The SNPs determined by RAD-seq (140,948 SNPs) and SNP chip (45,180 SNPs) were merged to produce a high-density SNP set for sequence-level imputation. After removing 427 duplicate SNPs shared by both SNP sets, 185,701 SNPs remained. We performed sequence-level imputation with Beagle by taking the whole genome sequencing data of 28 Yorkshire boars (described above) and 20 Landrace pigs (downloaded from https://figshare.com/articles/data2019_tar_gz/9505259). After QC (MAF < 0.05 and p-value < 10^-6), we obtained 9,012,073 overlapping SNP markers for the two breeds and imputed the RAD_chip SNPs of the Yorkshire and Landrace pigs to a genome-wide level. Variance Component Estimation and Heritability Both pedigree and RAD_SNP information were used to build a kinship matrix among individuals to estimate the variance components of birth weight. The mixed linear model for this estimation was:

Y = Xb + Z_1 u + Z_2 p + e

where Y is the phenotype vector; b is a fixed effects vector, i.e., herd-year-season, sex (only in pedigree-based estimation), breed (2 breeds in SNP-based and 6 strains in pedigree-based estimation) and birth parity; u is a vector of additive genetic effects following the multivariate normal distribution u ∼ N(0, Aσ²_a) or u ∼ N(0, Gσ²_a), respectively, in the pedigree-based and RAD_SNP-based estimations, where A is the pedigree relationship matrix and G is the genomic relationship matrix constructed from the SNPs as described in VanRaden (2008); p is a maternal effects vector, p ∼ N(0, Iσ²_p); and e is a residual vector, e ∼ N(0, Iσ²_e), where I is an identity matrix. σ²_a, σ²_p, and σ²_e are the additive genetic, maternal genetic, and residual variances, respectively. X, Z_1, and Z_2 are the incidence matrices for b, u, and p, respectively. The variance components were estimated using the average information restricted maximum likelihood procedure in DMU software (version 6, release 5.2). Heritability of birth weight was estimated as:

h² = σ²_a / (σ²_a + σ²_p + σ²_e)

The standard error of the heritability was obtained as described by Klei and Tsuruta (2008). Genome-Wide Association Study The mixed model including a random polygenic effect can be expressed as:

Y = Xb + Mg + Za + e

where Y is the phenotype vector, which is corrected with estimated breeding values and fixed effects (only residuals left), and the estimated breeding values are evaluated with the average information restricted maximum likelihood procedure in DMU; b is the estimator of fixed effects including breed; g is the SNP substitution effect; and a is the vector of random additive genetic effects following the multivariate normal distribution a ∼ N(0, Gσ²_a), in which G is the genomic relationship matrix constructed from the SNPs as described in VanRaden (2008), and σ²_a is the polygenic additive variance. X, Z, and M are the incidence matrices for b, a, and g, respectively. e is a vector of residual errors with a distribution of N(0, Iσ²_e). All single-marker GWAS analyses were conducted using the EMMAX software (Kang et al., 2010). Based on the Bonferroni correction, the genome-wide significance threshold was P < 1/N, where N is the number of informative SNPs. Fine-Mapping The BayesFM-MCMC package (Fang and Georges, 2016) was used to finely map causative variants, in which the threshold for SNP clustering was set as r² = 0.5; the length of the Markov chain was 510,000 iterations with the first 10,000 discarded (burn-in period). The threshold to declare significance was set at 1.1 × 10^-5, which was determined from 0.05 divided by the number of SNPs in the GWAS region. 
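As a worked illustration of two quantities used in these models, the sketch below builds a VanRaden (2008)-style genomic relationship matrix from a 0/1/2-coded genotype matrix and evaluates the heritability ratio h² = σ²_a / (σ²_a + σ²_p + σ²_e). The genotype matrix is randomly simulated and the variance components are placeholder numbers, not the estimates reported in this study.

import numpy as np

# Minimal sketch: VanRaden genomic relationship matrix and heritability ratio.
rng = np.random.default_rng(0)
n_animals, n_snps = 100, 5000
geno = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)  # 0/1/2 allele counts

p = geno.mean(axis=0) / 2.0                   # allele frequencies per SNP
Z = geno - 2.0 * p                            # centre genotypes by twice the frequency
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))   # genomic relationship matrix

# Heritability from (assumed) additive, maternal and residual variance components.
var_a, var_p, var_e = 0.04, 0.05, 0.33
h2 = var_a / (var_a + var_p + var_e)
print(f"mean diagonal of G = {np.mean(np.diag(G)):.2f}, h2 = {h2:.3f}")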
We corrected the phenotypes by subtracting the corresponding breeding values and fixed effects, where the breeding values were estimated via the DMU package. RAD-seq and SNP Chip Genotyping We obtained 139,634 SNPs from RAD-seq and only 45,175 SNPs from SNP chip analysis. First, we compared the allele frequencies (AF) of the SNPs obtained from both genotyping platforms (Figure 1A). Compared with SNP chip analysis, RAD-seq more frequently found SNPs with lower AF. Specifically, the likelihood of RAD-seq finding SNPs with AF < 0.1 was nearly 0.3, almost two times higher than that of SNP chip analysis (∼0.1). We also compared the distance between adjacent SNPs determined by the two genotyping methods (Figure 1B). The adjacent SNPs found by RAD-seq were much closer together than those found with SNP chip analysis, suggesting that RAD-seq is more informative and may be more helpful for detecting causative genes. Finally, we determined the overlapping SNPs between the two SNP sets, and surprisingly found only 427 overlapping SNPs. Genome-Wide Association Study We estimated heritability prior to the association study to understand how much of the variation in birth weight is inherited. We used pedigree information and genome-wide SNPs to estimate heritability. There were 14,226 and 12,313 individuals in the pedigree, and 10,267 and 8,919 records of birth weight for Yorkshire and Landrace, respectively. Genome-wide SNP information was used to build kinship among individuals, and heritability was estimated as 0.094 ± 0.065. Then, using the pedigree, we estimated heritability in Yorkshire and Landrace pigs at 0.162 ± 0.026 and 0.131 ± 0.025, respectively (see Table 1), which are closer to previous reports than the heritability found when genome-wide SNP information was used. Next, we performed an association study for genome-wide SNPs based on a mixed model that accounted for population kinship (see section "Materials and Methods"). The SNP sets from RAD-seq and SNP chip analysis were merged together, and two signals on chromosomes 1 and 4 exceeded the threshold (Figure 2A). The positions of the lead SNPs for the two regions were chr1: 97,745,041 and chr4: 112,031,589, respectively; the MAFs of the lead SNPs were 0.24 and 0.34, and they explained 6.36% and 4.25% of the phenotypic variance, respectively. We then focused on the two GWAS regions surrounding the lead SNPs, defined as the ∼1-2 Mb region around each lead SNP. To confirm the two GWAS signals, we performed separate GWAS for the RAD-seq and SNP chip datasets. The region on chromosome 4 was significant for the RAD-seq dataset but not for the SNP chip dataset, whereas the reverse was true for the region on chromosome 1 (Figures 2C,E). Despite only reaching significance in one dataset, the -logP values of both regions peak in both datasets, confirming the reliability of the GWAS signals. To check for false positives caused by population stratification, we closely examined the theoretical and observed p-values with a Q-Q plot. The observed -logP values are well fit by a linear regression against the theoretical -logP values (Figures 2B,D,F), suggesting that population stratification has been well corrected for, although it is important to note that two breed populations were simultaneously investigated. Fine Mapping To further refine the regions containing causative genes and variants, we performed fine mapping of the GWAS regions 1∼2 Mb around the lead SNPs. 
To increase fine mapping accuracy, we utilized as many SNPs as possible by merging the SNPs from both RAD-seq and SNP chip analysis and removing duplicate SNPs. After applying a stringent filter, we obtained 5,226 and 7,184 SNPs in the fine mapping regions of chromosomes 1 and 4, respectively. With this high density of SNPs, we were able to impute SNPs at the sequence level. Sequence-level imputation requires a sequence-level reference SNP set. We therefore re-sequenced 28 Yorkshire boars with an average coverage of ∼19× and downloaded the whole genome sequencing data of 20 Landrace pigs. This resulted in 11,668,346 and 18,954,748 sequence-level SNPs for Yorkshire and Landrace pigs, respectively. With these SNPs as a reference panel, we imputed the merged RAD-seq and SNP chip SNPs at the sequence level using Beagle software, separately for each breed. Then, we employed BayesFM-MCMC software to narrow down the clusters containing causative variants. BayesFM-MCMC first clusters the SNPs within a GWAS region using a hierarchical clustering algorithm according to the r² among SNPs; it then models multiple causal variants by carrying out a Bayesian model selection across the clusters and generates a posterior probability for each SNP within a cluster, from which a credible set of SNPs containing >95% of the posterior probability is constructed. The advantages of BayesFM-MCMC are that (1) it narrows down potential causative variants by indicating likely causal variants within the SNP set; and (2) it efficiently identifies more than one variant if multiple variants control the investigated trait. However, because BayesFM-MCMC does not solve a mixed model with polygenic effects, we corrected the phenotype values by using the residuals (see section "Materials and Methods"). First, we conducted a single-variant association scan for the GWAS region on chromosome 1, 96,745,041-98,745,041, which produced a sharp peak in this region (Figure 3A). We then employed BayesFM-MCMC to further refine the region, and one cluster signal with a posterior probability equal to 1 (greater than the threshold of 0.5) was identified. To examine which SNPs predominantly explained the posterior probability in this cluster, we plotted the posterior probabilities for each SNP (output from BayesFM-MCMC). Most SNPs have minuscule posterior probabilities, and no single SNP gives a substantial posterior probability (for instance, greater than 0.5 or 0.2) in the identified cluster (Figures 3B,C). We then employed the 95% credible set defined by BayesFM-MCMC to further refine the causal variants, which contained 71 SNPs across a ∼100 kb region from 96,895,307 to 97,098,059 (see Supplementary Table S1 for details). This region contained the peak identified with the single-variant scan (Figure 3A), confirming that the refined region was reliable. Fine mapping of the region on chromosome 4, 111,031,589-113,031,589 (Figure 4B), identified one cluster signal with a posterior probability equal to 1. As before, we plotted the posterior probabilities for each SNP, but most SNPs once again had minuscule posterior probabilities (less than or around 0.05) (Figure 4C). The 95% credible set of causal variants on chromosome 4 contained 432 SNPs across a ∼870 kb region from 111,700,218 to 112,569,735 (see Supplementary Table S2 for details). The peak found in the single-SNP association profile (Figure 4A) is covered by this ∼870 kb region, once again confirming the reliability of BayesFM-MCMC for this purpose. 
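The 95% credible set reported above can be thought of as the smallest group of SNPs whose posterior probabilities, taken in decreasing order, sum to at least 0.95. The short sketch below illustrates that construction with invented posterior values; it is not the BayesFM-MCMC package's own code.

# Toy construction of a 95% credible set from per-SNP posterior probabilities.
posteriors = {
    "snp_a": 0.42, "snp_b": 0.25, "snp_c": 0.15,
    "snp_d": 0.08, "snp_e": 0.06, "snp_f": 0.04,
}

def credible_set(post, coverage=0.95):
    ordered = sorted(post.items(), key=lambda kv: kv[1], reverse=True)
    chosen, total = [], 0.0
    for snp, prob in ordered:
        chosen.append(snp)
        total += prob
        if total >= coverage:
            break
    return chosen

print(credible_set(posteriors))  # smallest set covering at least 95% of the posterior mass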
The correlation (r²) among the SNPs confirmed that they were highly linked, which explains why the individual posterior probabilities of these SNPs are very small. Candidate Genes The 71 SNPs of interest on chromosome 1 are located in the intergenic region, which lies about 53 kb upstream of SKOR2 and over 317 kb downstream of SMAD2 (Table 2, see Supplementary Table S1 for details). We hypothesize that these variants are likely to have regulatory effects on the two nearby genes. The 432 highly linked SNPs on chromosome 4 are located within four genes, LOC106510205 (covered by 28 SNPs), LOC106510207 (covered by 26 SNPs), VAV3 (covered by 160 SNPs), and NTNG1 (covered by 218 SNPs, see Supplementary Table S2). Among these SNPs, one is located in the amino acid-coding sequence, seven are located in the 3′ untranslated region, and 414 are located in introns (see Supplementary Table S2 for details). The coding variant is a synonymous variant (c.1136 T > A), located in the VAV3 gene. The remaining variants are in non-coding sites distributed across all four genes, suggesting the causal variant may have a regulatory effect. We searched for functional genes near the tightly linked region, and thereby included SLC25A24, PRMT6, and STXBP3 as candidate genes (Table 2). DISCUSSION We employed two genotyping methods, RAD-seq and a customized SNP chip assay, to obtain genome-wide distributed SNPs. The number of SNPs identified by RAD-seq was three times greater than that identified by the customized SNP chip; among these, only 427 SNPs overlapped, consistent with previous reports (Brouard et al., 2017). Furthermore, we found that RAD-seq was able to genotype more low-frequency SNPs than the SNP chip assay. Rare and low-frequency variants have been found to partially explain phenotypic variation in some human diseases and agricultural traits (Quintana-Murci, 2016; Zhang et al., 2017). By using genome-wide association combined with post-GWAS fine mapping, we refined one causative variant to a ∼100 kb region containing 71 SNPs. This region is located in the intergenic region between SKOR2 and SMAD2. Intergenic sequences are generally considered to be junk sequences. However, in recent years, studies have increasingly shown that intergenic sequences contain long-distance regulatory elements and may also generate a large amount of non-coding RNA through transcription, thereby regulating the expression of surrounding genes (Chen and Tian, 2016). SKOR2 is homologous to the Ski/Sno family of transcriptional co-repressors, which has been shown to negatively regulate transforming growth factor β (TGFβ) signaling pathways by binding to Smads (Arndt et al., 2005). SKOR2-null mice are smaller than their siblings (Wang W. et al., 2011). A SKOR2 polymorphism has also been reported to be associated with more rapid weight gain in African American males (Tu et al., 2015). SMAD2 is activated by TGFβ and regulates multiple cellular processes, such as cell proliferation, apoptosis, and differentiation. TGFβ pathways are known to play critical roles in bone development (Li et al., 2005). SMAD2 plays an essential role in regulating chondrocyte proliferation and differentiation in the growth plate (Wang W. et al., 2016). Additionally, SMAD2 was identified as the causative gene for body size in dogs, and was associated with the total number of piglets born in Yorkshire pigs as well as with high fecundity in dairy goats (Rimbault et al., 2013; Lai et al., 2016; Wang et al., 2018). 
Our results suggest that causative variants in this intergenic region may contribute to birth weight phenotypes by interfering with the regulatory function of the nearby distal regulatory elements and causing differential expression of the two surrounding genes. We have refined the causative variant on chromosome 4 to a ∼870 kb region, which resides in a large linkage disequilibrium block containing four genes, LOC106510205, LOC106510207, VAV3, and NTNG1. NTNG1 plays an important role in cell signaling during nervous system development (Nakashiba et al., 2000) and is associated with calf birth weight in Holstein cattle (Cole et al., 2014). LOC106510205 and LOC106510207 are predicted to be long non-coding RNAs (lncRNAs) and have not been functionally characterized to date. lncRNA transcription is known to play an important role in both cis- and trans-regulation of nearby gene expression (Sun and Kraus, 2015). VAV3 is located in the center of the fine mapping region and is near the two lncRNAs. VAV3 is a member of the VAV gene family, which activates actin cytoskeletal rearrangement pathways and transcriptional alterations (Zeng et al., 2000). VAV3 is versatile and also regulates osteoclast function, bone mass, and the homeostasis of the cardiovascular and renal systems (Faccio et al., 2005; Sauzeau et al., 2006). Previous knock-out results have shown that Vav3-deficient mice were protected from bone loss induced by systemic bone resorption stimuli such as parathyroid hormone or RANKL (Faccio et al., 2005). Furthermore, VAV3 is associated with hypothyroidism in humans, feed conversion ratio in a male Duroc pig population, high body weight and growth rate in Boer goats, as well as sperm concentration in Holstein-Friesian bulls (Hering et al., 2014; Kwak et al., 2014; Wang et al., 2015; Onzima et al., 2018). Several genes near the ∼870 kb tightly linked region were found to be related to growth and development or have been identified in other studies (Table 2). For example, SLC25A24 encodes a carrier protein that mediates electroneutral exchange of Mg-ATP or Mg-ADP against phosphate ions; it is responsible for low fat mass in humans and mice (Urano et al., 2015) and is also related to bovine embryonic mortality (Killeen et al., 2016). Mutations in SLC25A24 have been found to be associated with Fontaine progeroid syndrome in humans (Rodríguez-García et al., 2018). Furthermore, STXBP3 (also known as Munc18c), involved in insulin-regulated GLUT4 trafficking, has been found to be positively associated with body weight in Large White and Tongcheng pigs. Finally, PRMT6 is reported to be associated with bull sperm concentration (Hering et al., 2014), and the expression of PRMT6 in skeletal muscle has been found to be regulated by a strong cis-expression quantitative trait locus (personal communication). Taken together, the region spanning VAV3 and NTNG1 is a very important genetic factor underlying the birth weight of pigs. Most of the finely mapped SNPs obtained herein were located in intergenic regions or within introns. Therefore, we propose that these variants may have a regulatory effect on the expression of nearby genes, such as SKOR2, SMAD2, VAV3, and NTNG1, and may thereby regulate body development. This research did not confirm such regulatory mechanisms but has highlighted them for further investigation. 
CONCLUSION We used the DNA markers from two different genotyping approaches to perform GWAS, and identified significant loci on chromosomes 1 and 4, which explained 6.36% and 4.25% of the phenotypic variance, respectively. To increase the accuracy of fine mapping, we imputed the merged RAD-seq and SNP chip SNPs at the sequence level using the SNPs of high-coverage resequenced pigs as a reference panel. Then, we employed BayesFM-MCMC software to narrow down the genomic regions of the clusters that contained causative variants. One cluster was located in an intergenic region, and the other in a gene coding region. Finally, we identified four promising candidate genes, SKOR2, SMAD2, VAV3, and NTNG1, that have been associated with growth-related traits in other species including cattle, humans, and dogs. Most SNPs in the fine mapping regions were located in intergenic regions or introns, and as such we propose that these variants may have a regulatory effect on the expression of nearby genes, which deserves further investigation. The birth weight of pigs is an important economic factor in the livestock industry; identification of a causal variant would be beneficial to the molecular breeding of pigs. DATA AVAILABILITY STATEMENT The datasets generated for this study can be found at https://figshare.com/articles/GWAS_datasets/9917462. ETHICS STATEMENT This study was carried out in accordance with the guidelines of the Science Ethics Committee of the Huazhong Agricultural University (HZAU). All animal experiments were approved by the Institutional Review Board on Bioethics and Biosafety of Beijing Genomics Institute (BGI-IRB). ACKNOWLEDGMENTS We gratefully acknowledge Ying Li, Ping Huang, Jiajin Liu, Yongjia Chen, and Qinchun Chen for their help extracting DNA and constructing RAD-seq libraries. SUPPLEMENTARY MATERIAL The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fgene.2020.00183/full#supplementary-material TABLE S1 | Gene annotation of 71 single nucleotide polymorphisms in the fine mapping region of chromosome 1.xlsx. TABLE S2 | Gene annotation of 432 single nucleotide polymorphisms in the fine mapping region of chromosome 4.xlsx.
6,535.2
2020-03-27T00:00:00.000
[ "Biology" ]
How Do Viruses Avoid Inhibition by Endogenous Cellular MicroRNAs? Introduction MicroRNAs (miRNAs) are an extensive family of small regulatory RNAs that function by binding to complementary mRNAs, primarily in the 3′ untranslated region (3′UTR), and then inhibiting their expression by reducing mRNA translation and/or stability [1]. MiRNAs are initially transcribed as long pri-miRNAs, which are sequentially processed by the RNase III enzymes Drosha, in the nucleus, and Dicer, in the cytoplasm, to generate the mature, ∼22-nt miRNA [2]. This is then loaded into the RNA-Induced Silencing Complex (RISC), which consists minimally of one of the four mammalian Argonaute proteins, Ago1 to Ago4, as well as a member of the GW182 protein family. MiRNAs function as guide RNAs to target RISC to complementary mRNA sequences on specific mRNA 3′UTRs. Analysis has revealed that complementarity to nucleotides 2 through 8 of the miRNA, the so-called seed region, is particularly important for effective RISC recruitment [1], although non-canonical sites, with incomplete seed complementarity, have also been reported [3]. Importantly, RISC recruitment to target sites that are occluded by RNA secondary structure or bound proteins is very inefficient [4]. Viruses and MicroRNAs Upon infection of a cell, viruses encounter a wide range of miRNA species, generally more than 50 different miRNAs per cell, and these miRNAs vary greatly between tissues. For example, miR-122 is expressed at very high levels in hepatocytes, but is absent from almost all other cells, while miR-1 is primarily expressed in muscle tissue and miR-128 in neuronal cells [5][6][7]. Indeed, many of the more than 1000 known human miRNA species show a tissue-specific expression pattern [8], meaning that viruses that infect multiple cell types need a way to avoid inhibition by a wide range of miRNAs with distinct mRNA-targeting specificities. Analyses of the interactions of viruses with cellular miRNAs have revealed that viruses can influence cellular miRNA biogenesis and effector mechanisms in several different ways. Viruses can clearly benefit from miRNA expression. For example, almost all herpesviruses that have been examined express substantial numbers of miRNAs, and these can facilitate viral replication and/or regulate viral entry or exit from latency [9]. 
While some DNA viruses also express miRNAs, including adenoviruses and polyomaviruses, miRNAs have not been detected in any RNA viruses examined so far, with the exception of the retrovirus bovine leukemia virus (BLV), which transcribes short, pol III-driven miRNA precursors from integrated BLV proviruses [10]. Viruses can also benefit from cellular miRNA species, with the clearest example being Hepatitis C virus (HCV), which requires miR-122 for replication [5]. Moreover, several other viruses have been reported to induce specific cellular miRNAs, and it has been demonstrated that, in some instances, this induction facilitates viral replication in culture, apparently by down-regulating specific cellular mRNA targets with antiviral potential [11,12]. While certain viruses can clearly benefit from cellular miRNAs, it has been unclear how viruses avoid inhibition of viral mRNA function by cellular miRNAs. Indeed, several reports demonstrating inhibition of viruses by cellular miRNAs have been published [13][14][15]. However, especially given that cellular miRNAs are highly conserved during evolution [1], it seems unlikely that viruses would fail to evolve mechanisms to prevent cellular miRNA-mediated inhibition in their normal target tissues. What these might be, however, is currently unclear. Several possible mechanisms can be proposed: 1) Viruses block miRNA function. This appears rare, as virus-infected cells generally contain normal levels of miRNAs, and most viruses can be inhibited by specific small interfering RNAs (siRNAs), which function indistinguishably from miRNAs in mammalian cells [16], or by insertion of target sites for endogenous cellular miRNAs into viral transcripts [17][18][19][20]. Indeed, the use of inserted cellular miRNA target sites as a way of inhibiting viral replication in tissues that express the cognate endogenous miRNA, while allowing unhindered viral replication in cells that lack this miRNA, has considerable potential in facilitating the development of novel attenuated viral vaccines or in targeting oncolytic viral vectors away from normal tissues [17][18][19][20]. Uniquely, in the case of poxviruses, it has been shown that miRNAs are degraded in infected cells [21]. In contrast, HIV-1 and influenza viruses, despite early reports to the contrary, have now been clearly shown not to block miRNA function [19,22], and indeed, the tissue and/or species tropism of influenza virus can be readily manipulated by insertion of target sites for endogenous miRNAs [19]. 2) Viruses evolve to avoid 3′UTR targets complementary to cellular miRNAs. Because full complementarity to the seed is generally critical for miRNA inhibition, single nucleotide mutations should block inhibition [1]. However, for viruses that can replicate in several different tissues, each expressing more than 50 distinct miRNAs, complete avoidance of all miRNAs may be very difficult to achieve. Nevertheless, especially for viruses that display a narrow tissue tropism, this mechanism seems very likely to be important. 3) Viruses evolve very short 3′UTRs. RISC recruitment to open reading frames (ORFs) does not effectively inhibit mRNA function, most probably because translating ribosomes sweep bound RISCs off the mRNA [1]. Therefore, mRNAs with very short 3′UTRs would be expected to be relatively refractory to miRNA-mediated inhibition. 
In fact, many RNA viruses express mRNAs bearing short 3′UTRs, and the short 3′UTRs that are present are often highly structured, which is predicted to also inhibit RISC binding [4]. RNA virus families that appear likely to use this strategy to avoid inhibition by endogenous cellular miRNAs include flaviviruses, picornaviruses, rhabdoviruses, and reoviruses. 4) Viruses evolve structured 3′UTRs. Some viruses, especially retroviruses, alphaviruses, and coronaviruses, contain extensive 3′UTRs in at least some viral mRNA species. For example, the HIV-1 mRNA that encodes Gag and Gag-Pol has a 3′UTR that is several thousand nucleotides in length. Similarly, the coronavirus mRNA encoding the viral ORF1a and ORF1b proteins has a 3′UTR more than 10,000 nucleotides in length. How do these very long 3′UTRs avoid functioning as targets for multiple miRNAs? One possibility is that these 3′UTRs have evolved high levels of RNA secondary structure, which would be predicted to globally restrict binding by miRNA-programmed RISCs [4]. Relatively little is known about the secondary structure of viral RNAs, although some data suggest that high levels of secondary structure are a common feature [23]. One viral RNA that has been examined in detail is the HIV-1 RNA genome, which also functions as the mRNA for the viral Gag and Gag-Pol proteins. This RNA has been shown to fold into an extensive secondary structure with relatively few areas that are unfolded and hence, presumably, are available for RISC binding [24]. This prediction has been validated by a comprehensive analysis of the susceptibility of the HIV-1 genome to small interfering RNAs (siRNAs), which in mammalian cells function indistinguishably from miRNAs [16]. These researchers generated over 9,000 siRNAs specific for the HIV-1 genome by sliding the siRNA target along the viral genome in one-nucleotide increments [25]. Relatively few of these siRNAs were found to inhibit HIV-1 replication and gene expression effectively, and those that did were predicted to bind to the few regions of the viral RNA genome that, using biochemical approaches, were predicted to adopt an open, unfolded conformation [24,25]. Recently, the ability of the HIV-1 genome to bind to endogenous cellular miRNAs in relevant target cells (CD4+ T cells) or in a non-physiological target cell (HeLa cells) has been examined using a technology called photoactivatable ribonucleoside-enhanced crosslinking and immunoprecipitation (PAR-CLIP). The PAR-CLIP technique involves pulsing cells with the highly photoactivatable uridine analog 4-thiouridine and then crosslinking endogenous RNAs to bound proteins by irradiation at 365 nm [26]. Crosslinked proteins, and the bound RNAs, are recovered by immunoprecipitation of RISC using an Ago2-specific monoclonal antibody, and the binding sites are footprinted by RNase treatment. The RISC binding sites are then comprehensively identified by deep sequencing of these small RNAs to generate sequence clusters that can be aligned to endogenous miRNA species. Analysis of RISC binding to the HIV-1 genome indeed identified several binding sites that were occupied by RISCs programmed by endogenous cellular miRNAs, and some of these could be shown, by indicator assays, to confer a modest repression of mRNA function [27]. 
However, perhaps the more interesting finding was that viral mRNAs, despite contributing more than 10% of the total mRNA transcriptome in HIV-1 infected cells, in fact gave rise to only approximately 0.2% of all assignable RISC binding sites, with the remaining approximately 99.8% being contributed by cellular mRNAs. That is, viral mRNAs are, at a minimum, 50-fold less likely to bind RISC than are cellular mRNAs, consistent with the idea that HIV-1-encoded mRNAs, at least, have evolved to globally avoid cellular miRNAs by adopting RNA secondary structures that preclude RISC binding. While a more complete understanding of the interaction of viruses with cellular miRNAs must await a more detailed dissection of the effect of endogenous miRNAs on a wide range of viral species, current data suggest that viruses have likely evolved a number of strategies to avoid inhibition by these ubiquitous cellular regulatory RNAs. Whether the perturbation of these avoidance strategies has the potential to lead to the development of reagents that are useful in disease prevention, such as novel forms of attenuated viral vaccines, remains to be determined.
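As a small illustration of the canonical seed rule discussed above (complementarity to miRNA nucleotides 2 through 8), the Python sketch below scans a 3′UTR sequence for perfect seed matches. The miRNA and UTR strings are example sequences chosen only for illustration, and real target prediction additionally weighs site context, conservation and RNA accessibility, which this toy scan ignores.

# Toy seed-match scan: report 3'UTR positions perfectly complementary to
# nucleotides 2-8 (the seed) of a miRNA. Sequences are illustrative only.
COMPLEMENT = str.maketrans("AUGC", "UACG")

def seed_sites(mirna, utr):
    seed = mirna[1:8]                         # nucleotides 2-8 (0-based slice)
    site = seed.translate(COMPLEMENT)[::-1]   # reverse complement of the seed
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna = "UGGAGUGUGACAAUGGUGUUUG"              # example miRNA sequence
utr   = "AACACUCCAUUGCAUACACUCCAGGG"          # hypothetical 3'UTR fragment
print(seed_sites(mirna, utr))                 # positions of candidate seed-match sites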
2,370.2
2013-11-01T00:00:00.000
[ "Biology", "Medicine" ]
Natural Hazards and Earth System Sciences Seismic safety assessment of unreinforced masonry low-rise buildings in Pakistan and its neighbourhood Pakistan and its neighbourhood experience numerous earthquakes, most of which result in damaged or collapsed buildings and loss of life, and which also affect the economy adversely. On 29 October 2008, an earthquake of magnitude 6.5 occurred in Ziarat, Quetta Region, Pakistan, and was followed by more than 400 aftershocks. Many villages were completely destroyed and more than 200 people died. The previous major earthquake, known as the South Asian earthquake (Mw = 7.6), occurred in Kashmir in 2005, where 80 000 people died. Inadequate building stock is to blame for the degree of disaster, as the majority of the buildings in the region are unreinforced masonry low-rise buildings. In this study, the seismic vulnerability of regionally common unreinforced masonry low-rise buildings was investigated using probabilistic-based seismic safety assessment. The results of the study showed that unreinforced masonry low-rise buildings display large displacements and shear forces, and the associated probability of damage can be directly related to damage or collapse. Introduction Earthquakes frequently hit different regions in Pakistan and its neighbourhood (see Fig. 1). On 29 October 2008, a magnitude 6.5 earthquake and more than 400 aftershocks hit Ziarat, Quetta Region, the provincial capital of Baluchistan, Pakistan. The Quetta region is one of the popular resort regions in Pakistan (see Fig. 2). This earthquake was responsible for more than 200 deaths; most of the existing buildings collapsed, many villages were completely destroyed, and more than 40 000 people were left homeless. The previous major earthquake, also known as the South Asian earthquake, occurred in Kashmir in 2005 and 80 000 people died. Quetta was flattened in 1935 by an earthquake that killed 30 000 people. The vulnerability of urban areas to natural disasters in places like Pakistan and its neighbourhood has attracted significant attention among practising engineers in industry and researchers in academia (Mitomi et al., 2000). Although much research has been dedicated to the prevention, or at least reduction, of disaster damage and loss, a satisfactory level has not been achieved. Of utmost importance for an effective solution is an understanding of the real structural behaviour. Time history analysis is one of the most accurate methods for reliable definition of the structural behaviour of the building stock in the region. Various researchers have studied damage assessment using different applications for this region (Khan and Khan, 2008; Lisa et al., 2005; Naseer et al., 2007; Zare et al., 2008). This study focuses on the seismic safety assessment of unreinforced masonry low-rise buildings in Pakistan and its neighbourhood using multiple methodologies. The seismic safety of the unreinforced masonry buildings that dominate the building inventory in the region was investigated by multiple approaches. Four different representative buildings were modelled to represent the building stock in the region. Nonlinear time history analysis and probabilistic-based seismic assessment analysis were performed on the representative buildings. The analysis results showed that unreinforced masonry low-rise buildings present large displacements, shear forces and probabilities of damage that can be directly related to damage or collapse. 
Seismicity of Pakistan and its neighbourhood Pakistan and its neighbourhood are prone to earthquakes. A high frequency of earthquakes has been experienced, resulting in loss of life and destruction of property. According to records, this region sits at a moderate to high seismic risk level. Parts of the North West Frontier Province, the vicinity of Quetta, and the areas along the border with Iran are in the high-risk zone. Historically, earthquakes in the Mw 7.0 range have been experienced in Baluchistan and along the borders with Afghanistan and India (ASC, 2009). The seismic map of Pakistan and its neighbourhood is given in Fig. 3 and the major earthquakes are listed in Table 1 (GSP, 2009; Zaré et al., 2008; Khan and Khan, 2008). Earthquakes with a magnitude of M > 5 between 1973 and 2009 are depicted in Fig. 4. The seismic hazard map is shown in Fig. 5 (USGS, 2009). Damage pattern The majority of the existing buildings in the region have not been designed to withstand earthquakes. Most of them are unreinforced masonry low-rise buildings that have received little or no engineering design. This structural type is ubiquitous all over the region. Heavy damage and collapses after earthquakes have demonstrated the vulnerability of such unreinforced masonry buildings. The buildings which have received the most severe damage in the recent earthquakes are mostly unreinforced stone, concrete block, and masonry buildings; Figure 6 shows some example buildings suffering such damage (Naseer et al., 2007; Khan and Khan, 2008). Stone masonry schools, colleges, universities, old hospitals and official buildings performed very poorly during the earthquakes. Old age, poor construction and materials, and improper design were among the main factors for this poor performance. The damage suffered by the masonry buildings can be attributed to diagonal shear failure, combined in-plane and out-of-plane effects, flexural failure of piers, failure of building corners, separation of orthogonal walls, damage to walls, and failure of external masonry and parapet walls (Amjad et al., 2007). In the region, construction practice is based on the experience of contractors. In general, the stone masonry buildings have lightweight wooden roof trusses instead of a roof or floor diaphragm to brace the walls. Therefore, the walls must span horizontally to perpendicular walls and are less likely to withstand an earthquake, and the spacing between walls is wide (Green, 2007; Tolles et al., 2000). 
After the Kashmir earthquake in 2005, the Pakistani Earthquake Reconstruction-Rehabilitation Committee was established to organize and manage the construction of buildings in the damaged areas. The committee stated that structural designs of buildings must be approved by this committee for compliance with international codes. The Pakistan Engineering Council was established to revise and update the Pakistan Building Code. The committee also decided to adopt the 1997 edition of the Uniform Building Code and to modify its provisions to make it compatible with Pakistan. Seismic assessment There are two main seismic assessment approaches, namely deterministic and probabilistic. Deterministic seismic assessment is comparatively simple and does not account for the uncertainties and probability of occurrence of an earthquake. For accurate deterministic seismic assessment, it is recommended that long-term ground motion data for the relevant area be used in the analyses. However, this type of data is lacking or limited for most regions. A similar lack of sufficient data also exists for this region (Lisa et al., 2005). For such cases, probabilistic-based seismic assessments are more reliable. Probabilistic-based assessment is expressed as the probability of seismic intensities exceeding a particular value within a specified time interval. In probabilistic-based seismic assessment, the seismic vulnerability is described by a recurrence relationship defining the cumulative number of events per year versus their magnitude. Analyses for different sites can be used to generate hazard curves which define the seismicity of the region. In seismic assessment, seismic hazard assessment is used for the determination of the seismic ground motion of the region, while seismic safety assessment is used for describing the seismic vulnerability of the buildings. Seismic hazard assessment Seismic hazard assessment can be explained in two ways, pre- and post-evaluations. Pre-evaluation is done immediately following an earthquake. Rapid visual assessment is one of the important tools for such a fast evaluation. The methodology provided in HAZUS is a post-evaluation model that can be applied in regions that are at risk of earthquake disaster. For long-term evaluation, detailed post-earthquake evaluation is done to consider the structural damage. Long-term evaluation includes nonlinear-based performance evaluation analyses. The HAZUS methodology generates estimates of the consequences to a city or region due to a scenario earthquake; a scenario earthquake is a specified earthquake with a given magnitude and location. The evaluation methodology consists of three basic steps (Korkmaz, 2009; Schneider and Schauer, 2005). 1. Study region definition: definition of the region, selection of the application area, and selection of the appropriate data from the earthquakes. 2. Hazard characterization: definition of the earthquake hazard, hazard type and source, fault type, and earthquake location. 3. Damage and loss estimation: social and economic loss estimation, structural hazard estimation. HAZUS is a user-friendly risk assessment model that addresses the seismic hazard in the US. It was developed more than 17 years ago in the US. It is a state-of-the-art decision support tool for assessing disasters. Capabilities in HAZUS include earthquake, flood and hurricane hazard characterization, building and essential-facility damage analysis, computation of direct economic losses, and assessment of secondary hazards (FEMA, 1999). 
However, HAZUS is not ready for direct use in Pakistan and its neighbourhood. The national and district boundaries and the characterization of the earthquake data used in HAZUS are currently only available for the US. HAZUS can nevertheless provide a starting point for the development of a disaster risk assessment tool for Pakistan and its neighbourhood, taking user requirements and data availability into account (FEMA, 1999).

Seismic safety assessment

In this study, seismic safety assessment was used for the evaluation of buildings, and the buildings were evaluated using nonlinear analyses. Various methodologies are available in the literature for the modelling and nonlinear analysis of such buildings (Agbabian, 1984; Bruneau, 1994; Kim and White, 2004; Lam et al., 2003; Mistler et al., 2006; Moon et al., 2006; Priestly, 1985; Sucuoglu and Erberik, 1992; Vera et al., 2008; Yi et al., 2006). Since the concern for the region is unreinforced masonry buildings, the evaluation of their seismic behaviour requires specific procedures. The response of masonry buildings to dynamic loads often differs substantially from that of ordinary framed buildings. In order to obtain a reliable estimate of the seismic risk, it is desirable to perform full dynamic analyses that describe the effective transmission and dissipation of the energy transferred from the ground motion into the building (Pena et al., 2007).

Probabilistic seismic safety assessment for unreinforced masonry buildings is currently available through HAZUS, a seismic loss estimation framework developed by the Federal Emergency Management Agency (FEMA, 1999). HAZUS uses a systematic approach for the probabilistic damage assessment of buildings in which building response is characterized by building capacity curves and seismic hazard is represented by demand spectra, i.e. the capacity spectrum method (Park, 2009). Defining seismic risk is the first important step of probabilistic assessment, and probabilistic assessments explicitly consider probabilistic parameters and variables. FEMA, in contrast, proposes performance-based procedures that do not consider probabilistic variables; instead, building performance and the levels of damage to structural components are considered, and building performance or damage levels are specified as a function of the maximum drift the building sustains during an earthquake (FEMA, 2000; Park et al., 2009).

For unreinforced masonry buildings, three performance levels are defined in FEMA 356: Collapse Prevention, Life Safety, and Immediate Occupancy. HAZUS defines four damage levels: Slight, Moderate, Extensive, and Complete. Following FEMA 356, HAZUS uses the maximum drift ratio as the damage measure, and the threshold values of the maximum drift ratio corresponding to the limit states are defined based on a comprehensive review of past studies on seismic building damage (FEMA, 1999, 2000; Park, 2009).

In this study, the seismic safety of unreinforced masonry low-rise buildings in Pakistan and its neighbourhood was investigated. Therefore, nonlinear time history analyses and then probabilistic assessment analyses were conducted for representative buildings. In the following section, the modelling and analysis of the representative buildings are explained; a brief illustrative sketch of the drift-based damage-state classification described above is given first.
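The short sketch below illustrates how a maximum drift ratio from an analysis can be mapped onto the four HAZUS damage states. The damage-state names follow HAZUS, but the numerical thresholds shown are placeholders chosen purely for illustration; the actual values must be taken from HAZUS or FEMA 356 for the building type in question.

```python
# Hypothetical drift-ratio thresholds for an unreinforced masonry building
# (placeholders for illustration only; not the tabulated HAZUS/FEMA 356 values).
DAMAGE_STATE_THRESHOLDS = [
    ("Slight",    0.003),
    ("Moderate",  0.006),
    ("Extensive", 0.015),
    ("Complete",  0.035),
]

def damage_state(max_drift_ratio: float) -> str:
    """Return the highest HAZUS-style damage state reached for a given maximum drift ratio."""
    state = "None"
    for name, threshold in DAMAGE_STATE_THRESHOLDS:
        if max_drift_ratio >= threshold:
            state = name
    return state

# Example: a maximum drift ratio of 0.8% obtained from a time history analysis.
print(damage_state(0.008))   # -> "Moderate" with the placeholder thresholds above
```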
Seismic safety assessment of unreinforced masonry low-rise buildings in Pakistan and its neighbourhood

In the study, the structural performance of unreinforced masonry low-rise buildings in Pakistan and its neighbourhood was estimated through nonlinear time history analysis. For this purpose, four different representative buildings were modelled and analyzed.

Modelling representative buildings

When modelling representative buildings, two critical modelling issues should be considered: (a) representation of the real structural behaviour and (b) non-linear material relationships. Masonry consists mainly of unit elements and mortar. The most common unit elements are brick and stone, and mortar is used for connecting the units. Compressive, tensile and shear strength, durability, water absorption coefficient and thermal expansion all affect the load-bearing capacity of masonry. Masonry buildings consist of blocks connected by mortar joints whose geometric complexity is reflected in the computational effort needed. Modelling of the joints is particularly important, since sliding at the joint level often initiates crack propagation. The mortar joints cause the masonry to be anisotropic. Two different approaches have been adopted to model this anisotropy: the "micro-model" or "two-material approach", and the macro-model. In the micro-model, the discretization follows the actual geometry of both the blocks and the mortar joints, adopting different constitutive models for the two components, whereas the macro-model smears the two components into an equivalent homogeneous material.

The strength of stone masonry depends on the material properties and the bond materials used. The stone itself is massive and stiff; the type and thickness of the mortar have a greater effect on the compressive strength of stone masonry than the stone units, and the strength of the stone does not influence the masonry strength much. The joint behaviour of unit and mortar determines the strength of stone masonry (Alvarenga, 2002). If the mortar is weaker than the units, the masonry strength depends primarily on the mortar strength. The shear strength of stone masonry is approximately 25% of its compressive strength (Unay, 2002; Lourenco, 1998).

To represent the construction type and establish a path for seismic safety assessment for the region, four different representative buildings were modelled after real examples; the existing building types in the region were adopted from the literature and from real examples (Naseer et al., 2007). The models shown in Figs. 8 and 9 were investigated via nonlinear time history analyses (Wilson and Habibullah, 1998). As depicted in Figs. 8 and 9, the plan dimensions of the models are close to those of typical masonry buildings in the region. Material properties are given in Table 2 and are defined according to the stone masonry commonly used for construction in the region. Masonry constructions subjected to seismic events can present damping ratios of around 8%-10% (Rivera, 2008). In the present study, the damping ratio was taken as 8%.
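The study states only that an 8% damping ratio was used, not how it was introduced into the time history analyses. A common way to impose a target damping ratio in such analyses is Rayleigh (mass- and stiffness-proportional) damping; the sketch below is a minimal illustration under that assumption, with hypothetical modal frequencies for a low-rise masonry model.

```python
import math

def rayleigh_coefficients(zeta, f1_hz, f2_hz):
    """Return the Rayleigh damping coefficients (a0, a1) that give the same
    damping ratio zeta at the two circular frequencies corresponding to f1 and f2."""
    w1, w2 = 2.0 * math.pi * f1_hz, 2.0 * math.pi * f2_hz
    a0 = 2.0 * zeta * w1 * w2 / (w1 + w2)   # multiplies the mass matrix
    a1 = 2.0 * zeta / (w1 + w2)             # multiplies the stiffness matrix
    return a0, a1

# 8% damping anchored at two hypothetical modal frequencies of a low-rise masonry model.
a0, a1 = rayleigh_coefficients(0.08, f1_hz=5.0, f2_hz=15.0)
print(f"a0 = {a0:.3f} 1/s, a1 = {a1:.5f} s")
```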
Seismic safety assessment of representative buildings

To obtain a set of performance estimates for the representative buildings, a series of nonlinear time history analyses was conducted (Wilson and Habibullah, 1998). A critically important step in the application of time history analysis is the selection of the earthquake records. For each model, 60 different ground motion records were selected; since local data are not available for the region, the earthquake data were chosen from all over the world. The non-linear dynamic analyses were performed considering tensile strength, tensile fracture energy and damping. In total, 240 time history analyses were conducted for the evaluation. Probabilistic estimates of drift demand were then developed based on the analysis results and simplified probabilistic modelling. For each building, the mean and standard deviation of the maximum drifts were taken as parameters of the distribution functions; following Cornell et al., a log-normal distribution was assumed to be a reasonable statistical description of the building response (Cornell et al., 2002). Using the fitted distribution functions, the probability of exceeding different drift levels was computed for each model (Park et al., 2009). These distribution functions gave the probabilistic drift demand curves (see Fig. 12). Using these drift demand curves, the probability of exceeding different displacement capacity thresholds (displacement limits), chosen from the displacement capacity curves, was estimated for each model.

The present study investigates the seismic safety of common unreinforced masonry low-rise buildings in Pakistan and its neighbourhood using different assessment and estimation approaches. Probabilistic seismic safety assessment combines nonlinear analysis with a probabilistic approach. Time history analyses were conducted with 60 different earthquake records (Table 3), and on the basis of the results the four representative building models were evaluated. The analysis results show significant displacements in the unreinforced masonry low-rise buildings and indicate quite different responses of the representative buildings to different earthquakes. The non-linear time history analyses indicate that the masonry buildings are susceptible to seismic damage associated with large displacements. The results of the analyses are given in the figures. According to the results, the studied types of unreinforced masonry low-rise buildings in Pakistan exhibit displacements and shear forces large enough to cause damage or collapse. Using the analysis results, the seismic vulnerability of unreinforced masonry low-rise buildings in the region was investigated, and the seismic safety of the investigated buildings is questionable.

There are many lessons to be learned from the recent earthquakes. Based on these lessons, both short- and long-term measures in earthquake-prone areas of Pakistan and its neighbourhood are essential. In conclusion, a detailed seismic safety assessment for all unreinforced masonry low-rise building types in the region is very important. A next step could be the definition of strengthening schemes for unreinforced masonry buildings via simplified and fast methodologies.
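As a minimal numerical sketch of the probabilistic drift evaluation described in this section, the fragment below fits a log-normal distribution (following the Cornell et al., 2002 assumption) to a set of maximum drift ratios and computes the probability of exceeding chosen drift limits. The drift values and the limits are placeholders, not results of the study.

```python
import numpy as np
from scipy import stats

# Hypothetical maximum drift ratios from a suite of nonlinear time history analyses.
max_drifts = np.array([0.0021, 0.0048, 0.0035, 0.0090, 0.0012, 0.0060,
                       0.0042, 0.0110, 0.0028, 0.0075, 0.0055, 0.0033])

# Fit a log-normal distribution by working with the logarithms of the drifts.
ln_d = np.log(max_drifts)
ln_median = ln_d.mean()            # log of the lognormal median
beta = ln_d.std(ddof=1)            # logarithmic (dispersion) standard deviation

def prob_exceed(drift_limit):
    """Probability that the maximum drift exceeds a given limit under the fitted model."""
    return 1.0 - stats.norm.cdf(np.log(drift_limit), loc=ln_median, scale=beta)

for limit in (0.003, 0.006, 0.015):   # hypothetical displacement capacity thresholds
    print(f"P(drift > {limit:.3f}) = {prob_exceed(limit):.2f}")
```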
Edited by: M. E. Contadakis
Reviewed by: two anonymous referees

Figure 1. Map of Pakistan and its neighbourhood.
Figure 2. Map of Pakistan and location of Ziarat.
Figure 7. Typical behaviour of quasi-fragile materials under uniaxial loading and definition of the fracture energy: (a) tension loading, (b) compression loading (Alvarenga, 2002).
Figure 8. Plan and 3-D view of the one- and two-story versions of Model 1.
Figure 9. Plan and 3-D view of the one- and two-story versions of Model 2.
Figure 10. Time history analysis results for Model 1.
Figure 11. Time history analysis results for Model 2.
Table 1. Major earthquakes in Pakistan.

Table 3 lists the peak ground accelerations (PGA) and peak ground velocities (PGV) of the records. The time history results are given in Figs. 10 and 11. After the time history analyses, probabilistic evaluations of the drift levels were made to provide performance likelihoods for design-basis earthquakes expected in the future.
4,022.2
2009-06-29T00:00:00.000
[ "Engineering", "Environmental Science" ]
Looking for sources of human rights in Japanese traditions

For most people, this is an impossible choice to make, the imposition of which can only generate anxiety and alienation. Conversely, to insist on specific and exclusive cultural roots for human rights in the traditions of the "Christian Occident", as a coalition of liberal cultural relativists and Euro-American conservatives would have it, means denying to anybody who is identified, on their own accord or by someone else, as coming from another background the possibility to make the concept of human rights their own. It would also mean to consciously or inadvertently sound a note of Western supremacism, which can only alienate people in areas that were exposed to Western cultural and political imperialism. What follows from this is that, in order to avoid a self-defeating method in advancing human rights, we have to search for ways to picture them as standing in continuity with cultural traditions around the globe, without necessarily having to postulate that they were explicitly acknowledged in all areas and at all times. In this manner, looking into historical sources can also help us to become aware of the relativity of each specific formulation of human rights to the social conditions of its age. It is worth noting that this strategy of establishing a continuity without insisting on identity was advanced in Japan already in the early decades of modernization by liberals such as Nishi Amane 1 (Steineck 2013) or (in a less reflective manner) Yamaji Aizan (Squires 2001) against nationalist authoritarians such as Yamagata Aritomo, Shinagawa Yajirō, or Inoue Tetsujirō (Ito 2010).

It is with these preliminary thoughts in mind that I want to look, in the following pages, for possible intellectual sources of the concept of human rights in the premodern intellectual heritage of Japan. In my search, I shall consciously employ the same "hermeneutics of good will" that has traditionally been granted to slaveholders (Plato and Aristotle), profiteers of human trafficking (Locke; discussed in Glausser 1990), or racists (Kant; discussed e.g. in Bernasconi 2008, but see also Kleingeld 2007, who argues that he eventually changed his mind) when rooting human rights in the European traditions. I shall to some extent follow the pattern established by Micheline Ishay in her seminal The History of Human Rights (2008) in looking for both implicit and explicit acknowledgements of norms pertaining to liberty (namely, injunctions for tolerance and restrictions of state interference in private affairs), equality (provisions for legal and social justice and universal welfare), and fraternity (willingness to extend respect and welfare to all human beings, regardless of social divisions). I will, however, not follow her in highlighting some ancient sources before jumping to the modern history of human rights. Instead, I will quite selectively mention some sources from antiquity and the medieval period, before discussing a particular early modern (Edo period) Confucian school of thought and its (partly diverging) ideas on human nature in more detail.

1 Japanese names are given in the customary Japanese order (family name first, personal name second).
As for the references given, apart from primary sources directly discussed here, I have privileged sources in Western languages over the vast and insightful pertinent scholarship in Japanese, simply for reasons of accessibility for the presumed readership of this volume. All translations from the Japanese are mine if not stated otherwise.

This is not to privilege Confucianism as the single source relevant in establishing precursors to the idea of human rights in Japan. Rather, I want to focus on the said 17th-century school of thought for two reasons. First, Confucian "role ethics" have been presented in parts of the literature as contrasting with, if not contradicting, "European universalism", while others have argued quite to the contrary that Confucianism is a token of "post-conventionalist", universalist ethics (most notably Roetz 1993). The controversy usually revolves around "classical Confucianism", i.e. sources from Chinese antiquity. However one reads these sources, taking up early modern Japanese positions allows us to see something more of the diversity of opinions that developed within Confucianism, which in itself helps to foster a nuanced appreciation of the universalist and the conventionalist/particularistic tendencies and potentials in this tradition. As suggested above, in my view the problem is not so much whether ancient to early modern Confucianism (or Buddhism, or Shinto, for that matter) conceived of "human rights" in the way we do today. They did not, and could not, for Confucius or Itō Jinsai could no more than Plato or Aristotle have thought "equality" the way we do, nor of a "right to nationality" (UDHR, Art. 15) or a "right to work" (UDHR, Art. 23), which make sense only in the modern system of nation states and capitalist economy. The question, then, is whether a formulation and legitimation of human rights can be developed in Confucian (and related) terms. Looking at one later formulation of Confucian thought will also allow us to grasp, at least in part, the historical and dynamic character of this tradition, in contrast to the singular focus on its earliest and classical sources, which (albeit often inadvertently) serves to strengthen the conviction that there was no history of thought, no development of ideas, and thus no progress, outside Europe.

Elements of human rights in ancient and medieval Japan

In the following, I want to turn the light on some important sources and developments in ancient and medieval Japanese history that can be read as documenting insights fundamental to the idea of human rights. For various reasons, the first text to turn to is the so-called "Seventeen Article Constitution" (in fact more a collection of principles of government and administration than a fundamental law), documented in the Nihon shoki (fasc. 22), the first official history of the Japanese Kingdom, and attributed there to the Prince Regent Shōtoku (574?-622?), a figure in which retrospective imagination and historical facts are decidedly mixed (Como 2008, 4-5). Whatever the true origin of this document and the factual character of its putative author, both have time and again served as points of departure for discourses on what Japan is or should be like (Como 2008; Itō 1998), and in this vein the text has also recently been used as the cornerstone for the largest collection of source-texts in Japanese philosophy to date.
(Heisig 2011, 35-39) While the text as a whole evidently bears witness to the Yamato dynasty's effort to centralize government and install itself as the one and only source of authority (which in itself was -very benevolently -interpreted by the above-mentioned Yamaji Aizan as an attempt to protect the common people "from the despotism of the nobility" (Squires 2001, 152;cf. Yamaji 1965, 315), several of its articles are of relevance to our search for sources of human rights in the Japanese traditions. Yamaji (ibd.) specifically takes up article 12, which bars provincial officials and nobility from exacting levies, as evidence of the intention to protect the welfare of the common people: Neither Provincial governors nor the provincial nobility shall collect levies or impose labour on the good people. The country does not have two lords, and the people do not have two masters. The king is the master of all people in the land. The officials he entrusts are all his vassals. How can they, aside from the government, collect levies from the common people? (NST 2 ,18;19; compare the modernizing translation in Heisig 2011, 38) If this is about protecting the "good people" (J. hyakusei 百姓, also read ohomutakara, lit. "the (king's) treasure", which may be understood quite literally as those who contribute to the royal treasury by tax and corvée labor, as distinguished from the itinerant and lowly, tax evading senmin 賤民), it is at least as much about establishing a royal tax monopoly, and a principle of suzerainty hitherto unknown in Japan. Less ambiguous are the articles 5, 6, 10 and 11, which relate to values such as impartiality, tolerance, and justice, and explicitly mean to protect the weak from extortions by the rich and powerful, including corrupt officials. Article 5 reads: Stop craving for delicacies, give up greed, and decide petitions in a transparent manner. … Recently, those deciding over petitions take profit as their constant, and hear cases with a view to bribes. Thus the petitions of the wealthy pass like stones being thrown into the water. Those of the poor resemble water being thrown at a stone. If things are like this, poor commoners have nowhere to turn to. Consequently, the way of the vassal-ministers is compromised. (NST 2, 14; 15); cf. again Heisig 2011, 37) While there is no idea of a "right" of the common subjects to receive equal treatment, the principle of impartiality in itself is evidently, and explicitly, formulated -a principle that, under the conditions of the rule of law, may well be transposed into the idea that "All are equal before the law and are entitled without any discrimination to equal protection of the law." (UDHR, Art. 7) Our text's Article 10 calls for tolerance with respect to diverging opinions. This may well have first and foremost referred to policy discussions among the governing elite, but the injunction is put forth in a general way: Stop being wrathful, discard rage, and don't be angered by people who differ. Everyone has a mind, and something his heart clings to. What they think is good may be bad for us. What we think is good may be bad in their eyes. We are not necessarily sages, nor are they necessarily stupid. After all, we all are just ordinary, deluded people (J. bonpu 凡夫). Who can decide what is good or bad? We all alternate between wisdom and stupidity, without end, like a ring. Thus, even if the other side is enraged, we should be worried lest we ourselves are mistaken. 
Even if we think we alone have grasped the matter, we should submit to consensus. (NST 2,18;19) While the last sentence may read like a license to opportunism, it is generally understood as an injunction against the magisterial enforcing of judgements on a reluctant majority. (NST 2, 383) The passage may thus be understood as putting forth the principle of tolerance not only of divergent views, but also of different ways of life, as a principle of governance -a principle translatable into the granting of civil liberties, such as granted by the UDHR's Art. 12. One could also find at least the first traces of an idea of human equality here in the absence of reference to privileged insights of the ruler. But no mention is made (as yet) of a human intelligent or moral nature that would substantiate claims for human dignity -instead, the text refers to the limitations of human capacities in arguing against the arrogance of those in positions of power. Articles 6 and 11 of "Shotoku's Constitution" request rulers to "punish wickedness and encourage goodness", and to honor merit appropriately. Since Article 6 precedes article 10, it also restricts its apparent relativism, implying the indisputability of certain moral standards. Again, this is not articulated in terms of formal law and legal justice, but Article 6 makes explicit reference to a "golden rule of antiquity" (Heisig 2011, 37;NST 2, 14;15), which makes it open to a reading conformant to principles of equality before the law, impartiality of justice, and the like. The political and legal thinking of these injunctions is largely informed by the Chinese classics that, in the West, have been subsumed under the term "Confucianism", with notable elements of Buddhist thought (as apparent in the term "ordinary, deluded people" in Art. 10). The latter is also deemed mainly responsible for the remarkable discontinuation of capital punishment in later antiquity subsequent to a decree issued by Saga Tennō in 818 that "all capital punishment for theft be mitigated into incarceration," which lasted until militant political conflict resurfaced in the capital with the Hōgen disturbance of 1156 (Schmidt 2002, 11; see ibd. 11-12 for an overview of the explanations given in the pertinent literature). In the ensuing medieval period, marked by the preponderance of armed conflict and a military aristocracy, penal law and practice reverted to increasingly violent means, and capital punishment was frequently applied even for minor offenses (Schmidt 2002, 12-15). On the other hand, new concepts conducive to a universalist understanding of human nature and dignity were developed within the dominant religious paradigm that combined the cult of local and national deities with Buddhist creeds and practices. In the field of Buddhist doctrine, the resurgence of armed conflict and the increasing social violence resonated with the notion of a "final age of the Dharma" (mappō 末法) in which human beings would be increasingly dependent on the grace of Buddhas and Bodhisattvas for salvation. While the widely popular concept of reliance on the "other power" (tariki 他力) of Buddha Amida was arguably designed to work against spiritual elitism and arrogance (see the statements by Hōnen and Shinran, quoted in (Heisig 2011, 247-248;253-255), it did translate into tendencies for social egalitarianism, including the establishment of a regional non-aristocratic selfgovernment in one area of Japan that lasted for over 100 years (Pauly 1985). 
Conversely, other schools of Buddhism emphasized the idea of a "Buddha nature" (busshō 仏性) shared by all living beings and of an "original enlightenment" (hongaku 本覚) inherent to human nature to counter the notion that some human beings were born without the moral and spiritual capacity for enlightenment. (Both terms were interpreted in various ways; see Stone (1999) for a window into this discourse.) These concepts were, with notable contributions by Buddhist monks, translated into a fledgling Shinto theology (Teeuwen 1998; Rambelli 2009). The Ruijū jingi hongen 類聚神祇本源 ("The Original Source of the Classified Texts pertaining to the Heavenly and Earthly Deities"), a fascinating text that attempts a Shintō synthesis of the Chinese classics, Buddhist sources, and Japanese mythology, has in its chapter on "The Deeper Meaning of the Way of the Gods" an early formulation of a thought that is, to say the least, open to readings supportive of a positive notion of human dignity: "Man is the divine / sacred being on earth." (Hito wa sunawachi tenka no shinbutsu nari 人ハ乃チ天下ノ神物也。NST 19, 115) While modern Shintoists have generally excelled more in championing Japanese particularism, if not chauvinism (Antoni 1998), this serves to show that there is no need to exclude this tradition when connecting the idea of human rights to older Japanese sources.

Discourses on Humaneness and Human Nature in Early Modern Japan

From these perfunctory glances at Buddhist and Shintō sources from late antiquity and the medieval period, I turn to the literature generally categorized as "Old Learning" (J. Kogaku 古学). Kogaku, which as a historical label refers more to a methodological paradigm of giving prevalence to "old" (Zhou to Han period) over "new" (Song and Ming period) sources than to a unified school of thought, emerged in the 17th century from critical reflections on the then dominant school of Chinese Learning that based itself on the metaphysics of Chinese Song-period thinkers such as Zhou Dunyi (1017-1073) and Zhu Xi (1130-1200). With the firm regulation of religion in general and Buddhist institutions in particular enacted by the Tokugawa Shogunate, the language of Classical Chinese scholarship (which is a more appropriate circumscription of what in the West is popularly called "Confucianism") within a few decades became the dominant idiom of intellectual discourse, supplying paradigms which in turn inspired new forms of Buddhist scholarship and a national learning movement that was seminal for the development of a modern nationalist ideology. The government-sponsored Hayashi School (founded by Hayashi Razan 林羅山, 1583-1657) emphasized a grand, but ultimately static view of the natural and social cosmic order (Brüll 1970; Boot 1979; Brüll 1989) that resonated well with the shogunate's efforts to stabilize the realm and its own grip on power (Totman 2000, 219-225). In terms of individual moral practice, this school, in line with the inspiration it drew from Buddhist sources and practices, emphasized "quiet sitting" (seiza 静坐) as a means of reverting to one's original nature inherently in tune with cosmic principle (Tucker 2004, but see Tucker 2002 for an interesting exception to the rule). In contrast, the "Old Learning" paradigm, represented by thinkers with no (or severed) ties to the shogunate, and partly from a townspeople rather than an aristocratic background, preferred a more dynamic reading of human nature that foregrounded moral and political action.
In the following, I turn to two representative thinkers of this tradition, Itō Jinsai 伊藤仁斎 (1627-1705) and Ogyū Sorai (1666-1728), not least because the latter inspired the great Meiji-period philosopher Nishi Amane 西周 (1829-1897) to conceive of a modern and liberal variant of Confucian philosophy (Steineck 2013). Itō Jinsai's Gomō jigi 語孟字義 (Meanings of terms in the Lunyu and Mengzi) discusses seminal terms of Confucius' Analects and Mencius with a critical, sometimes polemical view to their interpretation in the tradition of Song and Ming scholars. Chapter eight is dedicated to "human nature" (sei). In its second paragraph, Jinsai refutes the notion of an "original state of human nature" (honzen no sei 本然の性) that would be characterized by an a priori form of goodness, regardless of an individual's empirical capacity for, and record of, moral action (Tucker 1998, 134-135; NST 33, 48-50). Jinsai, who identifies human nature with the inborn disposition, rejects the idea of a kind of inherently good spiritual nature that would be the same in all human beings. To the contrary, he believes that humans are born with different dispositions, but that they all share a moral sense that sets them apart from animals. This implies two things: being able to discern good from bad, and a natural preference for what is good that may, however, lose out to other impulses according to individual disposition and habitus (i.e., degree of moral cultivation). This is how he interprets Mencius' concept of the goodness of human nature:

Mencius also explained, "People can become good because of their feelings (jō 情). That is what I mean in saying human nature is morally good." Mencius' point was that chickens and dogs, lacking any ethical understanding, cannot be taught goodness. But human feelings are such that, despite the reality of extremely inhumane deeds such as theft and murder, people are happy when praised and upset when chastised. In knowing the good to be good and the bad to be bad, both are the ground for the doing of good. That is what we mean by saying that human nature is good, and not that everyone's nature on earth is the same and we don't find anyone who is bad. From this it is clearly evident that Mencius' saying about human nature being good is not at variance with Confucius' saying that humans are similar in nature (NST 33, 50; the translation of the first three sentences is taken from Tucker 1998, 135).

The ensuing paragraphs of his discussion of human nature make it even more clear that Jinsai identifies the specific and indelible trait of human nature that distinguishes humanity from other animals not as a substantial goodness, but as the ability to know good from bad, which, as he goes on to say, is the foundation for all moral as well as immoral acts; in paragraph 4 he writes:

All talk of the good is in contradistinction to what is bad. When there is the good, there is also the bad. But if we, by conjecture, attempt to grasp their ultimate origin, we will inevitably end up relying on goodness (NST 33, 52, cf. Tucker 1998, 138).

Two points are of special importance in the context of our discussion. Firstly, we have a clear formulation of the moral nature of human beings that can serve as a conceptual foundation for the articulation of human dignity (Art. 1, UDHR) and might, via its implicit reference to human freedom, be developed into a Confucian articulation of civil liberties.
And secondly, this moral nature is conceptually removed from questions of moral substance or merit: even "thieves and thugs who commit horrible crimes" (Tucker 1998, 139; cf. NST 33, 52) in Jinsai's eyes are not exempt from partaking in it. In thus clearly distancing the notion of human moral nature from questions of merit or demerit, Jinsai not only explicitly emphasizes the possibility and importance of moral cultivation (which connects to the right to education and participation in social life); his concept also speaks against acts that deny to humans the status of moral agents, regardless of the origin, status, and moral record of the person in question. It is, in this respect, interesting to see how Jinsai deals with the traditional distinction between the noble few and the common crowd (kunshi shōjin 君子小人; chapter 23). First, he points to the history of the terms, which initially indicated status distinctions, but later came to refer to differences in record and were, as Jinsai says, polemically used in cases where there was a positive or negative mismatch between merit and social status. He then goes on to further distinguish "the way of the refined / noble person" (kunshi no michi 君子の道) from the "way of the sage" (seijin no michi 聖人の道): while the latter is only accessible to people of exceptional capacity, the former is open to all and characterized by its unobtrusiveness. Jinsai, who hailed from a non-aristocratic background, seems to consciously emphasize the constant, even pedestrian nature of the "way of the noble person" in an effort to subvert the social and political elitism traditionally connected with the above-named distinction (NST 33, 80-81; Tucker 1998, 195-197).

Ogyū Sorai 荻生徂徠 (1666-1728), a member of the warrior class and for a period of time counselor to Shogun Tsunayoshi's chief advisor Yanagisawa Yoshiyasu (Lidin 1973, 38-51), is certainly less of a candidate than Jinsai if one were to search for precursors of democratic values in premodern Japan. However, his criticism of the tendency of the tradition to conflate the virtues of good government with general thoughts on human nature and the cultivation of benevolent feelings may be seen as laying the groundwork for the separation of the spheres of law and morality, and thus for a theory of a state of law. This was, incidentally, the way in which the 19th-century liberal philosopher Nishi Amane read and developed his ideas (see below). Furthermore, Sorai continues Jinsai's championing of moral action in contrast to the emphasis on "returning to the source" and "quiet sitting" (although he criticizes Jinsai rather severely on other points, see e.g. Tucker 2006, 189). In chapter three of his own work on terminology, Benmei ("Discussion of terms"), Sorai defines the central virtue of "humaneness" (C. ren, J. jin 仁) as "the virtue of being a leader of men and providing for the peace and stability of the common people." (NST 36, 53; Tucker 2006, 186 omits the part about leadership.) To his understanding, it is thus clearly a virtue of the governing elite, and not for commoners. He goes on to criticize the received view:

Confucians of later generations did not fathom the way of the sages, and so misunderstood humaneness. They claimed, "Humaneness is the principle of love and the virtue of the mind." They further alleged, "Humaneness appears when selfish desires are fully cleansed and the principles of heaven flow actively." … Their insights on humaneness derived from the teachings of Buddhism and Daoism.
Consequently, they emphasized notions such as "principle" and "the mind." Because later Confucians misread the Doctrine of the Mean and Mencius, they interpreted humaneness as human nature. … Their idea was that the humane person loves humanity. However, love is simply a feeling. If the feelings are quieted, as they advocated, how could love become manifest?" (Tucker 2006, 188; cf. NST 36, 55)

It is, by the way, clear for Sorai that, while "humanity" consists in the enactment of practical policies for the sake of the whole population, the moral cultivation of those assuming governmental authority remains an essential element:

Practicing humane government takes self-cultivation as its foundation. If self-cultivation is not engaged in, the people will not follow even if humane government is enacted (Tucker 2006, 191; cf. NST 36, 57).

Taking this as an aside on the importance of moral credibility in politics that one would certainly wish contemporary politicians to take to heart, I want to focus here on Sorai's emphasis on governing in a way that is practically beneficial, and not just abstractly benevolent, to all: on favoring "creative production" (as Tucker aptly translates sei 生 in this context; Tucker 2006, 186; NST 36, 53) and fostering social cooperation while taking account of the divergences in individual dispositions. Sorai highlights providing for a peaceful and stable social environment and creating a social structure in which everyone can prosper by contributing to social life according to their talent as the two central embodiments of "humaneness" (NST 36, 53-54; Tucker 2006, 186-187). Sorai specifically insists that precisely because humans differ in their individual dispositions, they can be, and need to be, integrated into a society where their capacities are made to work for mutual benefit, and it is the responsibility of those with governmental authority to provide for such a framework:

While human nature does differ from person to person, regardless of an individual's knowledge or ignorance, worthiness or worthlessness, all are the same in having minds that mutually love, nourish, assist, and perfect one another. People are also alike in their capacity to work together and undertake tasks cooperatively. Thus for government, we depend on a ruler; for nourishment, we depend on the people. Farmers, artisans, and merchants all make a living for themselves by relying upon each other. One cannot forsake society and live alone in a deserted land: it is simply human nature that makes it so. Now, "the ruler is one who organizes people into groups." Were it not for the virtue, humaneness, how could people possibly be so well organized and unified into society? (Tucker 2006, 187; cf. NST 36, 54)

Sorai also affirms the old Chinese idea that this is what in the long run legitimizes governmental authority (NST 36, 57; Tucker 2006, 190). In stressing the importance of concretely beneficial policies over against the quietistic contemplation of lofty ideals, he is certainly closer to a "materialist" reading of human rights. He may thus be read with an eye towards rights for securing individual survival, participation in social and cultural life, and, to some extent, distributive justice (UDHR, Art. 1, 2, 22). To reiterate, by highlighting these thoughts of Sorai, I do not want to imply that he or any other source previously quoted here had a theory of human rights in mind.
I simply want to indicate that the idea of human rights and some of its more concrete articulations do resonate well with seminal concepts from various older Japanese traditions. In other words, I want to suggest that Japanese is a possible "native language" of human rights -that human rights can be formulated by making use of traditional terminology and with reference to time-honoured ideas from what has been received as the canon of Japanese thought. This will, however, not be possible if one subscribes to a form of traditionalism that accepts these sources as authorities that reign supreme. Instead, one has to opt for a kind of creative reading of the tradition that allows for their re-interpretation and adaptation in the light of new insights and circumstances. As I have mentioned already in the introductory paragraph, such a reading is not without precedent in Japanese modernity: It was already employed in the early decades of political modernization by various liberal intellectuals in their struggle against the nativist authoritarianism that ultimately carried the day -eventually leading up to the cataclysms of Japanese imperialist chauvinism, and the breakdown of the empire in 1945. Since we are currently witnessing a resurgence of nationalist ideologies in Japan, this alternative, and its continuity to what may arguably be the best of the intellectual tradition of Japan, surely deserves renewed attention. I therefore want to close this paper by very briefly highlighting how Nishi Amane, often termed "the father of [scil. modern] Japanese philosophy" (Botz-Bornstein 2006, 70), attempted to critically develop a "Confucian" theory of the rule of law, and civil liberties. While Nishi is today mainly "known for his pioneering work in introducing European philosophy and other disciplines into Japan" (Heisig 2011, 583), and most notably for coining the term tetsugaku 哲学, which has come to denote philosophy in the Western tradition, he also strove to connect what he had learned in Europe to the tradition he had first studied -Classical studies in the tradition of Ogyū Sorai. This is most obvious in his "New theory of the unity of the various fields of learning" (Hyakuichi shinron 百一新論, 1874;NAZ 1, 232-289). Consider the following paragraph, which reads like a modernized version of Sorai: Some scholars suppose that by coming to know the 'principle' of all things and to have a sincere heart and 'mind' they can spontaneously govern the country without further study; without investigating and clarifying what is in its interests or to its advantage. It is painful to think of the harm that would result from governance based on something like a Zen monk doing 'zazen'. (Heisig 2011, 584;NAZ 1, 237-238) Witness also the following reflections from a later treatise: … Human society, too, comes about because of the benefits it brings: the morality of mutual support (as with husband and wife, or father and son), the laws of division of labor (the exchange and distribution of work), the distinction between leaders and commoners (those in office and those not in office) and between government and citizens (the judiciary prevents conflicts, the army protects the nation). Hence, seeking what is beneficial is the basis of morality. The way of freedom does not gainsay the pursuit of gain. (Heisig 2011, 584;cf. 
NAZ 2, 312) One immediately notices the similarities to the paragraph from Sorai quoted above, but also how the old ideas are transposed into a new key, which is adapted to modern circumstances and based on a notion entirely absent in Sorai (or any of the other sources quoted above), i.e. that of "freedom" (J. jiyū 自由). (The treatise bears the title: "On the Idea that Freedom is Independence.") Nishi converts the old idea that maintenance of authority is dependent on the ability to exert it for the benefit of all into the modern concept that the establishment of the state limits individual discretion for the sake of general freedom. However, he ingeniously connects traditional "Confucian" notions to modern liberalism by defining freedom through its relation to benefit: "Freedom is the freedom to attain what is beneficial (jiyū wa iwayuru shūri no jiyū nari 自由者所謂就利之自由也) ", and thus connects Sorai's notions of social cooperation, beneficial government and dutiful behavior to the golden rule of the modern liberal state: Those who are loose with the limits [of freedom] cannot but be treated strictly, and therefore we cannot use our own freedom to violate the freedom of fellow human beings. ... Only animals, insects, fish and the like are free to pursue and gain benefit for themselves alone. In human society, one forfeits this smaller, lower form of freedom to obtain a greater, higher freedom. (NAZ 2, 312; the English translation of the last two sentences taken from Heisig 2011, 584-585) One of the central points of Hyakuichi shinron that is pertinent to our discussion of human rights is Nishi's critique of the tradition for its neglect to distinguish the sphere of law properly from that of morality, and his subsequent introduction of the term "right" (J. ken 権) into the discourse (NAZ 240-247). Having argued that "law" (J. hō 法) and morality (J. kyō 教) differ both in intension and extension (NAZ 1, 263-265), Nishi goes on to reassure his readers that "law has its origin in human nature" (J. hō wa motomoto hito no sei ni motozuku mono 法は元人の性に基づくもの) and is therefore implicitly present in the (Chinese) classics as well, where it is, he says, subsumed under the term gi 義 ("obligation"). Nishi explains: What is called "obligation" emerges in the relation between two people. For example, the retainer has the "obligation" to serve his lord, and conversely, the lord has the "obligation" to support the retainer. With the respect to such obligations, there also emerges what is called a "right" (ken 権) -but in the Han Classics, this also was called "obligation," and the two were not clearly differentiated. In contrast, in Western thought both are separated, and consequently there is much talk of "rights" there. However, the idea of a "right" is present in the Analects when they speak of there being an "obligation" that one grasps later, or that one looks to an "obligation" with an eye to profit. In Mencius, it is discussed whether one would be righteous or unrighteous when taking such an "obligation" -and while this is also called an "obligation", it is said with respect to the side of the one who takes, and is different from the obligation to give something. This is what in the West is called a "right". For example, the retainer has a "right" to receive support from the lord. The lord has a "right" to expect obedience from the retainer. Thus, rights and duties spring forth mutually between two people. 
(NAZ 1, 272-273)

We can see here how Nishi at once critiques the tradition and strives to maintain continuity with it. Through this creative form of reading he is able to introduce modern notions of liberties, the rule of law, and the like while maintaining the link to the cultural heritage. Nishi, by the way, anticipated that the introduction of a discourse of rights might be perceived as fomenting dissent and conflict in society, and that some would take recourse to the revisionist idea of replacing the discourse of "rights" entirely by the promotion of morality. But while he did believe that "rights" can only work properly and beneficially if complemented by "morality", an awareness of duties that complements the awareness of rights, he was adamant that a regress to the confusion of the spheres of law and morality is not viable, and could only be to the detriment of both. (NAZ 1, 274)

Conclusion

I have shown in the preceding paragraphs that seminal documents from various pre-modern Japanese schools of thought provide terms and concepts that can be used to articulate and legitimize the idea of human rights in a language that is in continuity with Japanese (and East Asian, for that matter) tradition. There is thus no need to invoke East-West dichotomies, or to fear that by promoting human rights one would of necessity impose "alien" ideas on Japanese society. My remarks are first and foremost addressed to a Western audience, and meant as a critical intervention with an eye to both a conservative "universalism" that assumes the concept of human dignity to be a Christian Occidental invention and prerogative, and a liberal "cultural relativism" that shies away from posing hard questions to "traditionalist" defenders of Asian variants of authoritarianism. However, I would also like to express the hope that Japanese philosophers and intellectuals re-discover and re-appropriate the strategies of Meiji liberals like Nishi Amane, and move confidently beyond the dichotomous paradigm that has, for most of the 20th century, and certainly not for the better of Japanese society, forced Japanese to choose between authoritarian traditionalism and a liberal (or Marxist) universalism that spoke with a distinctly "Western" tongue. An immediate qualification is in order: by demonstrating that it is possible to couch human rights issues in terms that invoke continuity with older Japanese traditions, I do not want to say that it is necessary for Japanese to do so. First of all, there is no duty to cater to national tradition, and one may have good reason to keep one's distance. Secondly, even if one chose, while arguing for human rights, to also participate in the work of imagining the national community, one might as well opt for highlighting one of the modern intellectual traditions of Japan (such as Kantianism or Neo-Marxism, for example). It is simply my point that the option exists to connect the idea of human rights to traditions pre-dating the influence of Western thought, and thus present it in a genuinely "Japanese" light. Since we live in societies where individuals are to some extent defined by attributions of nationality, and large parts of the population identify themselves by belonging to a nation, this option seems important if one wants the idea of human rights to succeed.
And, to reiterate, the said proposition goes both ways: while it affirms the possibility of being a "good Japanese" and at the same time championing human rights issues, it negates any exclusive link of human rights to Occidental traditions. Human rights are a revolutionary idea, but one which can draw on sources from all parts of the world.
8,642.2
2012-01-01T00:00:00.000
[ "Philosophy" ]
Omics multi-layers networks and identification of genes involved in fat storage and metabolism in poultry using an interactomics approach

Abstract

Background: Fatty acid metabolism in animals has a major impact on production and disease resistance traits. Given the high rate of interactions between lipid metabolism and its regulatory processes, a holistic approach is necessary.

Methods: To study multi-omics layers of adipose tissue and to identify genes involved in fat metabolism, storage and endocrine signaling pathways in two groups of broiler chickens with high and low abdominal fat, high-throughput screening (HTS) techniques were used. The gene-miRNA interacting bipartite and metabolic-signaling networks were reconstructed using their interactions.

Results: In the analysis of the microarray and RNA-Seq data, 1835 genes with significant expression differences were detected. By comparing the identified genes, 34 genes and 19 miRNAs were detected as common and main nodes. A literature mining approach was used and 7 additional genes were identified and added to the common gene set. Module finding revealed three important and functional modules. The detected modules 1, 2 and 3 were involved in the PPAR signaling pathway, the biosynthesis of unsaturated fatty acids, the Alzheimer's disease metabolic pathway, and the adipocytokine, insulin, PI3K-Akt, mTOR and AMPK signaling pathways.

Conclusions: This approach revealed new insight for a better understanding of the biological processes associated with adipose tissue.

Background

The total carcass fat of broilers varies depending on sex, bird age, nutrition, and genetic factors and is about 12% [1]. The fat stored in the broiler carcass is mainly of two kinds, subcutaneous fat and visceral fat, and a portion of the carcass fat (about 18 to 22 percent) is stored in the abdominal area [2]. In humans, as the main consumers of poultry meat, excess fat storage in skeletal muscle is associated with metabolic diseases such as type 2 diabetes and cardiovascular disease, and subsequently increases the risk of a heart attack. Fat deposition in poultry is a highly heritable polygenic trait regulated by various behavioral, environmental, and hormonal factors [3].

Analysis Of Microarray Data

Microarray data were pre-processed in R using the Lumi [17] and Affy [18] packages. The processed data were then evaluated using the Limma [19], GEOquery [20], and Biobase [21] packages. Among the identified genes, those that were common across the five accession numbers were identified, and this gene list was considered as gene set 1.

Analysis Of RNA-Seq Data

Various software tools were used to analyze the RNA-Seq data related to the relevant accession numbers (GSE49121 and GSE42980). Initially, the FastQC quality control software [22] was used to check the quality of the data. After converting the file format, adapters were trimmed from the reads using Trimmomatic software [23].
For alignment, TopHat2 software [24] was used to map the reads onto the reference genome of the chicken (Gallus gallus domesticus). Finally, differential gene expression analysis was performed using CuffDiff software [25]. The genes that were common across the accession numbers were identified and considered as gene set 2.

Main Gene List

Genes with significant expression differences in the microarray and RNA-Seq data were examined and listed as gene sets 1 and 2, respectively. Finally, the genes common to both sets were chosen as the main gene list.

Identification Of miRNAs And Target Genes

Accession number GSE118611 [29], which relates to miRNAs in the mouse and is associated with lipid metabolism, was analyzed. For this purpose, BLASTN (a tool available in the miRBase database) was used [30], and miRNAs of Gallus gallus domesticus involved in lipid metabolism were detected. Identification of miRNA target genes was performed using the bioinformatics platform miRWalk 3.0 [31]. The platform integrates information from different miRNA-target databases and the TargetScanVert software [32].

Reconstruction Of Omics Multi-layers Networks

The STRING [33] and GeneMania [34] databases were used for detecting interactions between the different omics layers. Protein-protein interaction (PPI) data were obtained from the Biomolecular Interaction Network Database (BIND), the Database of Interacting Proteins (DIP), the Biological General Repository for Interaction Datasets (BioGRID) and the Mammalian Protein-Protein Interactions Database (MIPS). Networks were reconstructed using Cytoscape 3.7.2 [35]. The metabolic-signaling pathways involved in lipid metabolism and storage were reconstructed using different databases and Cell Designer version 4.4.2 [36].

Modules And Hub Nodes Detection

For finding sub-graphs and hub nodes, MCODE, one of the Cytoscape plugins, was used [37] (a simplified illustration of this step is sketched below).

Results

In the analysis of the microarray data for differential expression, 2914 genes were extracted; 612, 107, 582, 104 and 46 genes were respectively identified after applying the expression-change threshold in each of the available accession numbers, GSE37585, GSE8812, GSE45825, GSE10052, and GSE3867. The differences in gene numbers reflect differences in the time and place of broiler chicken rearing. In total, 1451 genes were significantly different. In the RNA-Seq data analysis, 1867 genes were identified; 314 and 70 genes were then detected after applying the expression-change threshold (P < 0.00001) in accession numbers GSE49121 and GSE42980, respectively. A total of 384 genes were significantly different.

Identification Of miRNA And Target Genes

In the microRNA data analysis for differentially expressed miRNAs, 250 miRNAs were detected, and 19 miRNAs and 15 genes were identified after applying the expression-change threshold (LogFC < -2, LogFC > 2 and P-value < 0.00001) in accession number GSE118611 to identify suppressing miRNAs as well as their target genes in the same gene set (Table 2). Finally, the miRNA data analysis indicated 11 miRNAs with higher expression and 8 miRNAs with lower expression in the birds with greater abdominal fat storage compared to those with lower abdominal fat storage. In total, 384 genes in the RNA-Seq data and 1451 genes in the microarray data were differentially expressed (DEGs). 34 genes were common to gene sets 1 and 2, relating to the microarray and RNA-Seq data sets, respectively. Of these, 16 and 18 genes were related to the lipogenesis and lipolysis processes (Table 2).
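To illustrate the bipartite gene-miRNA network reconstruction and hub-node detection described in the Methods above, the minimal sketch below builds a small bipartite graph and ranks nodes by degree. Only the gga-miR-1710/HMGCR and gga-miR-6554-5p/IRS1 pairs are taken from the study; the remaining interaction pairs are placeholders for illustration. In the study itself, networks were built in Cytoscape and modules were found with the MCODE plugin, so the degree ranking and connected-component grouping used here are simplified stand-ins for those steps.

```python
import networkx as nx

# miRNA -> target-gene interaction pairs; most are hypothetical placeholders.
interactions = [
    ("gga-miR-1710", "HMGCR"), ("gga-miR-6554-5p", "IRS1"),
    ("gga-miR-1710", "SREBF1"), ("gga-miR-A", "SCD"),
    ("gga-miR-A", "FASN"), ("gga-miR-B", "SREBF1"),
    ("gga-miR-B", "HMGCR"), ("gga-miR-C", "FASN"),
]

G = nx.Graph()
for mirna, gene in interactions:
    G.add_node(mirna, kind="miRNA")   # one partition of the bipartite graph
    G.add_node(gene, kind="gene")     # the other partition
    G.add_edge(mirna, gene)

# Rank nodes by degree as a simple proxy for hub importance.
hubs = sorted(G.degree, key=lambda item: item[1], reverse=True)
for node, degree in hubs[:5]:
    print(f"{node}\t{G.nodes[node]['kind']}\tdegree={degree}")

# Connected components give a crude first pass at candidate modules.
for i, component in enumerate(nx.connected_components(G), start=1):
    print(f"module {i}: {sorted(component)}")
```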
The THBS1 and INSIG2 genes in the gene set associated with the lipogenesis process, as well as the COLEC12, HMGCR, APP and IRS1 genes in the gene set associated with the lipolysis process, showed the highest level of suppression by miRNAs. Nine of the 16 genes in the lipogenesis-associated gene set had higher expression, and 15 of the 18 genes in the lipolysis-associated gene set had lower expression, in the higher abdominal fat tissue compared to the lower abdominal fat tissue (Table 2).

Reference Gene List

Articles related to the process were also reviewed to ensure accuracy, and seven genes that did not exist in the list derived from the previous data were selected and added to the gene list. The selected seven genes were BACE1, BACE2, PSEN1, PSEN2, PERP, SIK1 and LOC421682. The list of genes in Table 2 (34 genes) together with the seven genes identified by investigating the literature was considered as the list of reference genes (41 genes).

Gene Interaction Network, GO Terms And Pathways

Figure 2 shows the reconstructed network of gene interactions, GO terms and pathways. In this network, the APP, SREBF1, HMGCR, FADS2, SCD, ACAT1, FASN, HADHB and EHHADH genes had the highest interaction with other genes.

Reconstruction Of The Interactive Bipartite Network Of Gene-miRNA

The network contains 49 nodes (32 genes and 17 miRNAs) and 95 edges. The reconstructed network was stored in .cys format for further analyses (Fig. 3).

Important And Functional Network Modules

Based on the interactive bipartite gene-miRNA network and the search for relevant sub-networks or modules, 3 modules containing 31 genes and 7 miRNAs were found, as presented in Table 3. The table also presents the important signaling pathways and cellular processes (metabolic pathways) in which the relevant modules are involved (Fig. 2).

Discussion

Abdominal fat tissue is prioritized in broiler chickens for identifying genes involved in metabolism and fat storage because, owing to its specific metabolic characteristics, it can serve as a reference model for other species and for individuals within a species [38]. The present study detected a total of 34 common genes that play roles in the main processes controlling the synthesis route, metabolism and storage of fat, and in the signaling pathways of endocrine glands activated by adipokines, AMPK and PPAR. The lower expression of a large number of genes associated with lipolysis indicates a reduction in the breakdown of fats and, consequently, an increase in anabolism and fat storage in broiler chickens, especially in abdominal fat tissue. Conversely, the higher expression of a large number of genes in the lipogenesis-associated gene set confirms the increase in metabolism and abdominal fat storage. Chickens with greater abdominal fat showed hyperplasia and hypertrophy of fat cells at younger ages compared to chickens with lower abdominal fat. SREBF1, SREBF2, SCD, FASN and THRSPA were among the most important genes playing major roles in fat storage and metabolism [10]. The hub genes in this study were APP, SREBF1, HMGCR, FADS2, SCD, ACAT1, FASN, HADHB and EHHADH (Fig. 2). The APP gene encodes a cell surface receptor and transmembrane precursor protein that is cleaved by enzymes to form a number of peptides. Some of these peptides are secreted and can bind to the acetyltransferase complex APBB1/TIP60 to promote transcriptional activity, while others form the amyloid plaques found in the brains of patients with Alzheimer's disease [39].
It enhances transcription through binding to APBB1/KAT5 and inhibits Notch signaling through interaction with Numb. Sterol regulatory element-binding transcription factor 1 (SREBF1) is a protein-coding gene. Fatty liver disease is a SREBF1-related disease, and the mTOR signaling pathway is associated with SREBF1. Annotations of this gene include DNA- and chromatin-binding transcription factor activity; it regulates the rate of transcription of the LDL receptor gene and, to a lesser extent, the fatty acid and cholesterol synthesis pathways. HMGCR, or 3-hydroxy-3-methylglutaryl coenzyme A reductase, is a protein-coding gene. The terpenoid backbone biosynthesis pathway is associated with this gene [40]. Fatty acid desaturase 2 (FADS2) is a protein-coding gene with pathways such as fatty acid beta-oxidation (peroxisome) and alpha-linolenic acid metabolism. This gene is part of the lipid metabolic pathway that catalyzes the biosynthesis of highly unsaturated fatty acids from the precursors linoleic acid (18:2n-6) and linolenic acid (18:3n-3) [41]. Stearoyl-coenzyme A desaturase (SCD) encodes an enzyme involved in fatty acid biosynthesis, being primarily responsible for the synthesis of oleic acid. Its protein product belongs to the fatty acid desaturase family [42]. ACAT1 (acetyl-coenzyme A acetyltransferase 1) is a protein-coding gene involved in the metabolic pathways of ketone body metabolism and terpenoid backbone biosynthesis. The gene plays a key role in ketone body metabolism [43]. FASN (fatty acid synthase) is a protein-coding gene with pathways such as the metabolism of water-soluble vitamins and cofactors as well as the AMPK enzymatic complex pathway. Hydroxyacyl-CoA dehydrogenase trifunctional multienzyme complex subunit beta (HADHB) is a protein-coding gene with pathways such as mitochondrial fatty acid beta-oxidation and glycerophospholipid biosynthesis [44]. EHHADH (enoyl-CoA hydratase and 3-hydroxyacyl-CoA dehydrogenase) is a protein-coding gene with pathways such as the PPAR alpha pathway and propanoate metabolism. The gene annotation includes signaling receptor binding and oxidoreductase activity [45]. Given the ontology and functions of the important and main genes in the gene interaction network, it can be stated that these genes are the main genes in fat metabolism and storage as well as in the signaling pathways of endocrine glands, especially the AMPK and PPAR signaling pathways. In Fig. 3, the green quadrilateral nodes representing the genes with the highest interaction in the network are the main candidates in lipid metabolism and storage and play roles in the reference gene list and the metabolic and signaling pathways. The genes with the highest repression levels are THBS1, SIK1, COLEC12, and BACE1, respectively. A combined biological systems approach was used to detect metabolic and signaling pathways associated with the interactive bipartite Gene-miRNA network in the process of fat storage and metabolism of broiler chickens. Fat stored in the skeletal muscles plays roles in important metabolic processes such as immune function, food consumption, hormone sensitivity and the relevant signaling pathways [46]. In module 1, gga-miR-1710 suppresses the HMGCR gene. gga-miR-1710 shows reduced expression, and its target gene shows increased expression in the higher abdominal fat tissue compared to the lower abdominal fat tissue.
The gene is classified into the set of genes associated with the lipolysis process. Therefore, reduced expression of gga-miR-1710 and increased expression of HMGCR favor the lipolysis process, thereby reducing abdominal fat. HMGCR encodes HMG-CoA reductase, the rate-limiting enzyme of cholesterol synthesis, which is regulated via a negative feedback mechanism mediated by sterols and non-sterol metabolites derived from mevalonate. In mammalian cells the enzyme is normally suppressed by cholesterol derived from the internalization and degradation of low-density lipoprotein (LDL) via the LDL receptor [40]. The SCD gene shows higher expression in the larger abdominal fat tissue compared to the lower abdominal fat tissue. SCD is placed in the set of genes associated with the lipogenesis process. Therefore, increasing SCD expression raises the amount of fat stored in the body, especially in the abdominal region. SCD (stearoyl-coenzyme A desaturase) is a protein-coding gene with pathways including adipogenesis and the angiopoietin-like protein 8 regulatory pathway. It also plays an important role in lipid biosynthesis and in regulating the expression of genes in mitochondrial fatty acid oxidation and the lipogenesis cycle [47]. gga-miR-6554-5p suppresses the IRS1 gene. This miRNA has higher expression, and its target gene shows lower expression, in the greater abdominal fat tissue compared to the lower abdominal fat tissue. IRS1 is among the set of genes associated with the lipolysis process. Therefore, increased expression of gga-miR-6554-5p decreases IRS1 expression, thereby reducing fat catabolism and increasing abdominal fat storage and anabolism. The IRS1 gene encodes a protein that is phosphorylated by the insulin receptor tyrosine kinase. Mutations in the gene are associated with type 2 diabetes and insulin resistance [48]. The SREBF1 gene shows higher expression in the greater abdominal fat tissue compared to the lower abdominal fat tissue. The gene belongs to the set of genes associated with the lipogenesis process. Higher SREBF1 expression increases abdominal fat storage and anabolism. SREBF1 encodes a basic helix-loop-helix-leucine zipper (bHLH-Zip) transcription factor that binds the sterol regulatory element-1, a motif found in the promoter of the low-density lipoprotein receptor gene and of other genes involved in sterol biosynthesis [49]. In this module, the HMGCR gene is suppressed by miRNAs. The gene is associated with the lipolysis process; therefore, its suppression can prevent fat tissue catabolism and lead to higher fat storage and anabolism in the abdominal fat tissue of broiler chickens. In this module, six genes, namely HMGCR, SREBF1, SCD, FASN, HADHB and ACAT1, are marked in green and have the highest interaction with the other genes involved in the module. The enzyme encoded by the FASN gene is a multifunctional protein. Its main function is to catalyze the synthesis of palmitate from acetyl-CoA and malonyl-CoA, in the presence of NADPH, into long-chain saturated fatty acids. The ACAT1 gene encodes a mitochondrially localized enzyme that catalyzes the reversible formation of acetoacetyl-CoA from two acetyl-CoA molecules. The HADHB gene encodes the beta subunit of the mitochondrial trifunctional protein, which catalyzes the final three steps of the mitochondrial beta-oxidation of long-chain fatty acids [44].
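The module-1 structure described above can be illustrated with a small networkx sketch that assembles the stated miRNA-target pairs and a few illustrative gene-gene edges into a bipartite graph and ranks hub nodes by degree; this is a simplified stand-in for the Cytoscape/MCODE analysis actually used, and the gene-gene edges shown are assumptions for demonstration only.

```python
# Hedged sketch of the module-1 analysis using networkx. The study itself used Cytoscape
# with the MCODE plugin; here the miRNA-target pairs reported for module 1 are combined
# with a few *illustrative* gene-gene edges into a bipartite graph, and hub nodes are
# ranked by degree as a simple stand-in for MCODE hub detection.
import networkx as nx

mirna_targets = [              # suppression pairs stated in the text for module 1
    ("gga-miR-1710", "HMGCR"),
    ("gga-miR-6554-5p", "IRS1"),
]
gene_gene = [                  # illustrative interactions among the six "green" module genes
    ("SREBF1", "SCD"), ("SREBF1", "FASN"), ("HMGCR", "ACAT1"), ("HADHB", "ACAT1"),
]

G = nx.Graph()
G.add_nodes_from({m for m, _ in mirna_targets}, kind="miRNA")
G.add_nodes_from({g for _, g in mirna_targets} | {g for e in gene_gene for g in e}, kind="gene")
G.add_edges_from(mirna_targets + gene_gene)

hubs = sorted(G.degree, key=lambda node_deg: node_deg[1], reverse=True)[:5]
print(hubs)  # nodes with the most connections in this toy module
```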
The gene set of this module, as presented in Table 3, encodes the AMPK and PPAR signaling pathways as well as the metabolic pathways of fatty acid synthesis, unsaturated fatty acid synthesis, and cholesterol metabolism. Therefore, it can be concluded that the module and the genes involved in the process can be functional modules associated with abdominal fat metabolism and storage in broiler chickens. The receptor increases insulin-mediated glucose uptake and improves the blood lipid profile by regulating lipid metabolism, glucose, and free fatty acid oxidation [50]. Target genes of the peroxisome proliferator-activated receptors are related to several proteins that are necessary for the uptake, intracellular transfer, and beta-oxidation of fatty acids, including fatty acid transport protein, the fatty acid translocase enzyme, the synthase involved in the production of acetyl-CoA (for long-chain fatty acids), and carnitine palmitoyltransferase I [51]. Peroxisome proliferator-activated receptors play roles in regulating the gene transcription program (P2) of fat cells, so that lean, low-fat meat can be produced by manipulating the differentiation of fat tissue cells and their fat content through these receptors. The cellular response to insulin includes the regulation of blood sugar levels by increasing glucose uptake in muscle and fat tissues, such that the energy reserves in fat tissue, liver and muscle increase by stimulating lipogenesis, glycogen synthesis, and protein synthesis. The insulin signaling pathway decreases glucose production by the liver and broadly inhibits the mobilization of stored energy through lipolysis, glycogenolysis and protein breakdown. This pathway also acts as a growth factor signal and stimulates cell growth, differentiation and survival [52]. The insulin signaling pathway is an important biochemical pathway that regulates basic biological functions such as glucose and lipid metabolism, protein synthesis, cell proliferation and differentiation, and apoptosis [53]. The phosphatidylinositol 3-kinase (PI3K)/protein kinase B (Akt) signaling pathway is involved in the regulation of many physiological cell processes by activating downstream effector molecules that play important roles in the cell cycle, growth and proliferation [54]. The mammalian target of rapamycin (mTOR) signaling pathway integrates both internal and external signals and acts as a main regulator of cellular metabolism, growth, proliferation, and survival. Research carried out over the past decade indicates that the mTOR signaling pathway is activated in various cellular processes such as tumor formation and angiogenesis, insulin resistance, lipid metabolism, and T lymphocyte activation, and is dysregulated in human diseases such as cancer and type 2 diabetes [55]. In module 2, the APP gene plays the main role. The gene is suppressed by gga-miR-6554-5p, which shows up-regulation while its target gene shows down-regulation in the greater abdominal fat tissue compared to the lower abdominal fat tissue. The APP gene belongs to the set of genes associated with the lipolysis process; therefore, its repression by miRNAs in humans is necessary. In poultry, its lower expression corresponds to a decrease in abdominal fat, and a decrease in body fat corresponds to an improvement in performance and other functional traits.
Increased body weight or obesity caused by increased body fat storage is characterized by excessive accumulation of fat in the body and increased levels of adipokines and inflammatory cytokines. This indicates an increased risk of Alzheimer's disease, type 2 diabetes, and cardiovascular diseases. It has recently been found that APP gene expression increases as brain tissue fat and the body's fat storage tissues increase [56]. The gga-miR-6554-5p and gga-miR-466 miRNAs suppress the BACE1 gene. These two miRNAs show up-regulation and their target gene shows down-regulation in the greater abdominal fat tissue compared to the lower abdominal fat tissue. The BACE1 gene encodes an enzyme that cleaves the amyloid precursor protein (APP) and produces amyloid beta peptides, which form the amyloid plaques in the brains of Alzheimer's patients [57,58]. BACE2 is an important paralog of this gene. gga-miR-6562-5p and gga-miR-3532-5p suppress the PSEN1 gene. These two miRNAs show up-regulation, and their target gene shows down-regulation in the greater abdominal fat tissue compared to the lower abdominal fat tissue. PSEN1 encodes a protein called presenilin 1. Presenilins are regulators of APP through their effects on gamma secretase, the APP-cleaving enzyme complex [59]. The PSEN2 gene, which has about 67% similarity to PSEN1, was identified after the PSEN1 gene. PSEN2 showed lower expression in the greater abdominal fat tissue compared to the lower abdominal fat tissue. PSEN2 is a protein-coding gene with associated diseases such as Alzheimer's disease and heart muscle diseases. It encodes presenilin 2, a signaling intermediate in the Wnt/Hedgehog/Notch pathways [60]. gga-miR-3532-3p suppresses the BACE2 gene. This miRNA shows up-regulation, and its target gene, BACE2, shows low expression in the greater abdominal fat tissue compared to the lower abdominal fat tissue. The BACE2 gene encodes an integral membrane glycoprotein known as an aspartic protease [61]. Five genes are involved in this module, which is associated with the Alzheimer's disease pathway. This module and its genes encode the Notch signaling pathway and the metabolic pathway of Alzheimer's disease. The five genes involved in the module showed low expression in the greater abdominal fat tissue compared to the lower abdominal fat tissue. The APP gene plays a role in the lipolysis process, while the BACE1, BACE2, PSEN1 and PSEN2 genes play roles in the lipogenesis process. In this module, the lower expression of APP and of the other genes in the lipolysis process reduces fat accumulation, and thus reduces the risk of Alzheimer's disease. If the amount of fat stored in the body increases, the expression of the BACE1, BACE2, PSEN1 and PSEN2 genes, especially BACE1, rises, leading to cleavage of the protein encoded by the APP gene and creating the conditions for the development of Alzheimer's disease. Dietary unsaturated fatty acids can play a significant role in reducing the risk of Alzheimer's disease. One study indicated that the metabolism of unsaturated fatty acids is significantly regulated in the brains of patients with different degrees of Alzheimer's pathology [62]. Another study indicated that a high intake of unsaturated fats could have a protective role against Alzheimer's disease, while the consumption of saturated or trans-unsaturated fats increases the risk of developing Alzheimer's disease [63].
Given the roles of the three main genes involved in the structure of this module, and using the online databases, this module encodes the metabolic pathways of cholesterol metabolism and fatty acid metabolism. In the Notch signaling pathway, the Notch receptor is phosphorylated and releases the NICD in collaboration with PSEN1 as part of the γ-secretase complex. Inside the cell nucleus, this factor targets the FABP7 gene sequence and triggers the production of FABP7 mRNA in cooperation with the RBPJ/CBF1 complex. The FABP gene is activated by two phosphorylated receptors, called FATP and FATCDB6, in the cell membrane. Thereafter, three signaling complexes are activated. These signaling pathways drive the expression of genes relating to fat storage and metabolism in the cell nucleus. These complexes in the nucleus are related to lipid transport, lipogenesis, cholesterol metabolism, and fatty acid oxidation, leading to the process of lipid metabolism through transcription and translation of these genes. In the PPAR signaling pathway, the complex is associated with the insulin-related signaling pathway through the phosphorylated mTORC1 gene in the mTOR pathway. Phosphorylation of this gene results in activation of the complex. The AMPK signaling pathway is also associated with the mTORC1 gene and has an inhibitory effect, in that the AMPK pathway prevents the phosphorylation of mTORC1, so that the complex is not activated and the lipid metabolism process (e.g., lipogenesis, cholesterol metabolism and oxidation) is not carried out. Two signaling pathways, PPAR (the main pathway of lipid metabolism) and AMPK (the main pathway of cellular energy exchange), are important in this metabolic-signaling network. These two signaling pathways control each other through the mTORC1 gene in the mTOR signaling pathway, so that increases or decreases in intracellular energy levels acting through the AMPK signaling pathway, with an inhibitory or activating effect on mTORC1, can cause anabolism or catabolism of lipids in cells (Fig. 7). According to the ontology and functions of the genes that encode the two signaling pathways, AMPK and PPAR, these two pathways are the main pathways of cellular energy exchange and lipid metabolism, respectively. Peroxisome proliferator-activated receptors (PPARs) are transcription factors belonging to the nuclear receptor superfamily, and they are activated by long-chain unsaturated fatty acids with several double bonds, eicosanoids, and lipid-lowering agents such as fibrates. Among the unsaturated fatty acids with multiple double bonds, eicosapentaenoic acid (EPA) and docosahexaenoic acid have been widely studied because of their ability to activate PPARs. The expression profile of these receptors in the different organs of poultry is largely similar to that in mammals, suggesting similar functions in poultry and mammals. PPARs are nuclear hormone receptors that are activated by fatty acids and their derivatives. Each of them is encoded by a separate gene and binds fatty acids and eicosanoids. The ligand affinity of PPAR-RXR heterodimers for fatty acids causes these heterodimers to bind to specific receptor elements in the promoter regions of several genes, changing the transcription of downstream genes involved in immune processes, lipid metabolism, and cholesterol metabolism [65]. AMP-activated protein kinase (AMPK) is a highly conserved serine/threonine kinase. The AMPK system acts as a cellular energy sensor.
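The reciprocal AMPK-mTORC1-PPAR control described above can be caricatured as a toy Boolean switch; this is purely an editorial illustration of the logic in the preceding paragraph, not a model taken from the study.

```python
# Toy Boolean illustration (not a model from the study) of the AMPK-mTORC1-PPAR switch
# described above: an energy deficit activates AMPK, active AMPK blocks mTORC1, and
# without active mTORC1 the lipogenic transcriptional program is not switched on.
def lipid_program(energy_low: bool) -> str:
    ampk_active = energy_low              # AMPK senses a drop in cellular energy
    mtorc1_active = not ampk_active       # AMPK prevents mTORC1 activation
    lipogenic_program_on = mtorc1_active  # active mTORC1 permits the PPAR/SREBF-driven program
    if lipogenic_program_on:
        return "lipid anabolism: lipogenesis and fat storage"
    return "lipid catabolism: fatty acid oxidation, no net fat storage"

print(lipid_program(energy_low=False))  # energy-replete cell
print(lipid_program(energy_low=True))   # energy-deficient cell
```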
When AMPK is activated, it simultaneously inhibits energy consumption in biosynthetic pathways, such as protein, fatty acid, and glycogen synthesis, and activates catabolic (ATP-producing) pathways, including fatty acid oxidation and glycolysis [66]. Down-regulation of liver AMPK activity plays a pathophysiological role in lipid metabolic disorders. The AMPK signaling pathway for the regulation of cellular energy balance is thus essential for lipid metabolism, in that the pathway activates fat catabolism during cellular energy shortage to provide the necessary rate of ATP production. Therefore, AMP-activated protein kinase (AMPK) is a main regulator of cell and organ metabolism in eukaryotes and is activated by a lowering of the intracellular ATP level. AMPK plays an important role in growth regulation and the reprogramming of cell metabolism [67]. Conclusion The combination of omics data for obtaining and identifying genes with differences in gene expression led to the successful identification of 1835 genes in abdominal fat tissue in two groups of broiler chickens with greater and lower abdominal fat storage. The identified genes were involved in the signaling pathways of the endocrine glands, AMPK, and PPAR associated with lipid metabolism and energy catabolism, and could be considered as genes that are shared across different species. The present study identified important common genes relating to lipid metabolism and the metabolic and signaling pathways, and described the mechanisms associated with lipid transfer across different cell membranes and tissues by characterizing the relevant genes. Furthermore, the gene-gene and gene-miRNA interactions were examined by investigating the biological system and reconstructing various regulatory and interactive networks. The overall result of the study was the identification of 41 genes in the main processes of metabolism (anabolism and catabolism), fat storage, the signaling pathways of the endocrine glands, and the cell membrane. Furthermore, the gene interaction networks, the interactive bipartite Gene-miRNA network, the functional modules, and the metabolic-signaling network were reconstructed to identify the metabolic and signaling pathways associated with fat storage and metabolism. Declarations ETHICS STATEMENT Ethical review and approval were not required for the animal study because this study consisted only of data analysis. AUTHOR CONTRIBUTIONS FGH, MS, AB, and SRMA designed the study. FGH, MS, and AB carried out the experimental procedures. FGH and AB analyzed the data and wrote the manuscript. MS, AB and SRMA supervised the study. All authors approved the final version of the manuscript. CONFLICT OF INTEREST The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. ACKNOWLEDGMENTS This work was financially supported by the University of Tehran, Iran. The authors thank all the teams who worked on the experiments and who provided technical assistance in the laboratory during this study. We also thank the anonymous reviewers whose critical comments helped in improving the manuscript. Figure 1 Schematic view of the workflow to reconstruct the metabolic pathways of abdominal fat storage in poultry. The main gene list was prepared from three RNA-Seq and microarray data sets.
The gene-gene interaction network (GGI), gene regulatory network (GRN) and interactive bipartite Gene-miRNA network were reconstructed using Cytoscape. Functional modules were detected using the related Cytoscape plugin, and the metabolic-signaling network was built using CellDesigner. Figure 2 The gene, Gene Ontology and pathway interaction network involved in abdominal fat storage in poultry. Module 1: 20 genes and 2 miRNAs in the interactive bipartite Gene-miRNA network. In this network, quadrilateral nodes represent genes and octagonal nodes represent miRNAs. For miRNAs and their target genes, the edges indicate the suppressive roles of the miRNAs; the edges between genes indicate gene-gene interactions. The green quadrilateral nodes represent the hub genes, and the purple quadrilateral nodes have the highest rates of suppression by miRNAs. Schematic of the reconstructed metabolic-signaling network associated with fat metabolism and storage, drawn using CellDesigner.
6,883.8
2020-06-08T00:00:00.000
[ "Biology", "Agricultural and Food Sciences" ]
Silicon and mechanical damage increase polyphenols and vitexin in Passiflora incarnata L. Passiflora incarnata L. is a species of global pharmacological importance, but it has not been fully studied in the context of cultivation and management. It is known that silicon acts on abiotic stress and promotes phenol synthesis. The practice of mechanical damage is widely used in P. incarnata crops, and its interaction with silicon can have a significant influence on plant metabolism. Therefore, our objective was to investigate the effects of silicon and mechanical damage on photosynthesis, polyphenols and vitexin of P. incarnata. The experiment was conducted in a factorial design with SiO2 concentrations (0, 1, 2, 3 mM) and the presence or absence of mechanical damage. It was found that mechanical damage improved photosynthetic performance at lower silicon concentrations or in its absence. Moreover, this condition promoted an increase in vitexin concentration when SiO2 was not provided. The application of 3 mM Si is recommended to increase polyphenols and vitexin without harming the dry mass of the aerial part. The interaction between silicon and mechanical damage could be a tool to increase the agronomic yield and commercial value of the P. incarnata crop. In commercial crops of P. incarnata more than one harvest is expected, enabling a continuous supply of leaves and stems to the pharmaceutical production chain 27. The complete harvesting of the aerial part causes mechanical damage, which can signal the production of phenolic compounds, since this stress may influence the activity of the PAL enzyme and other enzymes in the polyphenol pathway 28. Si supply and mechanical damage can result in an increase in biomass and active molecules, contributing to the production chain of the species. The objective of this study was to investigate the effects of Si and mechanical damage on photosynthetic metabolism and on polyphenol and vitexin synthesis in P. incarnata. Results Chlorophyll a fluorescence and gas exchange. In the absence of silicon, the potential quantum efficiency of open reaction centers (Fv′/Fm′) at 140 days after sowing (DAS) was higher in plants that received mechanical damage. In the absence of damage, at 169 DAS, Fv′/Fm′ was lower in plants grown with 3 mM SiO2 (Fig. 1a,b). At 140 DAS, plants that received mechanical damage and were cultivated with 1 and 2 mM SiO2 showed higher photosystem performance, represented by photochemical quenching (qL), electron transport rate (ETR) and effective quantum efficiency of photosystem II (ΦPSII), than intact plants at the same concentrations (Fig. 1g,i,k). In the absence of mechanical damage, the energy fraction absorbed by the PSII antenna that is dissipated as heat (D) was higher, and the energy neither dissipated nor used in the photochemical phase (Ex) was lower, in plants subjected to 0 and 3 mM SiO2, which may indicate photoprotection (Fig. 1c,e). At 169 DAS, regardless of damage, plants cultivated at 1 mM SiO2 showed lower ETR and ΦPSII (Fig. 1j,l). Among plants that did not receive Si, those that received mechanical damage had higher D and qL and lower Ex compared to intact plants (Fig. 1d,f,h). At 140 DAS, plants subjected to damage had a higher transpiration rate (E) regardless of the SiO2 level (Fig. 2a). At 169 DAS, plants with 2 and 3 mM SiO2 and mechanical damage showed high transpiration rates. Among intact plants, those with Si had a lower E (Fig. 2b).
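For readers interested in how the factorial comparisons reported throughout the Results (SiO2 level x presence/absence of mechanical damage, including their interaction) are typically tested, the following is a minimal sketch using statsmodels; the data file and column names are hypothetical, and the authors' actual statistical procedure may differ.

```python
# Hedged sketch of a 4 x 2 factorial analysis (SiO2 level x mechanical damage, with
# interaction), of the kind implied by the Results. The data file and column names are
# hypothetical assumptions; any measured response variable could be substituted.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Assumed layout: one row per plant with columns
#   sio2    - SiO2 concentration (0, 1, 2 or 3 mM)
#   damage  - "intact" or "w/MD"
#   vitexin - measured response variable
df = pd.read_csv("passiflora_measurements.csv")  # hypothetical file

model = ols("vitexin ~ C(sio2) * C(damage)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # main effects plus SiO2 x damage interaction
print(anova_table)
```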
Stomatal conductance (gs), CO2 assimilation rate (Anet) and RuBisCO carboxylation efficiency (Anet/Ci) were higher at 140 DAS in plants subjected to mechanical damage, except for plants grown with 3 mM SiO2 (Fig. 2c,e,g). At 169 DAS, gs, Anet and Anet/Ci were higher in damaged plants at the highest SiO2 concentrations (Fig. 2d,f,h). Hydrogen peroxide and lipid peroxidation. Plants with 1 mM SiO2 showed a higher concentration of hydrogen peroxide (H2O2) when damage occurred. In intact plants, SiO2 supply reduced hydrogen peroxide, except at the 2 mM SiO2 concentration (Fig. 3a). In the absence of mechanical damage, lipid peroxidation, presented as malondialdehyde (MDA) concentration, was higher in plants grown with SiO2; in the presence of mechanical damage, there was no difference between plants (Fig. 3b). Polyphenols and vitexin. For the polyphenol index, there was no significant effect of mechanical damage or of its interaction with SiO2 levels (Fig. 3c). Plants collected at 169 DAS, regardless of mechanical damage, had the highest vitexin content with 3 mM SiO2 (Fig. 3d). Plants grown without Si and with mechanical damage showed a higher vitexin content than intact plants (Figs. 3d and 4). Carbohydrates. Plants without silicon subjected to mechanical damage showed investment in reserve carbohydrates, such as starch, while intact plants showed high total soluble sugar concentrations (Fig. 5a,d). Plants subjected to mechanical damage showed a higher amount of reducing sugars, regardless of the SiO2 concentration (Fig. 5b). Intact plants with 3 mM SiO2 showed a lower sucrose concentration than intact plants without Si. In plants with mechanical damage, the highest sucrose concentration was found in plants with 1 mM SiO2, which did not differ from plants without Si (Fig. 5c). Among plants cultivated without Si and with 3 mM SiO2, those with mechanical damage showed higher starch accumulation. In general, plants grown with 1 mM SiO2 and damage showed less starch accumulation. In intact plants, the starch concentration did not differ among treatments (Fig. 5d). Growth indices. Plants cultivated with 2 mM SiO2 showed higher leaf (LDM) and total (TDM) dry mass; for TDM, plants with 2 mM SiO2 did not differ from those with lower concentrations or without Si. When 3 mM SiO2 was provided, TDM was reduced (Fig. 6a). The leaf mass ratio (LMR) was higher in plants that received 2 mM SiO2, not differing from plants grown with 1 and 3 mM SiO2 (Fig. 6b).
The LMR expresses the proportion of plant mass allocated to the leaf area useful for photosynthesis, and the specific leaf area (SLA) data reveal the area per unit leaf mass, indicating leaf thickness (Fig. 6c). The 2 mM SiO2 concentration increased the LMR, and 1 mM SiO2 decreased the SLA compared to control plants. These results indicate that the source of Si used under the conditions of this study was adequate. Si was quickly translocated to the leaves, since the SiO2 supply started at 124 DAS and the leaf collection used for the biochemical evaluations was performed at 169 DAS (Fig. 7). Heatmap. A heatmap was drawn up to demonstrate the similarity between treatments and the correlation between biochemical variables (Fig. 8). Two groups can be observed, in which the treatments within each group are similar for the variables. The first group consisted of the treatments with damage and 0, 2 and 3 mM SiO2. The second group consisted of the intact-plant treatments plus the damaged plants with 1 mM SiO2. The treatments in the first group showed the highest averages (red squares) for reducing sugars and starch, and the lowest for sucrose and MDA. In this group, the 0 and 3 mM SiO2 treatments with damage had the highest averages for vitexin. On the other hand, the treatments in the second group had the lowest averages for reducing sugars and high averages for sucrose. In this group, the intact-plant treatments that received Si had higher MDA averages. We highlight in this group the 3 mM SiO2 treatment, which presented the highest vitexin average, in contrast to the others, indicating a relationship with the SiO2 level supplied. When 3 mM SiO2 was used in plants with or without damage, higher vitexin averages were verified. However, intact plants with 3 mM SiO2 revealed a high MDA concentration. Discussion Our results emphasize the action of Si on the metabolism of plants subjected to abiotic stress, providing better performance under adverse conditions, as observed in other studies 29. The mechanical damage in P. incarnata at 140 DAS stimulated the photosynthetic activity of the remaining buds, suggesting compensatory photosynthesis 30, since the removal of old branches allows young branches to intercept solar radiation more effectively. The higher demand for photoassimilates by new tissues can stimulate development and photochemical activity, enabling increased electron flow 31,32. This is responsible for the higher production of reducing agents used in carbon assimilation, observed in the present study as higher ETR and ΦPSII. P. incarnata plants with 1 and 2 mM SiO2 and mechanical damage were efficient in overcoming this damage and restoring themselves, as observed in the high photochemical efficiency at 140 DAS and the high Anet/Ci, Anet and gs. At 169 DAS, the supply of 2 mM SiO2 promoted increases in gs and Anet. These results agree with those reported in the literature 25,26, in which increases in gs, Anet, E and leaf dry mass were verified in P. edulis when Si was supplied. According to the study by Zhang et al. 22, the supply of Si may have promoted greater expression of the genes PetE, PetF, PsbP, PsbQ, PsbW and Psb, which are important for the photochemical step of photosynthesis.
This gene expression may have contributed to the production of reducing agents used in the biochemical stage of photosynthesis, as indicated in other studies 29,33 and observed here in the high qL at 140 DAS. At 169 DAS, mechanical damage was decisive in keeping photochemical energy directed towards the production of reducing agents, since the highest qL was observed in plants without Si. Mechanical damage may favor an increase in photochemical activity, stimulated by the high nitrogen demand for the formation of new tissues, just as higher incident radiation stimulates nitrate absorption by the roots. In addition, nitrate reduction occurs mainly in leaves and, as it is a strong electron sink, it can stimulate greater photochemical activity 31. It is noteworthy that the nitrogen source used in this work was mainly nitric. The higher photosynthetic activity, reflected by Anet, gs and Anet/Ci, may have contributed to a high concentration of total and reducing sugars, directing resources towards growth and biomass accumulation and resulting in lower MDA accumulation, a result that indicates low stress in plants grown with 2 mM SiO2 and mechanical damage. Si supply to plants under different stress modalities promotes an increase in the activity of antioxidant enzymes, which neutralize reactive oxygen species, decreasing lipid peroxidation [34][35][36]. Si supply was effective in signaling polyphenol synthesis, as described in the literature 23,24. We highlight the increase in vitexin provided by the higher dose of Si supplied to P. incarnata. Si promotes greater activity of the PAL enzyme, which participates in phenol and flavonoid synthesis 23,24. Potassium silicate (5, 7.5 and 10 mM) influences apigenin 19, a flavone precursor of vitexin, which may explain, in this study, the accumulation of vitexin in P. incarnata cultivated with 3 mM SiO2. The signaling for vitexin production depends on the Si concentration and seems not to be related to higher lipid peroxidation or activation of the enzymatic antioxidant system. Mechanical stress can also influence the activity of the PAL enzyme and other enzymes in the polyphenol pathway, as suggested by the results of Liu et al. 28 and confirmed in this study in the control treatment without Si and with mechanical damage (w/MD). In the presence of mechanical damage, plants grown with 1 and 2 mM SiO2 were efficient in overcoming stress, and these concentrations contributed to the synthesis of polyphenols. It is suggested that these concentrations were enough to signal the PAL metabolic pathway, which promoted an increase in the polyphenol index. As observed in the vitexin evaluation, the increased activity of the PAL enzyme is stimulated by the supply of Si, resulting in an increase in the content of other phenolic compounds, as related in other studies 23,24. Damaged plants accumulated more starch than intact ones in the absence of Si. The starch may have been the result of mechanical stress activating the enzymatic antioxidant system, which reduced free H2O2. Among plants that did not receive Si, the stress that resulted in starch accumulation may be related to the higher content of vitexin, since stored starch may act as a source of carbohydrates for the development of new tissues, in addition to providing carbon skeletons for flavonoid synthesis. Results by Castrillón-Arbeláez et al. 37 reveal that mechanical damage is related to the expression of starch synthase, leading to an increase in this carbohydrate.
In plants with Si, the accumulation of vitexin should not be related to starch resulting from stress, but rather to the possible signaling triggered by the higher dose of Si supplied, acting on vitexin precursors 19. Among plants that received 3 mM SiO2, the absence of differences in total soluble sugars, reducing sugars and sucrose may indicate that the production of carbon skeletons was not altered. The starch concentration in plants with 3 mM SiO2 and mechanical damage suggests accumulation to overcome stress, similar to that observed in plants without Si and with mechanical damage. The supply of 1 mM SiO2 to plants with mechanical damage increased the H2O2 concentration in leaves but did not result in higher MDA. Plants with damage and Si also had lower MDA than intact plants with Si, indicating that the supply of Si under stress conditions contributes to the efficiency of the enzymatic antioxidant system 29. The Si supplied to intact plants resulted in an increase in lipid peroxidation, although a higher free H2O2 content was not detected, as also pointed out by Coskun et al. 29. Only the 3 mM SiO2 concentration was effective in increasing the vitexin content. The results observed with Si supply in intact plants indicate that the stress demonstrated by the higher MDA seems not to be related to the higher vitexin synthesis, which suggests another signaling pathway. We found that P. incarnata showed greater photosynthetic performance when subjected to mechanical damage, which may have triggered a signaling cascade and, associated with Si, resulted in less MDA, with damage recovery and accumulation of phenolic compounds. At the 3 mM SiO2 concentration, there was higher vitexin accumulation in the plants and a lower dry mass than in the other treatments. At low Si concentrations, the photosynthetic performance suggests recovery from the mechanical damage. In P. incarnata crops, mechanical damage is imposed by removing the aerial part, which can lead to an increase in vitexin production. The application of 3 mM Si is recommended to increase polyphenols and vitexin without harming the dry mass of the aerial part. Supplying 3 mM SiO2 increased vitexin by 150% and polyphenols by 130%, suggesting the potential of Si to increase phenolic compounds in plants 23 used in the development of herbal medicines for the treatment of diseases related to the central nervous system 9,38. Thus, the interaction between silicon and mechanical damage could be a tool to increase the agronomic yield and commercial value of the P. incarnata crop. Hydrochloric acid was used to adjust the pH, which was kept between 5.5 and 6.5. Measurement of chlorophyll a fluorescence and gas exchange. Chlorophyll a fluorescence and gas exchange were evaluated at 140 and 169 DAS, using the Infra-Red Gas Analyzer, model GFS-3000 Fl (Walz), with a coupled portable modulated light fluorometer. The evaluations took place between 9 a.m. and 11 a.m. on a fully expanded leaf.
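As a brief editorial aside, the following minimal sketch shows how the fluorescence-derived parameters listed in the next paragraph are commonly computed from the raw modulated-fluorescence signals and the incident PPFD; the formulas follow standard usage and are not necessarily the exact equations applied by the GFS-3000 software.

```python
# Hedged sketch: common formulas for fluorescence-derived parameters from the raw
# signals (steady-state Fs, light-adapted maximum Fm' and minimum Fo') and PPFD.
# Lake-model qL, the usual 0.84 leaf absorptance and 0.5 PSII fraction for ETR, and a
# standard energy partitioning for D and Ex are assumptions, not values from the study.
def fluorescence_params(fs, fm_prime, fo_prime, ppfd, absorptance=0.84, psii_fraction=0.5):
    phi_psii = (fm_prime - fs) / fm_prime                 # effective quantum efficiency of PSII
    fvfm_prime = (fm_prime - fo_prime) / fm_prime         # potential efficiency of open PSII centres
    qp = (fm_prime - fs) / (fm_prime - fo_prime)          # photochemical quenching (puddle model)
    ql = qp * (fo_prime / fs)                             # photochemical quenching (lake model)
    etr = phi_psii * ppfd * absorptance * psii_fraction   # electron transport rate (umol m-2 s-1)
    d = 1.0 - fvfm_prime                                  # fraction dissipated as heat in the antenna
    ex = fvfm_prime * (1.0 - qp)                          # excess fraction, neither used nor dissipated
    return {"PhiPSII": phi_psii, "Fv'/Fm'": fvfm_prime, "qL": ql, "ETR": etr, "D": d, "Ex": ex}

print(fluorescence_params(fs=500.0, fm_prime=900.0, fo_prime=350.0, ppfd=1000.0))  # illustrative values
```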
The variables evaluated were the potential quantum efficiency of open PSII centers (Fv′/Fm′), the energy fraction absorbed by the PSII antenna that is dissipated as heat (D), the energy fraction not dissipated in the antenna that cannot be used for the photochemical stage (Ex), photochemical quenching (qL), electron transport rate (ETR), effective quantum efficiency of photosystem II (ΦPSII), CO2 assimilation rate (Anet, μmol CO2 m−2 s−1), transpiration rate (E, mmol H2O m−2 s−1), stomatal conductance (gs, mmol m−2 s−1), and ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO) carboxylation efficiency, calculated from the CO2 assimilation rate and the internal CO2 concentration in the sub-stomatal chamber (Anet/Ci, μmol m−2 s−1 Pa−1). Plant material samples for biochemical analysis, vitexin and leaf silicon content. At 169 DAS, leaves were collected and frozen in liquid nitrogen to determine carbohydrates, H2O2 and lipid peroxidation. Part of the collected leaves was dried at 38 °C in a forced-ventilation oven to determine vitexin and leaf Si content. Determination of total sugars, reducing sugars, sucrose and starch. The total soluble sugars were obtained by triple extraction with 80% ethanol, and the supernatants were combined. The pellet from this stage was frozen for the subsequent extraction of starch 40. Then, starch was extracted by triple extraction with chilled 52% perchloric acid, and the supernatants were pooled in Falcon tubes until reading. The quantification of total soluble sugars was performed using the anthrone method, with a spectrophotometer reading at 620 nm and expression against a glucose standard curve 41,42. Reducing sugars were quantified using dinitrosalicylic acid (DNS), with a reading at 540 nm and a curve expressed against a glucose standard 43. Sucrose was quantified using anthrone + 30% KOH, with a reading at 620 nm and a curve expressed against a sucrose standard 44. Starch was determined by the anthrone method, with the reading at 620 nm and a glucose standard curve. Determination of hydrogen peroxide and lipid peroxidation. The H2O2 content was determined with trichloroacetic acid (TCA) and a spectrophotometer reading at 390 nm 45. Lipid peroxidation was determined with thiobarbituric acid (TBA) and trichloroacetic acid (TCA) and expressed by the formation of malondialdehyde (MDA) 46. Determination of vitexin and polyphenols. Vitexin was determined according to Wosch et al. 47, using 200 mg of crushed dry leaves (dried at 38 °C) with the addition of 8 mL of 60% ethanol in 15 mL test tubes. Then, the tubes were vortexed (15 s) and submitted to an ultrasound bath (30 min). Each extract was filtered through cotton and the volume was made up with the extraction solvent (ethanol). Samples were filtered with a Millex LCR filter (non-sterile 0.45 μm, 13 mm, PTFE membrane) and placed in amber glass bottles at 4 °C. The quantification of vitexin in the
4,554.2
2021-11-11T00:00:00.000
[ "Environmental Science", "Materials Science" ]
Small Lava Caves as Possible Exploratory Targets on Mars: Analogies Drawn from UAV Imaging of an Icelandic Lava Field Volcanic-aeolian interactions and processes have played a vital role in landscape evolution on Mars. Martian lava fields and associated caves have extensive geomorphological, astrobiological, and in-situ resource utilization (ISRU) implications for future Mars missions which might be focused on subsurface exploration. Although several possible cave “skylights” of tens to >100 m diameter have been spotted in lava fields of Mars, there is a possibility of prevalence of meter-scale features which are an order of magnitude smaller and difficult to identify but could have vital significance from the scientific and future exploration perspectives. The Icelandic volcanic-aeolian environment and fissure volcanoes can serve as analogs to study lava flow-related small caves such as surface tubes, inflationary caves, liftup caves, and conduits. In the present work, we have tried to explore the usability of unmanned aerial vehicle (UAV)-derived images for characterizing a solidified lava flow and designing a sequential methodology to identify small caves in the lava flow. In the mapped area of ~0.33 km2, we were able to identify 81 small cave openings, five lava flow morphologies, and five small cave types using 2 cm/pixel high-resolution images. The results display the usefulness of UAV imaging for such analogous research, and also highlight the possibility of the widespread presence of similar small cave openings in Martian lava fields. Such small openings can facilitate optimal air circulation within the caves while sheltering the insides from physical weathering and harmful radiations. Using the available best resolution remote sensing images, we extend the analogy through the contextual and geomorphological analysis of several possible pit craters in the Tharsis region of Mars, in a region of extremely vesicular and fragile lava crust with pahoehoe-type morphology. We report two possible pit craters in this region, with diameters as small as ~20 m. The possibility that such small cave openings can lead to vast subterranean hollow spaces on Mars cannot be ruled out considering its low gravity. Introduction Subsurface environments on Mars are expected to provide shielding from space radiation with controlled diurnal temperature variations [1,2]. Additionally, in the case of caves, these semi-opened protected environments may have micro-climates where the relative humidity and temperatures may allow for the stable existence of liquid briny water [3]. Therefore, subsurface will possibly be the focus of the next phases of Mars exploration owing to its significance for astrobiology [4][5][6][7], in-situ resource utilization (ISRU) [8], and future human exploration [3,9]. Unlike exposed surfaces, caves, regardless of their dimensions, display steady geophysical, environmental, and geochemical conditions, suitable for habitation and life in extreme extraterrestrial conditions [10]. For example, caves demonstrate a moderate diurnal thermal range and a steadier seasonal regime of temperatures than on the open surface environments. They are well-protected from physicochemical decay triggered particularly by fluvio-aeolian processes and strong fluxes of high intensity ultraviolet, cosmic, and solar ionizing radiations [9]. 
Subsurface caves such as lava tubes, piping caves, and sub-ice volcanic caves may provide options to perform profiling of paleogeology, paleoclimate, astrobiology, and mineralogy from surface to tens or hundreds of meters subsurface [9]. Moreover, lava caves on Mars can be abundantly icy [11,12] beyond certain depths and can act as a long-term freshwater source to support any habitation [13]. Because of their environmental conditions that favor habitability which may allow for Earth-like life forms to survive, caves have been considered as potential "Special Regions" on Mars, and thus, require dedicated measures for planetary protection [1,3]. In addition to enormous time and budgetary constraints in making artificial subsurface habitats, there are numerous technical difficulties and unknowns associated with drilling to reach the subsurface [14,15]. Thus, having access to this subsurface environment through the natural cave openings could facilitate the easier implementation of any future Mars subsurface exploration program. Certainly, there are several constraints and aspects to be considered while conducting remote sensing-based research on Martian caves. First, spotting and confirming such cave openings or 'skylights' in remote sensing images is difficult. It was only in the previous decade that several such skylights of 100 to 252 m diameter were spotted and confirmed using both, visible and thermal orbiter images for the first time in the Arsia Mons region on Mars [16,17]. Thus, multiple remote sensing datasets in various wavelengths ranging from visible to microwave spectrum and of suitably high spatial resolutions are needed to confirm the existence of such caves. Second, even if we can identify and confirm such caves or lava tubes using orbiter remote sensing platforms, they cannot be straightaway projected as the sites of future Martian settlements or exploration simply because it is even more difficult to ascertain through remote sensing whether a particular tube or cave will be structurally sound and approachable. Third, we need to consider the lower Martian gravity, which is almost 0.38 of Earth's gravity, and thus could have allowed the formation of larger underground caves following the past volcanic activities. This means that while we are more enthusiastic about the larger skylights and associated tubes, we should also consider the Martian equivalents of smaller terrestrial lava cave types such as surface tubes, inflationary caves, conduits, and liftup caves. The opening of these cave types on earth usually displays diameters of several tens to hundreds of centimeters while they can be of several meters in lengths [18]. Due to the lower gravity of Mars, the equivalents of such smaller terrestrial caves can be up to an order of magnitude larger but still be hard to resolve in meter-resolution images. Nonetheless, these dimensions are substantial enough to consider such smaller caves too as the potential targets of astrobiological and ISRU interests. Thus, what we refer to as "small" lava caves here is a relative term and should be considered with respect to the geographical setting and evolution of the parent lava flow. To characterize the lava cave entrances in an environment such as that of Mars, it would be extremely important to understand the lava flow surface morphology, recognizing features of a lava flow that may harbor a cave. 
We have provided several of these details on morphological interpretations with respect to our observations in our Results and Discussion section. However, there are several notable works which provide a detailed background for an interested reader. Calvari and Pinkerton [19,20] surveyed and provided useful details on lava tube morphology and lava flow emplacement mechanisms for Mount Etna. They produced evidence of a strong relationship between developed tumuli, vents, lava tubes, and parent lava flows in terms of their relative emplacement and significant role in enabling the wider lava spread, further proving the importance of lava tubes in the evolution of extensive pahoehoe and aa flow fields. Duncan et al. [21] further reported on the types and development of tumuli in the 1983 aa flow for Etna. They presented several skylights in aerial and field photographs with a description of their morphologies. Favalli et al. [22] employed an unmanned aerial vehicle (UAV)-based survey to characterize the 1974 Etna lava flows at unprecedented resolutions. They reached an important conclusion that forms the basis for our study as well, i.e., the obtained high-resolution terrain data from UAVs resolves surfaces at submeter resolution, making the identification of folds and small openings possible. Similar works on the morphological characterization of lava flows have been done for Kilauea Volcano, Hawaii. Hon et al. [23] and Peterson et al. [24] provided the evidence that after the formation of lava tubes in Kilauea flow, the flow velocities could reach up to several kilometers per hour compared to a slower moving front, and the tube formation provided an efficient means of lava transport. Kauahikaua et al. [25] further described the lava tube morphology of Kilauea pahoehoe flow by providing dimensional details; lava tube heights varied from 1-20 m depending on the slopes of the terrain and the tubes showed nearly elliptical cross-section with widths several times more than the heights. Orr et al. [26] provided some interesting observational details of sinuous tumuli formation on a lava tube in Kilauea flow. Based on morphological similarities, they also proposed these sinuous tumuli as analogs for possible sinuous ridges in the Tharsis volcanic province on Mars. The lava fields on Mars have experienced continuous transformations throughout its geological history owing to past volcanic-aeolian interactions and ongoing aeolian erosional/depositional processes [27,28]. The aeolian dunes on contemporary Mars are largely taken as evidence of past volcanism [29,30]. Thus, volcanic and aeolian landforms and processes on Mars are considerably interconnected as the contributors to its landscape evolution. It is this strong interconnection that requires identifying a similar terrestrial setting to perform analogous Mars research related to smaller lava caves. Iceland provides an analogous environment that significantly displays such volcanic-aeolian interlinking and has about 15,000 km 2 of active sandy deserts which consist of volcanic materials along with its vast lava fields [31]. These Icelandic lava fields are known to harbor several well-explored huge lava tubes/caves [5,[32][33][34]. Additionally, the Icelandic lava flows are also reported to display various types of small caves [18]. 
For example, lava rise caves usually display a crust of 40-50 cm over an opening of 90-120 cm, pressure ridge caves exhibit a height of ~1 m, lava tumulus caves can be up to several meters long with an entrance of ~50-130 cm height, and gas blister caves can be anywhere from a few centimeters to several meters in dimensions [18]. Detection, mapping, and morphometry of such small caves require extremely high-resolution imaging and photogrammetry, which is possible using a UAV. UAVs, as an aerial remote sensing platform, act as a bridge between spatially discontinuous, costly, and time-consuming field observations and spatially continuous but costlier and coarser spaceborne remote sensing [35,36]. Realizing such research prospects of using UAVs for Mars research, the National Aeronautics and Space Administration (NASA) is sending the first UAV to Mars with the agency's Mars 2020 rover mission, which is currently scheduled to launch in July 2020 [37]. As one of the initial works to employ a UAV for active volcano monitoring, Nakano et al. [38] studied the landform evolution using high-resolution images in the wake of the Nishinoshima volcano eruption in the Ogasawara Islands in November 2013. This volcanic eruption formed and enlarged a new island, and the UAV-derived digital terrain model (DTM) and orthomosaic helped in estimating the area and volume of the new island. Turner et al. [39] employed UAV flights for lava flow hazard prediction and repeat monitoring of the 2014-2015 Pāhoa lava flow crisis, Hawaii. They generated a series of 1 m resolution DTMs and associated paths of steepest descent over the study area. The modeled flow paths for future eruptions showed the possibility of deflection of future flows by the newly emplaced lava, thus possibly threatening new communities in the surrounding regions. In the present research, our objective was to employ UAV imaging to derive a high-resolution orthomosaic and morphometric information about a part of a lava flow as an analog site to study small lava caves. To the best of our knowledge, there is a lack of any published study surveying a lava flow full of small lava caves as a Mars analog environment by employing UAV-based high-resolution 3D and morphometric mapping to suggest methods of identifying the small cave openings. We hypothesize that small caves might be abundant on Mars but are challenging to find due to the present-day spatial resolution limitations of space-borne remote observations and also due to the prevalent dust obscuring the underlying land features. A recent research article [40] provides evidence of the presence of small voids or caves in possible Martian mudflows that propagate and appear like terrestrial pahoehoe lava flows. As detailed above, the common lava flow morphologies on Earth are well-explored, and in our research, we do not intend to discover a new morphology. Instead, we aim to highlight how UAV imaging can improve our visualization and understanding of the lava terrain and morphologies at unprecedented resolutions covering large spatial domains. A vast majority of the previous studies on the morphological characterization of lava flows have been either mostly field-based with spatial discontinuity or helicopter/aircraft aerial imaging-based with coarser spatial resolutions.
As a result, the wide distribution and frequency of possible small cave openings/folds, or the submeter three-dimensional terrain parameters, which we have characterized for a confined portion of the huge lava flow, are significant in highlighting the prospects of high-resolution and high-quality images for geomorphology research. In addition, in the following sections, we have provided ample horizontal perspective views in the form of field photographs taken with high-zoom tripod-based cameras to depict and verify the discussed morphologies and cave openings identified in the aerial orthomosaic obtained from vertical nadir viewing. Thus, the purpose of our work is to define a terrestrial analog that may help to understand the frequency of formation of small cave openings/folds in lava environments and to further understand their typical geomorphological features. We base our analysis on high-resolution remote sensing observations of a terrestrial analog and use ground-based validation to assess the limitations and potential of our proposed mapping to extrapolate or infer the true conditions which may be found on Mars. Finally, based on this method, we present the detection of a few possible small cave openings on Mars which seem to have similar characteristics to the ones found in the terrestrial analog environment. Thus, the present study aims at filling the research gap with the following objectives: 1. To perform a UAV-based high-resolution imaging survey of a part of a lava flow showing all the main morphologies and an abundance of small caves; 2. To design a sequential methodology for identifying and characterizing the small cave openings on the UAV images with respect to the lava flow morphology; 3. To perform a high-resolution comparison of the Icelandic lava flow with some examples from Mars. In the subsequent sections, we briefly introduce the study area. We also provide details on the methods of high-resolution UAV imaging, 3D terrain generation, morphometric analyses, and cave identification. We further discuss the implications of our results for the possible small lava caves on Mars. Study Area The selection of the study area was based on five main requirements. First, there was a need to have an appropriate UAV launch site that was approachable, flat, and close enough to the area of interest. Second, the area of interest had to be away from the regular walking paths and closed to direct approach, so that the solidified lava flow and caves could be observed in their natural environment without any anthropogenic factor affecting the terrain. Third, the drone flying over the area of interest and the remote controller at the launch site had to be in direct line-of-sight at all times, without any hillock in between, to ensure uninterrupted control and flight. Fourth, the area had to display noticeably changing elevation and topography of the lava flow, for observing the varying frequency of caves with respect to the topography. Fifth, the study area needed to cover all the main morphology classes of the lava field. Considering these requirements and the acquired permission from the authorities for the fieldwork, we opted for imaging a part of the Leirhnjúkur fissure volcano lava field, situated in the Krafla Caldera of Iceland (Figures 1 and 2).
This lava field is known for the presence of vent caves formed by the upwelling and withdrawing of basalt lava directly from the magma chamber [32,34]. Although the surface openings of these vent caves are rather small (1-2 m wide), they widen out towards the bottom, reaching up to 4-5 m in dimensions [18,32], and thus perfectly match the premise of our research objectives. The measured average height of this lava flow is 6 m above its surroundings at the flow margins [34]. However, knowledge of the pre-flow topography confirms the presence of considerable topographic depressions at several places, indicating an average lava flow thickness of 11 m [34]. The lava field is predominantly shelly-type, formed by very vesicular pahoehoe lava with a fragile lava crust, flow lobes, and small lava tubes which eventually became hollow inside due to downslope draining or degassing [34]. However, our area of interest equally consisted of slabby pahoehoe lava flow. The rifting episode in the Krafla caldera is known as the "Krafla Fires" and lasted between 1975 and 1984 [34,41]. This region was modified by a series of fissure eruptions during 4-18 September 1984 [34]. We further considered the most recent map, presented in Figure 2 of Aufaristama et al. [42], for deciding the boundary of the area of interest to ensure that it covers all the main morphology classes of the lava field, i.e., spiny pahoehoe, slabby pahoehoe, shelly pahoehoe, rubbly aa, and cauliflower aa. Materials and Methods The following methodological steps were taken to achieve the research objectives. UAV Imaging System We used a DJI Phantom 4 Pro quadcopter (Figure 2d) for the study. This UAV weighs ~1.4 kg inclusive of the battery and propellers. The diagonal dimension (excluding the propellers) is 35 cm. It can fly for a maximum duration of ~30 min.
The drone can be flown up to a height of ~6000 m above sea level (asl). However, in the present study we flew it below ~650 m asl at all times. The UAV can fly within a maximum wind speed of 10 m/s and a temperature range of 0-40 °C. The wind speed in the highlands of Iceland can be extremely high during a large part of the day; therefore, based on the weather forecast, we planned the flights between 11 a.m. and 12:30 p.m. local time on 11 July 2018, with a wind speed of 2-3 m/s and a temperature of 15 °C. The UAV is equipped with an integrated 3-axis gimbal that provides an extremely narrow angular vibration range (±0.02°) and always maintains the preferred camera look-angle. The DJI Phantom 4 Pro uses both global positioning system (GPS) and global navigation satellite system (GLONASS) satellites and operating frequencies of 2.4-2.483 GHz and 5.725-5.825 GHz, which provide it with a high hover accuracy with respect to GPS positioning (vertical: ±0.5 m; horizontal: ±1.5 m) up to 7 km from the launch site. However, in areas with undulating topography and dense vegetation it is better not to send the UAV too far from the launch site, and in our case the UAV was sent up to a maximum aerial distance of 1200 m from the launch site. The DJI Phantom 4 Pro camera produces photographs with standard RGB channels using a 1" complementary metal-oxide-semiconductor (CMOS) sensor. The 20-megapixel sensor has a manually adjustable aperture from F2.8 to F11 and supports autofocus with a focus range from 1 m to infinity. The sensor has a field of view (FOV) of 84°, and the mechanical shutter facilitates sharp still imaging from a fast-moving UAV or of a fast-moving object of interest. This camera captured georeferenced images at a high spatial resolution of <2 cm/pixel, even from a flying altitude of 70 m for our study. This UAV system was recently successfully employed for another Mars analog study on seasonal brines [43]. Flight Planning to Mitigate Systematic Error in the Absence of Ground Control Points (GCPs) The main requirement of our work was to obtain high-resolution overlapping images to make extensive visual observations related to cave openings and to perform terrain modeling for generating an orthomosaic of the area of interest within an undisturbed solidified lava field using structure-from-motion (SfM) photogrammetry [44]. This meant that we had to opt for a pristine area of interest that was accessible for UAV flight and yet closed to direct human approach. This was needed to capture the part of the lava field in its natural setting, where the solidified lava flow had been modified primarily through natural processes during the three and a half decades since the last eruptions, in order to propose a reliable analogy with Mars. However, this also meant that we were not permitted to acquire ground control points (GCPs) using a differential global positioning system (DGPS) unit to ensure very high positional accuracy of the obtained DTM and orthomosaic. Nevertheless, this did not constrain the inferences relevant to our research objectives: rather than high absolute positional accuracy, i.e., exact latitude, longitude, and elevation, we were interested in mitigating systematic errors to achieve high relative accuracy and in deriving terrain derivatives such as slope, roughness, and elevation profiles for morphometry.
High relative accuracy refers to the correspondence between the relative distance of any two points on the modeled terrain and the distance between those points on the real Earth terrain. Using the same flight plan settings as in the study (Table 1), we later estimated the positional accuracy of the generated DTM for our UAV system with respect to a Trimble R10 Integrated Differential Global Navigation Satellite System (DGNSS) unit. The obtained root mean square error (RMSE) was ~5 m vertically and ~2 m horizontally, which is sufficient for our objectives, as they do not depend on absolute positional accuracy and are mainly focused on high-resolution imaging of the terrain. A similar range of RMSE has been reported by another recent study [45] for flight plans comparable to ours. Systematic vertical errors arise mainly from a combination of near-parallel imaging directions and inaccurate correction of radial lens distortion [46] and affect the relative elevation between two points within a DTM by producing a "vertical doming" of the surface [46]. In the absence of GCPs, such errors can still be significantly reduced through the collection of oblique imagery [46,47]. Images acquired on orthogonal routes at 20°-30° inclination to the vertical, combined with images acquired at 0° inclination to the vertical (nadir view) and with high along-track and across-track overlaps, have been reported to considerably minimize both positional and systematic errors in the absence of GCPs [45,46]. Our flight planning was in accordance with these considerations (Table 1). The area of interest could not be completely covered in one flight on a single battery. Moreover, we had to choose a dense orthogonal flight plan and also obtain oblique imagery with a tilted camera, which meant higher battery consumption. Therefore, we decided to cover the area of interest in two overlapping flight plans. We made two flights (at 0° and 20° tilt from vertical) for each of the two segments of the study area, thus four flights in total. Table 1 highlights the flight plan and image parameters that we employed in the Pix4Dcapture flight planning freeware app during the field data acquisition. Pix4Dcapture provides the option of tilted image acquisition, as per our requirements, and also offers a flight plan called "double grid" (Figure 1c), which is the dense orthogonal flight plan we required. The launch and landing sites were the same for all the flights. To increase the density and accuracy of the point clouds and stereo-imaging, we ensured a high degree of overlap (side overlap = 80% and front overlap = 85%) between the images. Generation of DTM and Orthomosaic We used the Agisoft PhotoScan Pro stand-alone licensed software for processing the aerial photos to generate the DTM and orthomosaic using SfM photogrammetry. For SfM processing, Agisoft PhotoScan Pro is a proven performer amongst several widely used software packages, such as EyeDEA (University of Parma), ERDAS-LPS, PhotoModeler Scanner, and Pix4UAV [48], and has been widely used in a variety of environmental research in recent years (e.g., [43,49-52]). Agisoft PhotoScan Pro has a fully automated workflow for 3D reconstruction and, in addition to its proven capability for robust surface modeling (e.g., [48]), it can derive sensor parameters intrinsically to perform calibration and local processing to generate outputs in multiple file formats compatible with other geospatial software [49].
The intrinsic SfM processing in PhotoScan is detailed in a paper by Verhoeven [53]. Here, we briefly highlight the three main processing steps for deriving the DTM and orthomosaic from the aerial survey data using Agisoft PhotoScan Pro: 1. Photograph alignment (bundle adjustment): Agisoft PhotoScan aligns the photos from a UAV survey using the camera location coordinates and feature-matching algorithms, automatically detects stable common features among the overlapping images, and determines the location and alignment of each camera position with respect to the others [48,49]. This process of bundle adjustment generates a 3D sparse point cloud using stereo-imaging, projection, and the intersection of pixel rays from the different positions [49]. Using a high-performance computing system (Intel Xeon E5-2650 v4 central processing unit with 12 cores and 24 threads, 256 GB of random-access memory, and an Nvidia GeForce Titan Xp 12 GB GDDR5X graphics card), we employed the highest processing parameters within the Agisoft PhotoScan workflow to derive the best possible results. For photograph alignment, we opted for the "Highest" accuracy and the highest possible numbers of tie points and key points in the processing tool window. The results of the alignment process are shown in Figure 1c. 2. Geometry building and dense point cloud generation: a densification technique is applied within the software to the sparse point cloud already generated through bundle adjustment in order to derive a 3D dense point cloud using multi-view stereopsis (MVS) or depth-mapping techniques [54]. The model geometry is corrected by the intrinsic process of matching features to complete the final phase of geometry building and generate an accurate high-resolution 3D dense point cloud [49]. For this step, we opted for the "Ultra high" processing parameter and "Aggressive" depth filtering to derive the best possible results. 3. Texture building and DTM generation: in this step, the generated 3D dense point cloud provides a continuous surface that can be triangulated and rendered with the original imagery to build a textured 3D mesh and create the final DTM [49] and, subsequently, the orthomosaic. For the DTM generation, the dense point cloud was selected as the source data, with interpolation enabled and a pixel resolution of 2 cm/pixel, and WGS 1984 UTM Zone 28N was assigned as the coordinate system for the final outputs. For the orthomosaic generation, the DTM was selected as the surface data, with hole filling enabled and a 2 cm/pixel output resolution. Morphometry Deriving geomorphometric parameters to study the terrain of a lava field can provide extremely useful information [55]. We derived terrain derivatives, such as slope, aspect, and surface roughness, for morphometric analyses (e.g., [56-61]). We derived the slope and aspect parameters using the Spatial Analyst toolbox of ArcGIS software version 10.6.1. The Slope tool computes the maximum rate of change in elevation for a given elevation pixel, from that pixel to its eight contiguous pixels [58,62]. The Aspect tool calculates the downslope direction of the maximum rate of change in elevation from each pixel to its eight neighbors [58,62]. We used the Roughness tool within the Geospatial Data Abstraction Library (GDAL) of the QGIS 2.18.23 software to derive the roughness parameter. The Roughness tool accepts the modeled elevation surface as input and, for each pixel in the surface raster, calculates the largest inter-cell difference between the central pixel and its surrounding pixels [63].
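As a minimal sketch of the two raster derivatives just described (not the ArcGIS or GDAL implementations themselves), the following Python example computes a central-difference slope (the ArcGIS Slope tool itself uses Horn's 3 × 3 method) and the GDAL-style roughness, i.e., the largest absolute difference between a cell and its eight neighbours. The synthetic input array and the 2 cm cell size are placeholders standing in for the actual UAV DTM.

```python
import numpy as np

def slope_degrees(dtm, cell_size):
    """Approximate slope (degrees) from central differences of the elevation grid."""
    dz_dy, dz_dx = np.gradient(dtm, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def roughness(dtm):
    """Largest absolute elevation difference between each cell and its 8 neighbours
    (edges wrap around here for brevity; a real run should handle borders explicitly)."""
    out = np.zeros_like(dtm)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(dtm, dy, axis=0), dx, axis=1)
            out = np.maximum(out, np.abs(dtm - shifted))
    return out

# Tiny synthetic surface standing in for the 2 cm/pixel UAV DTM.
dtm = np.random.default_rng(0).normal(560.0, 0.05, size=(100, 100))
print(slope_degrees(dtm, cell_size=0.02).mean(), roughness(dtm).mean())
```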
We further employed the Reclassify tool within the Spatial Analyst toolbox of ArcGIS 10.6.1 to categorize the aspect parameter into suitable classes. The classification scheme for aspect is explained in the respective tool help section of the ArcGIS 10.6.1 software. The Spatial Analyst toolbox was also used to classify elevation into three classes using the "Natural Breaks" option, as this classification is based on natural groupings inherent to the data, with classes identified from similar values and the maximum differences between classes [64]. Cave Identification UAVs provide access to areas that are hard to reach and/or dangerous, such as vertical or overhanging rock outcrops or gas-rich and unstable volcanic areas. The present study demonstrates the scientific and operational potential of the UAV-derived high-resolution orthoimage and DTM for detecting and identifying probable lava cave openings. To correctly identify and map cave openings, we adopted a methodology taking into account the 3D model of the surface along with visual interpretation (Figure 3). Figure 3. Cave opening identification strategy: (a) our systematic approach using DTM and contours, histogram-enhanced image, and topographic profiling for identifying cave openings, illustrated here for a collapsed lava channel; (b) use of a 3D perspective view to confirm the dark pixels (red arrow) in the orthomosaic shown within the red rectangle in (a) as a shadow caused by topography, whereas the cyan arrow depicts one of the possible cave openings; (c) use of a 3D perspective view to confirm the dark pixels (cyan arrows) in the orthomosaic shown within the cyan rectangle in (a) as cave openings; (d) 3D perspective view of another collapsed lava channel with openings and shadows. Cyan and black arrows in all the figures represent confirmed cavities in images and profiles, respectively. Red and dotted black arrows show spots that are not cave openings, but are shadows caused by topography in images and profiles, respectively. For this purpose, first, the DTM of the region of interest was used to understand the topographic pattern and the direction of the slope. The use of contour lines overlaid on the orthoimage and DTM proved to be useful in distinguishing several of the caves from topographic shadows. In cartography, a contour line (often called a "contour") joins points of equal elevation above a given level, such as mean sea level. Often in topographic analysis, only the trend of the contours is taken into account, overlooking the value of each contour. For our analysis, the contour lines were overlaid and labeled so that the values could be compared for correct identification. In addition to precision, this technique also ensured that the shadows would not be misinterpreted as cave openings using solely visual interpretation.
The elevation contour panel of Figure 3a elucidates the effectiveness of this approach, as the decreasing contour lines can be seen forming concentric curves around the cave openings. Second, we performed contrast stretching and histogram enhancement on the orthomosaic to try to visualize the terrain within shadowed areas and confirm whether an opening is present. The histogram-enhanced images in Figure 3a can be compared with the corresponding unenhanced images to observe the reduced darkening caused by topographic shadow. Third, topographic profile analysis was performed to further distinguish between cave openings and shadow, and it proved to be extremely effective. The topographic profiles presented in Figure 3a highlight the significant dips of 2-5 m for the openings along the transects. The dark pixels marked by the red arrow in Figure 3a were hard to characterize using only contours and histogram-enhanced views. However, the profile analysis at once clarified that these dark pixels are merely the result of shadow and that no cave opening is present. Fourth, visual interpretation of 3D perspective views of the orthoimage draped over the DTM was performed to further confirm the ambiguous cases. For example, the red arrow zone shown in Figure 3a was observed in 3D (Figure 3b) from various angles to confirm that it was only a topographic shadow. On the contrary, the 3D perspective view in Figure 3c confirmed the dark pixels marked by cyan arrows within the cyan rectangle in Figure 3a as cave openings. The fragile part of the roof of a small lava channel can collapse, making a visible entrance to the lava channel, which is called a pit crater [65]. Although the particular features in Figure 3c appear more like open vertical conduits with hornitos [18,66], a contextual look at them confirms them to be part of the same lava channel shown in the leftmost panel of Figure 3a. Fifth, limited ground truthing was conducted from the closest permissible points of approach to the area of interest using a Canon PowerShot SX740 HS camera with an 80× zoom lens, with which observations of the different types of cave openings were made and captured. This was especially important for identifying side-facing caves or cave openings below cliffs, which could be hidden from view using only aerial remote sensing.
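As an illustration of the profile-analysis step above, the following sketch samples an elevation profile along a transect across a suspected opening and flags dips of the 2-5 m magnitude reported for the collapsed channels; a shadow on near-flat ground produces no such dip. The DTM array, transect coordinates, and depth threshold are placeholders rather than values from the study.

```python
import numpy as np

def profile_along_transect(dtm, start_rc, end_rc, n_samples=200):
    """Nearest-neighbour sampling of elevations along the line between two (row, col) points."""
    rows = np.linspace(start_rc[0], end_rc[0], n_samples).round().astype(int)
    cols = np.linspace(start_rc[1], end_rc[1], n_samples).round().astype(int)
    return dtm[rows, cols]

def has_cave_like_dip(profile, min_depth_m=2.0):
    """True if the profile drops at least min_depth_m below the lower of its two end shoulders."""
    shoulder = min(profile[0], profile[-1])
    return (shoulder - profile.min()) >= min_depth_m

# Synthetic example: a 3 m deep depression in an otherwise flat ~560 m surface.
dtm = np.full((200, 200), 560.0)
dtm[95:105, 95:105] -= 3.0
profile = profile_along_transect(dtm, (100, 50), (100, 150))
print(has_cave_like_dip(profile))   # True -> candidate opening rather than a mere shadow
```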
Thus, we analyzed and cross-checked all the identified cave openings using the systematic approach explained above, and the adopted methodology is reproducible for any similar future research. Results and Discussion We present our findings within three broad sections. First, we introduce the typical lava flow morphologies, terrain characteristics, and cave distribution within our region of interest. Second, we discuss the cave types in different lava morphologies and terrain. Third, we discuss the possible analogy with Martian lava flows and caves. The following sections summarize the key results of our study within the predefined objectives. Lava Flow Morphologies and Terrain Parameters Lava differs in its composition and thus in its viscosity, and depending on the nature of the solidified lava flow surface, there are mainly two types of flows: (1) pahoehoe and (2) aa. Pahoehoe basaltic lava displays varying topography in the form of smooth, hummocky, and ropy exteriors and typically moves as a sequence of small lobes and toes continually breaking out from a cooled crust [67]. Aa lava differs from pahoehoe lava in that it displays a rough, rubbly surface formed by broken block features called clinkers [67]. The high-resolution images and DTM helped us characterize all the main lava morphologies, as the surveyed terrain was resolved at unprecedented resolution, allowing the features associated with the various lava flows to be identified. Figures 4 and 5 provide the topography and contextual information for the lava flow morphologies and morphometries shown in Figures 6 and 7. Table 2 provides the field photographs and descriptions of these morphologies. The regions presented in Figures 6 and 7 have been selected based on their distinct morphologies to highlight the variations in the terrain parameters. The entire region of interest displayed elevations within a range of ~25 m, i.e., 553-578 m (Figure 5). We used the Natural Breaks method [64] to classify the DTM into three major elevation classes based on similar values and maximum differences between the classes (Figure 5b). Figure 4 remarkably highlights the improvement in visual quality and terrain characterization using the UAV DTM. We compared the hillshaded views generated from the 2 cm/pixel UAV DTM and the 2 m/pixel ArcticDEM [68]. The ArcticDEM is the highest-resolution open-access digital elevation model (DEM) available for this region, and we used the National Land Survey of Iceland web portal [69] to download it. The ArcticDEM is derived from satellite sub-meter stereo imagery, such as that of WorldView-1 to -3 and GeoEye-1, and has a vertical accuracy better than 1 m and a horizontal accuracy of 3 m for our study area [70]. As visible in Figure 4, even the 2 m/pixel hillshaded view generated from the ArcticDEM is not sufficient to enable lava flow characterization. The visual enhancements in terrain observation presented in Figure 4 confirm the premise of our research, i.e., a UAV-derived DTM has the potential to bridge the gap between discrete field observations and spatially continuous but coarser-resolution satellite observations for volcanology. A similar resolution limitation was observed and reported by Müller et al. [71], who could identify centimeter-scale fractures in the Holuhraun eruption site, Iceland, using UAV images, as compared to meter-scale fractures identified using the WorldView-2 datasets for the same region.
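The hillshaded views compared in Figure 4 can be reproduced qualitatively with a standard single-source Lambertian hillshade. The sketch below assumes the conventional illumination defaults (azimuth 315°, altitude 45°), which are not stated in the paper, and treats each DEM simply as an elevation array with its own cell size.

```python
import numpy as np

def hillshade(dtm, cell_size, azimuth_deg=315.0, altitude_deg=45.0):
    """Lambertian hillshade of a north-up DEM array (row index increasing southward)."""
    d_row, d_col = np.gradient(dtm, cell_size)
    dz_dx = d_col           # derivative toward east
    dz_dy = -d_row          # derivative toward north (rows count downward)
    # Unit surface normal for z = f(x, y), proportional to (-dz/dx, -dz/dy, 1)
    norm = np.sqrt(dz_dx**2 + dz_dy**2 + 1.0)
    nx, ny, nz = -dz_dx / norm, -dz_dy / norm, 1.0 / norm
    # Unit vector toward the sun (azimuth clockwise from north; x = east, y = north)
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    sx, sy, sz = np.sin(az) * np.cos(alt), np.cos(az) * np.cos(alt), np.sin(alt)
    return np.clip(255.0 * (nx * sx + ny * sy + nz * sz), 0, 255)

# Tiny demo on a synthetic surface; in practice the UAV DTM (0.02 m cells) and the
# ArcticDEM tile (2 m cells) would each be loaded as arrays and shaded the same way.
demo = np.fromfunction(lambda r, c: 560 + 0.5 * np.sin(c / 10.0), (100, 100))
print(hillshade(demo, cell_size=0.02).mean())
```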
The various lava flow morphologies based on the UAV DTM are further discussed in Table 2 and Figures 6 and 7. The two types of lava flows are further classified into subclasses based on their inherent morphology. For our region of interest, the lava types observed are (1) shelly pahoehoe, (2) slabby pahoehoe, (3) spiny pahoehoe, (4) cauliflower aa, and (5) rubbly aa. A concise description of these morphologies, along with our field photographs and references for interested readers, is provided in Table 2. Slabby pahoehoe is the most predominant lava flow morphology within our region of interest, followed by shelly pahoehoe and rubbly aa. Spiny pahoehoe and cauliflower aa are confined within the regions marked in Figure 5. The lowest elevation range (~553-558 m) displayed a clear dominance of slabby pahoehoe (Figure 5). Rubbly aa and shelly pahoehoe were observed together in both the lowest and the middle (~558-564 m) elevation zones (Figure 5). Spiny pahoehoe was most distinctively observable in the middle elevations, while cauliflower aa was confined, along with shelly pahoehoe, to the top elevation range (~564-578 m) (Figure 5). Slabby pahoehoe is formed when relatively fast-moving pahoehoe flows become more viscous with subsequent heat loss, allowing the molten lava to grab and rip the pahoehoe crust into chunks [75]. The reported dimensions of the slabs usually reach up to several meters across and a few centimeters to decimeters in thickness [74,75]. For our study area, the observed diagonal dimensions of the slabs varied between ~1 and 5 m, and the thickness that we could estimate using the generated DTM varied between ~8 and 15 cm (Figure 6b). Although each of the individual slabs displays a smooth exterior, the arbitrary and cluttered placement of these slabs gives the flows a rough and crusty appearance (Table 2, Figure 6b) [34,74-76]. The estimated mean surface roughness was ~15.32 mm, the second-lowest among the identified flow morphologies. The aspect image in Figure 6b clearly shows this brittle morphology. The largely smooth texture of individual slabs is the reason behind the low mean slope and roughness values of the slabby pahoehoe flow in our study area (Table 2).
The smoothest lava morphology, as expected, was the spiny pahoehoe flow, with an estimated mean surface roughness of ~14 mm, the lowest among the identified flow morphologies (Table 2; slope and roughness maps in Figure 6c). The smooth, glassy surface is the result of its formation under very low strain rates, when the lava is extremely crystalline and viscous [23]. However, this smooth topography of gently undulating billows and ropes is at centimeter or coarser scales; at the millimeter scale, this morphology displays a spiny and granulated surface [23]. The surface resembles a segment of coiled rope, and the jumbled aspect map shown in Figure 6c highlights this. Spiny pahoehoe is commonly formed as the leakage from dying or stagnating lobes of pahoehoe flows or from the edges and fronts of aa flows [75]. The marked adjacency of spiny pahoehoe with shelly pahoehoe and rubbly aa can be observed in Figure 5. Shelly pahoehoe is the second most predominant morphology within our region of interest. It is an extremely vesicular lava flow morphology with a fragile lava crust [23,72,74] and therefore primarily contains the observable small cave openings (Table 3). This lava morphology displays small hollow lava tubes left behind by drained lava or hollow flow lobes created by the degassing of the molten lava (Figure 6a) [34].
This morphology is typical of very slow-moving lava causing ponding over areas hundreds of meters in diameter while the crust consolidates. Successive outflow beneath the crust leads to subsidence, creating the extensively undulating surface and piled-up slabs [23,34,74]. Owing to this, the estimated mean surface roughness of shelly pahoehoe in our study area was ~24.93 mm, the highest amongst the pahoehoe flows (Table 2). The associated lobes and ripples are visible in the orthomosaic, DTM, and aspect maps given in Figure 6a. Rubbly aa closely follows shelly pahoehoe in terms of areal extent and location in our region of interest. Rubbly aa is characterized by a clinkery and blocky surface, with breccia sizes varying from sand to meter-long blocks [34,74] (Figure 7a). In our study area, these morphologies could be observed mainly at the transition of the basaltic lava from shelly pahoehoe to aa. This lava morphology is generated after the flow attains high thermal maturity, as a result of which the crust is broken by brittle failure during flow [34]. Due to these geomorphic processes, the eventual surface displays rough topography; the estimated roughness was ~27.42 mm, nearly double that of the spiny pahoehoe flows (Table 2). Cauliflower aa morphology is marked by irregular outcrops that resemble cauliflowers on the lava surface (Table 2, Figure 7b), typically characterized as smoothly undulating zones with characteristically clinkery surfaces [34]. This lava morphology is usually intermediary during the transformation from pahoehoe to rubbly aa [34]. The protrusions or outcrops are initially attached to the underlying lava but, with time, break off and form loose debris [34]. This geomorphic process results in a particularly rough surface, and the estimated mean roughness in our region of interest for cauliflower aa was the highest, i.e., ~30.04 mm. However, we identified a region, shown in Figure 7b, where many of these protrusions were intact and attached to the lava flow, providing a fine visual example of cauliflower aa. Cauliflower aa is commonly found in the shelly and slabby pahoehoe-dominated regions where lava flows spilled out after the formation of these morphologies [74], and even in our region of interest, cauliflower aa was found to be closely associated with shelly pahoehoe in the highest elevation zone (Figure 5). Cave Opening Distribution and Characterization The formation of lava tubes and observable cave openings due to collapse of the lava crust is typical of pahoehoe flow morphologies [24]. This was also observed for the present region of interest, where the maximum number of small cave openings was reported from the areas of pahoehoe flows, mainly from shelly pahoehoe, followed by spiny pahoehoe flows (Table 3). Shelly pahoehoe flows were observed in all three elevation classes (Figure 5, Table 3). As explained in the previous section, shelly pahoehoe has an extremely tubular morphology with a fragile lava crust that has collapsed in several places, creating observable small cave openings. The highest elevation zone in our region of interest had the least areal extent but a significant number of possible small cave openings, and thus the highest cave opening density within both the shelly pahoehoe and cauliflower aa flows (Table 3). These openings averaged barely ~1 m² in area (Table 3).
The middle elevations had a predominance of both shelly and spiny morphologies and consequently the highest number (~59% of the total) of possible small cave openings, with the largest average area of ~1.35 m² (Table 3). The lowest elevation class displayed a remarkably flat surface with a predominantly slabby pahoehoe-type morphology and only 6 of the 81 possible cave openings observed in total (Table 3, Figure 5). The primarily flat topography also resulted in the smallest average opening area of 0.45 m² (Table 3); these openings could be seen only near the boundary of the middle and low elevations (Figure 5). The cave density was highest for the high elevation zone (900 km⁻²), followed by the medium (~282 km⁻²) and low (~46 km⁻²) elevations (Table 3). Gadányi [18,32] mentions that even such small vent cave openings of 1-2 m can widen out towards the bottom, reaching up to 4-5 m in the study area. However, the variations in cave frequencies need to be viewed in the light of a possible vent-proximity variable. We suspect that the highest elevation region and its gentle sloping were a result of thick lava accumulation due to the underlying topography and proximity to the fissure vents. On similar shallow slopes in Hawaiian pahoehoe flow fields, Walker [78] also reported that tumuli and lava rises covered a substantial proportion, exceeding 50%, of the total area. Based on the pre-flow topography, Rossi [34] confirms the presence of considerable topographic depressions at several places, with the average lava flow thickness reaching up to 11 m. In our study site, the average elevation rise from low-to-medium and medium-to-high elevations reaches up to ~10 m, which explains the high cave density in terms of underlying hollowness or depressions. We observed mainly five types of possible cave openings in the study area (Table 4). In the mapped area, open vertical conduits and collapsed lava tunnels were predominantly observed. This, however, does not necessarily indicate that small tumulus caves and lava rise caves would be less prevalent in similar lava flows, as the openings of hidden or tumulus caves and lava rise caves are lateral and difficult to observe in down-looking aerial photos. Therefore, the oblique (20°) UAV survey as performed by us, coupled with 3D perspective views, can be useful in identifying such openings. Table 4 provides the morphological descriptions of these caves along with relevant references. Gadányi [18] has provided detailed discussions of such caves in Iceland. In Figure 8, we display these small caves through field photographs, aerial photographs, and DTMs. For open vertical conduits (Figure 8a), a distinct, locally elevated vent terrain is visible as oval- or round-shaped vertical passageways, where lava rose to the surface and then waned. Collapsed lava tunnels are identifiable as the locally elevated channels in the high-resolution DTM in Figure 8b. The visible holes in the collapsed roof are often referred to as "skylights". While skylights of tens to more than 100 m in diameter have been reported on Mars, in our study area the high-resolution imaging allowed us to characterize skylights of even 1-2 m diameter. Tumulus lava caves (Figure 8c) are rightly called "hidden" caves, as they are hard to detect remotely. However, the high-resolution DTM that we generated marks a contour around the elevated opening of such caves (Figure 8c). Such caves are formed by the collapse of an unstable section of crust created by the bulging and solidification of injected lava.
These cave openings can also be as small as a few centimeters to a meter. Lava rise caves (Figure 8d) were another type of cave in the study area that was difficult to spot in the aerial images. Due to the largely flat topography and their lateral openings, even the high-resolution DTM of these caves did not show any significantly marked elevation rise (Figure 8d). The 3D perspective views proved to be useful for identifying such openings. Figure 8e shows small surface fractures in the form of open cracks formed due to tensile stress in the lava during and after solidification. The elongated terrain of one such fracture can be observed in the DTM shown in Figure 8e. Although there are large fractures in the Leirhnjúkur lava field [81], the average width of the small surface fractures reported here is ≤1 m. Based on aerial photography, Opheim and Gudmundsson [81] characterized more than a thousand fractures with exceptionally high width-to-length ratios (1:20 to 1:40) in this lava field. These previously reported fractures [81] were an order of magnitude larger than the ones we report here using cm-resolution UAV images. However, the width-to-length ratio for these smaller fractures is still the same as that reported by Opheim and Gudmundsson [81]. The majority of these fractures end bluntly as tectonic caves [81]. To further confirm our findings, it is worth mentioning here that similar centimeter-scale fractures have also been reported by Müller et al. [71] in the Holuhraun fissure eruption site, Iceland, using UAV images and photogrammetry. Table 4. Description of the types of possible small cave openings observed in the study area. Open vertical conduit: these structures have oval- or round-shaped vertical passageways and are found in recent volcanic rocks, where lava rose to the surface and then waned; the openings are typically marked by a rootless small spatter cone called a hornito [18,66]. Collapsed lava tunnel (skylight): skylights are openings where the roof of a lava tube has collapsed; in an active flow, these skylights allow convective cooling of the lava [79,80]. Lava rise cave: lava rise caves are formed as a result of inflation due to fluid lava accumulating under the solidified surface crust; once the lava drains, leaving a deflated center, a flat cave remains if the uplifted surface crust can support itself [18]. Hidden or tumulus cave: tumulus lava caves are formed when, during volcanic activity, liquid lava is injected below the arching surface crust, causing the crust to bulge as it solidifies without any horizontal shortening; once the lava drains, the unstable section of the crust collapses, revealing the tumulus cave [18]. Surface fractures: the observed small surface fractures are deep open cracks formed due to tensile stress in the lava during and after solidification [71,81].
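To close this section on a quantitative note, the sketch below shows one way that per-zone statistics of the kind reported in Table 3 (opening count, mean opening area, and density per km²) can be derived from the Natural-Breaks-classified DTM and the digitized opening outlines. The break values, cell size, and example openings are placeholders for illustration, not the study's actual inputs.

```python
import numpy as np

def zone_statistics(dtm, opening_cells, breaks, cell_area_m2):
    """opening_cells: list of (row, col) cell arrays, one array per mapped opening."""
    zones = np.digitize(dtm, breaks)                     # 0 = low, 1 = middle, 2 = high
    zone_area_km2 = np.bincount(zones.ravel(), minlength=3) * cell_area_m2 / 1e6
    counts = np.zeros(3, dtype=int)
    areas = [[], [], []]
    for cells in opening_cells:
        rows, cols = np.asarray(cells).T
        zone = int(np.bincount(zones[rows, cols]).argmax())   # zone containing most of the opening
        counts[zone] += 1
        areas[zone].append(len(cells) * cell_area_m2)
    mean_area_m2 = [float(np.mean(a)) if a else 0.0 for a in areas]
    density_km2 = counts / zone_area_km2
    return counts, mean_area_m2, density_km2

# Hypothetical 2 cm/pixel DTM and two small digitized openings, for illustration only.
dtm = np.random.default_rng(2).uniform(553, 578, size=(400, 400))
openings = [np.array([(10, 10), (10, 11), (11, 10)]), np.array([(200, 200), (200, 201)])]
print(zone_statistics(dtm, openings, breaks=[558.0, 564.0], cell_area_m2=0.02 * 0.02))
```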
Possible Analogies with Martian lava flows Before discussing any such analogy, there are several important points to consider. First, the topographical details of lava flows on Mars are largely obscured by dust, which makes any meter- or submeter-scale morphological observation difficult. Only the broad flow morphologies, such as braided lava channels, fan-shaped lobes, and the extent of terrain roughness, provide some clues for the possible characterization of Martian lava flows [82]. Second, identifying possible small cave entrances in Martian images is a very challenging task due to the unavailability of submeter-resolution High Resolution Imaging Science Experiment (HiRISE) images for the vast majority of the Martian terrain. A recent database led by Glen Cushing of the U.S. Geological Survey [83], called the Mars Global Cave Candidate Catalog (MGC^3) [84], is an exciting start. Figure 9 shows the global distribution of sighted possible caves on Mars based on MGC^3. This database is based on images from the Mars Reconnaissance Orbiter's (MRO's) Context Camera (CTX) and HiRISE camera.
A closer look at the topographical characteristics of the lava fields and the corresponding presence of possible caves derived from MGC^3 reveals that such caves are predominantly present in the smooth-textured pahoehoe-type lava flows. The extensive presence of collapsed or semi-intact lava channels/tubes in this region [82] suggests the prevalence of a possible shelly pahoehoe-type morphology: extremely vesicular [82], with a fragile lava crust, and therefore containing observable small cave openings. This morphology is characteristic of sluggish lava causing ponding over a vast area while the crust consolidates; the successive outflow beneath the crust then leads to subsidence, creating the possibility of pit cratering and an extensively undulating surface. Thus, what we observe as possible skylights or cave openings in this region are primarily pit craters. Cushing [85] has detailed the morphology of such huge pit craters and possible caves in this region. Crown and Ramsey [82] have discussed aa- and pahoehoe-type morphologies of lava flows in the Tharsis region, where low-viscosity flows were predominantly transported through channels/tubes, inflating in vast plains. An example of this can be seen in Figure 10, where we present a well-studied pit crater [85] in a pahoehoe-type smooth-textured flow, adjacent to the rough-textured aa-type flow in the lava field of Pavonis Mons (Figure 10a). The HiRISE image (Figure 10b) and DTM (Figure 10c) additionally highlight this subsidence feature, consisting of a subterranean void and a debris pile in the center, ~30 m below the central cave opening. The diameter of the central collapse is ~41.5 m while that of the overall subsidence feature is ~188 m.
However, while this is an example of a relatively huge subsidence in a distinctly demarcated pahoehoe-type flow, in the following examples from the Tharsis region we try to highlight several of the smaller possible cave openings which are dimensionally closer to our terrestrial analogs. Figures 11-13 show that the visible lava channels are one of the best targets to search for such cave openings. Cushing et al. [83] called a morphological group of such subsidence structures "Atypical Pit Craters" (APCs). APCs usually have sharp and definite rims with surface diameters of ~50-350 m [83]. However, in the subsequent examples, we will discuss some smaller APCs too. In Figure 11a, we observe a collapsed skylight north of Arsia Mons with a possible subterranean hollow space (marked by black dotted lines). The different texture within the black dotted lines suggests a slightly elevated and sloped terrain that allows for a dust pattern different from that of the surrounding terrain. The diameter of this opening is ~90 m. The small pit crater shown in Figure 11b, in an otherwise intact lava tube, was sighted on the Ascraeus Mons summit region. This APC was particularly interesting due to its smaller than usual diameter of ~23 m and its presence on a seemingly intact lava tube.
Figure 12. The red rectangle shows a pit crater with a visible mound in the middle. The blue rectangle highlights three craters separated by several kilometers. The subsidence structure within the green rectangle is a seemingly hollow pit crater, the one within the violet rectangle is the largest of the three but is filled with debris, and the smallest of these structures is marked by the yellow rectangle. The inset figure within the cyan rectangle shows an analogous chain of small cave openings in our Icelandic study area. CTX image credit: NASA/JPL-Caltech.

Figure 12 highlights multiple APCs within the same lava channel. These APCs display remarkably different morphologies and dimensions. For example, the red rectangle marks a pit crater with shadows surrounding the middle portion, suggesting it to be a visible mound of accumulated debris and dust (Figure 12). The blue rectangle emphasizes three craters separated by several kilometers: (1) the APC within the green rectangle is a seemingly hollow pit crater with an indiscernible floor, (2) the APC within the violet rectangle is the largest of the three APCs and is filled with debris, and (3) the smallest of these APCs, marked by the yellow rectangle, is only ~27 m in diameter. This small cave within the yellow rectangle is not yet listed in the MGC^3, and here we report it as a representative of more such small caves which are hard to observe in presently available satellite images. The availability of HiRISE images for the smallest crater might have provided better insights. In Figure 12, we also provide an inset figure within the cyan rectangle that shows an analogous chain of small cave openings in a small lava channel in our Icelandic study area. However, the scale of these observations should be noted with care. As expected, the dimensions of the Icelandic APCs are more than an order of magnitude smaller than the Martian APCs. The rightmost APC in the inset image shows a deeper hollow pit, the middle one is largely filled with debris, and the leftmost is the smallest one, with hardly 1 m of diameter. In Figure 13, we show another possible APC with a remarkably small diameter of ~20 m. This small APC is, again, not yet listed within the MGC^3, and here we identify and report it. Identifying such APCs in CTX images is difficult, and the availability of a HiRISE image would have been ideal. However, with contextual interpretations such as the one highlighted in Figure 13, these possible APCs can be marked with an acceptable level of confidence. The red arrows in Figure 13 show the collapsed part of a huge lava tube and associated fissure while the yellow arrows mark the intact part. This supports the possibility that the observed dark albedo feature, marked by the green dot and shown in the zoomed-in inset image, is an APC.

Figure 13. A possible small subsidence cave opening in a lava tube (CTX Scene ID: G18_025285_1769_XN_03S117W).
The red arrows mark the collapsed part of the lava tube and a possible fissure, while the yellow arrows mark the intact part. The green dot marks the possible cave opening that can be seen in the zoomed-in version indicated by the black arrow. CTX image credit: NASA/JPL-Caltech.

Conclusions

We investigated a part of an Icelandic lava flow as a Mars analog environment by performing a dedicated remote-sensing-based characterization of possible small cave openings in the Leirhnjúkur fissure volcano lava fields. One might argue that the Hawaiian lava fields would have been a better analog, as those are bigger than the Leirhnjúkur fissure volcano lava fields and are also closer to shield volcanoes like the ones in Tharsis on Mars. However, for investigating the small lava cave environment that was the focus of our study, an analog site such as the Leirhnjúkur fissure volcano lava fields seemed appropriate because this region experiences similarly rigorous aeolian-volcanic interactions as Mars. The fissure nature of eruptions in this region provides the possibility of lower lava volumes and subsequently smaller tubes, channels, caves, and folds.
Furthermore, sites of possible fissure eruptions, like the one we have selected in Iceland, have also been reported in the Tharsis region on Mars [86,87]. Our analog study is even more relevant considering a recent paper [40] which suggests that possible mudflows on Mars might have propagated like terrestrial pahoehoe lava flows and show similar morphological characteristics. Figure 2d of Broz et al. [40] shows the presence of extensive voids or cavities within the simulated mudflow under Martian conditions. These cavities appear to be of smaller scale than the usual caves within a pahoehoe lava flow. This underlines the premise of our research: while the presence of small caves in pahoehoe-type flows on Mars is elusive due to the resolution limitations of present remote sensors, such caves might be abundant on Mars, with considerable significance for astrobiology or habitability. The study by Favalli et al. [22] has already emphasized the importance of UAV imaging and terrain data in resolving lava surfaces and enabling identification of folds and small openings. The fact that we could characterize 81 small cave openings/folds of <1.5 m² average area within a small section of the lava flow is proof that fine-resolution datasets can be extremely useful in furthering our understanding of these landforms. We observed that the existence of such small cave openings is favored in regions which show vesicular lava crust flow morphology. By analogy, we performed a visual analysis of similar shelly pahoehoe-type lava flows in the Tharsis region of Mars using the best resolution satellite images available. In analogy to its terrestrial counterpart, this region on Mars shows the potential existence of small cave openings with diameters as small as ~20 m. The smaller (~20 m) cave openings which we have identified in CTX images support our hypothesis that such small caves, analogous to the small Icelandic caves but an order of magnitude larger than them, might be abundant on Mars. The unavailability of submeter-resolution images for ~95% of the Martian terrain makes it impossible to characterize any such cave opening that is smaller than ~20-25 m in diameter. Nevertheless, the astrobiological and ISRU significance of such small caves is irrefutable. Future targeted HiRISE acquisitions of such possible cave openings from multiple view angles will not only confirm their existence but will also provide important terrain and morphological details for planning future missions. Until now, APCs seem to be the prevalent volcanic cave type on Mars, but with the future availability of more HiRISE images covering new regions, more small cave types analogous to the terrestrial ones shown in Figure 8 may be observed. The importance of thermal infrared observations in confirming caves is well established [85,88], but the presently operational thermal sensors around Mars do not have sufficient spatial resolution to confirm meter-scale cave openings. Future orbiter, rover, and even UAV missions to Mars should try to accommodate high-resolution thermal sensors as payloads. The next phase of our research will cover the same region in Iceland in the thermal infrared range during day and night to observe the thermal characteristics of these small caves. If a future rover mission targets volcanic sites, even a Ground Penetrating Radar (GPR) could be useful in probing for subsurface hollowness around such subsidence structures.
Another plausible continuation of this research can focus on taking field measurements of morphometrics and diurnal environmental conditions (mainly temperature, relative humidity, and degree of insolation) within such small caves to enable numerical modeling of analogous structures on Mars. The possibility that such small cave openings can lead to vast subterranean hollow spaces cannot be ruled out on Mars, considering its lower gravity, and ongoing Mars cave research needs to reconsider the possibility and significance of small Martian caves given our results.
20,890.8
2020-06-19T00:00:00.000
[ "Environmental Science", "Geology", "Physics" ]
A novel principle to localize scattered-wave sensitivity using wave interference and its adjoint

When using scattered waves for high-resolution imaging of a medium, the sensitivity of these waves to the spatiotemporal distribution of heterogeneities is undoubtedly a key factor. The traditional principle behind using scattered waves to detect small changes suffers from an inherent limitation when other structures, not of interest, are present along the wave propagation path. We propose a novel principle that leads to enhanced localization of wave sensitivity, without having to know the intermediate structures. This new principle emerges from a boundary integral representation which utilizes wave interferences observed at multiple points. When tested on geophysical acoustic wave data, this new principle leads to much better sensitivity localization and detection of small changes in seismic velocities, which were otherwise impossible. Overcoming the insensitivity to a target area, it offers new possibilities for imaging and monitoring small changes in properties, which is critical in a wide range of disciplines and scales.

Introduction

Scattered waves and their sensitivity to heterogeneity are fundamentally important to study any kind of material structure. The sensitivity represents how much change in wave scattering is associated with changes in the heterogeneity. A knowledge of the sensitivity and its distribution enables one to address why, how and where such changes occur 1-4. When the heterogeneities are known, then one can model the scattered wave data. The sensitivity (Jacobian) derived from the modelled data can be used to predict how those heterogeneities in natural or engineered composite materials interact when subjected to external stimuli 1-3. When the heterogeneities are not known, the sensitivity is derived from the observed data. In this case, the sensitivity is defined as a change (gradient) in the difference between the observed waveform and the waveform estimated based on a physical principle, as a result of changes in the assumed model 4-6. In this article, we address this latter sensitivity issue. This is important in many real-world problems that involve resolving and monitoring unknown heterogeneities. For example, the sensitivity derived from acoustic, electromagnetic, or seismic scattered waves is key to in-vivo medical imaging to identify tumors 7,8, in-situ seismological monitoring of geological materials (e.g., rock, soil, ice, fluid) 6,9,10, and health monitoring of civil engineering structures (e.g., metal and concrete) 11.

The conventional principle behind calculating wave sensitivity involves one source point, one observation point, and the Huygens' principle 4-6: an incident wave at a source point causes a disturbance in the material, a secondary wavefield is generated at a heterogeneity, and the total wavefield (incident and scattered waves) is measured at an observation point. The sensitivity is then calculated using two Helmholtz equations: one in which the incident wave propagates forward in time from the source point, and one in which scattered waves propagate backward in time from the observation point (adjoint). These two wavefields have identical arrival times at the location where scattered waves are generated (i.e., at the location of the heterogeneity).
The sensitivity of the scattered waves is calculated by analyzing the amount of correlation between the two wavefields 4-6.

In various disciplines, owing to the reduced footprint of sensors and the increased computational resources, measurements and data processing based on many spatial observation points have become common. For example, scattered waves are measured with spatially dense sampling, e.g., on the surface of the Earth 12,13 or along boreholes 14,15, at the surface of or inside the human body 16,17, and in civil engineering structures 16,18. These developments have led to spatially and temporally high-resolution imaging of materials across scales 12,15,19-21.

Using the conventional principle, the sensitivity from multiple sets of source-observation points is obtained by simply summing up the sensitivity from every single set 4-6, due to the linearity of the problem. Although this approach improves the sensitivity estimation, it does not fully exploit the interrelation among the scattered waves. As the conventional physical principle addresses material heterogeneity present along the wave propagation paths starting from a source and ending at an observation point, small material perturbation between the observation points does not matter, unless the heterogeneities around the source points and those present along the source-observation paths are sufficiently known. This limits resolving the temporal changes in many applications. For example, in geophysical monitoring, wave sources at the Earth's surface and buried receivers are often deployed to monitor stress and/or fluid in the underground 22-24. In addition to time-lapse changes in the target area, such monitoring data, however, also contain the effect of time-lapse changes around a source point, e.g., due to environmental effects (like rainfall), which can jeopardize entire time-lapse monitoring efforts 23.

In this article, we present a novel physical principle for calculating and localizing the sensitivity without having to know the heterogeneities around the source points and those

contributions from the reference receiver array and adding them result in the scattered waves (red arrows in Fig. 2b) which propagate backward from the hypothetical scatterer to the source point, leaving only the forward propagating wave travelling from Q to P (green arrow in Fig. 2b). In this way, by analyzing the amount of correlation between the forward and the backward propagating waves, a large sensitivity can be achieved at the true scatterer Q in a data-driven manner, without having to know the heterogeneity around the source point. This novel principle calculates the sensitivity which is highly localized at the location of the scatterer Q, without any knowledge of the complete heterogeneity distribution (Fig. 2c). In contrast, the sensitivity obtained from the conventional principle, using the same source-observation points, does not allow detecting the local scatterer Q (Fig. 2d). The same conclusion can be drawn even when we use all available data in the calculation (see Supplementary Fig. 1). When we assume that all heterogeneities but Q are perfectly known, the conventional sensitivity approaches the localized sensitivity that we estimate using the new principle (Fig. 2e).

We have tested this new sensitivity localization principle on field experimental data. The measurement geometry is same as in Fig.
1a, but here we have multiple observation points P located along another vertical line (Fig. 3a): in total we observe the wavefield using two vertical arrays (left array, LA, and right array, RA). In the field test, this corresponds to measurements in two vertical boreholes.

For the sake of clarity, here we need to distinguish between extrapolated and calculated waveforms. Extrapolated waveforms are obtained using the boundary integral representation that we have presented above and assuming homogeneity (constant acoustic velocity). The calculated waveforms, on the other hand, are the ones obtained through numerical (finite-difference, FD) computation, also assuming homogeneity, where the source wavelet is estimated by deconvolution of the recorded waveform 31 with a waveform that is calculated assuming the same homogeneity and an impulsive source at S. Therefore, the difference in waveforms between the observation (black lines in Fig. 3b) and the calculation or extrapolation (green and red lines in Fig. 3b) indicates the deviation from homogeneity. The waveforms calculated using the conventional approach by the FD method (green lines in Fig. 3b) assume a globally homogeneous material. On the other hand, those using the boundary integral representation (red lines in Fig. 3b) assume local homogeneity between LA and RA, and heterogeneities around the source point (gray-shaded area in Fig. 3a) are accounted for in a data-driven manner. As a result, their waveforms vary over the receiver location (red lines in Fig. 3b), which implies an improved sensitivity to the local heterogeneity.

The sensitivity localization is evident in Fig. 4a. The conventional sensitivity shows large values around the source point S. Further, the conventional sensitivity varies smoothly in the subsurface, which implies small correlation between the incident and the scattered waves (averaging out of the contribution of different local scatterers). This is due to the fact that in the conventional approach, the difference waveforms have a complex nature as they include more scatterers (see black and green lines in Fig. 3b). In contrast, the localized sensitivity, derived from the new principle found in this research, reveals a very detailed structure between LA and RA (Fig. 4a). The sensitivity indicates the amount of velocity perturbation/changes with respect to homogeneity, assuming Born scattering 32. A comparison with the heterogeneity directly observed at LA (Fig. 4b) shows that the localized sensitivity detects a much finer variation in heterogeneity than the conventional sensitivity at depths greater than 80 m. The novel principle exploits information in the observed data in a completely different manner than the conventional principle. As a result, the conventional sensitivity cannot achieve comparably good results even using all available data, including data from the reference receiver array (Supplementary Fig. 2).

Localized sensitivity: quantitative estimation of heterogeneity

The localized sensitivity can be exploited to resolve quantitatively the material heterogeneity. An inversion scheme can be formulated to estimate the acoustic velocity distribution by minimizing the difference between the observed and the calculated waveforms at the observation point.
The localized sensitivity can navigate iteratively toward a best-fit model using nonlinear inversion (see "Methods") without knowledge of the heterogeneity around the source points.

Using the same geometry as in Fig. 3a, multiple sources were used sequentially in the field to generate pressure waves to the right of RA and to the left of LA (Fig. 5a) in order to illuminate the medium from various directions. The reference receiver array, the observation point (P), and the zone which does not contribute to calculating the localized sensitivity are appropriately defined depending on the source location (Fig. 5a). In order to verify a resolved heterogeneity, we additionally perform independent waveform measurements (ground-truthing) using downhole sources (Fig. 5b) and apply the conventional waveform inversion (Supplementary Note 2).

Waveform inversion estimates a velocity model starting from an initial guess 4-6. We perform standard traveltime tomography to obtain the starting model (Fig. 6a). Waveforms around the first-arriving events and a frequency component similar to that in the independent measurements using downhole sources are analyzed (Supplementary Notes 3 and 4). Figure 6b shows the estimated velocity structure using the localized sensitivity derived from the novel principle involving the boundary integral representation. Figure 6c shows the result of waveform inversion where additional downhole sources have been placed in RA (ground-truthing). Figure 6d shows in detail a comparison between the different velocity models. The estimated velocity using the localized sensitivity is strikingly close to the one obtained from independent waveform inversion using downhole sources and also to acoustic well log data at RA, especially at depths greater than 80 m where the raypath coverage is good (Fig. 6d).

We delve further into this concept through performing realistic synthetic monitoring tests. Although this novel principle can be useful in high-resolution monitoring in a wide variety of fields, e.g., medical sciences, non-destructive material testing, and civil engineering, in our synthetic test we consider geoscientific applications where monitoring is necessary while injecting fluid into the subsurface using boreholes. For example, recycled water is injected and stored in the aquifer for water resource management 34,35, or treated water is injected in order to produce energy in geothermal fields or to store carbon dioxide in the subsurface 36. In all these applications, detecting subsurface changes due to the replacement of fluid and changes in the pore pressure is crucial 34,37,38. Monitoring using sensors located in the boreholes is generally performed for this purpose due to the sensitivity of downhole sensors to changes at the target depths 22-24. Source points are located at the surface, as boreholes are generally inaccessible during the operation 35. Therefore, we also consider the observation points located in boreholes and the source points at the surface (Fig. 7a), similar to the field experiments discussed earlier in this article.

To generate realistic synthetic data, we assume a random velocity distribution with a mean velocity of 2.0 km/s (Fig. 7a). The data contain source location errors and errors due to temporal changes occurring outside the target area (the dashed rectangle in Fig. 7a).
The target area is located at 100 m depth, where the velocity decreases by 5% with respect to the baseline measurement due to an increase in the pore pressure 39. The topmost 6 meters are modeled as a vadose zone having a random velocity distribution with a mean value of 1.0 km/s (Supplementary Fig. 7). Additionally, the structure of the vadose zone is completely different between the baseline and the monitor surveys (Supplementary Fig. 7), representing a possible drastic change in seismic velocity in this zone due to seasonal change in water saturation 23,40. Source-receiver geometry and frequency components are quite similar to those in the field experiments discussed earlier (Supplementary Note 6), except that the source location in the monitor survey contains random error of up to 4 m (Supplementary Fig. 7).

We first look at the result of imaging the inter-borehole velocity heterogeneities in the baseline measurement. Here we consider two different scenarios for the prior information of the non-target zone (outside the two boreholes) to build an initial velocity model for waveform inversion. Assuming the same initial velocity model for depths greater than 16 m (Fig. 7b), we consider a situation where the correct average velocity and thickness of the vadose zone are known (Fig. 7c), and a case where we have poor knowledge of them (Fig. 7d). These two different initial velocity models are used to estimate the heterogeneities using the conventional sensitivity (Figs. 7e,f) and the localized sensitivity (Figs. 7g,h), respectively. As the recorded waveforms contain information on the structure present along the wave propagation path connecting the surface source and the downhole receiver, the waveform inversion using the conventional sensitivity estimates the heterogeneities not only in between the boreholes but also outside, i.e., those structures to the left of LA and to the right of RA (Figs. 7e,f), which are not of interest. More critically, the estimation of the velocity structure in between the boreholes is influenced by the accuracy of the prior information of structures outside the two boreholes, contributing to large uncertainties in the estimated inter-borehole heterogeneities (Figs. 7i,j,k). On the contrary, the new principle presented in this research addresses the localized sensitivity and, therefore, directly provides the inter-borehole structure (Figs. 7g,h), which is minimally influenced by the accuracy of the prior information of the non-target zone (Fig. 7k).

In order to achieve accurate results using the conventional sensitivity, it is crucial to account for the propagation effects outside the target zone (gray-shaded area in Figs. 7g,h) by independently obtaining good prior information. Alternatively, one can carefully design a multi-scale inversion scheme sequentially utilizing data from lower to higher frequencies in order to avoid gaps in the wavenumber information 6. However, this is not a trivial task due to the difficulty in acquiring low-frequency data using controlled sources 6 and because each frequency component generally has a different signal-to-noise ratio. The localized sensitivity is free from uncertainties associated with these fundamental limitations, because the propagation effects outside the target zone are accounted for in a data-driven manner.
Next we concentrate on the monitoring of time-lapse changes in the target zone, which is located around 100 m depth (dashed rectangle in Fig. 7a). The results are shown in Fig. 8. The new principle estimates the temporal changes at the target depth much better than the conventional approach (Figs. 8b,c). The conventional approach is sensitive to the source location errors in the case where accurate prior information of the vadose zone is available (Supplementary Fig. 8). Generally, the conventional approach requires accurate prior knowledge of the vadose zone (Fig. 7). Any inaccuracy in this prior knowledge results in a significant loss of accuracy in the estimated time-lapse changes at the target zone when using the conventional approach (Figs. 8c, 8e). On the contrary, the extremely high sensitivity of the new approach to the inter-well structures allows high-resolution estimation of the velocity changes, which is nearly independent of the presence of any source location error and/or inaccuracy in the prior information of the vadose zone (Figs. 8b, 8d, Supplementary Fig. 8).

In this article, we present a novel principle to localize the sensitivity of scattered waves to medium heterogeneities, which otherwise remain hidden when using the conventional principle for sensitivity estimation. Earlier studies on Green's function retrieval 12,41-47, which has been found useful in a variety of disciplines, e.g., medical diagnostics 44, seismology 12, exploration geophysics 42, and material testing 43, have tackled a similar problem from a different point of view. In those studies, the inter-receiver Green's function (impulse response) is estimated by effectively removing wave propagation paths from a source point using correlation or convolution 45. Although there is a good similarity between Green's function retrieval and the concept presented here, there are also notable differences. First, in contrast with Green's function retrieval using crosscorrelation 45, our boundary integral representation can be applied to lossy media. This is a major difference. Furthermore, our primary purpose is to directly obtain the sensitivity using the adjoint of the boundary integral representation and, therefore, the associated Green's functions are by-products. This enables us to tackle the problem from a completely different point of view. We have used the Dirichlet boundary condition in the Green's functions in a rather unconventional manner (see "Methods"). This makes it possible to relax the critical assumption of one-way wavefield propagation in Green's function retrieval using convolution 46,47 and that of multi-component measurements or single-component measurements with approximations 45,48. The assumption of a one-way wavefield and/or the approximations due to the conventional boundary condition are otherwise necessary when the primary purpose is to retrieve Green's functions, which will require further processing.

We have formulated the new principle as a 2D problem. We have shown in this article that the assumption of 2D wave propagation is effective for field data. Also, the geometry of the reference receiver array and the observation point can be arbitrary.
In this regard, a 10 similar concept, but using a conventional integral representation for Green's function 11 retrieval, was proposed earlier for reflected waves where the reference receiver array and the 12 observation points are co-located in a horizontal borehole 49 . Also, this newly found principle 13 can be applied to seismological monitoring using surface-waves and 2D seismometer arrays 14 because single-mode surface waves in 3D elastic media can be represented by 2D wave 15 propagation at the surface 50 . The independence of the estimated localized sensitivity from 16 source locations and heterogeneities around the source points is attractive for ambient noise 17 tomography, where the limitation due to uneven distribution of noise sources and due to 18 heterogeneities outside the target area is especially detrimental to imaging and monitoring 51 . 19 The novel principle can also be extended to 3D wave propagation. In that case, one needs to 20 measure waves at a reference receiver array located over a 2D surface. 21 The novel principle provides a unique opportunity in case the wave source does not 22 illuminate a medium from a location which is close to the target area, but multiple 23 observation points are used to enhance the localization of the sensitivity without the need to 24 know precisely the structures outside the target area. We have illustrated that this principle is 25 especially useful in monitoring, where the subsurface is illuminated by distant sources and 1 the response is observed by embedded sensors. In other disciplines, this may necessitate a 2 new data-acquisition design. In this regard, the development of fiber optic sensing has lately 3 demonstrated that existing telecommunication networks can turn into spatially dense, 4 subsurface acoustic sensors without a need of additional sensor installation 13,52 . The novel 5 principle can, therefore, be powerful in future seismic monitoring in areas with difficult 6 access, e.g., in urban or underwater environments. We anticipate that the novel principle will 7 open up possibilities for new experiments and measurement techniques where accurate and 8 efficient monitoring is of high importance but the conventional approaches using scattered 9 waves are hindered by the insensitivity to the target area due to limitations in data-acquisition 10 geometry or a poor knowledge about changes occurring outside the target zone. 11 12 Boundary integral representation 14 The following boundary integral representation is used to calculate the waveform at 15 the observation point at P due to the source point at S using interferences of the observed 16 waveforms at the reference receiver array: 17 , (1) 18 where all properties are in the space-frequency domain,  is the angular frequency, j the 19 imaginary unit,  the density, ds the line element, and ni the outward pointing normal vector 20 on the arbitrary integral path D, which is the location of a reference receiver array. Equation 21 (1) indicates that the multiplication of the observed waveform, p(x, S) where x  D, at the 22 receiver in the reference array and the spatial derivative of the Green's function,iG(x, P), in 23 the i direction due to a point source at P, and collecting its contributions from all receivers 24 calculate the observed waveform at P. 
We derive equation (1) from the general wavefield representation 53 by defining the Green's function such that the velocity structure is the same as that of the observed data, but with the Dirichlet (sound-soft) boundary condition at D. This additional boundary condition correctly handles outward propagating waves at the reference receiver array by canceling non-physical wave arrivals while evaluating the integral. Furthermore, it requires only a single-component wavefield to be measured (e.g., the pressure field instead of pressure and particle-velocity fields), and no approximation due to single-component measurements is necessary. This contrasts with other similar techniques of wavefield retrieval 46,47. Furthermore, the boundary condition enables us to use model information only inside the reference receiver array, because waves in impulse responses (Green's functions) do not radiate outward from the boundary.

The array shape in equation (1) is arbitrary. We consider a special case of a vertical line (Fig. 1a). Suppose that a source S is located to the right of the reference receiver array, and the observation point P is located to the left of the reference receiver array. In this configuration, equation (1) can be written in a simplified form, where pobs is the observed waveform at the reference receiver array, G is the Green's function with the Dirichlet boundary condition at the horizontal location of the reference receiver array (Fig. 1a), and we used the relation (n1, n2) = (1, 0).

The localized sensitivity using the adjoint of the boundary integral representation

A sensitivity of the scattered wave is defined as the change of a selected feature due to a model perturbation. In this study, we consider the difference between calculated and observed waveforms, where variables with subscript S indicate that they depend on the source location, those with subscript P indicate that they depend on the receiver location or the observation point P, and the frequency dependence of all variables is omitted for brevity. In equation (4), a column vector gP is a solution to the wave equation 31. The real part is taken in the right-hand-side term of equation (7) because E(m) is real 30, and * denotes the complex conjugation. These equations provide an algorithm to calculate the localized sensitivity. In order to interpret the adjoint-state equations physically, we rearrange them such that the sensitivity is a crosscorrelation of two wavefields b and f, where the term ∂A/∂m compensates for the scattering radiation pattern due to different parameterization, and b is a row vector representing the backward propagating wavefield. The backward propagating wavefield b is a solution to the conjugate (time-reversed) wave equation where the source term represents the scattered waves (i.e., the difference between calculated and observed waveforms) crosscorrelated with the observed waves at the reference receiver array. Note that the modeling operator A in equation (12) is the same as in equation (5), where the Dirichlet boundary condition at D is considered.

Field experiment

The test site is made of sedimentary layers. Two instrumented vertical boreholes with 50 m horizontal separation are available.
Hydrophone strings, installed in the 28 m to 170 m depth range with 2 m separation between two adjacent hydrophones, are used to measure the pressure wavefield due to a surface source, simultaneously in the two boreholes. We use a small amount (6 g) of explosives for the surface sources. The measurement-depth interval is split into four sections. The receiver arrays (hydrophone strings) are installed simultaneously at one of the sections in each borehole; they measure the pressure wavefield due to the surface source. In order to cover the measurement-depth interval, we repeat this procedure four times at the fixed source location, changing the depth of the receiver arrays. The total recording length is 0.4 s with a sampling interval of 0.25 ms.

Waveform inversion

We use the quasi-Newton l-BFGS method 54,55 in estimating the velocity model by waveform inversion. The model parameter is iteratively updated using the following formula, where Qk is the approximate Hessian inverse computed using previous values of the gradient, and αk is the step length in the line search in the descent direction.

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Scattered waves are observed at a reference receiver array (pressure sensors). These waves are used in the representation to calculate the response at the observation point.
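As a rough, self-contained illustration of how the boundary integral representation and the iterative model update described in these Methods could be discretized, a minimal numerical sketch follows. The array geometry, frequency, velocity, density, and the use of a free-space 2D Green's function are illustrative assumptions; the actual method uses a Green's function with a Dirichlet condition on the reference array and an l-BFGS approximation of the inverse Hessian for the update.

```python
import numpy as np
from scipy.special import hankel1

# Minimal sketch of the boundary integral representation discretized over a
# vertical reference receiver array. Geometry, frequency, velocity, density,
# and the free-space 2D Green's function are illustrative assumptions; the
# paper's Green's function carries a Dirichlet condition on the array.

rho = 1000.0                 # density [kg/m^3] (assumed)
c = 2000.0                   # acoustic velocity [m/s] (assumed)
f = 100.0                    # frequency [Hz] (assumed)
omega = 2.0 * np.pi * f
k = omega / c

# Vertical reference array D at x = 0 m, receivers every 2 m in depth
z_rx = np.arange(28.0, 170.0, 2.0)
x_rx = np.zeros_like(z_rx)
ds = 2.0                     # line element [m]

x_P = np.array([-25.0, 100.0])   # observation point P, left of the array

def dG_dx1(x_pt, z_pt, x_obs):
    """x1-derivative of the free-space 2D Green's function (i/4) H0^(1)(kr)."""
    dx = x_pt - x_obs[0]
    dz = z_pt - x_obs[1]
    r = np.sqrt(dx**2 + dz**2)
    return (1j / 4.0) * (-k) * hankel1(1, k * r) * dx / r

# Placeholder for p_obs(x, S): observed waveform at the array for one
# frequency, here synthesized from a point source to the right of the array.
x_S = np.array([60.0, 0.0])
r_S = np.sqrt((x_rx - x_S[0])**2 + (z_rx - x_S[1])**2)
p_obs = (1j / 4.0) * hankel1(0, k * r_S)

# Discretized integral: sum the contributions of all receivers (n = (1, 0)).
p_at_P = np.sum((-1.0 / (1j * omega * rho)) * p_obs
                * dG_dx1(x_rx, z_rx, x_P)) * ds
print("Extrapolated waveform at P (one frequency):", p_at_P)

# Schematic of the model update quoted under "Waveform inversion":
#   m_{k+1} = m_k - alpha_k * Q_k @ grad_E(m_k)
# with Q_k the l-BFGS approximation of the inverse Hessian and alpha_k the
# line-search step length.
```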
6,378.4
2021-03-30T00:00:00.000
[ "Geology" ]
Diplomatic Assistance: Can Helminth-Modulated Macrophages Act as Treatment for Inflammatory Disease? Helminths have evolved numerous pathways to prevent their expulsion or elimination from the host to ensure long-term survival. During infection, they target numerous host cells, including macrophages, to induce an alternatively activated phenotype, which aids elimination of infection, tissue repair, and wound healing. Multiple animal-based studies have demonstrated a significant reduction or complete reversal of disease by helminth infection, treatment with helminth products, or helminth-modulated macrophages in models of allergy, autoimmunity, and sepsis. Experimental studies of macrophage and helminth therapies are being translated into clinical benefits for patients undergoing transplantation and those with multiple sclerosis. Thus, helminths or helminth-modulated macrophages present great possibilities as therapeutic applications for inflammatory diseases in humans. Macrophage-based helminth therapies and the underlying mechanisms of their therapeutic or curative effects represent an under-researched area with the potential to open new avenues of treatment. This review explores the application of helminth-modulated macrophages as a new therapy for inflammatory diseases. Introduction Regulation of macrophage activity and function is essential to balance tissue homeostasis, driving or resolving inflammation in most disease processes. The inflammatory or anti-inflammatory activities of macrophages are shaped in a tissue-and signal-specific manner, enabling macrophages to induce various activation patterns and develop specific functional programs (Fig 1) [1,2]. A recent study in airway hyperreactivity has demonstrated that local macrophages acquire an alternatively activated phenotype (AAM) with regulatory aspects that prevent the development of pathology by inducing antigen-specific CD4 + FoxP3 + T regulatory (Treg) cells [3]. In a skin allergy model, monocytes that are recruited to the site of inflammation express high levels of the typical AAM markers arginase-1 (arg-1), chitinase-like proteins (CLP), and programmed death-ligand (PD-L)2 and reduce inflammation [4]. Hence, the anti-inflammatory and immunoregulatory functions of macrophages could be harnessed for inflammatory disorders, implying that studies to understand their maintenance and stability in vivo are essential. Helminths typically induce T helper (Th)2 responses but have also developed multiple ways to regulate the host immune system to ensure their long-term survival in the host. This regulation can affect bystander allergic or autoimmune diseases, and it has become clear that the presence or absence of helminths in humans has a major influence on the prevalence of such diseases. According to the hygiene hypothesis, improvements in public health have reduced incidences of bacterial, viral, and parasitic diseases, which correlate with an increase in chronic autoimmune inflammatory and allergic disorders. Epidemiological studies demonstrate the inverse relationship between helminth infections and inflammatory bowel disease (IBD) [5] or allergies [6,7]. Multiple experimental studies in mice recapitulate this negative correlation and show disease improvement with concurrent helminth infections, allowing underlying mechanisms to be unravelled. 
Abundant evidence demonstrates the potential of immunosuppressive, macrophage-targeted therapies in the treatment of renal disease, diabetes, inflammatory diseases, and transplantation rejection. In a chronic inflammatory renal disease model, macrophages polarized in vitro with interleukin (IL)-4 and IL-13 ameliorate disease severity and injury after transfer into mice with the disease [11]. In diabetic mice, transfer of macrophages treated with a combination of IL-4, IL-10, and transforming growth factor (TGF)-β protects up to 80% from the condition [12]. M2 macrophages reduce proinflammatory Th1 and Th17 responses and disease severity in mice with experimental autoimmune encephalomyelitis (EAE), a model of multiple sclerosis (MS) [13]. Similarly, M2 macrophages can protect from septic shock in a model of cecal ligation and puncture [14]. These studies show great promise for the application of macrophages in chronic diseases. Helminth-Modulated Macrophages Macrophages are key innate immune cells that encounter helminths upon initial infection. The macrophage immunoregulatory phenotypes that develop during helminth infection divert anti-helminth immunity to induce host tolerance, parasite survival, and repair of any tissue injury caused by larvae or eggs [15,16]. Defined helminth products can also act on macrophages to induce specific regulatory phenotypes; great efforts have been made to identify helminth products with therapeutic potential [27]. A clear example of this is the filarial molecule ES-62 from Acanthocheilonema viteae, which targets macrophages to repress IL-12 in cells exposed to lipopolysaccharide (LPS) and interferon (IFN)-γ [28,29]. A cysteine protease inhibitor from A. viteae (AvCystatin) is recognised and taken up by macrophages to induce phosphorylation of the mitogen-activated protein kinase signalling pathways ERK1/2 and p38, resulting in IL-10 production [30]. These macrophages also express arg-1, PD-L1, and PD-L2, promote IL-10 production in CD4 + T cells in a cell contact-dependent manner, and protect against allergy and colitis upon adoptive transfer [9]. In summary, helminths modulate macrophages to develop distinct phenotypes and functions that reduce or prevent host immunopathology by inducing regulatory cell populations or diverting proinflammatory effector cells (Fig 2). This cell population may be taken advantage of to develop new therapeutic agents and treat unrelated inflammatory diseases (Box 1). Application of Helminth-Modulated Macrophages in Autoimmune Diseases It is important to establish whether, once differentiated, the regulatory phenotype of helminthmodulated macrophages is stable enough to treat chronic diseases. We aim to instigate a discussion by reviewing current data on these macrophages in the treatment of inflammatory diseases (Fig 3). Helminths and macrophages in allergy and asthma Allergies are driven by dysregulated Th2 responses, predicting that helminth infection might exacerbate these inflammatory disorders. Nevertheless, the strong regulatory mechanisms employed by helminths suppress Th1-and Th2-mediated diseases. We have previously reviewed helminth infections that mediate protection in allergy-related experimental animal models [31]; we describe here those that illustrate macrophages as potential therapeutic targets for these and other diseases. Lung macrophages are key players in asthma and develop a defined activation status that modulates adaptive immune responses by local T cells. 
Despite the fact that lung macrophages are involved in fibrogenesis in asthma [32], it has been shown that tissue-resident macrophages can induce FoxP3+ Treg cells [3]. In a murine model of ovalbumin (OVA)-induced airway hyperreactivity, treatment with AvCystatin reduced eosinophil lung recruitment and production of OVA-specific immunoglobulin (Ig)E, total IgE, and allergen-specific IL-4, thereby diminishing disease symptoms. Depleting IL-10 or macrophages reversed these antiallergic effects, implicating the therapeutic potential of macrophages in this model [33,34]. In fact, transfer of AvCystatin-treated macrophages to mice with airway hyperreactivity suppressed clinical disease symptoms [9]. A recent clinical trial focused on helminth therapy in rhinitis [35][36][37], in which patients were treated with Trichuris suis ova. While an antiparasitic immune response developed in these patients, neither a redirection of allergen-specific immune responses nor a therapeutic effect was achieved. Similarly, experimental hookworm infection did not lead to improved outcomes in a clinical trial with patients suffering from asthma [38]. However, allergic mice treated with excretory/secretory (E/S) products from T. suis had reduced allergic airway hyperreactivity after challenge [39], which might be a reflection of the route of application or the amount of helminth-derived immunomodulatory molecules available in this setting. Thus, promising preclinical data need to be translated to show definitive clinical benefits for patients with allergic disorders.

Box 1. Characteristics of Selected Inflammatory Diseases and Widely Used Animal Models
Allergy: Strong Th2 responses in mucosal tissues or skin to environmental and food antigens involving eosinophils, mast cells, and IgE. Animal model: Allergic airway inflammation using sensitization and challenge with model allergens (ovalbumin).
Inflammatory Bowel Disease (IBD): Autoimmune disease. Ulcerative colitis is characterized by a dominant CD4+ Th1 response of the colon. Crohn's Disease can occur through the entire length of the gastrointestinal tract and is typically associated with an excess of Th2 cytokines. Animal model: Spontaneous development of colitis in IL-10-deficient mice or in T and B cell-deficient mice upon transfer of antigen-experienced T cells. Chemical-induced colitis is based on disruption of the intestinal barrier and a T cell response against autologous proteins.
Diabetes: Type 1 diabetes (T1D) occurs early in life and is immunologically driven, primarily by a strong CD8+ T cell response that destroys pancreatic β cells. Type 2 diabetes (T2D) is associated with lifestyle and nutrition factors. Animal model: Nonobese diabetic (NOD) mice develop symptoms of T1D spontaneously at about 12 weeks of age.
Multiple Sclerosis (MS): Complex demyelinating inflammatory disorder of the central nervous system involving humoral and cellular (Th1 and Th17) immune responses. Animal model: Experimental autoimmune encephalomyelitis (EAE) is induced by injection of myelin-oligodendrocyte glycoprotein and adjuvants and mirrors major aspects of the complex pathophysiology of MS.
Rheumatoid Arthritis (RA): Autoimmune disease causing inflammation and destruction of the joints. It is a systemic disease that exhibits extra-articular manifestations as well. Animal model: Collagen-induced arthritis (CIA); tissue injection of collagen together with complete Freund's adjuvant in susceptible mouse strains.
Sepsis: A serious medical condition characterized by dysregulated systemic inflammatory responses towards microbial stimuli followed by immunosuppression.
Animal model: Bolus injection of Toll-like receptor agonists or cecal ligation and puncture, which mimics the polymicrobial sepsis observed in human disease. promising preclinical data need to be translated to show definitive clinical benefits for patients with allergic disorders. Helminths and macrophages in inflammatory bowel disease and coeliac disease Distortion of the intestinal barrier and immune response to intestinal bacteria can lead to IBD, including ulcerative colitis and Crohn's disease. Tissue macrophages are present in high numbers in the intestine and present a good target for helminth therapy because of their multiple activation states. In healthy individuals, lamina propria macrophages maintain intestinal homeostasis by inducing Tregs [40,41], while in active IBD, macrophages contribute to pathology by expressing multiple proinflammatory cytokines [42]. M1 macrophages invading the intestinal tissue drive the disruption of the epithelial barrier through dysregulation of tight junction proteins and epithelial apoptosis [43]. In contrast, patients with inactive Crohn's disease have higher levels of M2 macrophages [44], which are also important in inducing protection against IBD in mice [45]. Thus, helminth-induced M2 macrophages and Tregs may contribute to protection against IBD. Mice infected with Hymenolepis diminuta [46] and treated with the adult worm extract [47] or treated with IL-4/IL-13-differentiated M2 macrophages [44] have significantly reduced pathology in experimentally induced colitis; this protective effect is abrogated when IL-10 [46] or macrophages [44] are depleted. Murine infection with S. mansoni prevents colitis in a macrophage-dependent but IL-4-and IL-13-independent manner, representing another population of suppressive macrophages [48]. Intriguingly, experimental hookworm infection combined with gluten microchallenge induces tolerance in patients with coeliac disease, an autoimmune disease resulting from gluten intolerance [49]. The contribution of macrophages was not evaluated in this setting. Multiple clinical trials are investigating the use of T. suis ova therapy in IBD and have shown moderate success (see Fig 3) [50,51]. The safety of this treatment, threatened by the colonization and invasion of the host by T. suis, has been much debated and requires treated patients to be monitored closely [52][53][54][55]. An alternative approach would be to administer characterized helminth products such as AvCystatin or transgenic probiotic bacteria expressing helminth immunomodulators, which lead to diminished disease scores in murine IBD models by reducing numbers of inflammatory macrophages [33,56]. Nevertheless, there are currently no clinical trials addressing the role of helminth-modulated macrophages in protection against IBD. Future studies should translate the encouraging experimental evidence into clinical benefits for patients. Helminths and macrophages in diabetes Both environmental and genetic factors play a role in the development of diabetes, and incidences of this condition have increased dramatically in the past 30 years in developed and newly industrialized countries [57]. Studies have demonstrated an inverse correlation between diabetes (type 1 [T1D] and type 2 [T2D]) and helminth infections [58]. Helminth products have also been demonstrated to reduce incidences of diabetes in animal models [58][59][60][61]. Early studies suggested that macrophages could exacerbate T1D, in which macrophage depletion ameliorated disease [62]. 
As T1D is a Th1-driven disease, it is likely that the macrophages involved are classically activated, which could be redirected by a helminth infection. Nonobese diabetic (NOD) mice infected with Heligmosomoides polygyrus have augmented numbers of Tregs and Th2 responses as well as an infiltration of M2 macrophages and increased IL-10 expression in the pancreatic lymph nodes [63]. Injection of schistosome egg antigen into NOD mice induces arg-1 and RELM-α expression in macrophages and modulates T cell responses [64]. In another diabetes model, infection with Taenia crassiceps attenuates disease in two different mouse strains and is accompanied by high levels of IL-4 and M2 macrophages [65]. To date, there are no clinical trials examining the application of helminths or macrophages in T1D, indicating an open area for future research. Helminths and macrophages in multiple sclerosis MS is an inflammatory autoimmune disorder driven by dysregulated Th1 and Th17 responses, resulting in a demyelinating disease that affects the central nervous system (CNS). Environmental and genetic factors may be involved in disease onset [66]. As MS progresses, acute inflammatory lesions develop when the integrity of the blood-brain barrier is disturbed, with CD4 + Th1, Th17 cells, and CD8 + cells becoming activated by mature dendritic cells [67].Various studies have demonstrated that helminth-infected patients with MS have fewer relapses and inflammatory changes than uninfected patients, while removal of helminth infection exacerbates MS disease [68][69][70]. Different helminth species have been studied for their ability to modulate unwanted inflammatory responses in MS [71]. Mice with EAE immunised with S. mansoni eggs have lower disease severity; clinical scores and cellular infiltrates are reduced, and CD11b + macrophages isolated from the CNS show decreased IL-12 expression [72]. Schistosomal egg antigen and a single schistosome glycan were also effective in protecting mice against EAE [73,74]. The importance of M2 macrophages that produce IL-10 and protect mice from developing EAE has also been described [75]. While no clinical trials currently exist that use macrophages to treat MS, trials using T. suis ova (TSO) or hookworm larvae are underway or already present results from a small cohort of patients (Fig 3). While both studies show that TSO is safe, the therapeutic effect is ambiguous: one study reports a decrease in the number of CNS lesions observed by magnetic resonance imaging [76] while a comparable study did not detect clinical improvement [77]. Helminths and macrophages in rheumatoid arthritis Multiple experimental helminth-based treatment strategies have been tested in rheumatoid arthritis (RA), a chronic inflammatory disorder [78]. While the exact disease cause is unknown, dysregulated immune responses are important, as high levels of tumour necrosis factor (TNF) and IL-1β have been detected in inflamed synovial membranes. T cells from synovial tissue express Th1-and Th17-associated cytokines and activate neighbouring macrophages that release large amounts of TNF and IL-1β. These and other macrophage-derived proinflammatory cytokines drive much of the inflammation and implicate macrophages as key players in disease [79]. Current treatments include nonsteroidal anti-inflammatory drugs, which can have potentially detrimental long-term side effects [80]. ES-62 shows great potential to treat dysregulated inflammatory disorders [81]. 
ES-62 prevents collagen-induced arthritis when injected into mice by downregulating IL-17 and MyD88 [82] and restoring levels of IL-10-producing B cells and reducing intra-articular plasma cell infiltration [83]. Introduction of ES-62 in a coculture of T cells from patients with RA and macrophage cell lines significantly reduced macrophage TNF expression compared with ES-62-untreated cells [84]. A synthetic analogue of ES-62 prevented experimental arthritis and inhibited macrophage-derived IL-1β [85]. Numerous therapies for RA are in preclinical or clinical trials, which aim to neutralise or inhibit many macrophage-related disease-driving mechanisms [86]. However, as yet, only one clinical trial assesses helminth infection as a potential therapy for RA (Fig 3). Helminths and macrophages in systemic inflammation Recently, it was shown that helminths and their products can decrease the prevalence of sepsis and improve the outcome of systemic bacterial infection and inflammation [87][88][89][90][91]. Epidemiological data demonstrated a lower prevalence of filarial infection in patients with sepsis than in healthy individuals, suggesting that preexisting helminth infection prevents sepsis development [87]. Fundamental evidence demonstrating that helminth-modulated macrophages improve sepsis came from a murine experimental filarial infection, in which gene expression profiles of macrophages modulated by Litosomosoides sigmodontis illustrated decreased Toll-like receptor (TLR) responsiveness. Transfer of macrophages from L. sigmodontis-infected mice into naïve recipients improved sepsis outcome in a TLR2-dependent but AAM-independent manner [89]. Macrophages from patients with sepsis expressed reduced sepsis-inducing inflammatory cytokines after treatment with Trichinella spiralis E/S products [88]. Similarly, a T. spiralis cathepsin B-like protein ameliorates intestinal ischemia/reperfusion injury, a model for systemic inflammation, by promoting a switch from M1 to M2 macrophages [91]. Furthermore, a single helminth molecule from Fasciola hepatica (fatty acid-binding protein; FABP or Fh12) can suppress serum inflammatory cytokines in a septic shock model. This was accompanied by suppression of proinflammatory cytokines and nitric oxide synthase-2 (NOS2) in macrophages [90] and demonstrates the potential of macrophages in this disease setting. Macrophages in Cell Therapy: A Potential Treatment Option For macrophage-based therapies, one must consider the possibility of phenotype reversion after transfer. The phenotype and function of a particular macrophage subset develops from the combined integration of tissue-specific and environmental cues, such as inflammation or infection, which can lead to epigenetic imprinting [19,92]; however, the stability of the therapeutic macrophage phenotype must be determined. Murine studies have shown that transferred macrophages can block pathology independently of the perturbed environment they encounter [9,12,13,45,75,89]. One particular macrophage subset can confer protection upon transfer in mice and humans. Murine macrophages stimulated with IFN-γ have significant anti-inflammatory characteristics, mitigating colitis and prolonging allograft survival [93,94]. Human macrophages stimulated with IFN-γ in vitro and administered to patients undergoing renal transplant significantly reduced the required dose of immunosuppressant drugs and improved transplanted kidney function [95]. 
These macrophages conferred immunosuppression on T cells, which was partly mediated by the indoleamine 2,3-dioxygenase and likely induced nutrient deficiencies in alloreactive T cells [94]. Although the mechanism of action of these macrophages is different to that of helminthinduced macrophages, it exemplifies how these powerful cells can redirect undesired immune responses in disease settings. The macrophage population that bears sufficient therapeutic function in a given environment must be carefully evaluated. Alongside macrophages, other immune cells are involved in helminth-derived immunomodulation. The application of one helminth-modulated cell population cannot represent the full spectrum of immunomodulation compared with a chronic helminth infection, which can induce changes in microbiota [38,96], mediating a therapeutic effect [97], but it might be enough to reset the diseased environment to homeostasis. What Does the Future Hold for Helminth-Based Therapies? The studies discussed herein demonstrate the potential of helminth infections and, in particular, helminth-induced macrophages to treat inflammatory disorders; in some cases, clinical trials are already underway. However, the mode of application must be addressed to determine the safest and most effective route for patients. Is it best to treat the patient with a patent infection or with isolated stages (e.g., eggs)? Is it best to apply specific helminth-derived products (e.g., AvCystatin, ES-62, T. spiralis cathepsin B-like protein) or to stimulate in vitro and reinfuse a patient's own macrophages? Live infections provide a rapid path to clinical trials compared with identifying and characterising defined products. Nevertheless, live infections remain infectious, and can induce pathological consequences in the host, especially in immunocompromised individuals [98]. In contrast, defined products can be produced recombinantly in high quantities at relatively low costs. Defined products allow efficient site-directed and prolonged application, e.g., through the use of carriers like probiotic bacteria that colonize and release the molecules in targeted tissues [56]. Generating transgenic auxotrophic strains that release powerful helminth products will enable the use of such techniques without risking contamination of the environment. However, helminth products themselves may be immunogenic, and thus, a further therapeutic alternative is the synthesis of small-molecule analogues, as described for ES-62. New targets identified by large-scale technologies (proteomics, metabolomics, genomics) combined with bioinformatics aid the discovery of novel pathways and molecules that can translate helminth-or helminth product-derived immunomodulating strategies into efficient therapies [27]. The experimental models that illustrate the prospect of helminth-modulated, macrophagebased therapies provide hope that safe and effective treatments for humans are a viable option. The abilities of macrophages to regulate T and B cell function and cytokine production highlight this innate cell population as a powerful tool in therapy development. However, the stability of transferred macrophages must be established. The fact that clinical trials employing the regulatory effects of helminths or immune-suppressive macrophages are underway is extremely encouraging and indicates that research in this direction should be pursued.
4,631.2
2016-04-01T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Deep learning-based segmentation of breast masses using convolutional neural networks Automatic breast tumor segmentation based on convolutional neural networks (CNNs) is significant for the diagnosis and monitoring of breast cancers. CNNs have become an important method for early diagnosis of breast cancer and, thus, can help decrease the mortality rate. In order to assist medical professionals in breast cancer investigation a computerized system based on two encoder-decoder architectures for breast tumor segmentation has been developed. Two pre-trained DeepLabV3+ and U-Net models are proposed. The encoder generates a high-dimensional feature vector while the decoder analyses the low-resolution feature vector provided by the encoder and generates a semantic segmentation mask. Semantic segmentation based on deep learning techniques can overcome the limitations of traditional algorithms. To assess the efficiency of breast ultrasound image segmentation, we compare the segmentation results provided by CNNs against the Local Graph Cut technique (a semi-automatic segmentation method) in the Image Segmenter application. The output segmentation results have been evaluated by using the Dice similarity coefficient that compares the ground truth images provided by the specialists against the predicted segmentation results provided by the CNNs and Local Graph Cut algorithm. The proposed approach is validated on 780 breast ultrasonographic images of the BUSI public database of which 437 are benign and 210 are malignant. The BUSI database provides classification (benign or malignant) labels for ground truth in binary mask images. The average Dice scores computed between the ground truth images against CNNs were as follows: 0.9360 (malignant) and 0.9325 (benign) for the DeepLabV3+ architecture and of 0.6251 (malignant) and 0.6252 (benign) for the U-Net, respectively. When the segmentation results provided by CNNs were compared with the Local Graph Cut segmented images, the Dice scores were 0.9377 (malignant) and 0.9204 (benign) for DeepLabV3+ architecture and 0.6115 (malignant) and 0.6119 (benign) for U-Net, respectively. The results show that the DeepLabV3+ has significantly better segmentation performance and outperforms the U-Net network. Introduction Nowadays, the breast cancer is still the main reason of death among women.To detect possible breast tumors the examination is performed using different screening procedures.Among them, the mammography is a primary screening method for breast cancer, with a very good performance in small tumors and microcalcifications detection.However, it uses ionizing radiation and breast compression.However, the dense breast tissue can hide small tumors and thus the sensitivity of the mammography decreases.This is the main limitation that considerably reduces the sensitivity of the mammography technique [1].Another breast imaging method is ultrasound imaging.This technique does not use ionizing radiation, is more cost-effective and allows the detection of tumors in the case of dense breast tissue.To differentiate benign and solid breast lesions, the ultrasound technique evaluates various features such as: morphology, orientation, boundary of lesions and lesion size.As a diagnostic tool, breast ultrasound imaging (BUS) has a good performance in an early detection of cancer. 
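As a point of orientation only: the abstract above proposes pre-trained DeepLabV3+ and U-Net models built in a MATLAB workflow with an Xception-65 backbone. A minimal sketch of obtaining a closely related pre-trained segmentation network in an open-source framework is given below, using torchvision's DeepLabV3 (ResNet-50 backbone; note it lacks the "+" decoder refinement, and the single-channel head replacement and input sizes are illustrative assumptions, not the authors' configuration).

import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# Pre-trained DeepLabV3-style encoder-decoder (ResNet-50 backbone from torchvision).
model = deeplabv3_resnet50(weights="DEFAULT")
# Replace the final 1x1 classifier so the network outputs a single tumor-mask channel.
model.classifier[-1] = nn.Conv2d(256, 1, kernel_size=1)
model.eval()

# Grayscale ultrasound frames are repeated to 3 channels to match the backbone input.
image = torch.rand(1, 1, 256, 256).repeat(1, 3, 1, 1)
with torch.no_grad():
    mask_logits = model(image)["out"]          # (1, 1, 256, 256) raw mask scores
    mask = torch.sigmoid(mask_logits) > 0.5    # thresholded binary segmentation mask
print(mask.shape)

In practice the replaced head would first be fine-tuned on the BUSI images before the thresholded mask becomes meaningful; the snippet only illustrates the encoder-decoder interface.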
Various tools for suspicious lesions localization and segmentation (i.e., the boundaries of the lesion are outlined to differentiate it from the background tissue) were developed.Localization and segmentation of the tumor aims to separate it from the normal breast tissue.Correct segmentation of breast tumors is a necessary stage in the diagnostic process.To obtain quantitative data, automatic segmentation can be useful for medical radiologists in the analysis of breast cancer.The segmentation performance is assessed based on the ground truth images that are manually generated through boundaries delineation.Over the years, the first attempt was devoted to computer-aided diagnostic systems, which are important tools in assisting the medical imaging professionals [2,3].Nowadays, artificial intelligence (AI) is the leader in clinical practice, as it saves time and preforms tedious activities much faster, diminishes radiologist overload and helps less experiences practicians, in some cases [4][5][6][7][8][9][10][11][12][13][14][15][16][17].AI includes machine learning and deep learning as efficient computational tools for biomedical big data storage, analysis and understanding.In image processing, the main tool used by the deep learning is the convolutional neural network (CNN), with several models proposed in recent years.Generally, a CNN has input and output layers, along with the convolution, max pooling and fully connected layers.They enable the CNN to learn a huge number of abstract features. The proposed approach aims to find a deep learning network solution to automatically detect and segment breast cancer with high accuracy.To this end, our main contributions are as follows: (i) BUS images segmentation using the Local Graph Cut method from MatLab as a benchmark; (ii) BUS images segmentation using two Encoder-Decoder architectures, namely DeepLabV3+ and U-Net; (iii) the segmentation performance analysis using the Dice similarity coefficient computed between the ground truth images and the predicted segmentation results provided by the two CNNs and (iv) a comparison between the Local Graph Cut segmented images and the predicted segmentation results provided by the two CNNs. Related Works The development of AI research on breast ultrasound has tremendously increased.Many studies devoted to imaging classification, object detection, segmentation and synthetic imaging of breast lesions were published.Vakanski et al. 
[6] proposed a deep learning model for breast tumor segmentation in BUS images.This approach introduced blocks of attention into a U-Net architecture and learned feature representations that prioritize spatial regions with high levels of saliency.This deep learning model used a dataset of 510 images and a Dice similarity coefficient of 90.5% was reported.Two different encoder-decoder architectures for breast tumor segmentation, SegNet and U-Net were proposed in [7].The proposed model used the ratio 0.85/0.15for training / validation datasets.The U-Net architecture has returned the best qualitative and quantitative results.The SegNet architecture provided a mean intersection over union (IoU) of 68.88% and 76.14% for the U-Net architecture.Four semantic segmentation models based on CNNs along with AlexNet, U-Net, SegNet and DeepLabV3+ were analyzed in [8].Over 3000 BUS images were used for training and validation.The segmentation performance was quantified by F1-score and IoU.The best results were achieved by models based on SegNet and DeepLabV3+ with a F1-score > 0.90 and an IoU > 0.81.Tsochatzidis et al. [9] proposed a CNN approach to analyze the mammographic information for breast cancer diagnosis.An improved diagnostic performance was obtained for the proposed method using a CNN classifier.Xu et al. [10] used CNNs to segment the 3D BUS images and various metrics used to evaluate the segmentation performance.Their reported results indicated that the obtained segmentations can facilitate the breast cancer diagnosis.Singh et al. [11] proposed a deep learning method for breast tumor segmentation based on the texture features and the contextual dependencies.The dilated or atrous convolution allows capturing the spatial context (the position and size of the tumors).The model can examine various tumors of various shapes and sizes.They reported the Dice and IoU values of 93.76% and 88.82%, respectively. A combination between a graph CNN and a classical CNN in order to improve the detection of malignant lesions in breast mammograms was proposed in [12].The authors reported improved performance, i.e., a sensitivity of 96.20%, specificity of 96.00% and accuracy of 96.10% and concluded that the proposed method improves detection of malignant breast masses.Salama and Aly [13] proposed a new technique based on the following models: ResNet50, MobileNetV2, InceptionV3, VGG16 and DenseNet-121 to segment the area of interest in mammographic images.Three mammographic datasets were used to evaluate the proposed models: MIAS, DDSM and CBU-DDSM.The best classification performance was reported for the InceptionV3 and modified U-Net working in the DDSM mammography dataset.Luo et al. 
[14] proposed a new segmentation framework based on CAD technology and deep learning algorithms for breast tumor classification. Initially, the network is trained to obtain enhanced images of the segmented tumors. The features are then obtained from the raw and the enhanced images, respectively, by using two parallel networks. A new cascaded CNN consisting of a U-Net, a bidirectional attention guidance network and a refinement residual network for breast lesion segmentation was proposed in [15]. The results indicated that the cascaded convolutional algorithm is able to improve diagnostic performance. A selective kernel U-Net CNN model for BUS image segmentation was developed in [16]. The network's receptive fields are adjusted using both a selective kernel and an attention mechanism to provide the fused feature maps. Another CNN model was used to build an "attention enhanced U-Net" for breast segmentation with improvements to the obtained results [17].

Proposed method
The Encoder-Decoder architecture employs a decoder network that maps the resolution of the encoder network layer features. This mapping aims to recover the mask that retains the tumor segmentation at the original image size. DeepLabV3+ employs Xception-65 as its backbone. This module is based on depthwise separable convolutions with different strides, which decompose the convolution into a depthwise convolution and a pointwise convolution. It uses atrous spatial pyramid pooling (ASPP) to increase the field-of-view without increasing the number of parameters. The ASPP module processes the up-sampled feature map to conform to the low-level resolution; then, the feature map is up-sampled again. The restoration of spatial information is done progressively to pick up boundary information of the target, so the loss of intrinsic spatial information is minimized. The decoder uses bilinear up-sampling to restore the initial spatial resolution.

U-Net architecture description. U-Net [22] is a CNN used for semantic segmentation with a symmetric architecture. It consists of an encoder devoted to spatial feature extraction and a decoder that generates the segmentation map using the encoded features. The encoder contains two 3 × 3 convolution operations and one 2 × 2 max-pooling operation, and these steps are repeated four times. A 2 × 2 transposed convolution operation followed by two 3 × 3 convolution operations is used by the decoder for feature map generation; this sequence is also repeated four times. Finally, the segmentation map is obtained with a 1 × 1 convolution operation. The ReLU (Rectified Linear Unit) activation function works in the convolutional layers, while the final convolutional layer uses a Sigmoid activation function. The connection between the encoder and the decoder is made by a progression of two 3 × 3 convolution operations. The U-Net network architecture is presented in figure 3. The Local Graph Cut segmentation method is a semi-automated segmentation technique. To segment the breast mass, a region of interest (ROI) is drawn around it; the boundaries of the region of interest mark the breast mass to be segmented. It is an interactive segmentation in which some information is provided by the user.
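To make the repeated building blocks described for U-Net above concrete, the following is a schematic PyTorch sketch of a U-Net-style encoder-decoder. The depth is reduced to two levels and the channel counts are chosen arbitrarily for brevity; the paper's network repeats its stages four times and was built in MATLAB, so this illustrates the pattern rather than reproducing the authors' model.

import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in each encoder/decoder stage described above.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net-style encoder-decoder with skip connections (illustrative depth)."""
    def __init__(self, in_ch=1, num_classes=1):
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)                            # 2x2 max-pooling
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)    # 2x2 transposed convolution
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)  # final 1x1 convolution

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))                    # sigmoid output for a binary mask

mask = TinyUNet()(torch.randn(1, 1, 128, 128))
print(mask.shape)  # torch.Size([1, 1, 128, 128])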
The segmentation performance is evaluated using the Dice similarity coefficient. The similarity coefficient compares the CNN segmentations with the Local Graph Cut segmentations used as benchmark (which helps to compare the performance and efficiency of segmentation). The segmentation results provided by the CNNs are also compared with the segmentations performed manually by the radiologists (ground truth images). The flowchart of the segmentation step is shown in figure 4.

Results and Discussion
The BUS images are segmented using the Local Graph Cut algorithm from the MATLAB environment. These segmentation results were used as benchmark images, together with the ground truth images provided by radiologists, for the subsequent segmentations performed with the convolutional networks. Visualizations of the segmentation results are displayed in figures 5 and 6. Overall, the segmentation outputs provided by DeepLabV3+ are closer to the ground truth images provided by both the radiologists and the Graph Cut algorithm used as benchmark (figure 5). DeepLabV3+ provides more accurate segmentations than the U-Net networks. In terms of Dice scores (0.9360 for malignant and 0.9325 for benign), the DeepLabV3+ network clearly outperformed the segmentation provided by U-Net; the segmentation performance of the U-Net networks is significantly worse (0.6251 for malignant and 0.6252 for benign). The encoder-decoder structure of the DeepLabV3+ network allows for better resolution control of the extracted encoder features. Moreover, the DeepLabV3+ model performed slightly better on malignant tumors than on benign ones. This finding suggests that the low intensity of benign tumors made them more difficult to segment. For the U-Net model we cannot find a pattern in the segmentation performance. The results of our study indicate that a classical encoder-decoder structure alone cannot successfully generate accurate segmentation outputs. A previous study reported a similar segmentation performance of a DeepLabV3+ model, with a Dice coefficient of 0.8690 [23]. Another study tested the segmentation performance of U-Net models on BUS images and reported a Dice score of 0.7177 as the best performance of the model [24].

Conclusions
To assist medical professionals in breast cancer diagnosis, a computerized system based on two encoder-decoder architectures for breast tumor segmentation was proposed. The segmentation performance of the DeepLabV3+ and U-Net models was investigated. Significant performance differences are shown in terms of the Dice similarity coefficient with respect to various experimental conditions. Our proposed DeepLabV3+ achieves promising performance, while U-Net provides the worst results among the analyzed methods. Semantic segmentation based on deep learning techniques can overcome the limitations of traditional algorithms.

Figure 1. Various BUS images of the BUSI dataset [16]. Benign lesion in gray scale image (a) and ground truth image (b). Malignant lesion in gray scale image (c) and ground truth image (d).
Figure 2. The description of the DeepLabV3+ network architecture.
Figure 3. The U-Net network architecture.
Figure 4. The segmentation flowchart of breast mass by CNNs.
Figure 5. Visualizations of the DeepLabV3+ segmentation results. The top left three images are benign images (from left to right: original raw image, ground truth provided by radiologists, and segmentation output by DeepLabV3+). The top right three images are benign images (from left to right: original raw image, ground truth provided by the Graph Cut algorithm, and segmentation output by DeepLabV3+). The bottom images are malignant images and the experimental conditions are the same.
Figure 6. Visualizations of the U-Net segmentation results. The images in the first row are benign images (the first two images from left to right correspond to the ground truth provided by radiologists and the result predicted by U-Net; the following two images correspond to the ground truth provided by the Graph Cut algorithm and the result predicted by U-Net). The images on the second row are malignant images and the experimental conditions are the same.
Figure 7. (a) Average Dice score for segmentation performance for DeepLabV3+; (b) Average Dice score for segmentation performance for U-Net. The central lines indicate median Dice score values; "boxes" are the interquartile range and "whiskers" indicate the smallest and largest values.
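For reference, the Dice similarity coefficient used throughout the evaluation above can be computed from a predicted and a reference binary mask as follows. This is a NumPy sketch; the mask shapes and the small smoothing constant are illustrative assumptions, not part of the paper's pipeline.

import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    Dice = 2*|A intersection B| / (|A| + |B|); eps avoids division by zero
    when both masks are empty.
    """
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Toy example: two overlapping square "lesions" on a 128x128 image.
pred = np.zeros((128, 128), dtype=np.uint8)
gt = np.zeros((128, 128), dtype=np.uint8)
pred[30:70, 30:70] = 1
gt[35:75, 35:75] = 1
print(f"Dice = {dice_coefficient(pred, gt):.4f}")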
3,234
2024-02-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Riemannian Proximal Policy Optimization In this paper, We propose a general Riemannian proximal optimization algorithm with guaranteed convergence to solve Markov decision process (MDP) problems. To model policy functions in MDP, we employ Gaussian mixture model (GMM) and formulate it as a nonconvex optimization problem in the Riemannian space of positive semidefinite matrices. For two given policy functions, we also provide its lower bound on policy improvement by using bounds derived from the Wasserstein distance of GMMs. Preliminary experiments show the efficacy of our proposed Riemannian proximal policy optimization algorithm. Introduction Reinforcement learning studies how agents explore/exploit environment, and take actions to maximize long-term reward. It has broad applications in robot control and game playing (Mnih et al., 2015;Silver et al., 2016;Argall et al., 2009;Silver et al., 2017). Value iteration and policy gradient methods are mainstream methods for reinforcement learning (Sutton and Barto, 2018;Li, 2017). Policy gradient methods learn optimal policy directly from past experience or on the fly. It maximizes expected discounted reward through a parametrized policy whose parameters are updated using gradient ascent. Traditional policy gradient methods suffer from three well-known obstacles: high-variance, sample inefficiency and difficulty in tuning learning rate. To make the learning algorithm more robust and scalable to large datasets, Schulman et al. proposed trust region policy optimization algorithm (TRPO) (Schulman et al., 2015). TRPO searches for the optimal policy by maximizing a surrogate function with constraints placed upon the KL divergence between old and new policy distributions, which guarantees monotonically improvements. To further improve the data efficiency and reliable performance, proximal policy optimization algorithm (PPO) was proposed which utilizes firstorder optimization and clipped probability ratio between the new and old policies (Schulman et al., 2017). TRPO was also extended to constrained reinforcement learning. Achiam et al. proposed constrained policy optimization (CPO) which guarantees near-constraint satisfaction at each iteration (Achiam et al., 2017). Although TRPO, PPO and CPO have shown promising performance on complex decisionmaking problems, such as continuous-control tasks and playing Atari, as other neural network based models, they face two typical challenges: the lack of interpretability, and difficult to converge due to the nature of non-convex optimization in high dimensional parameter space. For many real applications, data lying in a high dimensional ambient space usually have a much lower intrinsic dimension. It may be easier to optimize the policy function in low dimensional manifolds. In recent years, Many optimization methods are generalized from Euclidean space to Riemannian space due to manifold structures existed in many machine learning problems (Absil et al., 2007(Absil et al., , 2009Vandereycken, 2013;Huang et al., 2015;Zhang et al., 2016). In this paper, we leverage merits of TRPO, PPO, and CPO and propose a new algorithm called Riemannian proximal policy optimization (RPPO) by taking manifold learning into account for policy optimization. In order to estimate the policy, we need a density-estimation function. Options we have include kernel density estimation, neural networks, Gaussian mixture model (GMM), etc. 
In this study we choose GMM due to its good analytical characteristics, universal representation power and low computational cost compared with neural networks. It is well-known that the covariance matrices of GMM lie in a Riemannian manifold of positive semidefinite matrices. To be more specific, we model policy functions using GMM first. Secondly, to optimize GMM and learn the optimal policy functions efficiently, we formulate it as a non-convex optimization problem in the Riemannian space. By this way, our method gains advantages in improving both interpretability and speed of convergence. Please note that Our RPPO algorithm can be easily extended to any other non-GMM density estimators, as long as their parameter space is Riemannian. In addition, previously GMM has been applied to reinforcement learning by embedding GMM in the Q-learning framework (Agostini and Celaya, 2010). So it also suffers from the headache of Q-learning that it can hardly handle problems with large continuous state-action space. Reinforcement learning In this study, we consider the following Markov decision process (MDP) which is defined as a tuple (S, A, P, r, γ), where S is the set of states, A is the set of actions, P : S × A × S → [0, 1] is the transition probability function, r : S × A × S → R is the reward function, and γ is the discount factor which balances future rewards and immediate ones. To make optimal decisions for MDP problems, reinforcement learning was proposed to learn optimal value function or policy. A value function is an expected, discounted accumulative reward function of a state or state-action pair by following a policy π. Here we define state value function as v π (s) = E τ ∼π [r(τ ) | s 0 = s] where τ = (s 0 , a 0 , s 1 , ...) denotes a trajectory by playing policy π, a t ∼ π (a t | s t ), and s t+1 ∼ P (s t+1 | s t , a t ). Similarly we define state-action value function as: q π (s, a) = E τ ∼π [r(τ ) | s 0 = s, a 0 = a]. We also define advantage function as A π (s, a) = q π (s, a) − v π (s). In reinforcement learning, we try to find or learn an optimal policy π which maximizes a given performance metric J (π). Infinite horizon discounted accumulative return is widely used to evaluate a given policy which is defined as: where r (s t , a t , s t+1 ) is the reward from s t to s t+1 by taking action a t . Please note that the expectation operation is performed over the distribution of trajectories. Riemannian space Here we give a brief introduction to Riemannian space, for more details see (Eisenhart, 2016).Let M be a connected and finite dimensional manifold with dimensionality of m. We denote by T p M the tangent space of M at p. Let M be endowed with a Riemannian metric ., . , with corresponding norm denoted by . , so that M is now a Riemannian manifold (Eisenhart, 2016). We use l (γ) = b a γ (t) dt to denote length of a piecewise smooth curve γ : [a, b] −→ M joining θ to θ, i.e., such that γ (a) = θ and γ (b) = θ. Minimizing this length functional over the set of all piecewise smooth curves passing θ and θ, we get a Riemannian distance d (θ , θ) which induces original topology on M . Take θ ∈ M, the exponential map exp θ : which maps a tangent vector v at θ to M along the curve γ. For any θ ∈ M we define the exponential inverse map exp −1 θ : M −→ T θ M which is C ∞ and maps a point θ on M to a tangent vector at θ with d (θ , θ) = exp −1 θ θ . We assume (M, d) is a complete metric space, bounded and all closed subsets of M are compact. 
For a given convex function f : The set of all subgradients of f at θ ∈ M is called subdifferential of f at θ ∈ M which is denoted by ∂f (θ ). If M is a Hadamard manifold which is complete, simply connected and has everywhere non-positive sectional curvature, the subdifferential of f at any point on M is non-empty (Ferreira and Oliveira, 2002). Modeling policy function using Gaussian mixture model To model policy functions, we employ the Gaussian mixture model which is a widely used and statistically mature method for clustering and density estimation. The policy function can be modeled as π(a | s) where N is a (multivariate) Gaussian distribution with mean µ ∈ R d and covariance matrix S 0, K is number of components in the mixture model, α = (α 1 , α 2 , ..., α K ) are mixture component weights which sum to 1. In the following, we drop s in GMM to make it simple and parameters of GMM still depend on state variable s implicitly. We would like to optimize the following problem with corresponding constraints from GMMs: We employ a reparametrization method to make the Gaussian distributions zero-centered. We augment action variables by 1 and define a new variable vector as a = [a, 1] with new covariance matrix S = S + µµ µ µ 1 (Hosseini and Sra, 2015). where α k is the step size. Proof of Lemma 1 can be found in the Appendix. Theorem 1. Under Assumption 1, the following statements hold for any sequence {θ k } k≥0 generated by Algorithm 1: (a) Any limit point of the sequence {θ k } k≥0 is a critical point, and the sequence of function values {f (θ k )} k≥0 is strictly decreasing and convergent. Proof of Theorem 1 can be found in the Appendix. Lower bound of policy improvement Assume we have two policy functions π (a | s) = Σ i α i N (a; S i ) and π(a | s) = Σ i α i N (a; S i ) parameterized by GMMs with parameters θ we would like to bound the performance improvement of π (a | s) over π(a | s) under limitation of the proximal operator. Implementation of the Riemannian proximal policy optimization method Recall that in the optimization problem (2), we are trying to optimize the following objective function: min 2) Retraction With S i,t and grad S i,t g(θ ) shown above at iteration t, we would like to calculate S i,t+1 using retraction. From (Cheng, 2013), for any tangent vector η ∈ T W M , where W is a point in Riemannian space M , its retraction R W (η) := arg min X∈M W + η − X F . For our case where σ i and q i are the i-th eigenvalues and eigenvector of matrix S i,t − α t (grad S i,t g(θ ) + ∂ S i,t ϕ(θ )). η i , i = 1, 2, ..., K −1 are updated using standard gradient decent method in the Euclidean space. The calculation and retraction shown above are repeated until f (θ ) converges. Simulation environments and baseline methods We choose TRPO and PPO, which are well-known excelling at continuous-control tasks, as baseline algorithms. Each algorithm runs on the following 3 environments in OpenAI Gym MuJoCo simulator (Todorov et al., 2012): InvertedPendulum-v2, Hopper-v2, and Walker2d-v2 with increasing task complexity regarding size of state and action spaces. For each run, we compute the average reward for every 50 episodes, and report the mean reward curve and parameters statistics for comparison. Preliminary results In Fig. 1 we show mean reward (column1) for PPO, RPPO and TRPO algorithms on three MuJoCo environments, screenshots (column2) and probability density of GMM (column3) for RPPO on each environment. 
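The retraction step described above, which maps the updated covariance block back onto the positive semidefinite matrices through its eigendecomposition (keeping only the nonnegative eigenvalue directions), can be sketched as follows. The gradient matrix and step size below are placeholders; the paper's actual update uses the Riemannian gradient of its specific objective together with the subgradient of the nonsmooth term.

import numpy as np

def retract_to_psd(matrix):
    """Nearest positive semidefinite matrix in the Frobenius norm.

    Symmetrize, eigendecompose, and clip negative eigenvalues to zero, i.e.
    R_W(eta) = argmin_{X PSD} ||(W + eta) - X||_F applied to W + eta = `matrix`.
    """
    sym = 0.5 * (matrix + matrix.T)              # enforce symmetry
    eigvals, eigvecs = np.linalg.eigh(sym)       # sigma_i, q_i
    eigvals = np.clip(eigvals, 0.0, None)        # drop negative directions
    return (eigvecs * eigvals) @ eigvecs.T       # sum_i max(sigma_i, 0) q_i q_i^T

# One illustrative proximal-gradient style step on a covariance block S_i.
S = np.array([[1.0, 0.3], [0.3, 0.8]])
grad = np.array([[0.9, -0.2], [-0.2, 1.5]])      # hypothetical gradient of the objective
step = 0.5
S_next = retract_to_psd(S - step * grad)
print(np.linalg.eigvalsh(S_next))                # all eigenvalues >= 0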
From the learning curves, we can see that as the state-action dimension of environment increases (shown in Table 1), both the convergence speed and the reward improvement slow down. This is because the higher dimension the environment sits, the more difficult the optimization task is for the algorithm. Correspondingly, in the GMM plot, S, A represent the state and the action dimensions respectively, and the probability density is shown in z axis. In the density plot, we can see that as the environment complexity increases, the density pattern becomes more diverse, and non-diagonal matrix terms also show its importance. The probability density of GMM shows that RPPO learns meaningful structure of policy functions. TRPO and PPO are pure neural-network-based models with numerous parameters. This makes the model highly vulnerable to overfitting, poor network architecture design and the hyper-parameters tuning. RPPO achieves better robustness by having much fewer parameters. In Table 1 we compare the number of parameters of each algorithm on each environment. It can be seen that GMM has 10 3 ∼ 10 5 order fewer parameters as compared with TRPO and PPO. Conclusion We proposed a general Riemannian proximal optimization algorithm with guaranteed convergence to solve Markov decision process (MDP) problems. To model policy functions in MDP, we employed the Gaussian mixture model (GMM) and formulated it as a non-convex optimization problem in the Riemannian space of positive semidefinite matrices. Preliminary experiments on benchmark tasks in OpenAI Gym MuJoCo (Todorov et al., 2012) show the efficacy of the proposed RPPO algorithm. In Sec. 4.1, the algorithm 1 we proposed is capable of optimizing a general class of non-convex functions of the form f (θ) = g (θ) − h (θ) + ϕ (θ). Due to page limit, in this study we focus on f (θ) = g (θ) + ϕ (θ) as shown in the Optimization problem (2). In the future, it would be interesting to incorporate constraints in MDP problems like constrained policy optimization (Achiam et al., 2017) and encode them as a concave function −h(θ) in our RPPO algorithm.
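Since the policy at the core of the paper is a Gaussian mixture, a minimal reference sketch of evaluating such a mixture density pi(a | s) = sum_k alpha_k N(a; mu_k, S_k) is given below. It is SciPy-based; the component parameters are illustrative, and, as in the paper's notation, their dependence on the state s is left implicit.

import numpy as np
from scipy.stats import multivariate_normal

def gmm_policy_density(a, weights, means, covs):
    """Evaluate a GMM policy density at one action vector a.

    weights sum to 1; covs are symmetric positive definite covariance matrices.
    In practice all parameters would depend on the state s.
    """
    return sum(w * multivariate_normal.pdf(a, mean=m, cov=c)
               for w, m, c in zip(weights, means, covs))

# Illustrative two-component mixture over a 2-D action space.
weights = [0.6, 0.4]
means = [np.zeros(2), np.array([1.0, -0.5])]
covs = [np.eye(2), np.array([[0.5, 0.1], [0.1, 0.3]])]
print(gmm_policy_density(np.array([0.2, 0.1]), weights, means, covs))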
3,029.6
2020-05-19T00:00:00.000
[ "Computer Science" ]
Correlation Functions and Vertex Operators of Liouville Theory We calculate correlation functions for vertex operators with negative integer exponentials of a periodic Liouville field, and derive the general case by continuing them as distributions. The path-integral based conjectures of Dorn and Otto prove to be conditionally valid only. We formulate integral representations for the generic vertex operators and indicate structures which are related to the Liouville S-matrix. The Liouville theory has fundamentally contributed to the development of both mathematics [1,2] and physics [3], and beyond it fascinated with a wide range of applications as a conformal field theory. Nevertheless, its quantum description is still incomplete. By canonical quantisation [4]- [8] it could be shown that the operator Liouville equation and the Poisson structure of the theory, including the causal non-equal time properties, are consistent with conformal invariance and locality [6,8], but exact results for Liouville correlation functions remained rare [9] despite of ambitious programmes [10,11]. In this letter we calculate correlation functions for vertex operators with generic exponentials of a periodic Liouville field. The vertex operators are given in terms of the asymptotic in-field of the Liouville theory [12], and we formulate for them an integral representation as an alternative to the formal but still useful infinite sum of [6]. However, there is so far no reliable recipe to use such integral operators directly since the complex powers of the screening charge operators describing them are not constructed yet, a problem related to the exact knowledge of the Liouville S-matrix. We calculate therefore first correlation functions for vertex operators of [6] with negative integer exponentials and continue the result analytically as a distribution, as is required by the zero mode contributions of the Liouville theory [13]. We prove so that the correlation functions suggested in [14] are conditionally applicable only. This is indeed a surprise because the conjecture of [14] was obtained by standard analytical continuation of a path-integral result for minimal models [15], which describes nothing but a special part of the operator based correlation function [9]. In this respect it is worth mentioning that already the Liouville reflection amplitudes [16] proved to be identical with those obtained from the Liouville S-matrix [13,17]. We parametrise the vertex operators by a free-field which allows to avoid the use of quantum group representations, and we define the approach on which both, the derivation of the correlation functions and the formulation of the integral representation of the vertex operators are based. The structures needed to understand the Liouville S-matrix are indicated, and in the conclusions we stress the importance of the results for the related WZNW cosets. Free-field parametrisation We use minkowskian light-cone coordinates x = τ + σ,x = τ − σ and select from ref. [1] that general solution of the Liouville equation which has a particularly utilisable physical interpretation. For periodic boundaries one can parametrise the non-canonical and quasi-periodic parameter functions A(x),Ā(x) by the canonical free field 2φ(τ, σ) = log A ′ (x)Ā ′ (x) with standard mode expansion and the chiral decomposition Note the rescalings of the fields φ and ϕ by the Liouville coupling γ. We obtain so the canonical transformation between the Liouville and the free-field of [6] (γ there is 2γ here!) 
and by integrating A ′ (x) ( correspondinglyĀ ′ (x) ) using (3) A(x) = e γq 2 sinh γp 2 2π 0 dy e γp 2 (ǫ(x−y)+ y π )+2φ(y) . ǫ(z) is the stair-step function, and as preconceived, the non-vanishing parameter p of the hyperbolic monodromy relation (3) becomes identical with the momentum zero mode of the free field (4), and we choose p > 0 [4]. As a consequence the chosen canonical free field (4) has a physical meaning, it is the asymptotic in-field of the Liouville theory [12], and the out-field which is given by the in-field too defines likewise the classical form of the Liouville S-matrix. Two related forms of the Liouville exponential are relevant for canonical quantisation, the formal expansion in powers of the conformal weight zero functions and the integral representation for positive λ e 2λϕ(τ,σ) = e 2λφ(τ,σ) The last equation follows from the Liouville solution (6) by using the Fourier transformation of (2 cosh y) −2λ [18] +∞ If we continue this equation from positive to negative λ and consider the kernel of that integral as a generalised function ( see also eqs. (56) -(61) of [13] ), for λ → − n/2 we obtain In this manner (12) becomes in fact identical with the corresponding finite sum of (11). It might be interesting to notice here that the investigations initiated by the references [4,5] are based on Liouville's second form of the general solution [1]. This approach led [4] to a parametrisation of the Liouville theory in terms of a canonically related [19] but pseudo-scalar free field which is asymptotically neither an in-nor an out-field, whereas the work of [5,10] mainly treats the singular elliptic monodromy for which we do not know whether there exists a parametrisation in terms of a real free field at all. Vertex operators The quantum Liouville theory will be defined by canonically quantising the free field (4) [a m , a n ] = m δ m+n,0 , and requiring that the vertex operators are primary and local. Such a procedure gives an anomaly-free quantum Liouville theory only if additional quantum deformations are taken into consideration [4] - [8]. A quantum realisation of (11), consistent with the operator Liouville equation and the canonical commutation relations, was so constructed in [6]. But the infinite sum is not a useful vertex operator. It is its finite form for negative integer 2λ = −n which presents a well-defined basis for the calculation of correlation functions. Let us review the needed elements, and call V 2λ (τ, σ) the vertex operator of e 2λϕ(τ,σ) . Taking, for simplicity, the same notations for the classical and the corresponding normal ordered quantum expressions, the vertex operator for λ = −1/2 can be written as Here µ 2 α is the renormalised 'cosmological constant' and we introduced short notations and defined the conformal weight zero screening charge operator as is the integral (7) rewritten by using the periodicity and ǫ(z) = sign(z) for z ∈ (−2π, 2π). The useful factorised form of the operator V −n can be constructed easily by induction where the regularising factor ǫ α n just removes the short distance singularity, and it results The shift of the momenta is a consequence of locality, and the same holds for the deformed binomial coefficients whereas the hidden short distance contributions of (21) are due to the conformal properties. Note that [e −φ(τ,σ) , S(τ, σ)] = 0 provides hermiticity of (16). The vertex operators for arbitrary λ will be described jointly with the correlation functions in the next section. 
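As background for the integral representation invoked above, the Fourier transform of (2 cosh y)^{-2λ} that it relies on is the standard result quoted here in a generic normalization; the paper's equation numbering and conventions may differ.

\int_{-\infty}^{+\infty} \mathrm{d}y\; e^{i\mu y}\,\bigl(2\cosh y\bigr)^{-2\lambda}
  \;=\; \frac{\Gamma\!\bigl(\lambda+\tfrac{i\mu}{2}\bigr)\,\Gamma\!\bigl(\lambda-\tfrac{i\mu}{2}\bigr)}{2\,\Gamma(2\lambda)},
  \qquad \operatorname{Re}\lambda > 0 .

Continuing this kernel in λ as a generalised function, as described above, is what turns the integral representation back into the finite sum at λ = −n/2.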
Correlation functions Owning to conformal invariance we have to calculate 3-point correlation functions only. They are defined by matrix elements of vertex operators p; 0| V 2λ (0, 0) |p ′ ; 0 between the highest weight vacuum state |p; 0 ( p > 0 ) which gets annihilated by the operators a n with n > 0. Using (21) the correlation function for 2λ = −n becomes ( P = γp 2π ! ) where J n m (P, α) summarises the p-dependent factors of the sinh-terms of (21) and I n m (P, α) is the (anti-)chiral matrix element 0| e −nφ m l=1 A P −i(n−2l+1)α |0 which is given by the integrals over the conformal short-distance deformations of (21) Fortunately these integrals can be expressed [10] by Dotsenko-Fateev integrals [20] I n m (P, α) = m l=1 Γ(1 + (n − l + 1)α) Γ(1 + lα) If we replace the sin-(sinh-) functions of (22) ( respectively (24) ) by Γ-functions πx sin πx we find for the correlation function the result Our aim is to continue these functions from the negative value 2λ = −n to positive λ. We apply eq. (14) and obtain, with 2αk = P − P ′ from the continued δ-function of (28), where V −2λ ik−λ (P, P ′ ) is the analytical continuation of (29). This continuation will be performed by means of the integral representation of the Γ-function [18] log and the following summation under that integral The function f (x, α| m) has a natural analytical continuation with respect to α, m and x, and the useful property f (x − mα, α| m) = f (x − α, −α| m). To simplify our result we rewrite a factor of (29) and continue it separately After analytically continuing the remaining terms of (29), and obvious cancellations, we obtain finally the generic correlation function we are looking for as where It is worth mentioning here that the function (32) was used for the parametrisation of the 3-point correlation functions suggested in [14], and it is easy to show that with eqs. (35)-(36) we have re-derived that result (see eq. (14) of the second reference of [14] with obvious changes of the notation and overall renormalization). However, there are some further remarks in order. Dorn and Otto [14] started their analytical continuation from a path-integral result for minimal models [15] which is just the one term n = 2m of the operator calculated correlation function (28) proportional to δ(P − P ′ ). This single term would be selected in our calculations if and only if screening charge conservation could be operative [9]. However, the Liouville theory is Möbius noninvariant and all the (n + 1) terms of (28) together characterise this theory for 2λ = −n. Moreover, for odd n the correlation functions of [14] vanish and only the neglected n terms guarantee the necessary non-vanishing of the Liouville correlation functions in those points. Vice versa, by analytically continuing the correlation function of [14] as generalised function, in the manner explicitely described for the zero modes in ref. [13], one finds the correlation function for negative λ which for λ = −n/2 just reproduces (28). We should, furthermore, mention that the functions (32) and Υ b of [16] are related by where b 2 = α, and u + v = Q − sb with Q = b + 1/b. 
With these equations we can derive from (35) the more heuristically motivated alternative, but to [14] equivalent, correlation functions of [16] as follows By the same procedure of analytical continuation as used before, and by taking into consideration the results of this section, we obtain from (21) the vertex operator for positive λ as an integral representation But we should emphasize here that we do not have so far a recipe at hand to calculate the Liouville correlation functions for positive λ directly from this integral. It would require the knowledge of complex powers of the screening charge operator S ik−λ (τ, σ), which incidentally would give the S-matrix in compact form too. But this problem is at present under investigation only, and that is the reason why we cannot compare our vertex operator (39) with the Ansatz of [11] for which corresponding screening charge operators are not given either, and as well no recipe how to treat that vertex for an operator calculation of correlation functions directly. Conclusions With a suitable free-field parametrisation ad hand, canonical quantisation proves to be a straightforward and reliable approach for a description of the quantum Liouville theory. We have calculated the correlation functions for generic vertex operators by using known algebraic quantum structures of the theory and their distributional properties. But the derived integral vertex operators could not be applied directly since the complex powers of screening charge operators are not yet constructed. The exact S-matrix is therefore not available too. Nevertheless, it is known that self-adjointness of the Liouville theory as well as the reflection amplitudes follow from the S-matrix [17], which can be derived at least level by level. We can so conclude that we have got, in principle, a complete understanding of quantum Liouville theory. The remaining problems to be solved are mostly of technical nature. Since the Liouville theory and the SL(2; R)/U(1) respectively SL(2; R)/R + black hole cosets can be derived from the same SL(2; R) WZNW theory by Hamiltonian reduction [21], we expect also a joint quantum treatment of them. While doing so, the Liouville theory is an important ingredient of the other cosets, so that its quantum description is a prerequisite for the quantisation of the other cosets. This might be relevant for AdS 3 and string theory too, and different boundary conditions should be taken into consideration. Moreover, we believe that the observed causal non-equal time structures of the cosets are important for field theory in general, and that these two-dimensional conformal field theories will remain outstanding examples of mathematical physics even in the next future.
3,000.2
2003-11-21T00:00:00.000
[ "Physics" ]
Low-Temperature 3D Printing Technology of Poly (Vinyl Alcohol) Matrix Conductive Hydrogel Sensors with Diversified Path Structures and Good Electric Sensing Properties Novel and practical low-temperature 3D printing technology composed of a low-temperature 3D printing machine and optimized low-temperature 3D printing parameters was successfully developed. Under a low-temperature environment of 0–−20 °C, poly (vinyl alcohol) (PVA) matrix hydrogels including PVA-sodium lignosulphonate (PVA-LS) hydrogel and PVA-sodium carboxymethylcellulose (PVA-CMC) hydrogel exhibited specific low-temperature rheology properties, building theoretical low-temperature 3D printable bases. The self-made low-temperature 3D printing machine realized a machinery foundation for low-temperature 3D printing technology. Combined with ancillary path and strut members, simple and complicated structures were constructed with high precision. Based on self-compiling G-codes of path structures, layered variable-angle structures with high structure strength were also realized. After low-temperature 3D printing of path structures, excellent electrical sensing functions can be constructed on PVA matrix hydrogel surfaces via monoplasmatic silver particles which can be obtained from reduced reactions. Under the premise of maintaining original material function attributes, low-temperature 3D printing technology realized functionalization of path structures. Based on “3D printing first and then functionalization” logic, low-temperature 3D printing technology innovatively combined structure–strength design, 3D printable ability and electrical sensing functions of PVA matrix hydrogels. Introduction As a kind of water-soluble synthetic polymer, Poly (vinyl alcohol) (PVA) has the perfect biodegradability, biocompatibility, nontoxicity and large number of hydroxyl units [1][2][3], which provides sufficient mechanical strength and flexibility base for preparation of PVA matrix hydrogels.Combined with inorganic fillers including lignin, graphene oxide, nano clay, ammonium sulfate ions and so on [4][5][6][7], PVA hydrogels exhibit "green" application features in drug delivery systems, wound dressings and especially soft sensors [8][9][10].For example, Zhang and co-workers prepared a kind of PVA hydrogel with relatively high mechanical strength and conductivity on the raw material base of biomass sodium lignosulfonate and PVA [11].By physically mixing the PVA and glycerol in water as the gel matrix, followed by soaking in a saturated NaCl aqueous solution, the PVA/glycerol/NaCl ionic hydrogel sensors exhibited excellent transparency, stretchability, mechanical strength, toughness and excellent stretching sensitivity with a gauge factor of 4.01 [12].Even though PVA hydrogels have impressive advantages, the simple sample structures are always shapes of strip and chunk.Therefore, with the increasing extensive application of PVA hydrogels, complex and diversified structure patterns are important application restrictions and bottlenecks.3D printing technology has the advantages of material universality, convenience, cleanliness and environmental protection and continuous production efficiency [13][14][15][16], and is widely used in polymer material preparation and is treated as an effective method for resolving the restrictions and bottlenecks of PVA hydrogels.The existing typical 3D printing technologies are material jetting and material extrusion (fused deposition modelling and direct ink writing) [17][18][19][20][21][22].As the most common 3D 
printing pattern, material extrusion 3D printing exhibits wide material adaptability, including polymers, metals, ceramics and so on [23]. In practical applications, direct-ink-writing 3D printing can easily and inexpensively be modified to incorporate temperature control systems and other auxiliary modules, which is why it is treated as the main pattern for laboratory-built 3D printing machines. In order to realize complex and diversified structural patterns of PVA hydrogels via direct-ink-writing 3D printing, the rheology characteristics of the PVA hydrogel ink are the key issue to resolve. The common method for realizing the printability of hydrogels is the addition of accessory ingredients [24][25][26], which effectively adjusts the viscosity, storage modulus and loss modulus of the hydrogel inks. However, the incidental property variations, including changes in strength, toughness and crosslinking density, alter the original material characteristics and application behaviour. Namely, how to realize the printability of PVA hydrogels while maintaining the original or specified material properties is the inevitable key point and challenge. To combine material properties and printability, a novel 3D printing technology is required.

The polymerization of PVA hydrogels via freeze-thaw cycles in our previous study provided the breakthrough point for combining material properties and printability. In that study [27], a PVA matrix conductive hydrogel with excellent conductivity and sensitivity was prepared via freeze-thaw cycles and an in situ reduction reaction of silver particles. During the polymerization of PVA and sodium lignosulphonate (LS) at a low temperature of −18 °C for 12 h, the PVA-LS hydrogel reaction liquid changed from a liquid state to the final gel state. Namely, the viscosity value of the PVA-LS hydrogel increased gradually during polymerization, which disclosed the important effect of temperature on the rheology characteristics of the PVA-LS hydrogel. There must therefore be a specific temperature range within the freeze-thaw cycle that yields the appropriate rheology characteristics for 3D printing. Inspired by these characteristics of the low-temperature polymerization process of PVA hydrogels, the authors decided to develop a new 3D printing technology that can realize diversified and complex structures of PVA hydrogels while maintaining the original or specified material properties. The convenient, high-efficiency and feasible low-temperature 3D printing machine, which is composed of a direct-ink-writing 3D printer and a low-temperature control system, is the foundation of the 3D printing of PVA hydrogels.
In this study, the low-temperature 3D printing machine and the corresponding 3D printing parameters for PVA matrix hydrogels were assembled and developed, respectively. The low-temperature 3D printing machine can be divided into four parts: a parallel-shaft drive mechanism, a hydraulic extrusion mechanism, a low-temperature control device and an electric control device. The parallel-shaft drive mechanism and the hydraulic extrusion mechanism constituted the direct-ink-writing unit of the low-temperature 3D printing machine, which realized steady and continuous three-dimensional movement of the nozzle and controllable extrusion of the PVA hydrogel reaction liquid. In the low-temperature environment ranging from 0 °C to −20 °C, PVA matrix hydrogels realized high-precision 3D printing. Using the self-made low-temperature 3D printing machine, PVA-LS and PVA-sodium carboxymethylcellulose (CMC) hydrogels with complex and diversified structures were 3D printed. Under the precondition of maintaining the original mechanical strength and the matrix role for the reduction reaction, the innovative low-temperature 3D printing technology of PVA matrix hydrogels provided a simple and highly effective method for overcoming the application bottleneck of PVA matrix hydrogels.

Design and Components of Low-Temperature 3D Printing Machine

The low-temperature 3D printing machine is composed of a parallel-shaft drive mechanism, a hydraulic extrusion mechanism, a low-temperature control device and an electric control device. In order to achieve a small space occupancy, stable and efficient operation, continuous nozzle movement, 3D printing precision and controllable extrusion of the hydrogel inks, the main load-bearing structures are aluminium profiles (Model 4040F). Under the conditions of satisfying the operating requirements and reducing manufacturing costs, the other load-bearing structures were 3D printed from high-strength polylactic acid resin (Sanweicube Co., Ltd., Shenzhen, China). The components of the low-temperature 3D printing machine are listed in Table 1. The integral structure of the low-temperature 3D printing machine is a stable triangular prism. The three-dimensional positioning of the nozzle is realized by three independent pulleys which are vertically installed on the aluminium profiles. Under program control, the three pulleys operate synergistically via stepping motors and synchronous belts, which drive the three-dimensional movements of the nozzle. The nozzle connects with an ink storage tube. In order to ensure 3D printing precision and reduce the effect of inertia on the movement precision of the ink storage tube, a distal extrusion pattern is adopted. The driving force of the hydraulic extrusion mechanism comes from stepping motors and synchronous belts. The pressure medium is water. The hydraulic extrusion mechanism is located on the top of the triangular prism structure and connects with the ink storage tube via a plastic hose. The low-temperature control device and the electric control device are located at the bottom of the triangular prism structure; they construct the low-temperature environment and drive the stepping motors, respectively. The length, width and height of the low-temperature 3D printing machine are 300 mm, 300 mm and 800 mm, respectively. The movement speed range of the nozzle is 10-60 mm/s. The extrusion speed range is 0.2-1.2 mm/s. The low-temperature range is 0 °C to −20 °C.
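As a minimal illustration of the stated machine limits (not part of the original work), the following Python sketch checks a candidate parameter set against the nozzle-speed, extrusion-speed and temperature ranges quoted above; the structure and names are hypothetical.

```python
# Minimal sketch (hypothetical helper): sanity-check print parameters against the
# machine limits stated in the text. All names are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class PrintParams:
    nozzle_speed_mm_s: float      # three-axis travel speed of the nozzle
    extrusion_speed_mm_s: float   # plunger feed speed of the hydraulic extruder
    chamber_temp_c: float         # target temperature of the build environment

# Limits from the machine description: nozzle 10-60 mm/s, extrusion 0.2-1.2 mm/s,
# chamber 0 to -20 degC.
LIMITS = {
    "nozzle_speed_mm_s": (10.0, 60.0),
    "extrusion_speed_mm_s": (0.2, 1.2),
    "chamber_temp_c": (-20.0, 0.0),
}

def validate(p: PrintParams) -> list[str]:
    """Return a list of out-of-range warnings (empty list means the set is usable)."""
    problems = []
    for name, (lo, hi) in LIMITS.items():
        value = getattr(p, name)
        if not (lo <= value <= hi):
            problems.append(f"{name}={value} outside [{lo}, {hi}]")
    return problems

if __name__ == "__main__":
    print(validate(PrintParams(nozzle_speed_mm_s=30, extrusion_speed_mm_s=0.6, chamber_temp_c=-20)))
```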
In order to verify the feasibility of the low-temperature 3D printing technology and the effectiveness of the low-temperature 3D printing machine, two kinds of PVA matrix hydrogels, namely PVA-LS hydrogel and PVA-CMC hydrogel, were prepared.

The preparation processes of the PVA-LS hydrogel were as follows: 45 mL of a co-solvent mixture of DMSO and deionized water with a volume ratio of 4:1 was prepared via magnetic stirring. LS weighing 0.15 g was added into the co-solvent mixture and stirred for 10 min and sonicated for 20 min at room temperature. Then 4.85 g of PVA was added into the mixed solution and mechanically stirred at 400 rpm for 2 h at 140 °C. When the temperature cooled to 60 °C, the final PVA-LS mixed solution was slowly poured into the ink storage tube for low-temperature 3D printing.

The preparation processes of the PVA-CMC hydrogel were as follows: 0.15 g of CMC was placed on the bottom of a beaker and adequately infiltrated with ethanol for 2 min. Then, 45 mL of deionized water was poured into the beaker and sonicated for 20 min. Then, 4.85 g of PVA was added into the CMC solution and continuously magnetically stirred at 800 rpm to obtain a homodisperse PVA-CMC mixed solution. The PVA-CMC mixed solution was then stirred at 400 rpm for 2.5 h at 95 °C. When the temperature cooled to 60 °C, the final PVA-CMC mixed solution was slowly poured into the ink storage tube for low-temperature 3D printing.

Low-Temperature 3D Printing Process of PVA Matrix Hydrogels

Before low-temperature 3D printing, a series of structural models and the corresponding STL files were constructed via SolidWorks. RepetierHost was used to connect to the low-temperature 3D printing machine and to control the 3D printing process. After stable installation of the ink storage tube filled with the PVA matrix hydrogel ink, the low-temperature control device was started first to obtain a low-temperature environment of −20 °C. Then, the hydraulic extrusion mechanism was preloaded to ensure continuous extrusion of the PVA hydrogel ink during the 3D printing process. Under the control of the electric control device, the parallel-shaft drive mechanism and the hydraulic extrusion mechanism jointly printed the structural models. After low-temperature 3D printing, the 3D printed samples were placed in a −20 °C environment for 12 h and then thawed at room temperature for 2 h. The polymerized PVA-LS and PVA-CMC hydrogels were immersed in deionized water for 4 days (changing the water every 12 h) to remove the DMSO completely.

Functionalization of Low-Temperature 3D Printing PVA Matrix Hydrogels

In order to verify the innovativeness of the low-temperature 3D printing technology, which maintained the original functionalization properties of the PVA-LS and PVA-CMC hydrogels, a silver particle reduction reaction was conducted. After mixing AgNO3, VC and PVP solutions with specific concentrations, the PVA-LS and PVA-CMC hydrogels were soaked in the mixed AgNO3-PVP solution for 8 h and then soaked in the mixed VC-PVP solution for 24 h. After washing with deionized water, the low-temperature 3D printed conductive PVA matrix hydrogels were obtained. The parameters of the silver particle reduction reaction yielding optimal conductivity and sensitivity were taken from our previous study: the concentrations of AgNO3 and VC were 1 M and 0.08 M, respectively, and the mass fraction of PVP was 10 wt.%.

Material Characteristics
Rheology Tests

A rotational rheometer (DHR, TA Instruments, New Castle, DE, USA) with a parallel plate geometry of 40 mm in diameter and a gap of 0.55 mm was employed to analyse the low-temperature rheology properties of the PVA matrix hydrogel reaction liquids. The strain sweep range was 0.1-1000%. In the oscillation mode, the frequency was 1 Hz and the shear rate was 6.28 rad/s. In a temperature range from 25 °C to −20 °C, the viscosity, storage modulus and loss modulus were analysed.

Mechanical Tests

A universal testing machine with a constant loading rate of 100 mm/min was used to obtain the tensile strength of the low-temperature 3D printed PVA matrix hydrogels. The sample size was 40 mm × 5 mm × 4 mm (Length × Width × Height). Average values of stress and strain were calculated from three individual measurements.

Microstructure and Phase Component Tests

In order to observe the microstructure and distribution pattern of the reduced silver particles, the cross-sections of the PVA-LS and PVA-CMC hydrogels after the silver reduction reaction were observed by field-emission scanning electron microscopy (SEM, XL-30, FEI Company) equipped with an energy-dispersive spectrometer. All samples were first cryofractured in liquid nitrogen and then freeze-dried for 48 h in a freeze-drying oven (LGJ-10C, Beijing Four Ring Scientific Instrument Factory Co., Ltd., Beijing, China).

In order to verify the phase component of the silver particles on the PVA matrix hydrogel surfaces, X-ray diffraction (XRD, MAXima_X XRD-7000) with Cu Kα radiation in the range of 10-90° was conducted. The scan speed was 4 deg/min. The freeze-dried PVA matrix hydrogels with silver particles were pressed into sheets for XRD analysis.

Conductivity and Electrical Sensing Tests

The conductivity of the low-temperature 3D printed PVA matrix hydrogels was measured via a digital-display DC power supply. The digital-display DC power supply showed the voltage (U) and current (I) loaded on the samples in real time, which provided the data for resistance (R) calculation via Ohm's law, R = U/I. A parallel-connected "JLU" LED circuit board was used to exhibit the conductivity intuitively.

The gauge factor (GF) was also calculated and analysed. The PVA matrix hydrogels with silver particles were fixed on the universal testing machine with a constant stretching rate of 10 mm/min. A digital-display multimeter (KEYSIGHT, 34465A) with an NPLC value of 0.02 was used to record the resistance variation during the stretching process. The relative change in resistance was calculated as ∆R/R0 = (R − R0)/R0, where R and R0 represent the real-time resistance and the initial resistance, respectively, both obtained via the digital-display multimeter. The GF was calculated as GF = (∆R/R0)/ε, where ε is the strain of the PVA matrix hydrogels during the stretching process.
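For illustration only, the resistance, ∆R/R0 and GF definitions above can be expressed as a small Python helper; the numerical readings in the example are hypothetical, not measured values from the paper.

```python
# Minimal sketch (hypothetical helper, not the authors' code): resistance from the
# DC supply via Ohm's law, then the relative resistance change and gauge factor
# defined in the text.
def resistance(voltage_v: float, current_a: float) -> float:
    """R = U / I."""
    return voltage_v / current_a

def relative_resistance_change(r: float, r0: float) -> float:
    """Delta R / R0 = (R - R0) / R0."""
    return (r - r0) / r0

def gauge_factor(r: float, r0: float, strain: float) -> float:
    """GF = (Delta R / R0) / strain, with strain as a fraction (0.05 for 5%)."""
    return relative_resistance_change(r, r0) / strain

if __name__ == "__main__":
    r0 = resistance(5.0, 0.25)          # hypothetical initial reading -> 20 ohm
    r = resistance(5.0, 0.20)           # hypothetical stretched reading -> 25 ohm
    print(gauge_factor(r, r0, 0.05))    # -> 5.0 under these made-up numbers
```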
Results and Discussion

Low-Temperature Rheology Analysis of PVA Matrix Hydrogels

Low-temperature rheology characteristics of the PVA matrix hydrogels are the theoretical basis of the low-temperature 3D printing technology and its mechanical framework. Figure 1a shows the viscosity variation of the PVA-LS and PVA-CMC hydrogel reaction liquids and of the pure PVA liquid. With increasing shear rate, the viscosity values of the PVA matrix hydrogels decreased gradually, exhibiting a shear-thinning phenomenon. With a further increase of the shear rate, the entanglements of the macromolecular segments were straightened, leading to the viscosity values of the PVA matrix hydrogels decreasing linearly. The PVA matrix hydrogel reaction liquids exhibited the typical Newtonian fluid phenomenon at 25 °C. During the temperature decline from 25 °C to −20 °C, the viscosity values of PVA-LS, PVA-CMC and pure PVA increased gradually, as shown in Figure 1b. Compared with Figure 1a, the viscosity of the PVA matrix hydrogel reaction liquids increased markedly. The addition of LS and CMC maintained the original viscosity characteristics of PVA. Based on the viscosity characteristics, the low-temperature environment established the primary feasibility of the low-temperature 3D printing. In order to investigate the theoretical 3D printability, the storage modulus and loss modulus of the PVA matrix hydrogel reaction liquids were tested, as shown in Figure 1c. The crossover point of the storage modulus and loss modulus was treated as an indication of the breakdown of the gel network structure and the transition of the quasi-liquid phase. PVA and the PVA matrix hydrogels all showed crossover points, as shown in Figure 1c. The storage modulus and loss modulus curves of the PVA-LS and PVA-CMC hydrogel reaction liquids intersected in a temperature range between −10 °C and 0 °C. When the temperature was lower than −10 °C, the storage modulus was higher than the loss modulus; PVA-LS exhibited elastic deformation and a solid state. When the temperature was higher than −5 °C, the storage modulus was lower than the loss modulus; PVA-LS exhibited viscous deformation and a liquid state. Similar variations between the storage modulus and loss modulus also existed for PVA-CMC and PVA. As shown in Figure 1c, the viscosity of the PVA matrix hydrogels increased suddenly and the fluidity decreased. Under the influence of the low-temperature environment, the PVA matrix hydrogel reaction solutions had suitable rheology properties, storing energy via elastic deformation and dissipating energy via viscous deformation. The addition of LS and CMC maintained the original modulus characteristics of PVA. As shown in Figure 1b,c, the PVA matrix hydrogel reaction solutions exhibited specific low-temperature rheology properties, which adequately proved the feasibility of the low-temperature 3D printing technology. Figure 1d-f exhibits the 3D printed sample patterns of the PVA matrix hydrogels at 25 °C and −20 °C, respectively. The failed 3D printing result at 25 °C shown in Figure 1d can be found in the supplemental movie (Supporting Information M1). The low temperature of −20 °C enabled controllable extrusion and excellent structure retention of the PVA matrix hydrogels, which intuitively proved the effectiveness of the theoretical analysis of low-temperature rheology.
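A brief sketch of how the G′/G″ crossover (gel point) described above could be located from a cooling sweep is given below; the modulus values are invented for illustration and only the crossover-finding step reflects the analysis in the text.

```python
# Illustrative sketch with hypothetical data: locate the gel point as the first
# temperature during cooling at which the storage modulus G' overtakes the loss
# modulus G'', mirroring the crossover analysis described for the PVA matrix inks.
import numpy as np

temps_c   = np.linspace(25, -20, 10)   # cooling sweep, 25 degC down to -20 degC
g_storage = np.array([10, 15, 25, 40, 70, 120, 300, 900, 2500, 6000.0])   # invented G' values
g_loss    = np.array([40, 55, 70, 90, 120, 160, 280, 500, 800, 1200.0])   # invented G'' values

def crossover_temperature(t, g1, g2):
    """Return the first temperature (during cooling) where G' >= G''."""
    above = np.nonzero(g1 >= g2)[0]
    return t[above[0]] if above.size else None

print(crossover_temperature(temps_c, g_storage, g_loss))   # -> -5.0 for this made-up sweep
```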
Construction and Operation of Low-Temperature 3D Printing Machine

In order to realize the 3D printing of PVA matrix hydrogel reaction liquids, a self-made low-temperature 3D printing machine was designed and constructed. Figure 2a exhibits the general assembly drawing and the practical appearance of the self-made low-temperature 3D printing machine. The low-temperature 3D printing machine was designed with a triangular prism structure to obtain high structural stability and was composed of a hydraulic extrusion mechanism, a parallel-shaft drive mechanism, a low-temperature control device and an electric control device. The parallel-shaft drive mechanism was the main body frame of the low-temperature 3D printing machine. Three independent pulleys on the three vertical aluminium profiles drove the movement of the nozzle on the X, Y and Z axes via the connection of parallel shafts. The pulleys were driven by stepping motors. Accurate and rapid positioning were the advantages of the parallel-shaft drive mechanism. The hydraulic extrusion mechanism undertook the extrusion of the PVA matrix hydrogel reaction liquids and was located on the top of the parallel-shaft drive mechanism. The distal extrusion pattern reduced the motion inertia of the storage tube and maintained the 3D printing precision. A stepping motor and water were utilized as the driving source and pressure medium, respectively, controllably extruding the PVA matrix hydrogel reaction liquid in the storage tube. The low-temperature control device realized a low-temperature environment ranging from 0 °C to −20 °C via a semiconductor chilling plate and was controlled by an individual power supply. Circulating water was used for heat dissipation of the semiconductor chilling plate and was stored in the red bucket. The electric control device, located at the bottom of the parallel-shaft drive mechanism, controlled the synergistic operation of the hydraulic extrusion mechanism and the parallel-shaft drive mechanism. The operation process of the complete machine can be found in the supplemental movie (Supporting Information M2). The steady and accurate operation process proved the feasibility of the low-temperature 3D printing machine.
Figure 2b shows the assembly drawings of the parallel-shaft drive mechanism. The parallel shaft was composed of three groups of connecting rods. One group of connecting rods was composed of two parallel fish-eye rods. The parallel shaft structure had a high load capacity. One side of the fish-eye rod connected with a pulley, and the other side connected with a triangular platform which was used for installing the storage tube. The hybrid stepping motors with high output torque and positional accuracy were fixed on the aluminium profiles. A hybrid stepping motor, type 42BYG60, was utilized as the drive motor of the parallel-shaft drive mechanism. The corresponding detailed parameters are listed in Table 2. In order to maintain high positional accuracy and sufficient drive force of the parallel-shaft drive mechanism, a synchronizing wheel, type 2GT20, with 20 teeth and 2 mm tooth spacing was used. The synchronous belt was a 2GT type with a pitch line of 300 mm. The synchronizing wheel and synchronous belt system were connected with a jackscrew. Polished rods with a length of 520 mm and a diameter of 8 mm were treated as guide rails of the pulleys to enhance the positional accuracy of the parallel-shaft drive mechanism. The aluminium profiles were the frame materials of the low-temperature 3D printing machine, which provided sufficient strength and space for the arrangement of the mechanical structure and wiring harness. Considering the space requirements of 3D printing, aluminium profiles with lengths of 300 mm and 600 mm were used for the construction of the X axis, and the Y and Z axes, respectively. The frame structures used to install the cooling fans were 3D printed with polylactic acid.
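As an aside, the belt-drive resolution implied by the 2GT20 pulley can be estimated with the usual steps-per-millimetre relation; the 200 full steps per revolution and 16× microstepping used below are assumptions, since the paper does not state them.

```python
# Rough calibration sketch (assumptions flagged): belt-drive steps-per-mm for the
# 2GT20 pulley named in the text. The 200 full steps/rev (1.8 deg motor) and 16x
# microstepping are assumptions, not values given in the paper.
FULL_STEPS_PER_REV = 200      # assumption for the 42BYG60-type motor
MICROSTEPPING = 16            # assumption for the driver setting
TEETH = 20                    # 2GT20 pulley, from the text
BELT_PITCH_MM = 2.0           # GT2 belt pitch, from the text

mm_per_rev = TEETH * BELT_PITCH_MM                            # 40 mm of belt per revolution
steps_per_mm = FULL_STEPS_PER_REV * MICROSTEPPING / mm_per_rev
print(steps_per_mm)                                           # -> 80.0 steps/mm under these assumptions
```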
Figure 2c exhibits the assembly drawings of the hydraulic extrusion mechanism, which was also driven via a synchronizing wheel and synchronous belt system with a deceleration transmission ratio of 1:3. The small synchronizing wheel connected with the hybrid stepping motor, type 42BYG60, via a jackscrew, driving the big synchronizing wheel via the synchronous belt. The big synchronizing wheel connected with a screw rod via a jackscrew. The rotating screw rod led to the movement of the sliding block, pushing water out from the injector. Polished rods with a length of 300 mm and a diameter of 8 mm were treated as guide rails of the sliding block to enhance positional accuracy. The extruded water was squeezed into another, empty injector, pushing the piston core rod of that injector out of the working drum. The empty injector connected with the storage tube via a triangular platform. The extruded piston core rod pushed the PVA matrix hydrogel reaction liquid out of the storage tube. Through calculation and debugging, the feed speed of the hydraulic extrusion mechanism was matched with the movement speed of the triangular platform in the parallel-shaft drive mechanism. Besides the metallic components and parts, the other structural components were also 3D printed with polylactic acid to enhance packaging efficiency.

Figure 2d shows the assembly drawings of the low-temperature control device, which can construct a low-temperature environment of −20 °C. While the PVA matrix hydrogel reaction liquid was extruded by the hydraulic extrusion mechanism, the low-temperature control device provided a controllable and steady low-temperature environment for 3D printing. The self-made low-temperature 3D printing machine combined the advantages of a direct-ink-writing 3D printing machine and a low-temperature system, which constructed a steady and effective platform for exploiting the low-temperature rheology properties of PVA matrix hydrogels.

A series of typical common structures were designed to verify the operational effectiveness of the low-temperature 3D printing machine. Figure 3(a-1)-(a-5) depicts the process and material object of PVA-LS hydrogels with a cuboid structure. Based on a large number of debugging experiments, the detailed low-temperature 3D printing parameters which can realize a practical 100% filling rate are listed in Table 3.
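The feed-speed matching mentioned for the hydraulic extrusion mechanism can be sketched as a simple volume balance between the plunger and the deposited track; the plunger diameter, track width and layer height used below are hypothetical values, not parameters from Table 3.

```python
# Hedged sketch of the "calculation and debugging" step: match the plunger feed
# speed to the nozzle travel speed so that the extruded volume equals the volume
# of the deposited track. All geometric values below are hypothetical.
import math

def plunger_speed(nozzle_speed_mm_s, track_width_mm, layer_height_mm, plunger_diameter_mm):
    """Volume balance: A_plunger * v_plunger = w * h * v_nozzle."""
    plunger_area = math.pi * (plunger_diameter_mm / 2) ** 2
    return nozzle_speed_mm_s * track_width_mm * layer_height_mm / plunger_area

# e.g. 40 mm/s travel, 1.0 mm track width, 0.5 mm layer height, 10 mm plunger diameter
print(round(plunger_speed(40, 1.0, 0.5, 10), 3))   # -> 0.255 mm/s (hypothetical, within the 0.2-1.2 mm/s range)
```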
The size of the cuboid structure was 30 mm × 30 mm × 3 mm (Length × Width × Height), as shown in Figure 3(a-1). Figure 3(a-2) exhibits the 3D printing slice path pattern of the cuboid structure with eight layers. The detailed low-temperature 3D printing process of the cuboid structure can be observed in the supplemental movie (Supporting Information M3). For convenience in describing the low-temperature 3D printing process, an initial time of ∆t = 0 s was set in Figure 3(a-3). At ∆t = 0 s, the nozzle linked with the storage tube was at the top-right corner of the 3D printed sample. Several extruded PVA-LS paths with the original colour of the PVA-LS hydrogel reaction liquid maintained the initial path structure without collapsing; moreover, the extruded path structure was not yet solidified. After 19 s, the nozzle had moved to the middle part of the 3D printed sample, as shown in Figure 3(a-4). The extruded continuous path structure without collapse also exhibited the original colour of the PVA-LS hydrogel reaction liquid. However, the colour of the 3D printed paths in the top-right corner was white, which indicated that the corresponding path structure was solidified. When the designed paths depicted in Figure 3(a-2) had been 3D printed one by one, the low-temperature 3D printing process of the cuboid structure was finished. After sufficient cycles of freezing and thawing, the PVA-LS hydrogel cuboid was finished, as shown in Figure 3(a-5). The size of the final PVA-LS hydrogel cuboid was 30 mm × 30 mm × 3 mm (Length × Width × Height), matching the designed size and proving the high 3D printing precision of the self-made low-temperature 3D printing machine.

In order to verify the repeatability of the low-temperature 3D printing machine, a torus structure was designed. The corresponding dimensional parameters are exhibited in Figure 3(b-1). The 3D printing path structure with five layers is shown in Figure 3(b-2). The practical low-temperature 3D printing process can be found in the supplemental movie (Supporting Information M4). The time describing the nozzle position in Figure 3(b-3) was also defined as ∆t = 0 s. The nozzle was on the second layer of the torus structure. In the bottom-left part, the 3D printed path maintained the designed unbroken path structure. After the continuous and unhindered low-temperature 3D printing process, the nozzle moved to the top-right corner. As shown in Figure 3(b-4), the bottom-left part depicted in Figure 3(b-3) was solidified and maintained the original 3D printed structure perfectly, which provided a solid structural base for the third layer. After sufficient cycles of freezing and thawing, the practical low-temperature 3D printed torus sample showed high dimensional accuracy, as seen by comparing Figure 3(b-1) and Figure 3(b-5).
The low-temperature 3D printing of the cuboid and torus structures effectively verified the feasibility of low-temperature 3D printing. The quadrangular platform structure depicted in Figure 3(c-1) was designed to investigate the accumulation property of the PVA-CMC hydrogel. The slice path structure with seven layers is exhibited in Figure 3(c-2). The detailed low-temperature 3D printing process can be found in the supplemental movie (Supporting Information M5). Attributed to its low-temperature rheology properties, the PVA-CMC reaction liquid could also realize continuous and steady low-temperature 3D printing, similar to the PVA-LS reaction liquid. The first layer was 3D printed and maintained the original designed path structure without collapse, as shown in Figure 3(c-3). After 120 s, the fourth layer was low-temperature 3D printed on the solidified first three layers, as depicted in Figure 3(c-4). After accumulation layer by layer, the final low-temperature 3D printed PVA-CMC quadrangular platform structure was complete, as exhibited in Figure 3(c-5), proving the high structure accumulation and precision properties of the low-temperature 3D printing machine.

The low-temperature 3D printing of the simple structures depicted in Figure 3 indicated the operational effectiveness of the self-made low-temperature 3D printing machine. Based on the low-temperature rheology characteristics of PVA matrix hydrogels, the low-temperature 3D printing technology, including the steady low-temperature 3D printing machine and the controllable parameters, provided the preparation technology base for PVA matrix hydrogels with complicated structures.

Low-Temperature 3D Printing of Complex Structures

After the construction of the simple structures shown in Figure 3, the construction of complex structures with strut members was analysed, as shown in Figure 4. The low-temperature 3D printability of undersized and exquisite structures was the steady base for the construction of complex PVA-LS and PVA-CMC structures with strut members. Therefore, Model I, a groove cuboid with the detailed dimensions shown in Figure 4(a-1), was adopted. The depth and width of the groove were 2 mm. The width of the prominent part was only 1.5 mm. As shown in Figure 4(a-2), the path structure with nine layers was constructed via the CuraEngine slicing software. Combined with continuous extrusion, the low-temperature 3D printing machine realized the path structure and 3D printed the groove cuboid successfully, as can be seen in Figure 4(a-3). The degreasing cotton in the 3D printed groove directly exhibited the undersized structure construction property of the low-temperature technology. Combined with the practical low-temperature 3D printing process shown in the supplemental movie (Supporting Information M6), the 3D printing process of the path structure is shown in Figure 4(a-5)-(a-8). The solid cuboid with dimensions of 40 mm × 5 mm × 1 mm (Length × Width × Height) was divided into three layers. Under the positive role of the low-temperature rheology, the first three layers, with a 100% filling rate, were 3D printed layer by layer. On the base of the solidified first three layers, the other six layers of the two prominent parts with a height of 2 mm were low-temperature 3D printed successfully.
In order to verify the low-temperature 3D printing feasibility of complex structures via strut members, Model II, a cuboid with a thru hole, was designed. A glass sheet with dimensions of 24 mm × 20 mm × 1 mm (Length × Width × Height) was treated as a strut member. Owing to the continuity of the low-temperature 3D printing process, ancillary paths were adopted to reserve enough operation time for placing the glass strut member. The detailed shape and dimensions of Model II are exhibited in Figure 4(b-1). The thru hole dimensions of the cuboid were 24 mm × 5 mm × 1 mm (Length × Width × Height). The 3D printing path structure with nine layers, obtained from the CuraEngine slicing software, is shown in Figure 4(b-2). Figure 4(b-3) shows the low-temperature 3D printed cuboid with the glass sheet strut member. Figure 4(b-4) shows the final shape of Figure 4(b-3) without the ancillary paths, which proved the feasibility of the structural design of Model II. The supplemental movie (Supporting Information M7) exhibits the detailed low-temperature 3D printing process of Model II. Figure 4(b-5)-(b-8) shows the schematic diagrams of the main procedures of the low-temperature 3D printing process of Model II. The first three layers with dimensions of 40 mm × 5 mm × 0.5 mm (Length × Width × Height) were low-temperature 3D printed, serving as the base of the thru hole structure. When the fourth layer and the corresponding ancillary path started, the glass strut member was placed on the solidified PVA-LS hydrogel reaction liquid. Then, the other three layers were 3D printed on the solidified PVA-LS hydrogel reaction liquid and the glass strut member shown in Figure 4(b-1). After sufficient cycles of freezing and thawing, the glass strut member and the low-temperature 3D printed ancillary paths were removed and cut off, respectively, which resulted in the final low-temperature 3D printed PVA-LS cuboid with a thru hole, shown in Figure 4(b-4).

In order to verify whether the low-temperature 3D printing technology can realize complicated structures with multiple strut members, PVA-CMC hydrogel reaction liquid was used for the low-temperature 3D printing of Model III. As shown in Figure 4(c-1), Model III was a cube with three thru holes. Figure 4(c-2) exhibits the path structure with the nine layers of Model III. The dimensions of the columniform thru hole were Φ1 mm × 12 mm (Diameter × Length).
Figure 4(c-3,c-4) exhibits the practical low-temperature 3D printed cube with and without strut members, respectively. The supplemental movie (Supporting Information M8) exhibits the detailed low-temperature 3D printing process of Model III. Figure 4(c-5)-(c-8) exhibits the main low-temperature 3D printing processes of Model III. After the solidification of the first three layers, two columniform strut members were placed in sequence. Then, the other four layers were low-temperature 3D printed on the strut members. Another columniform strut member was also placed on the solidified PVA-CMC hydrogel reaction liquid. After the low-temperature 3D printing of the rest of the path structures, Model III was constructed. After sufficient cycles of freezing and thawing, the low-temperature 3D printed PVA-CMC cube with three thru holes shown in Figure 4(c-1) was successfully obtained. Through the low-temperature 3D printing of PVA matrix hydrogels with Model I, Model II and Model III, the novel low-temperature 3D printing technology realized the efficient construction of complicated structures with undersized and exquisite dimensions via multiple strut members.

Mechanical strength was the application base of the low-temperature 3D printed PVA matrix hydrogels. Even though Figures 3 and 4 exhibit perfect low-temperature 3D printing of simple and complex structures, the path structures were obtained directly from the CuraEngine slicing software. The generated 3D printing paths only reproduced the original shape patterns and ignored the effect of parameters, including path angles, filling rate and layers, on mechanical strength. Therefore, the design of the low-temperature 3D printing path and the construction of the relationship between path structure and mechanical strength became the key points for the mechanical property enhancement of low-temperature 3D printed PVA matrix hydrogels.
Because of the bottom-up conduction pattern, the top parts of tall structures had a relatively low degree of solidification. The key point for successful low-temperature 3D printing was the cooperation of the extrusion speed, the movement speed of the nozzle and the freezing speed of the PVA matrix hydrogel reaction liquid. The arrangement of ancillary paths with a relatively low 3D printing speed can provide enough time for sufficient solidification of the 3D printed PVA matrix hydrogel and maintain the 3D printing precision. The self-compiled G-codes, containing all 3D printing parameters, built the programmable structure base. Figure 5a shows the designed path pattern of the parallel path structure. The dimensions of the parallel path structure with 10 layers and a 100% filling rate were 40 mm × 5 mm × 3 mm (Length × Width × Height). The path structure was designed as a zigzag pattern. In order to realize an accurate, intact dimensional pattern, the ancillary paths shown in Figure 5b were printed at a relatively low nozzle movement speed. After removing the ancillary paths, the final low-temperature 3D printed PVA-LS hydrogel with the parallel path structure is shown in Figure 5c. The supplemental movie (Supporting Information M9) exhibits the detailed low-temperature 3D printing process of the parallel path structure. In the first layer, the ancillary path was 3D printed at the periphery of the parallel path structure, as shown in Figure 5d. When the ancillary path was low-temperature 3D printed, the movement speed of the nozzle was slow, which provided sufficient time for solidification of the parallel path structure. The existence of the ancillary paths ensured the positive role of the low temperature in the construction of the parallel path structure layer by layer, as shown in Figure 5e,f. The low-temperature 3D printing process proved the feasibility of self-compiled G-codes in the low-temperature 3D printing technology.
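A simplified sketch of such a self-compiled layer, combining a slower ancillary perimeter with a zigzag fill, is shown below; the feed rates, track spacing and the omission of a dedicated extrusion axis are simplifications and not the authors' actual G-code.

```python
# Simplified sketch of a self-compiled G-code layer: a slower ancillary perimeter
# followed by a zigzag fill. Track pitch and feed rates are illustrative; the real
# extrusion is handled by the separate hydraulic mechanism described above.
def layer_gcode(x0, y0, length, width, z, pitch=1.0, fill_feed=1800, ancillary_feed=600):
    lines = [f"G1 Z{z:.2f} F{fill_feed}"]
    # ancillary perimeter at reduced speed, giving the fill time to solidify
    lines += [
        f"G1 X{x0:.2f} Y{y0:.2f} F{ancillary_feed}",
        f"G1 X{x0 + length:.2f} Y{y0:.2f}",
        f"G1 X{x0 + length:.2f} Y{y0 + width:.2f}",
        f"G1 X{x0:.2f} Y{y0 + width:.2f}",
        f"G1 X{x0:.2f} Y{y0:.2f}",
    ]
    # zigzag fill: back-and-forth passes along the length, spaced by `pitch`
    y, direction = y0, 1
    while y <= y0 + width:
        x_a, x_b = (x0, x0 + length) if direction > 0 else (x0 + length, x0)
        lines += [f"G1 X{x_a:.2f} Y{y:.2f} F{fill_feed}", f"G1 X{x_b:.2f} Y{y:.2f}"]
        y += pitch
        direction *= -1
    return lines

# 40 mm x 5 mm footprint from the text; the 0.3 mm first-layer height is assumed
print("\n".join(layer_gcode(0, 0, 40, 5, 0.3)))
```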
Self-compiled G-codes enriched the designability of the low-temperature 3D printing of the PVA matrix hydrogels. Bionic structures, with their characteristic microstructures and excellent mechanical strength, provide a good structural model for the design of path structures with high mechanical strength via self-compiled G-codes. In our previous study, the mechanical properties of the spearer propodus of the mantis shrimp were investigated [28]. Figure 6a exhibits the typical layered spiral structure of the spearer propodus of the mantis shrimp. The spearer propodus (red wireframe in Figure 6a) bore a maximum tensile force of 320 N in an in situ tensile test [28]. Inspired by the layered spiral structures shown in Figure 6a, a series of layered spiral path structures with different layer angles were designed. Figure 6b exhibits the layered spiral path structure with dimensions of 40 mm × 5 mm × 3 mm (Length × Width × Height), generated via self-compiled G-codes. Based on the steady operation of the low-temperature 3D printing machine, the path structure with seven layers shown in Figure 6b was low-temperature 3D printed with PVA-CMC, as shown in Figure 6c. The supplemental movie (Supporting Information M10) exhibits the detailed low-temperature 3D printing process of the layered spiral path structure. The direction along the width was treated as 0°, as shown in Figure 6d. The PVA-CMC hydrogel reaction liquid was smoothly extruded and maintained the
3D printed path structure with a 100% filling rate. The path direction angle between the first layer and the second layer was 30° in the counterclockwise direction. As shown in Figure 6e,f, the path direction angles of the third layer and the sixth layer were 60° and 150°, respectively. Namely, the path direction angle between adjacent layers was 30° in the counterclockwise direction, which was defined as a layered spiral path structure of 30°. Besides the path structure of 30°, the path direction angle of adjacent layers could also be designed as 15°, 45°, 60°, 75° and 90°. The PVA-CMC hydrogel with a layered variable-angle structure further verified the structural diversity of the low-temperature 3D printing technology.

In order to verify the positive effect of layered variable-angle structures on the enhancement of mechanical strength, the mechanical strength values of the low-temperature 3D printed PVA-CMC hydrogels with various path direction angles were investigated, as shown in Figure 7a. The stress values of the PVA-CMC hydrogels with 0°, 15°, 30°, 45°, 60°, 75° and 90° were 153.1 kPa, 193.5 kPa, 398.8 kPa, 333.1 kPa, 235.5 kPa, 210.1 kPa and 207.1 kPa, respectively. The strain values of the PVA-CMC hydrogels with 0°, 15°, 30°, 45°, 60°, 75° and 90° were 65.7%, 68.8%, 101.4%, 106.5%, 77.6%, 74.2% and 72.3%, respectively. Compared with the PVA-CMC hydrogels with 0°, the PVA-CMC hydrogels with a layered variable-angle structure exhibited higher mechanical strength. Moreover, the PVA-CMC hydrogels with 30° had the highest stress values. Besides the PVA-CMC hydrogels, the low-temperature 3D printed PVA-LS hydrogels with a layered variable-angle structure also exhibited a high mechanical strength value of 372.2 kPa, as shown in Figure 7b, which proved the mechanical strength enhancement of the layered variable-angle structure. Compared with our previous study [27], the low-temperature 3D printed PVA matrix hydrogels with a strength-enhanced structure had excellent mechanical properties and application bases. Based on Figure 5, the low-temperature 3D printing technology maintained the original excellent mechanical properties of PVA matrix hydrogels via a self-compiled 3D printing path structure design. Moreover, the low-temperature 3D printing technology realized diversified structure compiling properties in PVA matrix hydrogels with high structural strength.
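The layer-angle bookkeeping behind the layered spiral paths described above can be sketched as follows; only the per-layer raster directions are computed, and turning them into full printable paths is omitted.

```python
# Sketch of the layer-angle bookkeeping for the layered spiral paths: rotate the
# raster direction of each layer by a fixed increment (30 degrees counterclockwise
# here, as in the printed samples). Only direction vectors are computed.
import math

def layer_directions(n_layers, increment_deg=30.0):
    """Return (angle, unit direction vector) for each layer, with layer 0 at 0 deg."""
    out = []
    for i in range(n_layers):
        angle = (i * increment_deg) % 180          # a raster direction is axial, so modulo 180
        rad = math.radians(angle)
        out.append((angle, (math.cos(rad), math.sin(rad))))
    return out

for angle, (dx, dy) in layer_directions(7, 30):
    print(f"layer angle {angle:5.1f} deg, direction ({dx:+.2f}, {dy:+.2f})")
# layer index 2 -> 60 deg and layer index 5 -> 150 deg, matching the third and sixth layers in the text
```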
Conductivity and Sensing of Low-Temperature 3D Printing PVA Matrix Hydrogels

The low-temperature 3D printing technology realized high mechanical strength in PVA matrix hydrogels via the construction of the path structure. During the implementation of the low-temperature 3D printing of PVA matrix hydrogels, the 3D printable rheology of the PVA hydrogel reaction liquids was controlled by the low temperature without any rheology modifiers, which maintained the original material components of the PVA matrix hydrogels. Consequently, the original functional properties, including the electrical sensing characteristics, could also be maintained. In order to verify the conductivity and sensing properties, the self-made surface silver reduction reaction [27] was conducted on the low-temperature 3D printed PVA matrix hydrogels.

Figure 8a shows a low-temperature 3D printed PVA-LS hydrogel with a layered variable-angle structure of 30° after the reduction reaction of silver particles. Compared with Figure 5c, the colour of the PVA-LS hydrogel with Ag became dark grey, while the path structure patterns could still be observed. After immersion in the mixed AgNO3/PVP solution and the VC/PVP solution, silver particles were obtained on the surface of the PVA-LS hydrogels, as shown in Figure 8b. The energy spectrum analysis shown in Figure 8c disclosed the existence and distribution of the silver particles. The XRD analysis shown in Figure 8d indicated that the phase component of the reduced silver on the PVA-LS surfaces was elemental silver. The self-made surface silver reduction reaction was steady and efficient, leading to silver particles of uniform size, as shown in Figure 8e,f. Owing to the silver particles, all surfaces of the PVA-LS hydrogels were electrically conductive and could light the "JLU" LED set with a low electrical resistance value, as shown in Figure 8g (Supporting Information M11). Combining this conductivity with the low-temperature 3D printed soft structure of high mechanical strength, the PVA-LS hydrogel with a layered variable-angle structure of 30° withstood 100 cyclic stretching processes with 5% strain. During the cyclic stretching process shown in Figure 8h, the output ∆R/R0 signal was constant with minor fluctuations, indicating stable output electrical signal changes under small strain for the low-temperature 3D printed conductive PVA-LS hydrogels.

Besides the PVA-LS hydrogels, the low-temperature 3D printed PVA-CMC hydrogels could also be given electrical sensing functions via the reduction reaction of silver particles. Compared with the pure PVA-CMC hydrogels shown in Figure 6, the PVA-CMC hydrogels with silver particles also exhibited a dark grey colour and maintained the original layered variable-angle structure. Based on the silver particles of uniform size shown in Figure 9a, the conductive PVA-CMC hydrogel also had a low electrical resistance value. As shown in Figure 9b, the low-temperature 3D printed conductive PVA-CMC hydrogel was successively stretched to 5%, 10%, 15%, 20% and 25% strain. The corresponding ∆R/R0 values were 0.377%, 0.96%, 2.59%, 5.23% and 8.35%, respectively. The output signal was stable and repeatable in the seven stretching-releasing cycles at each strain, indicating an excellent sensing property within 25% strain. The low-temperature 3D printed conductive PVA-CMC hydrogel was continuously stretched to a maximum effective strain value of 25% and released for 150 cycles, as shown in Figure 9c. This
steady and durable sensing function exhibited the application feasibility of the low-temperature 3D printing technology. The ∆R/R0 values of the low-temperature 3D printed conductive PVA-CMC hydrogel were measured at increasing strain values to analyse the sensing sensitivity, as shown in Figure 9d. After linear fitting, the slope of the fitting equation was 34.6 in the 0-20% strain range. Moreover, the fitting equation had relatively high linearity (R² = 0.98). The low-temperature 3D printed conductive PVA-CMC hydrogel exhibited high sensitivity, which makes it suitable for detecting small deformations as a strain sensor.

Based on the electrical sensing functions of the low-temperature 3D printed PVA matrix conductive hydrogels, the low-temperature 3D printing technology maintained the original material properties, realizing conductivity and sensing properties via an in situ reduction reaction of silver particles on the PVA matrix hydrogel surfaces. Compared with the traditional moulding method, the low-temperature 3D printing technology offered diversified design patterns of 3D printing path structures while preserving the original material characteristics of the PVA matrix hydrogel reaction liquids, which produced new kinds of PVA matrix hydrogels with excellent mechanical strength and electrical sensing functions. The low-temperature 3D printing machine and technology provided new efficient methods for resolving the bottlenecks of PVA matrix hydrogels and built a firm technology base for the practical application of PVA matrix hydrogels in soft electrical sensing fields.
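As an illustration of the linear-fitting step used to extract the sensitivity, the following sketch fits hypothetical ∆R/R0 data against strain and reports the slope and R²; the data points are invented and only the procedure mirrors the analysis above.

```python
# Illustrative fit with hypothetical data points: recover the sensitivity as the
# slope of Delta R / R0 versus strain over the 0-20% range, mirroring the reported
# analysis (slope ~34.6, R^2 ~0.98 in the paper). Units: strain as a fraction,
# Delta R / R0 in percent.
import numpy as np

strain = np.array([0.00, 0.05, 0.10, 0.15, 0.20])    # fractional strain (hypothetical)
dr_r0  = np.array([0.0, 1.6, 3.5, 5.2, 6.9])         # hypothetical Delta R / R0 values in %

slope, intercept = np.polyfit(strain, dr_r0, 1)
pred = slope * strain + intercept
r_squared = 1 - np.sum((dr_r0 - pred) ** 2) / np.sum((dr_r0 - np.mean(dr_r0)) ** 2)
print(f"slope = {slope:.1f}, R^2 = {r_squared:.3f}")
```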
Conclusions

In order to realize the functionalization of various 3D printing path structures of PVA matrix hydrogels under the premise of maintaining the original material function attributes, a self-made, innovative low-temperature 3D printing machine and the corresponding technology were developed. By constructing various structure patterns and conducting functionalization analyses, the feasibility and effectiveness of the low-temperature technology were proved. The main conclusions are listed as follows: (1) PVA matrix hydrogel reaction solutions exhibited specific low-temperature rheology properties. During the temperature decline from 25 °C to −20 °C, the viscosity values of the PVA matrix hydrogel reaction liquids increased gradually.

Figure 2. The (a) general assembly drawing and corresponding assembly drawings of (b) parallel-shaft drive mechanism, (c) hydraulic extrusion mechanism and (d) low-temperature control device of the low-temperature 3D printing machine.

Figure 5. (a) self-compiling G-code pattern, (b) practical pattern, (c) final pattern and low-temperature 3D printing process of (d) 1 layer, (e) 5 layers and (f) 6 layers of the parallel path structure.

Figure 6. (a) microstructure of spearer propodus of mantis shrimp and (b) self-compiling G-code pattern, (c) final pattern and low-temperature 3D printing process of (d) 1 layer, (e) 3 layers and (f) 6 layers of layered variable-angle structure.
Figure 9. (a) material object and amplifying morphology of Ag particles, (b) ∆R/R0 with cyclic different strains, (c) ∆R/R0 with 25% strain for 150 cycles and (d) line-fitting curve of ∆R/R0 with continuous strain variation of low-temperature 3D printed PVA-CMC conductive hydrogels. Table 1. Main components of the low-temperature 3D printing machine. Table 2. Detailed parameters of the selected hybrid stepping motor.
12,626.6
2023-09-24T00:00:00.000
[ "Materials Science", "Engineering" ]
Combinatorial MAB-Based Joint Channel and Spreading Factor Selection for LoRa Devices Long-Range (LoRa) devices have been deployed in many Internet of Things (IoT) applications due to their ability to communicate over long distances with low power consumption. The scalability and communication performance of LoRa systems are highly dependent on the spreading factor (SF) and channel allocations. In particular, it is important to set the SF appropriately according to the distance between the LoRa device and the gateway (GW), since the signal reception sensitivity and the bit rate both depend on the used SF and are in a trade-off relationship. In addition, considering the recent surge in the number of LoRa devices, the scalability of LoRa systems is also greatly affected by the channels that the LoRa devices use for communications. In our previous study, it was demonstrated that lightweight decentralized learning-based joint channel and SF-selection methods can make appropriate decisions with low computational complexity and power consumption. However, the effect of the location situation of the LoRa devices on the communication performance in a practical larger-scale LoRa system has not been studied. Hence, to clarify the effect of the location situation of the LoRa devices on the communication performance in LoRa systems, in this paper, we implemented and evaluated the learning-based joint channel and SF-selection methods in a practical LoRa system. In the learning-based methods, the channel and SF are decided based only on the ACKnowledgement (ACK) information. The learning methods evaluated in this paper were the Tug of War dynamics, Upper Confidence Bound 1, and ϵ-greedy algorithms. Moreover, to consider the relevance of the channel and SF, we propose a combinatorial multi-armed bandit-based joint channel and SF-selection method, in which the combinations of the channel and SF are set as arms; conversely, the SF and channel are set as independent arms in the independent methods that were evaluated in our previous work. From the experimental results, the following points can be observed. First, the combinatorial methods can achieve a higher frame success rate (FSR) and fairness than the independent methods. In addition, the FSR can be improved by joint channel and SF selection compared to SF selection only. Moreover, the channel and SF selection depends on the location situation to a great extent. Introduction The Low-Power Wide-Area Network (LPWAN) is a technology that enables low-power and long-distance communication for Internet of Things (IoT) applications [1]. The number of IoT devices using the communication protocols that belong to LPWAN has been rapidly increasing in recent years [2]. Among the LPWAN protocols, Long-Range (LoRa) systems attract attention because they do not require a license but have an open standard. Besides, they can be built at a low cost. As a result, the number of LoRa devices is projected to continue increasing rapidly. Research on communication parameter management in LoRa systems can be divided into centralized and distributed approaches. Most of the existing research focuses on centralized approaches in which the network server allocates communication parameters to the LoRa devices [15]. The GW is responsible for receiving the LoRa packets of the nodes and forwarding them to the network server [16]. In the centralized approaches, the network server may allocate optimal transmission parameters.
However, in the centralized approaches the GW needs a large amount of a priori information, such as the distance between the GW and the LoRa device, the packet length, the event probability, the number of devices, and so on, to determine the communication parameters for the LoRa devices, which may increase the communication latency. Furthermore, the LoRa device needs to be awake to receive the transmission parameter instructions from the GW, which may increase the energy consumption of the IoT devices compared to decentralized parameter-selection methods. Moreover, the centralized approaches also increase the consumption of communication resources due to the transmission of the parameter instructions. There are also some studies on improving the performance based on the standardized protocols of the LoRa systems. There are mainly three specifications for LoRa systems, i.e., Class A, Class B, and Class C. Class A uses a so-called Pure-ALOHA-type asynchronous multiple-access scheme in which a terminal uplinks a short burst signal at an arbitrary time. On the other hand, terminal reception is limited to a very short period immediately after the uplink. In Class B, all GWs and terminals use beacons transmitted by the GWs to synchronize with the network. By accurately recognizing the time at which each terminal opens its reception window, the GW can immediately send a downlink message when there is downlink information. In Class C, high-speed downlink communication is possible because terminals can always receive signals. Even though the parameter control methods for the different LoRa classes are not the same, the standardized protocols face the same issues as the other centralized methods. As described above, considering the future proliferation of LoRa devices and the need to provide ultra-long battery life for LoRa devices, only resource allocation schemes that can significantly reduce signaling to the access network are feasible. Therefore, a decentralized approach is required in which each LoRa device autonomously selects appropriate communication parameters without the help of the GW/network server [17]. Compared to the centralized approaches, the decentralized approach allows parameter selection without prior information or the transmission of parameter instructions [18]. Hence, the spectrum resources used for the communications and the energy consumption of the LoRa devices can be reduced. Several decentralized communication-parameter-selection methods based on the Multi-Armed Bandit (MAB) algorithm have been proposed in previous studies to improve the scalability of LoRa systems. However, these previous studies were limited to the selection of only the SF or only the channel (CH). Meanwhile, few papers have considered the implementation of the methods in practice. As IoT devices have low computational power, limited storage, and limited battery capacity, it is a great challenge to develop a joint SF and CH method for practical LoRa systems. To address the issue described above, we proposed a MAB-based joint channel and SF-selection method in our previous work. We evaluated the performance of the proposed method in high-density static and dynamic practical environments [19]. The experimental results demonstrated that the performance of the FSR can be improved by selecting both the channel and SF. However, the selection of the SF and channel may be correlated, which was not considered in our previous work.
In addition, the communication performance of the LoRa device strongly depends on the selection of the SF in relation to its location, which was also not considered. To consider the correlation of the SF and channel and improve the performance of our LoRa systems further, in this paper we set the SF-channel-selection problem as a combinatorial MAB-based SF-channel-selection problem and solved it using MAB methods. Moreover, we evaluated the performance of our LoRa systems in terms of the FSR with varied locations of the LoRa devices to show the relationship between the SF selection and the location of the LoRa devices. We consider that the proposed method may be a potential transmission-selection solution for LoRa systems in the future. The main contributions of this paper are as follows: • We set the SF-channel-selection problem as a combinatorial MAB-based SF-channel problem and introduced the MAB algorithms, including the Tug of War dynamics (ToW), Upper Confidence Bound 1 (UCB1), and ε-greedy algorithms, to solve the formulated problem. In the MAB-based SF-channel-selection methods, the SF and channel are selected by the LoRa devices using only the ACK information, so the methods can be applied without modifications to the LoRa protocol. In addition, since the operation of the MAB algorithms is not complex, the methods can be easily implemented in IoT devices with memory and computational power constraints. • We evaluated the proposed method in experiments with real-world LoRa devices in an environment where the LoRa devices were distributed in multiple indoor locations. First, we evaluated the performance of the FSR and the relationship between the selection rate of the SF and the locations of the LoRa devices when only selecting the SF. The results demonstrated that the appropriate SF depended on the distance from the GW. Besides, the superiority of the MAB-based SF-selection methods was demonstrated by comparing the methods with random access. Then, we evaluated the performance of the FSR and Fairness Index (FI) when considering a joint selection of the SF and channel. Specifically, to show the effectiveness of the proposed combinatorial MAB-based SF-channel method for the FSR and FI, we compared it with the independent MAB-based SF-channel method, where the SF and channel are selected independently. Next, we focused on the performance evaluation of the proposed MAB-based SF-channel-selection method. The performance of the FSR with varying numbers of LoRa devices, transmission intervals, and locations of the LoRa devices was evaluated exhaustively. The remainder of this paper is organized as follows. Section 2 provides an introduction to the related work. Section 3 describes the system model and the formulated problem. Section 4 describes the combinatorial MAB-based SF-channel-selection methods. Section 5 describes the implementation and performance evaluation of the proposed combinatorial MAB-based methods. Finally, we provide a conclusion to summarize this paper in Section 6. Related Work In this section, we describe the related work on communication parameter management techniques in LoRa systems. We first present the SF- and channel-selection methods in the centralized approach, followed by the decentralized approach. Centralized Approaches In this subsection, we introduce the related work on the centralized approaches for SF allocation and channel allocation, followed by joint SF and channel allocation. In the centralized approaches, the transmission parameters are allocated by the GW/network server.
SF Allocation Methods Simple centralized methods for allocating SFs include the Equal-Interval-Based (EIB) and Equal-Area-Based (EAB) allocation schemes [15,20]. In these schemes, the total network area is first divided into concentric circles, assuming that the GW is at the center of the area. The EIB then divides the network so that the width of each annulus is equal, while the EAB divides the network so that the area of each annulus is equal. Next, the SF is allocated to the annuli according to their proximity to the GW: a smaller SF is assigned to the annuli that are near the GW. In [15], the EIB and EAB methods were analyzed and compared. These simple schemes are based on the idea that the reception strength weakens as the distance from the GW increases. However, in a real network environment, it is necessary to consider the effects of interference and fading in a particular region, as well as the channel conditions. Therefore, it is challenging for these methods to sufficiently improve scalability in real-world environments. To this end, approaches to parameter optimization were proposed in [21][22][23][24], where the resource allocation problem was formulated as an optimization problem and optimization solvers were proposed to solve the formulated problems. Other methods that allocate the SF based on the channel gain and the Signal-to-Noise Ratio (SNR) were proposed in [25,26], respectively. In [27], a modification of the existing LoRa@FIIT protocol was proposed, ensuring energy-efficient, QoS-supporting, and reliable communication over the LoRa technology by selecting an appropriate SF and transmission power. In [28], a deep-reinforcement-learning-based adaptive PHY-layer transmission-parameter-selection algorithm was proposed to select the SF and power. The proposed algorithm was run on the GW to allocate the SF and power for the LoRa devices. It was shown that the proposed algorithm could improve the packet delivery ratio by as much as 500% in some cases while remaining adaptive at the same time. However, these centralized approaches for SF selection require the GW/network server to know a priori information, such as the number of devices, their locations, and their transmission probabilities. Furthermore, the GW/network server must send control signals regarding the communication parameters to all LoRa devices, which leads to increased communication resource consumption and communication latency. Channel Allocation Methods In addition to the SF discussed in the previous section, the management of the channel also has a significant impact on the scalability of LoRa systems. The quality varies greatly from channel to channel in the unlicensed ISM band because it is susceptible to interference from IoT devices and electronic devices in other applications. Similar to the SF-selection methods, many channel allocation methods have been proposed [18,29]. However, these existing studies have disadvantages, such as the need for the GW/network server to know prior information, as in the centralized SF-selection methods, which increases the communication resource consumption since the transmission parameter instructions need to be sent from the GW/network server. Because the disadvantages described above will become more serious with the future increase in the number of IoT devices, most of the related centralized approaches would have difficulty becoming realistic solutions.
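As an illustration of the EIB and EAB partitioning ideas described at the start of this subsection, the sketch below computes the annulus boundaries and assigns an SF by proximity to the GW. The 6 km network radius, the SF7-SF12 ordering, and the function names are illustrative assumptions, not values taken from the cited papers.

```python
import math

SFS = [7, 8, 9, 10, 11, 12]          # smaller SF assigned closer to the GW

def annulus_edges_eib(network_radius: float, n: int = len(SFS)):
    """Equal-Interval-Based: every annulus has the same radial width."""
    return [network_radius * (i + 1) / n for i in range(n)]

def annulus_edges_eab(network_radius: float, n: int = len(SFS)):
    """Equal-Area-Based: every annulus has the same area, so the outer
    radius of annulus i is R * sqrt((i + 1) / n)."""
    return [network_radius * math.sqrt((i + 1) / n) for i in range(n)]

def assign_sf(distance: float, edges):
    """Return the SF of the first annulus whose outer edge contains the device."""
    for sf, edge in zip(SFS, edges):
        if distance <= edge:
            return sf
    return SFS[-1]

# Example with a hypothetical 6 km network radius and a device 2.5 km from the GW.
R = 6000.0
print(assign_sf(2500.0, annulus_edges_eib(R)))   # EIB assigns SF9
print(assign_sf(2500.0, annulus_edges_eab(R)))   # EAB assigns SF8
```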
SF and Channel Allocation Methods Reference [30] proposed a joint channel- and SF-selection method that allocates an SF value depending on the rate demand of each end device and considers the availability of the frequency channel for each uplink transmission. However, the details of the proposed method were not fully given. According to the existing description, it seems that the transmission parameters could not be adjusted in response to dynamic environments, while the correlation between the locations of the users and the SF selection, as well as the implementation of the method, were not well considered. Moreover, it seems that the proposed method in [30] is a centralized method, which may face the same disadvantages as other centralized methods. Distributed Approaches In the decentralized approaches, it is possible to reduce communication resource consumption, latency, and energy consumption compared to the centralized approaches, since the LoRa devices make decisions independently. However, most existing studies on parameter-selection methods for LoRa systems are centralized approaches, and there are limited related studies on distributed approaches. In this subsection, we describe the distributed SF- and CH-selection approaches. SF Selection Methods SF-selection methods based on the MAB algorithm were proposed in [31,32]. In [31], SF selection based on a popular algorithm called Exponential Weights for Exploration and Exploitation (EXP3) was proposed and evaluated by simulation. In [32], an SF-selection method based on the Upper Confidence Bound (UCB), a MAB algorithm that can perform a high-precision search, was proposed. The simulation results showed that the proposed methods based on the MAB algorithm improved the success rate of data transmission. However, in these existing studies, the SF-selection methods were proposed assuming that all LoRa devices use the same channel, which is unrealistic. In addition, the methods have yet to be validated through real-world experiments, and realistic environments were not considered. Channel Selection Methods Several distributed channel-selection approaches were studied in [33,34]. In these approaches, the channels were selected based on MAB methods. In [33,34], a channel-selection method using the UCB algorithm, a typical MAB algorithm, was proposed and implemented on an actual LoRa device. Moreover, the experimental results under a dynamic environment with changing channel states were presented. However, these studies assumed that all LoRa devices use the same SF, which is not a realistic assumption. Furthermore, only experiments with a small number of LoRa devices were conducted, and no investigations were conducted in environments with a large number of LoRa devices. SF and Channel Selection Methods In [19], a method for simultaneous channel and SF selection was proposed for multiple MAB algorithms. However, only the performance under high-density conditions was evaluated. The distance between the LoRa devices and the GW and the reception strength have yet to be considered. Furthermore, the MAB problem structure considering the relevance between the channel and SF when selecting them simultaneously has yet to be well evaluated. In summary, the existing studies on centralized methods have disadvantages, such as the need for the GWs/network servers to know prior information and to send transmission parameter instructions to the IoT devices, which increases communication resource consumption.
Because the disadvantages described above will become more serious with the future increase in the number of IoT devices, most of the related centralized approaches would have difficulty becoming realistic solutions. Although studies on decentralized methods can solve the disadvantages of the centralized methods, several issues have not yet been considered in the existing studies. For instance, the distance between the LoRa devices and the GW and the reception strength have yet to be considered. Furthermore, the MAB problem structure considering the relevance between the channel and SF when selecting them simultaneously has yet to be well evaluated. To solve these issues, we propose a combinatorial MAB-based joint channel- and SF-selection method in this paper, which is exhaustively described in Section 4. The comparison of the relevant schemes is summarized in Table 2, where the symbols in the Table indicate whether or not each reference considered the corresponding items. System Model and Problem Formulation This paper considered the uplink transmission of a LoRa system with a star topology consisting of one GW and L LoRa devices. Denote D = {D_1, D_2, ..., D_l, ..., D_L} as the LoRa device set, where D_l denotes the l-th LoRa device. Assume that the number of available channels for the LoRa devices is I. The public ISM band of Japan was used for the communications between the LoRa devices and the GW in this paper, where the bandwidth of each channel was 125 kHz, and at most 15 channels can be used for communication. We considered a realistic wireless communication environment where LoRa devices are distributed in various locations at different distances from the GW, as shown in Figure 1. In Figure 1, the concentric circles are divided according to the distance from the GW. Different colors represent the different SFs assigned to LoRa devices, which may need to be assigned according to the distance between the LoRa device and the GW. Assume that the number of SFs is S. Each LoRa device selects one SF and one channel to transmit packets each time. As described in Section 1, LoRa employs Chirp Spread Spectrum (CSS) modulation, so that signals with different SFs (7-12) can be identified and successfully received even if they are transmitted simultaneously on the same channel. In addition, different SFs have different transmission speeds and different SNR thresholds for successful reception. Therefore, each LoRa device must select an appropriate SF considering its distance to the GW and interference effects in the surrounding environment. Theoretically, the spreading codes for different SFs are orthogonal, so collisions only occur when two or more LoRa devices choose the same SF and channel. In practice, however, perfect orthogonality may not be guaranteed, and the interference between transmissions using different SFs on the same channel must be considered [15,19]. In addition to the channel and SF, the bandwidth B and the transmit power TP can also be selected to improve the communication performance. The bandwidth can be chosen as 62.5 kHz, 125 kHz, 250 kHz, or 500 kHz. The transmit power can be selected from −1 dBm to 13 dBm, depending on the application requirements and the communication environment of the LoRa devices. In this paper, the bandwidth of the channel and the transmission power for all LoRa devices were set to 125 kHz and the maximum transmit power, i.e., 13 dBm, respectively. We assumed that all LoRa devices transmit M-byte packets with the same length each time. Denote TI as the transmission interval.
Note that TI is the same for all LoRa devices. The process of packet transmission in the LoRa system is summarized as follows. The transmission parameters, including the SF and channel, are first selected by the LoRa devices using the distributed MAB-based reinforcement learning methods implemented on them. After determining the transmission parameters based on the implemented learning methods, carrier sensing is performed to check the availability of the selected channel. If the selected channel is available, the LoRa device sends a packet to the GW using that channel. The feedback ACKnowledgement (ACK) or Negative ACKnowledgement (NACK) information from the GW is received at the LoRa device side for a while after the packet transmission and is used to update the MAB-based reinforcement learning methods. If the ACK information is received, it indicates that either no packet collision occurred or the capture effect allowed reception, and the packet from the LoRa device was successfully transmitted, as shown in the middle of Figure 2. On the other hand, the packet transmission has failed for some reason if the NACK information is received. The reasons for a packet transmission failure include other LoRa devices transmitting packets using the same channel and SF at the same time, as shown on the left side of Figure 2, causing packet collisions among them. In addition, the reason may be interference from other IoT devices, or the signal being attenuated by shadowing while a low SF value is used, resulting in an SNR smaller than the reception threshold, as shown on the right side of Figure 2. The FSR was used to evaluate the performance of the MAB-based joint SF- and channel-selection methods in this paper. The FSR in the LoRa system at the t-th decision is defined as the ratio of the number of successful transmissions to the total number of transmission attempts, which is expressed as
\[
\mathrm{FSR}(t)=\frac{\sum_{l=1}^{L} r_l(t)}{\sum_{l=1}^{L} n_l(t)},
\]
where n_l(t) is the number of transmission attempts by device l and r_l(t) is the number of successful transmissions at time t. This paper aimed to maximize the FSR by the MAB-based decentralized learning methods, thereby improving the scalability of the overall LoRa application. The FSR maximization problem can therefore be formulated as maximizing FSR(t) over the SF and channel selections of all L LoRa devices. To achieve this goal, an appropriate SF must be selected based on the distance from the GW and the surrounding environment. Meanwhile, a channel less affected by other LoRa devices must be well chosen. In the LoRa system, packet collisions occur when packets are transmitted on the same channel and SF at the same time. Hence, the SF and channel must be co-selected, and their relationship must be jointly considered in the selection. Channel and SF Selection Based on MAB Algorithms As mentioned in the previous section, LoRa devices must select appropriate parameters, such as the SF and channel, according to the communication environment. To achieve this goal, the SF-channel-selection problem was formulated as a MAB problem in this paper and solved by MAB-based algorithms. In this section, we first introduce the relationship between the SF-channel-selection problem and the MAB problem. Next, the SF-channel-selection problem is formulated as two MAB problems with different structures, i.e., the combinatorial MAB-based and the independent MAB-based channel-SF-selection problems. Finally, we present the MAB algorithms for solving the formulated SF-channel-selection problems.
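As a small illustration of the FSR definition above, the bookkeeping could be implemented as in the following sketch; the class and counter names are illustrative and are not taken from the implementation described later in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceStats:
    attempts: int = 0   # n_l(t): transmission attempts of device l
    successes: int = 0  # r_l(t): transmissions for which an ACK was returned

@dataclass
class FsrTracker:
    devices: dict = field(default_factory=dict)  # device id -> DeviceStats

    def record(self, device_id, ack_received: bool) -> None:
        stats = self.devices.setdefault(device_id, DeviceStats())
        stats.attempts += 1
        if ack_received:
            stats.successes += 1

    def fsr(self) -> float:
        """FSR(t) = sum_l r_l(t) / sum_l n_l(t)."""
        attempts = sum(s.attempts for s in self.devices.values())
        successes = sum(s.successes for s in self.devices.values())
        return successes / attempts if attempts else 0.0

# Example: three devices, one failed transmission.
tracker = FsrTracker()
for device_id, ack in [("D1", True), ("D2", True), ("D3", False)]:
    tracker.record(device_id, ack)
print(tracker.fsr())  # 2/3
```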
MAB and Channel-SF-Selection Problems The MAB problem, or bandit problem, is one of the general problems first discussed by Robbins in [35]. In the MAB problem, the player selects a slot machine to play among several slot machines, aiming to maximize the number of coins he/she can earn by repeatedly playing [36]. The player needs to learn the payout probability of each slot machine to find the slot machine that pays the most by repeatedly playing. In other words, we have to perform exploration to gather information by playing slot machines other than the one with the best estimated probability. On the other hand, if we perform more exploration than necessary, we cannot maximize the number of coins we can win. Hence, if we can estimate a good slot machine, we must play that slot machine to maximize the reward. The MAB problem is a decision-making problem that considers the trade-off between "exploration" for searching for a good slot machine and "exploitation" for playing a good slot machine to increase the coins in a series of trials. In the most straightforward formulation, the bandit problem has K slot machines with probability distributions (D_1, ..., D_K). The means and variances of these probability distributions can be expressed as (µ_1, ..., µ_K) and (σ_1, ..., σ_K), respectively. The player aims to find the probability distribution with the largest expected value and tries to obtain as many rewards as possible in a sequence of trials. At each trial t, the player selects a slot machine m(t) and wins r(t) as a reward (i.e., r(t) coins). A bandit algorithm for solving the bandit problem can be described as a decision-making strategy determining the slot machine m(t) to be selected in each trial. Reward maximization is the most-used metric for evaluating the performance of a bandit algorithm. The bandit algorithms for solving the bandit problem will be described later in this section. The reward maximization problem can be expressed as
\[
\max \sum_{t=1}^{T} r(t),
\]
where T is the total number of trials. As described in Section 3, we aimed to maximize the cumulative FSR by letting each device autonomously select the appropriate channel and SF using the ACK/NACK information. The problem of learning appropriate channels and SFs using only the ACK/NACK information can be transformed into the MAB problem: an IoT device (i.e., the player in the MAB problem) has S SFs and I channels (i.e., the slot machines in the MAB problem). The objective is to maximize the cumulative FSR (i.e., the cumulative reward in the MAB problem). The relationship between the channel-SF-selection and MAB problems is summarized in Table 3. MAB-Based Channel-SF-Selection Problem When the parameters to be selected are only SFs or only channels, i.e., when there is only one parameter to be selected, the MAB problem can be applied directly, as described in the previous subsection. However, to perform autonomous decentralized joint optimization of the channel- and SF-selection problem, we need to design the structure of the MAB-based channel-SF-selection method. In this subsection, we introduce two structures of the MAB-based channel-SF-selection problem, i.e., the combinatorial and independent MAB-based channel-SF-selection problems. Combinatorial MAB-Based Channel-SF-Selection Problem We first describe the combinatorial MAB-based channel-SF-selection problem. In this problem, any combination of the SF and CH is configured as one slot machine, as shown in Figure 3. Hence, the number of slot machines is I × S.
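For concreteness, the combinatorial arm set can be enumerated as in the sketch below. The SF and channel lists shown are only an example (SF7-SF12 and the three channels used later in the experiments); the independent structure introduced in the next subsection would instead keep two separate arm lists of sizes S and I.

```python
from itertools import product

SFS = [7, 8, 9, 10, 11, 12]          # S spreading factors (illustrative)
CHANNELS = ["CH1", "CH4", "CH7"]     # I channels (the three used later)

# Combinatorial structure: every (SF, channel) pair is one arm -> I x S arms.
combinatorial_arms = list(product(SFS, CHANNELS))
print(len(combinatorial_arms))       # 18 arms for S = 6, I = 3

# Independent structure (for comparison): two separate arm sets of sizes S and I.
independent_arms = {"sf": list(SFS), "channel": list(CHANNELS)}
print(len(independent_arms["sf"]) + len(independent_arms["channel"]))  # 9 arms
```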
The best slot machine among these combinations is selected using the MAB algorithms by maximizing the reward (i.e., the FSR). The main design idea of this structure is that it is necessary to optimize the channel and SF considering their potential relationship, since packets sent using the same channel and SF simultaneously will cause collisions in the LoRa system. The process of the combinatorial MAB-based channel-SF-selection problem can be summarized as follows. The channel-SF combination is first selected based on the strategy of the MAB algorithm implemented on each LoRa device. Then, packets are sent using the selected SF and channel. The reward of the selected SF-channel combination is evaluated depending on whether the packet was successfully sent. At the next packet transmission time, each LoRa device dynamically selects the optimal SF-channel combination based on the updated evaluation and repeats this process until the time limit T is reached. The details of the combinatorial MAB-based channel-SF-selection problem are summarized in Algorithm 1.
Algorithm 1 Combinatorial MAB-based channel-SF selection.
1: Initialize the parameters used in each MAB algorithm
2: while time t ≤ T do
3: Select the channel-SF set based on the MAB algorithm.
4: Send a packet using the selected SF and channel.
5: if the packet is transmitted and the ACK frame is received then
6: Transmission successful.
Update the corresponding parameters according to each MAB algorithm.
11: Sleep for transmission interval TI.
Independent MAB-Based Channel-SF-Selection Problem In the independent MAB-based channel-SF-selection structure, the channels and SFs are selected independently, aiming to optimize the channel and SF parameters, respectively. Two groups of machines are prepared; one group is used for SF selection, and the other group is used for channel selection. Hence, the numbers of the two types of machines are S and I, respectively, and the total number of machines for the independent MAB-based channel-SF-selection structure is S + I. Compared to the combinatorial MAB-based channel-SF-selection problem, the number of machines can be reduced to a great extent. By this, the memory requirements can be reduced. In addition, the efficiency of the search for the appropriate channel or SF may be increased. The schematic diagram of this structure is shown in Figure 4. In the independent MAB-based channel-SF-selection problem, the SF is first selected based on the MAB algorithm implemented on the LoRa device. Similarly, a channel is selected. A packet is then sent using the independently chosen SF and channel. After that, the parameters related to the MAB algorithms are updated based on whether the packet was successfully transmitted. The process is repeated until the time limit T. The details of the independent MAB-based channel-SF-selection problem are shown in Algorithm 2. The computational complexity of the independent MAB-based channel-SF method is O(1), which was analyzed in our previous work [19].
Algorithm 2 Independent MAB-based channel-SF-selection problem.
1: Initialize the parameters of the MAB algorithm used for the SF and channel selection
2: while time t ≤ T do
3: Select the SF among the SF slot machines using the MAB algorithm.
4: Select the channel among the channel slot machines using the MAB algorithm.
5: Send a packet using the selected SF and channel.
6: if the packet is transmitted and the ACK frame is received then
7: Transmission successful.
Update the corresponding parameters for SF selection according to the policy of the MAB algorithm.
12: Update the corresponding parameters for channel selection according to the policy of the MAB algorithm.
13: Sleep for transmission interval TI.
MAB Algorithms In this paper, we focused on three MAB algorithms for solving the channel-SF-selection problem, that is, the ε-greedy, UCB1, and ToW dynamics algorithms. In the following subsections, we discuss these three MAB algorithms in detail. ε-Greedy Algorithm The ε-greedy algorithm is widely used for solving MAB problems because of its simplicity. In each trial, the slot machine with the highest reward probability determined by experience is selected and played with a probability of 1 − ε. On the other hand, a slot machine is selected uniformly at random and played with probability ε. The policy of the ε-greedy algorithm is expressed below:
\[
k_{ij}^{*}(t)=
\begin{cases}
\arg\max_{k_{ij}\in k_j}\dfrac{R_{k_{ij}}}{N_{k_{ij}}}, & \text{with probability } 1-\varepsilon,\\[6pt]
\text{an arm drawn uniformly at random from } k_j, & \text{with probability } \varepsilon,
\end{cases}
\]
where j is the indicator of the channel and SF selection and j ∈ {1, 2, 3}; that is, j = 1 corresponds to the joint channel and SF selection in the combinatorial MAB-based channel-SF-selection problem, and j = 2 and j = 3 correspond to the SF and channel selections, respectively, in the independent MAB-based channel-SF-selection problem. K_j is the number of arms corresponding to structure j. K_1 is the number of SF and channel combinations for the combinatorial MAB-based channel-SF selection; the value of K_1 is I × S. The values of K_2 and K_3 are equal to the number of SFs S and the number of channels I, respectively, for the independent MAB-based channel-SF-selection problem. k_j is the set of slot machines: k_1 = {s_1 i_1, s_1 i_2, ..., s_1 i_I, s_2 i_1, ..., s_S i_I}, i.e., the set of channel and SF combinations for the combinatorial MAB-based channel-SF-selection problem, while k_2 = {s_1, s_2, ..., s_S} and k_3 = {i_1, i_2, ..., i_I}, i.e., the set of SFs and the set of channels for the independent MAB-based channel-SF-selection problem. k_ij is the i-th slot machine in k_j. N_{k_ij} is the number of times the arm k_ij has been selected by iteration t, and R_{k_ij} is the number of successful transmissions among those N_{k_ij} selections, i.e., the number of times the ACK information is received. Upper Confidence Bound 1 The Upper Confidence Bound (UCB) family of algorithms was proposed by Auer et al. in [37]. The UCB1 algorithm is the simplest one in the UCB series. The UCB1 algorithm selects the slot machine based on the average reward and the number of times each slot machine has been played; it considers the upper bound of the confidence interval. After playing each slot machine once, the slot machine with the largest index X_{k_ij}(t) is selected in the t-th trial, according to the following equation:
\[
X_{k_{ij}}(t)=\frac{R_{k_{ij}}}{N_{k_{ij}}}+\sqrt{\frac{2\ln t}{N_{k_{ij}}}}.
\]
Auer et al. also proposed the UCB1-Tuned algorithm, which considers not only the empirical mean value of each slot machine but also the empirical variance [37]. This algorithm is among the best-performing of the classical MAB algorithms. In the UCB1-Tuned algorithm, the slot machine is selected based on the following index in each trial:
\[
X_{k_{ij}}(t)=\frac{R_{k_{ij}}}{N_{k_{ij}}}+\sqrt{\frac{\ln t}{N_{k_{ij}}}\min\!\left(\frac{1}{4},\,V_{k_{ij}}(t)\right)},
\]
where V_{k_ij}(t) is based on the estimated variance and can be expressed as
\[
V_{k_{ij}}(t)=\sigma_{k_{ij}}+\sqrt{\frac{2\ln t}{N_{k_{ij}}}},
\]
in which σ_{k_ij} is the variance of the obtained reward. Tug of War Dynamics The ToW is a simple method with low computational complexity. It has been analytically validated that the ToW dynamics is efficient in maximizing stochastic rewards under dynamic environments where the reward probabilities of the arms change frequently [38][39][40][41]. The essential element of the ToW dynamics is a volume-conserving physical object.
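Before the ToW dynamics is described in detail, the ε-greedy and UCB1 selection rules introduced above can be illustrated with the following minimal sketch, in which the empirical reward probability R/N plays the role of the estimated frame success rate. The function and variable names are illustrative, not taken from the paper's implementation.

```python
import math
import random

def eps_greedy_select(successes, attempts, eps=0.1):
    """Pick a random arm with probability eps, otherwise the arm with the
    highest empirical reward probability R/N (unplayed arms are tried first)."""
    arms = list(attempts.keys())
    untried = [a for a in arms if attempts[a] == 0]
    if untried:
        return random.choice(untried)
    if random.random() < eps:
        return random.choice(arms)
    return max(arms, key=lambda a: successes[a] / attempts[a])

def ucb1_select(successes, attempts, t):
    """UCB1: empirical mean plus the exploration bonus sqrt(2 ln t / N)."""
    arms = list(attempts.keys())
    untried = [a for a in arms if attempts[a] == 0]
    if untried:
        return untried[0]
    def index(a):
        mean = successes[a] / attempts[a]
        bonus = math.sqrt(2.0 * math.log(t) / attempts[a])
        return mean + bonus
    return max(arms, key=index)

# Example usage with three (SF, channel) arms after a few trials.
attempts  = {("SF7", "CH1"): 5, ("SF8", "CH4"): 5, ("SF9", "CH7"): 5}
successes = {("SF7", "CH1"): 4, ("SF8", "CH4"): 3, ("SF9", "CH7"): 2}
print(eps_greedy_select(successes, attempts))
print(ucb1_select(successes, attempts, t=15))
```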
The ToW dynamics assumes that each slot machine is allocated to multiple cylinders with branches filled with an incompressible fluid, as shown in Figure 5. The volume is then updated by pushing and pulling the corresponding cylinders depending on whether the slot machine is rewarded in the trial at time t. In addition, since the cylinders are connected, as shown in the figure, a volume increase in one part is immediately compensated by a volume decrease in another part. In the ToW dynamics, the arm k*_ij with the highest cylinder interface value X_{k_ij} is selected, where X_{k_ij} is obtained by adding an oscillation term to the estimated compensation of the arm:
\[
X_{k_{ij}}(t)=Q_{k_{ij}}(t)+\mathrm{osc}_{k_{ij}}(t).
\]
There are various possibilities for adding the oscillations osc_{k_ij}(t). References [41,42] studied the impact of oscillations on the efficiency of decision-making in detail, which is beyond the scope of this paper. In this paper, the incompressible liquids oscillate autonomously according to a prescribed oscillation function. In addition, Q_{k_ij}(t) is the estimated compensation for each arm, which is derived as
\[
Q_{k_{ij}}(t)=\alpha\,Q_{k_{ij}}(t-1)+\Delta Q_{k_{ij}}(t),
\]
where α (0 < α < 1) is the discount factor for the estimated compensation. By introducing α, we can control the impact of the past learning experience on the present so as to adapt to real communication environments where the channel conditions may change dynamically. ∆Q_{k_ij}(t) is given by
\[
\Delta Q_{k_{ij}}(t)=
\begin{cases}
+1, & \text{if the arm } k_{ij} \text{ is selected and the ACK is received},\\
-\omega_{ij}(t), & \text{if the arm } k_{ij} \text{ is selected and the ACK is not received}.
\end{cases}
\]
In other words, if the transmission is successful and the ACK is received, the Q value of the selected arm (parameter) gains "+1" as a reward. By this, the height of the fluid interface for that arm is increased. Conversely, when the transmission fails and an ACK is not received, the corresponding arm (parameter) is updated with the punishment −ω_{ij}(t). By this, the interface value of the selected arm is decreased and, correspondingly, the interface values of the other arms increase. Here, ω_{ij}(t) is expressed as
\[
\omega_{ij}(t)=\frac{p_{ij1st}(t)+p_{ij2nd}(t)}{2-\left(p_{ij1st}(t)+p_{ij2nd}(t)\right)}, \tag{15}
\]
where p_{ij1st}(t) and p_{ij2nd}(t) are the highest and second-highest reward probabilities among all arms at time t, respectively. The reward probability is given by Equation (4). In the ToW dynamics, N_{k_ij}(t) is the number of times the arm k_ij has been selected by time t, and R_{k_ij}(t) is the number of successful transmissions using the arm k_ij by time t. Implementation and Performance Evaluation of the MAB-Based Channel-SF-Selection Methods This section evaluates the proposed combinatorial MAB-based joint channel and SF-selection methods, including the ToW-dynamics-based, UCB1-based and ε-greedy-based methods, and the random method, by conducting experiments using actual LoRa devices. Specifically, the FSR under different numbers of LoRa devices and the SF-selection rate of the LoRa devices at different positions when only selecting the SF were evaluated first. Then, the performance of the FSR and FI when selecting both the SF and channel was evaluated exhaustively. This section describes the experiment settings first, followed by the performance evaluation when only selecting the SF and when selecting both the SF and channel, respectively. Experiment Settings The MAB-based SF-channel-selection methods were implemented using a LoRa module ESP320LR that supports LoRa communication in the 920 MHz band. A Raspberry Pi and a battery-powered Arduino Pro Mini were used as the GW and LoRa device controllers, respectively. The component parts of the LoRa device and the GW are shown in Figures 6 and 7, respectively. The communication between the GW and LoRa devices used a LoRa wireless link.
The GW was connected to a common network server over a standard IP protocol stack using a WiFi router. Using the implemented LoRa system, we evaluated the performance of (i) the ε-greedy-based, (ii) the UCB1-based and (iii) the ToW-dynamics-based joint channel and SF selection in the combinatorial and independent MAB-based channel-SF-selection structures, and (iv) random channel-SF selection. Among the compared methods, the UCB1-based method was introduced in [27,32], while the MAB-based independent methods were proposed in [19]. By comparing with these recently published results, we aimed to show that the proposed ToW-dynamics-based combinatorial transmission-selection method advances the state-of-the-art in this field. In the experiments, the impact of the transmission intervals, the number of LoRa devices, and the locations of the LoRa devices on the communication quality, especially on the FSR, was evaluated. The experiments were conducted indoors in a 120 m × 20 m rectangular area on the fifth floor of a concrete-walled building. The LoRa devices were placed at several locations, and the diagram of the experiment field is shown in Figure 8. As shown in Figure 8, the GW was deployed in a room termed Room 1 in this paper. The LoRa devices at the position in Room 1 were deployed in the same room as the GW, and there was no obstacle between them, guaranteeing a Line-of-Sight (LoS) path. Meanwhile, except for the LoRa devices deployed in Room 1, all other LoRa devices were deployed in the corridor, at the positions shown in Figure 8. The gray parts in Figure 8 are the other walled rooms apart from Room 1, resulting in Non-Line-of-Sight (NLOS) paths to the GW. The Received Signal Strength Indicator (RSSI) from each location is summarized in Table 4. The LoRa devices deployed at the positions with a lower RSSI should select a higher SF due to the lower received strength at the GW. We considered two scenarios to evaluate the impact of the channel and SF selections on the network performance. The first scenario was one where the LoRa device only performed SF selection. To verify the effectiveness of the MAB-based methods, we evaluated the FSR of the MAB-based methods compared with random selection. Moreover, to confirm that the SFs need to be selected appropriately according to the reception strength from each location, we evaluated the selection rate of the SFs at the different positions. In the second scenario, joint channel-SF selection was performed using the combinatorial and independent MAB-based selection methods described in the previous section. In this scenario, we first evaluated the effect of the structure of the MAB-based methods on the FSR. Then, we evaluated the performance of the FSR with varying numbers of LoRa devices and transmission intervals for the combinatorial MAB-based selection methods in detail. Note that the results shown below are the average values over 10 repetitions of the experiment in each setting. Performance Evaluation of the SF Selection In this subsection, we describe the experimental results in the setting where the LoRa devices only performed SF selection. The effectiveness of the distributed approaches using the MAB algorithms was evaluated first. Then, the dependence of the SF selection on the distance from the GW was evaluated based on the ToW-dynamics-based SF-selection method. In the experiments, the LoRa devices were placed at three locations in the experimental field shown in Figure 8.
The number of channels was set to one, and the channel used in the experiments was CH1, operating in the 920.6 MHz band. The transmission interval was set to 20 s. Packets with a payload of 50 bytes were sent in each data transmission. The parameter ε in the ε-greedy method was set to 0.1, and the forgetting parameters α and β in the ToW dynamics were set to 0.9. The parameters related to the experiments are listed in Table 5. Figure 9 shows the performance in terms of the FSR for the MAB-based SF-selection methods and the random method. The number of LoRa devices was set to 3, 9, 15, and 30. The LoRa devices were deployed at each location equally, i.e., 1 device was deployed at each location when the total number of LoRa devices was 3, and 10 devices were deployed at each location when the total number of LoRa devices was 30. From Figure 9, it can be seen that the FSR decreased as the number of LoRa devices increased for all approaches. This was due to the increase in packet collisions as the number of LoRa devices increased. In addition, the MAB-based SF-selection methods could achieve a higher FSR than the random selection method, indicating that the distributed reinforcement learning approaches were effective. Moreover, compared to the existing studies on decentralized parameter selection using UCB1 in [27,32] and ε-greedy in [3], the ToW dynamics algorithm could achieve a higher FSR. A comparison of the FSR values showed that, as the number of LoRa devices increased, the difference between the ToW and the other MAB-based methods increased, which indicated that the ToW algorithm is more suitable for large-scale LoRa systems. Figure 10 shows the ratio of the SF selection at each position for the ToW-dynamics-based SF-selection method. The number of LoRa devices was set to 30 in these experiments. The results showed that a higher SF was selected by the LoRa devices farther from the GW. The reason was that the reception strength at the GW was weak for the farther LoRa devices. A higher SF, which has a lower receiver sensitivity and SNR threshold and can therefore better resist noise, was selected to guarantee successful transmission at the more distant locations. In summary, the ToW-dynamics-based SF selection can select the appropriate SF for LoRa devices deployed at different positions without prior information. Performance Evaluation of the Channel-SF Selection This subsection introduces the experimental results for joint channel and SF selection using the MAB-based methods described in the last section. We first evaluated the impact of the different MAB structures on the performance of the FSR and the FI. Then, we evaluated the impact of the number of devices and the transmission interval on the FSR for the combinatorial MAB-based method. Finally, we evaluated the effect of the deployed positions of the LoRa devices on the performance of the FSR. The performances of the FSR and FI for the two structures of the MAB-based channel-SF-selection methods were evaluated first. In these experiments, the number of LoRa devices was set to 30, with 10 devices deployed at each of three positions. Three channels were used in the experiments, i.e., CH1, CH4, and CH7. The experimental parameter settings are shown in Table 6. Figure 11 shows the FSR for the different methods at each location. CMAB and IMAB denote the combinatorial and independent MAB-based channel-SF methods, respectively.
SF7-SF9 denote the results with all LoRa devices fixed to the same SF and the channels allocated equally, i.e., 10 devices on each of the three channels. The experimental results showed that the FSR decreased as the distance between the LoRa devices and the GW increased. The reason may be that the time on air for the LoRa devices that were farther from the GW was longer, which can cause collisions with high probability. Moreover, the combinatorial MAB-based methods could achieve a higher FSR than the independent MAB-based methods for all of the MAB algorithms. Since packet collisions in LoRa systems occur when both the channel and SF are the same, the combinatorial MAB-based method, which can account for their relationship, showed better results. Among the fixed allocation methods, the lowest average FSR was obtained when the SF of all LoRa devices was set to 9. This was because the packet time on air is longer when the SF is set to 9, which increased the probability of packet collisions. On the other hand, the highest average FSR among the fixed methods was achieved when the SF was set to 8, but even this FSR was still lower than that of the MAB algorithms based on the CMAB structure. Furthermore, since the fixed method allocates channels equally based on prior knowledge of the number of LoRa devices, the MAB-based methods are more practical in an actual network, where such prior information is unavailable. Figure 12 shows the confidence intervals of the FSR for the combinatorial MAB-based and random channel-SF-selection methods. From this, we can see that the combinatorial ToW-based channel-SF-selection method could achieve the highest FSR compared to the other methods. In Figure 13, we compare the FSR of all the MAB algorithms with the CMAB structure (CH-SF selection), the CH-only selection by all the MAB algorithms, and the SF-only selection by all the MAB algorithms. Among them, the MAB algorithms with the CMAB structure constitute our proposed method, and the CH-only or SF-only selection by the MAB algorithms represents the existing methods. From Figure 13, we can see that the ToW dynamics method with the CMAB structure was superior in the FSR to the other methods. Figure 14 shows the performance of the FI for the MAB-based channel-SF-selection methods. The FI was used to evaluate the fairness among the LoRa devices deployed at the different positions and is expressed by the following equation (Jain's fairness index):
\[
\mathrm{FI}=\frac{\left(\sum_{l=1}^{L}\mathrm{FSR}_{l}\right)^{2}}{L\sum_{l=1}^{L}\mathrm{FSR}_{l}^{2}},
\]
where FSR_l denotes the FSR achieved by the l-th LoRa device. From the results, we can see that the MAB-based selection methods achieved a much higher FI compared to the random selection method, which verified the effectiveness of the MAB-based methods. Moreover, similar to the FSR evaluation, the combinatorial MAB-based channel-SF selection could also achieve a higher FI compared to the independent MAB-based methods. This was due to the high FSR of the LoRa devices far from the GW in the combinatorial MAB-based method. Effect of the Experimental Parameters on FSR for CMAB Methods To measure the effect of the experimental parameters on the FSR, experiments were performed by varying the number of LoRa devices and the transmission interval. The number of LoRa devices was set to 3, 9, 15, and 30, while the transmission intervals were set to 20 s and 50 s. The structure used in these experiments was the combinatorial MAB-based channel-SF-selection method, since its superiority was shown in the previous experiments.
The other parameters related to these experiments are listed in Table 7. We first evaluated the effect of the number of LoRa devices on the FSR. In these experiments, the transmission interval was set to 20 s. Figure 15 shows the results, from which we can see that the FSR decreased with the increase in the number of LoRa devices. The reason was that an increase in the number of LoRa devices increased the number of transmitted packets, increasing collisions and interference. In addition, compared to the experimental results shown in Figure 9, where only SF selection was considered, the effect of the number of available channels on the FSR was small when the number of LoRa devices was small. However, when the number of LoRa devices increased, Figure 15, where multiple channels were used, shows a better FSR. This indicates the necessity of selecting both the SF and the channel. Then, we evaluated the effect of the transmission interval on the FSR. In these experiments, the transmission intervals were set to 20 s and 50 s, and the number of LoRa devices was set to 30. Figure 16 shows the experimental results. From the results, it can be seen that a higher FSR could be achieved for all of the combinatorial MAB-based channel-SF-selection methods when the transmission interval was 50 s compared to the case where the transmission interval was 20 s. This indicates that the transmission interval should be set appropriately according to the requirements of the LoRa applications. The FSR may be increased by adjusting the transmission interval autonomously, which will be studied in our future work. Effect of the Setting Position of LoRa Devices on FSR In the previous experiments, the LoRa devices were deployed at three positions. In the following experiments, the LoRa devices were deployed at eight positions, with three LoRa devices at each position, for a total of 24 LoRa devices. The other parameters used in the experiment were the same as those shown in Table 7, and the TI was 20 s. Figure 17 shows the FSR and the average RSSI of the received packets at each position. Similar to the previous results, it can be seen that the MAB-based channel-SF-selection methods achieved a much higher FSR than the random method, regardless of the positions. In particular, the random method showed a significant decrease in the FSR at lower RSSI values, indicating the necessity of using the MAB algorithms to select the channel and SF appropriately depending on the deployed positions of the LoRa devices. In addition, the FSR at each position increased with the RSSI value for all methods. The reason may be that a larger SF was selected by the LoRa devices deployed at the positions with a lower RSSI, which increased the time on air of the transmitted packets and hence the probability of collisions. Conclusions In this paper, we implemented and evaluated lightweight autonomous distributed reinforcement learning methods for joint channel and SF selection in a practical larger-scale LoRa system. As a result, we were able to verify the necessity of dynamically selecting both the SF and channel. Specifically, the results showed that the channel-SF selection using the MAB-based methods was effective compared to random selection, especially in situations where the LoRa devices were distributed in various locations.
Specifically, when the difference between the FSR of the proposed ToW dynamics and that of random selection was largest, the achieved FSRs of the ToW dynamics and random selection were 0.86919 and 0.59761, respectively; that is, the maximum FSR achieved by the ToW dynamics was about 145% of that achieved by random selection. Besides, the ToW-dynamics-based method outperformed the other MAB-based methods, such as UCB1, used in recently published results, whether with the combinatorial or the independent structure. In addition, the structures of the MAB-based methods and the other communication parameters also greatly affected the FSR and FI. Specifically, the combinatorial MAB-based methods could achieve a higher FSR and FI than the independent MAB-based methods considered in our previous research. Hence, the relevance of the channel and SF is a very important factor for the communication performance of larger-scale LoRa systems. Moreover, the FSR can be improved by jointly selecting the channel and SF compared to only selecting the SF. Furthermore, by increasing the transmission interval, the FSR can be improved to a great extent. In our future work, we will consider the joint channel and SF selection in outdoor, longer-distance environments, the optimization of other communication parameters, and the energy efficiency of the MAB-based methods.
11,936.4
2023-07-26T00:00:00.000
[ "Computer Science", "Engineering" ]
Numerical study on transonic shock oscillation suppression and buffet load alleviation for a supercritical airfoil using a microtab ABSTRACT The effect of microtabs on shock oscillation suppression and buffet load alleviation for the National Aeronautics and Space Administration (NASA) SC(2)-0714 supercritical airfoil is studied. The unsteady flow field around the airfoil with a microtab is simulated with an unsteady Reynolds-averaged Navier–Stokes (URANS) simulation method using the scale adaptive simulation-shear stress transport turbulence model. Firstly, the influence of the microtab installation position along the upper airfoil surface is investigated with respect to the buffet load and the characteristics of the unsteady flow field. The results show that the shock oscillating range and moving average speed decrease substantially when the microtab is installed in the middle region between the shock and trailing edges of the airfoil. Subsequently, the effects of the protruding height (0.50%, 0.75% and 1.00% of the chord length) of the microtab (installed at x/c = 0.8 on the upper airfoil surface) on the buffet load and flow field are studied, and the results show that the effect on buffet load alleviation is best when the protruding height of the microtab is 0.75% of the chord length. Finally, the mechanism of buffet load alleviation with the microtab on the upper airfoil surface is briefly discussed. KEYWORDS shock oscillation suppression; buffet load alleviation; microtab; transonic flow; URANS
Nomenclature
c airfoil chord length (mm)
α angle of incidence between the airfoil chord and the free stream direction (°)
α_B buffet onset angle of incidence
k reduced frequency
u free stream velocity (m/s)
f frequency of buffet load (Hz)
M_∞ free stream Mach number
Re Reynolds number based on free stream conditions and chord length
x chord coordinate of airfoil, with the origin of the coordinate located at the leading edge of the airfoil
Δx average size of a single grid cell along the flow direction
H protruding height of microtab from the surface of the airfoil
W width of the microtab in the direction of the airflow
C_p pressure coefficient, (p − p_a)/q, where q is the free stream dynamic pressure
C_N coefficient of normal force
p static pressure (Pa)
p_a free stream static pressure (Pa)
PSD power spectral density
Introduction Transonic buffeting involves the interaction between shock waves and the separated boundary layer under transonic flow conditions. The unsteady pressure fluctuations, i.e. the buffet loads induced by shock wave oscillation and boundary layer separation, can cause fatigue damage in airplane structures, thus reducing the controllability of the airplane and threatening flight safety. For modern airplanes with thick-profile, supercritical wings, the transonic buffet loads are particularly severe. Therefore, airworthiness requirements demand that airplanes must be completely controllable and have the ability to withdraw from buffeting as soon as possible when a buffet onset boundary is unintentionally penetrated, as well as having a maximum buffet penetration boundary or maximum demonstrated lift boundary that is not exceeded (Obert, 2009). In other words, flow control techniques should be applied to reduce the intensity of buffet loads and alleviate buffeting to enhance flight control performance and guarantee flight safety once the airplane surpasses the buffet onset boundary.
The results of wind tunnel tests and numerical simulations have revealed that the shock oscillation on the airfoil at transonic speed interacts with the boundary layer near the wake. To suppress the shock oscillation and control the air flow in the boundary layer after the shock, the flow conditions in the region where the shock and the boundary layer interact can be modified, for example by applying vortex generators to decrease the flow separation tendency in the boundary layer (Molton, Dandois, Lepage, Brunet, & Bur, 2013;Unal & Goren, 2011) or applying bumps to weaken the shock intensity (König, Pätzold, & Lutz, 2009;Ogawa, Babinsky, & Pätzold, 2008). Modifying the flow in the region near the trailing edge can also suppress shock oscillation -for example, thickening the trailing edge of the airfoil to decrease the flow separation tendency in the boundary layer after the shock (Gibb, 1988), or employing an active control method using a trailing edge deflector (Caruana, Corrège, & Reberga, 2000). Recently, a flow control technique has been developed that involves installing microtab devices on the wing surface near the trailing edge to improve the flow conditions. This technique is mainly applied to the blades of wind turbines to modify the aerodynamic load distribution on the blades, thus reducing the weight of the blades and improving the efficiency of power generation (Baker, Standish, & Van Dam, 2007;Chow & Dam, 2006;Mayda, Van Dam, & Yen-Nakafuji, 2005). The advantages of microtab devices are the simplicity of their driving mechanism, low actuation power requirements, short actuation times and the minimal requirements for changing the original structure. By deploying or extending the microtab from the airfoil surface, the equivalent camber of the airfoil and the flow conditions near to the trailing edge are modified, and thus the interaction between the shock wave and the boundary layer can be changed. Hence, employing microtab devices to achieve transonic buffet load alleviation is a worthwhile research topic. So far, however, to the authors' knowledge no studies have been conducted on adopting microtabs in order to suppress transonic shock oscillation. Microtab buffet load alleviation on the National Aeronautics and Space Administration (NASA) SC(2)-0714 supercritical airfoil is investigated in this study, and the transonic unsteady flow field was simulated using an unsteady Reynolds-averaged Navier-Stokes (URANS) method. The effects of the installation position of the microtab device on the buffet load and the characteristics of flow field are investigated, along with the effects of the protruding height of the microtab device (fixed at x/c = 0.8 on the upper surface) on the buffet load. Finally, the mechanism of microtab buffet load alleviation is explored. Numerical simulation of the transonic unsteady flow field This study focuses on the buffet load of the NASA SC(2)-0714 supercritical airfoil which was designed by the Langley Research Center in the United States (US). It has a thickness to chord ratio e/c = 13.86%, which is located at 37% of the chord length away from the leading edge. Its maximum camber is 1.5%, which is located at four fifths of the chord length away from the leading edge. Its maximum camber is 1.50%, which is located at 80% of the chord from the leading edge. The airfoil profile geometry data is acquired from the UIUC web site (http://m-selig.ae.illinois.edu/ads/coord_data base.html#N) and the chord length of the airfoil is c = 1000 mm ( Figure 1). 
The transonic unsteady flow field around the NASA SC(2)-0714 supercritical airfoil with a microtab is simulated by solving the Navier-Stokes equations. The two-dimensional computational domain is divided by C-type structured grids with 129,821 nodes. The far-field boundaries are imposed at 50 times the chord length away from the profile. The height of the first mesh layer is 1 × 10⁻⁶c, and the first layer mesh point y+ is always less than 1. The grid resolution along the flow direction in the shock oscillation region is Δx/c ≈ 0.003 (where Δx is the average size of a single grid cell along the flow direction) and the grid resolution along the flow direction in the wake region near the trailing edge is Δx/c ≈ 0.005 (Figure 2(a)). The grids around the microtab are locally refined (Figure 2(b)). URANS simulations which use a one-equation turbulence model, such as the Spalart-Allmaras model (Spalart & Allmaras, 1992), or a two-equation turbulence model, such as the k-ω shear stress transport model (Menter, 1994), only generate large-scale unsteadiness, resulting in either a steady flow or a much weaker unsteady flow. However, the scale adaptive simulation-shear stress transport (SAS-SST) model (Menter & Egorov, 2005) can be dynamically adjusted to resolve structures in URANS simulations. This leads to a large eddy simulation-like behavior in the unsteady regions of the flow field and enables the turbulent spectrum to develop in the detached regions. Meanwhile, this model provides the standard Reynolds-averaged Navier-Stokes (RANS) capabilities in the stable flow regions. Hence, the SAS-SST model is suitable for the simulation of a developed buffet flow field (Figure 3). The spatial flux terms and viscosity terms are discretized using a second-order accurate upwind finite-volume scheme which is modified from the first-order accurate upwind scheme of Barth and Jespersen (1989). The discretization of the unsteady terms is based on the second-order accurate backward Euler difference scheme, and the marching time step is 5 × 10⁻⁵ s. The Reynolds number, based on the free stream velocity and the chord length, is 15 × 10⁶, and the free stream Mach number M∞ is set as 0.725, with the angle of incidence α = 3.5°, which is greater than the buffet onset angle of incidence α_B = 3.0° (Bartels & Edwards, 1997; Jenkins, 1989). Firstly, the simulation accuracy of the numerical method adopted to simulate the transonic unsteady flow field around the NASA SC(2)-0714 supercritical airfoil is validated via comparison with the results of an existing wind tunnel test (Bartels & Edwards, 1997; Jenkins, 1989). The conditions of the flow field are set as M∞ = 0.725, α = 3.0°, and Re = 15 × 10⁶. In order to compare the buffet load frequency at different free stream velocities and between different airfoil chord lengths, the reduced frequency k (Equation (1)), defined with respect to the semi-chord, is generally adopted:
k = 2πf(c/2)/u = πfc/u,   (1)
where f is the frequency of the buffet load, c is the chord length of the airfoil, and u is the free stream velocity. For the NASA SC(2)-0714 airfoil, the reduced frequency obtained by the numerical simulation is k = 0.22, which is very close to the value of k = 0.21 acquired from the test conducted by Bartels and Edwards (1997). The time-averaged pressure distribution on the airfoil surface and the normalized root mean square value of the pressure fluctuations on the upper airfoil surface are obtained by the numerical method and compared with the results of wind tunnel tests (Figures 4 and 5).
The comparison shows that the numerical simulation results are in good agreement with the test results on the whole, with the exception that the numerical method slightly underestimates the range of the shock oscillation. The characteristics of the flow field and buffet load for the baseline airfoil In order to determine the influence of the microtab on the flow field and buffet load of the NASA SC(2)-0714 airfoil, the main characteristics of the flow field and information on the buffet load at the flow condition, M ∞ = 0.725, α = 3.5°, Re = 15 × 10 6 , are presented so as to provide a base of reference. The range of the shock oscillation on the baseline airfoil (airfoil without microtab) is about 10.00% of the chord length ( Figure 6), the fluctuation magnitude of the normal force is about 14.60% of the normal force magnitude (Figure 7), the reduced frequency of the normal force fluctuation is k = 0.22, and the corresponding fluctuation frequency of normal force is 17.38 Hz. Figure 8 illustrates the streamlines around the airfoil when the shock arrives at the upstream and downstream turning points (the most upstream and downstream positions that the shock can reach in a shock oscillation cycle) and also shows the flow separation after the shock in the slightest and most serious condition respectively. The influence of the installation location of the microtab on the buffet load and flow field characteristics In order to investigate the effect of the installation location of the microtab on the buffet load and the characteristics of shock oscillation, the microtab was installed on the upper surface of the NASA SC (2) where H is the absolute height of the microtab over the airfoil surface and W is the width of the microtab in the direction of airflow (Figure 9). Microtab installation at x/c = 0.6 chord-wise on the upper airfoil surface The flow field around the airfoil with the microtab installed at x/c = 0.6 chord-wise on the upper airfoil surface shows that the shock oscillates with a small amplitude and a high frequency and is simultaneously accompanied by an oscillation of a large amplitude and a low frequency. The high frequency of the shock oscillation is 17.38 Hz, which is the same as that of the shock oscillation on the baseline airfoil, while the low frequency is 1.58 Hz. Due to the interaction between the shock and the boundary layer, the flow variation within the boundary layer after the shock has a tendency similar to that of the shock oscillation (Figures 10 and 11). As a result, the pressure on the airfoil surface fluctuates with two frequencies in the regions after the shock ( Figure 12). It is obvious that the separated vortices after the shock, the separated vortices before and after the microtab and the separated vortices near the trailing edge of the airfoil always exist within a shock oscillation cycle. As the transonic buffet load mainly results from the shock oscillation, the magnitude of the buffet load depends on the shock oscillation range, i.e., the greater the shock oscillation range, the greater the buffet load, and vice versa. Figures 13(a) and 13(b) illustrate the chord-wise shock oscillating range in a largeamplitude, low-frequency cycle and a small-amplitude, high-frequency cycle, respectively. Compared with the shock oscillation range of the baseline airfoil (Figure 6), the shock oscillation range on the airfoil with a microtab at x/c = 0.6 chord-wise is reduced by about 50%. 
Figure 14(a) shows the time history of the normal force coefficient on the airfoil with a microtab at x/c = 0.6 chord-wise. Compared with the baseline airfoil, the fluctuation of normal force on the airfoil has two frequency components rather than a single frequency. The amplitude of normal force on the airfoil with a microtab at x/c = 0.6 chord-wise is reduced to 15% of that on the baseline airfoil. The power spectral density of the normal force is illustrated in Figure 14(b), which shows that the high-frequency fluctuation of normal force is 17.38 Hz, which is the same as that of the baseline airfoil. This demonstrates that the low-frequency fluctuation is induced by the microtab and dominates the normal force. Microtab installations at x/c = 0.7 and x/c = 0.8 chord-wise on the upper airfoil surface When the microtab is installed at x/c = 0.7 or x/c = 0.8 chord-wise on the upper airfoil surface, the shock oscillates and the scale of the vortices varies with a single frequency of 1.58 Hz, which is the same as the low frequency of shock oscillation on the airfoil with a microtab at x/c = 0.6 chord-wise. The behavior of the shock oscillation and scale of the vortex variation cause the surface pressure to fluctuate at the same frequency of 1.58 Hz (Figures 15 and 16). The differences in the flow fields between the airfoil with a microtab at x/c = 0.7 and the airfoil with a microtab at x/c = 0.8 are mainly manifested in the following respects. Firstly, there are two separated vortices after the microtab installed at x/c = 0.7 chord-wise on the upper airfoil surface within one shock oscillation cycle, while there is just a single separated vortex after the microtab installed at x/c = 0.8 chord-wise (Figures 17 and 18). Secondly, the shock oscillation range on the airfoil with a microtab installed at x/c = 0.7 chord-wise is almost 20% larger than that of the airfoil with a microtab installed at x/c = 0.8 chord-wise ( Figure 19). As a consequence, the amplitude of the normal force on the airfoil with a microtab installed at x/c = 0.8 chord-wise is 36% less than that of the airfoil with a microtab installed at x/c = 0.7 chord-wise (Figures 20(a) and 21(a)), and it is only 7% of that of the baseline airfoil. Because the flow fields around these two airfoils vary with the same frequency, the fluctuation frequency of the normal forces on both of these airfoils is equal to1.58 Hz (Figures 20(b) and 21(b)). Microtab installation at x/c = 0.9 chord-wise on the upper airfoil surface Compared with the airfoils with a microtab installed at x/c = 0.6, x/c = 0.7 and x/c = 0.8 chord-wise, the transonic unsteady flow field around the airfoil with a microtab installed at x/c = 0.9 presents a new flow pattern. There are two vortices in the region between the shock and the microtab, and the interaction mechanism between the shock and the boundary layer is similar to that seen with the baseline airfoil (Figures 8 and 22). Nevertheless, due to the interference effect produced by the microtab installed at x/c = 0.9 chord-wise, the interaction between the vortices before and behind the microtab have an evident impact on the flow field. The range of the shock oscillation and the amplitude of the vortex scale change after shock shows periodical variation (Figures 22-24). The variation characteristics of the flow field around the airfoil with a microtab installed at x/c = 0.9 chord-wise makes the amplitude of fluctuation of the surface pressure on the upper airfoil surface vary periodically (Figures 25 and 26). 
The fluctuation frequency of the normal force on the airfoil with a microtab installed at x/c = 0.9 chord-wise on the upper surface is the same as that of the baseline airfoil (Figure 27(b)), but the amplitude of the normal force varies periodically with time (Figure 27(a)) and the variation frequency of the normal force fluctuating amplitude is 1.58 Hz. The minimal normal force amplitude is 66% of that of the baseline airfoil, and the maximal normal force amplitude is 134% of that of the baseline airfoil. The influence of the microtab protruding height on the buffet load and flow field characteristics Through analyzing the buffet load and the characteristics of the flow field around the NASA SC(2)-0714 airfoil with a microtab installed at different chord-wise positions on the upper surface, it was found that the buffet load can be alleviated by a microtab installed at x/c = 0.6, x/c = 0.7 or x/c = 0.8 chord-wise. However, the microtab installed at x/c = 0.8 chord-wise provides the best buffet load alleviation of the three positions. Therefore, in exploring the effects of the protruding height of the microtab on the buffet load and the flow field, the microtab was fixed at the position of x/c = 0.8. The flow fields around the airfoils with microtabs of protruding heights H/c = 0.50%, H/c = 0.75% and H/c = 1.00% were numerically simulated. The characteristics of the flow field around the airfoils with microtabs of protruding heights H/c = 0.50% and H/c = 0.75% are very similar, the only difference being the shock oscillation range and the amplitude of the vortex scale change (Figures 18, 19(b), 28 and 29). The amplitude of the normal force of the airfoil with a microtrab of protruding height H/c = 0.75% is 15% less than that of the airfoil with a microtrab of protruding height H/c = 0.50%. The variation time histories of flow fields around these two airfoils present a stable and slow attenuation, and this flow field variation tendency cause the surface pressure fluctuations on the upper airfoil surface and the normal force to present the same trend (Figures 30 and 31). However, there is a clear difference in the time history of the flow field variation between the airfoil with a microtrab of protruding height H/c = 1.00% and the other two. The time history of the flow field variation around the airfoil with a microtrab of protruding height H/c = 1.00% can be divided into two stages. The first can be called the 'disturbance stage' as the variation amplitude of the flow field after the shock increases with time, and this variation tendency causes the amplitude of the surface pressure fluctuation and normal force to increase (Figures 32 and 33). In the second stage, the variation amplitude of the flow field decays rapidly with time but the variation frequency of the flow field remains constant, a variation tendency which makes the amplitude of the surface pressure fluctuation and normal force decrease with time, so the second stage can be referred to as the 'amplitude attenuation stage'. Discussion on the mechanism of buffet load alleviation The microtabs installed at x/c = 0.6, x/c = 0.7 and x/c = 0.8 chord-wise on the upper airfoil surface cause a change in the expanding modes of the vortices. 
Furthermore, because the intensity of the interaction between the shock and vortices before the microtab is greater than the intensity of the interaction between the vortices after the microtab and the airflow above these vortices, the intensity variation of the vortices after the microtab lags behind that of the vortices before the microtab. Hence, there is a height difference between the vortices before and after the microtab (Figure 34), which forms a 'geometry effect' (Iovnovich & Raveh, 2012). This geometry effect causes the velocity variation tendency of the flow after the microtab and above the vortices to be contrary to that caused by the interaction between the shock and the boundary layer, and this weakens the interaction intensity between the vortices after the microtab and the flow above these vortices. Therefore, the variation amplitude and the change rate of the vortex intensity after the microtab decrease substantially. The intensity variation of the vortices after the microtab influence the circulation of the airfoil, and this circulation variation affects the shock intensity, so the shock intensity variation amplitude and rate decrease dramatically, and thus the interaction intensity between the shock and the boundary layer decreases. Consequently, the shock oscillation range and moving speed decrease, and as a result the fluctuation amplitude and frequency of the buffet load also decrease. The interaction behaviors between the shock and the boundary layer are different when the microtab is installed on the upper airfoil surface at x/c = 0.6, x/c = 0.7, x/c = 0.8 and x/c = 0.9 chord-wise. A stable and new mode of interaction between the shock and the boundary layer is formed on these airfoils. This interaction mode is named the microtab mode, and the mode of interaction between the shock and the boundary layer that occur on the baseline airfoil is referred to as the baseline mode. The two modes have their own shock wave oscillation frequencies. When the microtab is installed at x/c = 0.6 chordwise, the shock oscillation and vortex intensity variation have two different frequencies due to the existence of two modes of interaction between the shock and the boundary layer on the airfoil simultaneously -that is, the microtab mode and baseline mode exist simultaneously. However, only the microtab mode exists on the airfoil when the microtab is installed at x/c = 0.7 or x/c = 0.8 chord-wise -and when the microtab is installed at x/c = 0.9 chord-wise, the baseline mode plays a predominant role, while the microtab mode plays a disturbance role in the unsteady flow around the airfoil. Conclusion This paper has presented an investigation of the influence of microtab devices on the characteristics of the buffet load and shock oscillation to which the NASA SC(2)-0714 supercritical airfoil is subjected in transonic air flow. The unsteady flow field was simulated using a URANS method with an SAS-SST turbulence model. The effect of the installation location of the microtab on the buffet load and the characteristics of the flow field was investigated by installing the microtab on the upper surface of the airfoil at x/c = 0.6, x/c = 0.7, x/c = 0.8 and x/c = 0.9 chord-wise. Based on the time history of the normal force and the characteristics of the shock oscillation, it is concluded that the shock oscillation range and the amplitude of the buffet load decrease when the microtab is installed at x/c = 0.7 or x/c = 0.8. 
This positioning can also decrease the average moving speed of the shock and hence decrease the buffet load fluctuation frequency. The effect of the microtab height protruding from the airfoil surface on the buffet load was also studied with the microtab installed at x/c = 0.8 chord-wise on the upper surface of the airfoil. The results indicate that the buffet load alleviation efficiency is best for a microtab with a protruding height of H/c = 0.75% compared to the other heights that were tested. Finally, the mechanism of microtab buffet load alleviation was presented. The height difference between the vortices before and after the microtab causes the velocity variation tendency of the flow after the microtab and above the vortices to be contrary to that caused by the interaction between the shock and the boundary layer. This change can suppress the shock oscillation and alleviate the buffet load of the NASA SC(2)-0714 airfoil. The influence of a spanwise arrangement scheme of microtabs on the developed buffet flow field and buffet load of a wing is left to future work.
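As a small post-processing illustration of how the buffet frequencies quoted above (17.38 Hz for the baseline, 1.58 Hz with the microtab) and the reduced frequency comparison with the wind tunnel data can be obtained from a normal-force time history, the following sketch estimates the dominant frequency via a Welch power spectral density and converts it using the semi-chord-based reduced frequency of Equation (1). The signal below is synthetic and the free stream velocity is an assumed value, so the sketch is not a reproduction of the paper's data.

# Estimate the dominant buffet frequency from a C_N time history and convert to k.
import numpy as np
from scipy.signal import welch

dt = 5e-5                      # marching time step used in the URANS simulation (s)
t = np.arange(0.0, 2.0, dt)    # 2 s of signal
# Synthetic C_N history: a 17.38 Hz buffet component plus broadband noise.
cn = 0.8 + 0.06 * np.sin(2 * np.pi * 17.38 * t) + 0.005 * np.random.randn(t.size)

fs = 1.0 / dt
freqs, psd = welch(cn - cn.mean(), fs=fs, nperseg=32768)
f_peak = freqs[np.argmax(psd)]          # dominant buffet frequency (Hz), approximate

c = 1.0                                 # chord length (m)
u = 246.0                               # assumed free stream velocity for M = 0.725 (m/s)
k = np.pi * f_peak * c / u              # reduced frequency based on the semi-chord
print(f"f_peak = {f_peak:.2f} Hz, k = {k:.2f}")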
5,796
2016-01-01T00:00:00.000
[ "Engineering", "Physics" ]
Numerical simulation of the fluid dynamics within the tissue engineering scaffolds micro-tubes The core of tissue engineering is the fabrication of a complex three-dimensional space with cells and biomaterials. In the development of porous scaffolds in vitro, whether seed cells are carried into the scaffold or their excreted waste is discharged from it, a nutrient solution is needed to bring material in or carry it out. Therefore, the flow of the nutrient solution, cells and metabolic waste plays a significant role in successful in vitro culture. This paper constructs bone scaffold models with different geometric parameters and simulates the flow of cells and nutrient solution in the scaffolds using the FLUENT software. By comparative analysis of the simulated results, the internal design parameters of tissue engineering scaffolds are optimized, and data and a theoretical basis for the design of the internal bone scaffold structure are provided. Introduction Considering the blood supply of bone, the Haversian canal is surrounded by multiple layers of bony plates arranged in concentric circles around its centre; together with the Haversian canal they constitute the bone unit, known as the Haversian (Harvard) system. The Haversian system connects with the Volkmann canals to form the main channels for transporting nutrients, which provide space and access for seed cell proliferation and scaffold ingrowth, as well as for the exchange of nutrients and the discharge of metabolites. Based on the design and development of artificial bone, and in keeping with the proportions of the actual bony structure, a model of the bony scaffold was established and the fluid dynamics of the cells and nutrient solution within the scaffold were simulated. Model establishing Nutrient solution and bone cells are transported to the bone scaffold by the Volkmann canals. The fluid flowing in the bone scaffold is a mixture of nutrient solution and bone cells. The density of the nutrient solution used is 1100 kg/m³ and its viscosity is 0.0018 Pa·s. The density of the bone cells used is 1200 kg/m³ and their volume fraction is 0.02, so their proportion in the mixture is small. The average velocity of the nutrient solution flow is 0.19 m/s. Based on the above, the density of the mixture of nutrient solution and bone cells is taken as 1100 kg/m³, the viscosity as 0.0018 Pa·s, and the initial flow rate is set to 0.19 m/s [1][2][3]. Because the flow rate of the nutrient solution and cells is small and the viscosity is large, the Reynolds number calculated from the Reynolds number formula is 244, which is much less than 2000. This means the flow is an incompressible laminar flow, so the laminar flow model is adopted for the calculation [4]. When the fluid flows into the Volkmann canals, the flow is influenced by the local resistance generated at the bifurcations, which reduces the flow rate in the Volkmann canals. In the central three Haversian canals, fluids from two Volkmann canals converge, so the flow increases and the flow rate there increases. The flow rates within each micro-tube are uniform and the average flow rate reaches around 0.2 m/s, which is beneficial to the transport of cells and nutrient solution. The effect of Haversian canal length When the Haversian canal diameter decreases, the entrance resistance increases, the local energy loss increases, and the velocity of the fluid in the Volkmann canal decreases quickly.
Therefore, the fluid in the first layer of the Volkmann canal has a larger flow rate; when the fluid in the Volkmann canals flows together into the Haversian canal, the flow rate increases after the confluence because the diameter of the Haversian canal is bigger than that of the Volkmann canal, but the increase is not obvious. The situation in the other layers is similar to that in the first layer. Within the range of this simulation, when the Haversian canal diameter is larger, the fluid flow rate in each micro-tube of the bone scaffold is relatively uniform, and the average flow rate reaches about 0.2 m/s. The effect of the angle between the Haversian and the Volkmann canal The flow of the mixture of nutrient solution and cells in the scaffold is simulated for different angles. The length of the Haversian canal is 1.5 mm, the diameter is 0.8 mm, and the angles between the Haversian and the Volkmann canal are 30°, 37.5° and 45°, respectively [5][6]. When the angle is 30°, the fluid flow rate in the upper part of the Volkmann canal is greater, at 2.17e-01 m/s, but the flow rate in the tube is not uniform. As the angle becomes bigger, the flow rate of the fluid in the Volkmann canals is gradually reduced and the fluid flow within the tubes becomes uniform. When the angles are 37.5° and 45°, the flow rates in the upper part of the Volkmann canal are 1.93e-01 m/s and 1.88e-01 m/s, respectively; meanwhile, as the angle increases, the fluid flow rate in each micro-tube of the bone scaffold becomes more uniform. When the angle is large, most of the fluid in the Haversian canal is shunted into the Volkmann canals. When the fluid flows into the bottom of the Haversian canal, the volume and velocity of the fluid decrease. Whether the fluid flows from the Haversian canal into the Volkmann canals or in the opposite direction, the flow is uniform, but the uniform flow area is smaller than that at 45° [7]. Conclusion As the Haversian canal length decreases, the fluid flow rate in each micro-tube is slightly reduced and the flow rate in each micro-tube is uniform, which is conducive to the delivery of nutrient solution and cells. As the Haversian canal diameter increases, the fluid flow rate in each channel of the bone scaffold becomes relatively uniform, and the average flow rate reaches about 0.2 m/s. As the angle between the Haversian canal and the Volkmann canal increases, it becomes easier to transport the cells and the nutrient solution, which is more suitable for tissue engineering scaffolds. Within the range of this simulation, the optimized structural parameters of the bone scaffold are a Haversian canal length of 0.15 mm, a Haversian canal diameter of 0.6 mm, and an angle of 45° between the Haversian canal and the Volkmann canal.
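As a sanity check on the laminar-flow assumption stated in the model setup (mixture density 1100 kg/m³, viscosity 0.0018 Pa·s, inlet velocity 0.19 m/s, quoted Re = 244 well below 2000), the following is a minimal sketch. The characteristic diameter below is a placeholder assumption, not a value taken from the paper, so the computed Re will not necessarily reproduce the quoted 244.

# Laminar-flow check for a scaffold micro-tube, Re = rho * v * D / mu.
def reynolds_number(rho, velocity, diameter, viscosity):
    return rho * velocity * diameter / viscosity

rho = 1100.0   # kg/m^3, nutrient solution / cell mixture
mu = 0.0018    # Pa.s
v = 0.19       # m/s, average inlet velocity
d = 0.8e-3     # m, assumed characteristic canal diameter (placeholder)

re = reynolds_number(rho, v, d, mu)
print(f"Re = {re:.0f} -> {'laminar' if re < 2000 else 'possibly turbulent'}")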
1,401.4
2017-01-01T00:00:00.000
[ "Biology", "Engineering" ]
Implementation of KLEMS Economic Productivity Accounts in Poland The aim of the article is to demonstrate how the KLEMS economic productivity accounts for Poland have been performed. The main research problem was to find solutions to certain country-specific data insufficiencies. On this basis, a hypothesis was put forward that by using some innovative but acceptable missing data assessment techniques, it is possible to supply sufficient data for Poland for the mentioned accounts. After an overview of KLEMS economic productivity accounts and the relevant fundamental methodology, the article presents further how specific data problems that have arisen have been solved. Introduction The acronym KLEMS originates from traditional symbols used in formal equations for economic values (K for capital and L for labour) and from capital letters in English (E for energy, M for materials and S for services). Therefore, it indicates the factors of production included in KLEMS productivity accounting. The first two, i.e. capital and labour, are the so-called primary factors. The other three are components of intermediate consumption, otherwise called intermediate input. The aim of this paper is to present the KLEMS economic productivity accounts now being implemented in Poland and discuss them. In the second section, the origins of the methodology framework derived from the neoclassical production function are outlined. In the third section, this methodology is explained in more detail in terms of individual production factors. Theory and statistical technicalities are combined in this section. In the fourth section, data processing techniques are presented and discussed to demonstrate that carrying out KLEMS economic productivity accounting for Poland is now feasible. The conclusion section summarises the outcomes. Basic overview Measuring economic productivity growth has a quite long tradition. Initially, the growth of the economy was assumed to depend on only one production factor, i.e. the capital factor or the labour factor. Then the Cobb-Douglas function was tested in the 1920s. This function relates economic output to two factors of production, capital K and labour L:
Y = A · K^α · L^β,   (1)
where α is the income share of capital and β is the income share of labour in total income. This means that the gross domestic product (GDP) on the left-hand side of equation (1) is being equated to gross domestic income (GDI) divided into factor shares on the right-hand side of equation (1). The assumption of constant returns to scale and perfect market competition is necessary to treat the elasticities α and β as factor income shares that sum up to unity. From this production function, Robert Solow (1956; 1957) derived his well-known decomposition:
ΔY/Y = ΔA/A + α · ΔK/K + β · ΔL/L.   (2)
In this formula, the residual value ΔA/A, called the Solow residual, appears. It represents an "unknown factor" that also contributes to economic growth. According to Robert Solow, it also contains all sorts of known and unknown possible factors of production other than capital and labour, and equation (2) is always met. The Cobb-Douglas production function and the Solow decomposition have become the foundations from which the KLEMS economic productivity accounts were developed. In the following developments, the idea of global output decomposition was put forward. This required the inclusion of intermediate input (intermediate consumption) as a production factor in the decomposition and replacing gross domestic product (GDP) with gross value added (GVA).
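A minimal numerical sketch of the Solow decomposition (2) may help fix ideas: the residual (TFP growth) is whatever remains of output growth after subtracting the share-weighted growth of capital and labour. All figures below are hypothetical and chosen only for illustration.

# Solow residual under constant returns to scale (labour share = 1 - alpha).
def solow_residual(dY_Y, dK_K, dL_L, alpha):
    beta = 1.0 - alpha
    return dY_Y - alpha * dK_K - beta * dL_L

# Example: 4% output growth, 5% capital growth, 1% labour growth, capital share 0.35.
print(solow_residual(0.04, 0.05, 0.01, alpha=0.35))   # ~0.016, i.e. 1.6% TFP growth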
Those changes were accompanied by the introduction of the System of National Accounts (SNA), which made it possible to integrate the production theory accounts with statistical methods, and therefore a decomposition by industry aggregations has become possible. A new term was introduced for the Solow residual (also known as total factor productivity, TFP), i.e. multifactor productivity (MFP). According to theoretical developments, KLEMS productivity accounting should be based on the global output decomposition, but there is an inconvenience that may not be negligible. The more vertical integration of firms' economic activities is present in a given economy, the more intermediate consumption is hidden as intrafirm supplies that are not statistically reported. This hinders international comparisons because there are huge differences in the vertical integration of economic activities in firms between different countries. In order to mitigate this problem to some degree, an appropriate subdivision of the economy into statistical industries is chosen (the ideal subdivision would mean that vertical integration happens only within the statistical industries, not between them). A similar methodology to KLEMS productivity accounting is used in the OECD productivity accounts, as shown in Figure 1. However, in order to encompass as many countries as possible, only a decomposition of GDP is applied, instead of global output and GVA decompositions (see: OECD, 2001; 2009, and particularly: OECD, 2013: 66-70). There is no mention of intermediate input. Some assumptions in the OECD methodology are relaxed in comparison with the KLEMS methodology, such as constant returns to scale, which are considered as being only approximately met. In contrast to the OECD methodology, the KLEMS methodology goes further in its details. It also contains the decomposition of the labour factor into the contributions of hours worked and labour quality, as well as the decomposition of the capital factor into the contributions of ICT and non-ICT capital. If the global output decomposition is also performed, then the intermediate input is decomposed into the above-mentioned contributions of energy, materials and services. 1 In the preparatory works, the OECD growth accounting methodology was studied as well for possible insights; see: OECD (2001; 2009; 2013), Wölfl, Hajkova (2007). 2 See also a large overview of the subject: Jorgenson (2009). The EU KLEMS methodology differs in a few details from Poland KLEMS and this matter will be referred to later on. The gross output decomposition by industry takes the form:
Δln Y_jt = v̄^X_jt · Δln X_jt + v̄^K_jt · Δln K_jt + v̄^L_jt · Δln L_jt + Δln A^Y_jt,   (3)
where Y is the output, X the intermediate consumption, K the capital stock, L the labour factor, and where A^Y stands for residual multifactor productivity (MFP). These values are subscripted by j for industries and t for years. v̄ with appropriate superscripts and subscripts are the average value shares of the individual factors in the output (indicated in the superscripts by X, K and L) for two discrete time periods t − 1 and t, which are calculated through linear interpolation as v̄ = (v_{t−1} + v_t)/2 (subscripts omitted here for simplicity). Since the growth of A^Y is residually calculated, equation (3) is always met.
The term on the left-hand side and the three factor terms on the right-hand side should be calculated by aggregations with the use of Törnqvist quantity indices as follows: where y stands for individual enterprises or given groups of enterprises within a given industry j (whose aggregation is usually done on a regular basis by the NSI 3 departments responsible for the National Accounts), l stands for adopted labour types in a given methodology (18 in and to use the Törnqvist quantity indices independently for the three components of the intermediate input 4 . As for the EU KLEMS countries performing this accounting, the methodology has been reduced to a GVA decomposition following the standard equation: 3 National Statistical Institutes. 4 The formulae for the three Törnqvist quantity indices are very similar to formulae (4), (5), (6) and (7), with appropriate superscripts and subscripts. where V is GVA and where A V stands for residual MFP. w ̅ with appropriate subscripts are average value shares of the individual factors in GVA (defined in the superscripts as K and L) for two discrete time periods t -1 and t, which are calculated through linear interpolation in a similar way as v ̅ for the above-mentioned formula (3). The other symbols have the same meanings as in equation (3). Both in (3) and (9), the capital factor is decomposed into two sub-factors as follows: where KIT stands for ICT capital and KNIT for non-ICT capital, treated as separate factors, which is expressed also by their different shares jt (appropriately superscripted). These shares are also calculated in the same way as other shares through linear interpolation. The Törnqvist quantity indices should be used here also independently for the two sub-factors. The labour factor is decomposed somehow differently as follows: where H stands for hours worked and Q for labour quality, treated however as a single factor, which is expressed by the same share as for L (calculated also in a similar way as the other shares through linear interpolation). Here the growths of the so-called sub-factors sum up directly to the growth of the entire labour factor as follows: The labour quality sub-factor growth is therefore residually calculated through subtraction. This small difference in comparison with the capital factor decomposition (3) is, however, of no importance as far as the additivity of the sub-factor contributions to the GVA growth is considered 5 . We assume here that the total labour factor (L) growth is the sum of the physical growth of the working labour force accounted as hours worked (H) and of the labour quality (Q) growth. If labour quality is understood as labour composition (LC), as in the EU KLEMS framework, then it should be calculated according to the formula: 5 Equation (15) in O'Mahony, Timmer (2009: F378) also expresses this difference but instead of "labour quality", we have "labour composition", which is more narrowly defined. The first term on the right-hand side is the Törnqvist quantity index applied over 18 kinds of labour l in individual industries j. v ̅ ljt are average value shares of the individual labour kinds l in industries j calculated in a similar way as for formula (3) through linear interpolation. However, labour quality contribution can be understood also differently, as hourly wage growth contribution (arising from wage changes within the above-mentioned 18 kinds of labour), and it may therefore give different results to some extent. 
In such a case, the formula for labour quality should be: where W with appropriate subscripts stands for the total labour compensation in the given aggregation. For Poland, the final data are now ready in both methodologies of labour quality calculation. All data have been calculated after being converted initially into 2005 prices and presently into 2010 prices, following the same change in Eurostat transmission tables that happened during the work on KLEMS in Poland. The chain index number theory as presented, for example, by Schreyer (2004) and Milana (2009) was applied. Information on data processing by other countries was studied for comparison and reference in Gouma, Timmer (2013a;2013b). During the work, the ESA'95 Eurostat system changed into the ESA2010 system. Polish data processing issues For the labour factor, data are available as a representative survey with a code name Z-12 for the even years : 2004, 2006, 2008, 2010, 2010 and 2014. They are therefore sufficient to perform KLEMS accounting with the final results from 2005 onward. The pre-2004 data are unsystematically and inconsistently collected, therefore of very low quality. For the uneven years, linear interpolation has been applied. For the year 2004, the data deliver information on the number of full-time workers, average hourly gross wages per hour worked in the nominal time and in the overtime during the entire years by full-time workers in Polish zlotys and the number of hours worked by full-time workers. From 2006 onward these data concern also part-time workers. The data are in the NACE 1 classification system (European equivalent of ISIC 3) for the years 2004-2007. From 2008 onward, they are in the NACE 2 classification system (European equivalent of ISIC 4). However, the Demographic Surveys and Labour Market Department of the Central Statistical Office of Poland delivered the data for 2008 also in the NACE 1 classification system 6 . 6 A great many thanks to our colleagues from the CSO Demographic Surveys and Labour Market Department for having performed this conversion, and also for compiling data from the above-mentioned Z12 survey. The growths for 2008 could then be calculated by subtraction of the 2007 levels from the 2008 levels in the NACE 1 system, whereas the growths for 2009 could be calculated by subtraction of the 2008 levels from the 2009 levels in the NACE 2 system. The other ways of doing these calculations delivered results with unusual breaks. When necessary, a simplified subdivision into 14 wide industries combining the two NACE systems provided by the EU KLEMS was also used. According to the KLEMS requirements, the data of the Z-12 survey are available by 18 above-mentioned labour kinds, which arise from subdivisions by 2 sexes, 3 age groups and 3 education attainments. This way, data matrices 18X14 were available for further data processing. The data from the above-mentioned Z-12 representative survey concern 7-8 million employees (the number slowly increasing over the above-mentioned period of 2004-2014), which is not the entire labour market of about 14-16 million of employees together with the self-employed. However, the said data were used only as a structure to distribute the entire labour market data acquired from some other source. The best option for this other source was to use Eurostat transmission tables, which are templates provided by Eurostat to individual countries' National Statistical Institutes (NSIs) to be filled with data. 
This is because they are filled according to uniform regimes for all the countries and in accordance with the SNA and its European equivalent, the ESA national account system. As far as the capital factor is concerned, it is to be divided into nine categories, according to the KLEMS framework 7: 1) residential structures, 2) non-residential structures, 3) transport equipment, 4) other machinery and equipment, 5) computing equipment, 6) communications equipment, 7) agricultural biological assets, 8) intangibles, 9) software. In the practical implementation, agricultural biological assets and the intangibles are usually combined into the "other assets" category, therefore the EU KLEMS data sets have only 8 categories of assets. The category of residential structures is specific because there is the problem of ownership vs. use here, and as Timmer et al. (2007a: 42) mention, it is unclear how individual countries deal with this problem. If residential capital is excluded from the total capital stock, then only 7 asset categories are left in the accounts, just as is done in the OECD methodology (see Figure 1). 7 There is some discrepancy in the names of the assets between the Eurostat transmission tables and the EU KLEMS manuals, and also with other references. Therefore, we are using here our own nomenclature, which is quite similar and concerns the same items. It must be noted here that in the case of Poland the inclusion of dwellings in KLEMS growth accounting remains a controversial issue because of the opaque Polish dwelling market, which does not always reflect real values. However, for international comparisons with the EU KLEMS countries, it should be included. For Poland, the final data are now ready in both methodologies of capital stock calculation. In Poland, the asset categories 5) computing equipment and 6) communications equipment are not extracted from the category 4) other machinery and equipment. Also, the category 9) software is not extracted from the category 8) intangibles. As we know, in the EU KLEMS framework, these three categories of assets are aggregated into the so-called ICT capital, and the other remaining categories of assets are aggregated into the so-called non-ICT capital. Therefore, for the capital factor, the basic operation was to extract these three categories of ICT capital. This was done thanks to the Supply and Use Tables (SUT), from which the structure of software services was used to distribute the values of the aggregated investment figures present in these tables over the above-mentioned three categories of ICT capital, based on the assumption that software services are quite proportional to these three categories of investment (they can be seen as "collateral"). Then the resulting structure was turned into the 34 EU KLEMS aggregations. Non-ICT capital values were calculated by subtracting ICT capital values from total capital values. Since SUTs are available only in NACE 1 and NACE 2, not converted between each other (and they shall not be converted!), the same 14 wide industry correspondences were used as for the labour factor 8. Asset stocks were used to distribute aggregate capital income shares into capital income shares by industries. This method was chosen because the relatively high quality and very detailed data on asset stocks (a specific and outstanding feature of Polish statistics) made it superior to other methods 9.
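The ICT-extraction step described above can be sketched very roughly as follows. This is not the authors' code: the industry structure of software services from the Supply and Use Tables is used here only as a distribution key for an aggregate ICT investment figure, with non-ICT capital following by subtraction; the industry labels, the aggregate ICT total and all values are hypothetical assumptions.

# Distribute an aggregate ICT investment figure by software-services shares (illustrative).
total_investment = {"C26": 120.0, "J62": 300.0, "G47": 80.0}   # total investment by industry
software_services = {"C26": 30.0, "J62": 210.0, "G47": 12.0}   # software services output (SUT)
ict_investment_total = 90.0                                     # assumed economy-wide ICT investment

key_total = sum(software_services.values())
ict_capital, non_ict_capital = {}, {}
for ind in total_investment:
    share = software_services[ind] / key_total      # software-services share as distribution key
    ict_capital[ind] = share * ict_investment_total
    non_ict_capital[ind] = total_investment[ind] - ict_capital[ind]

print(ict_capital)       # e.g. {'C26': 10.7, 'J62': 75.0, 'G47': 4.3}
print(non_ict_capital)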
One expected problem in KLEMS productivity accounting in Poland was the transition from the ESA'95 to the ESA2010 system, as not all data were converted from one system to the other (and some data shall never be converted, as is the case with SUTs from before 2010). Therefore, although only occasionally, there was a need to use mixed data from both systems. To test whether this is acceptable, subtractions between asset growths in the ESA2010 system and asset growths in the ESA'95 system were performed, and it was found that the differences between the two systems in this case were always negligible. The data prepared in this way were further processed in conformity with the methodology presented in the previous section. However, for the sake of comparison, four techniques of calculation, which arise from two dichotomies, were used. 8 Otherwise, the two factors could well not balance each other in gross value added from place to place. 9 A great many thanks to our colleagues from the CSO National Accounts Department for compiling data on asset structures for us. One dichotomy is the possibility of calculating everything at all aggregation levels or, alternatively, only at the lowest aggregations and then aggregating partial results using the above-mentioned Törnqvist quantity index, which theoretically is the best procedure. The other is the possibility of using two mathematical formulae for relative growth, i.e. Δx/x and Δln x, and here, when the Törnqvist quantity index is used, logarithms are theoretically necessary. The four techniques delivered similar results, but the most appropriate technique, based on the Törnqvist quantity index, is the one to be referred to. Conclusions The draft results of KLEMS productivity accounting for Poland are now ready for the years 2005-2014 and they are to be posted on the CSO website 10. In the 2018 release, the years 2015-2016 should be covered (if no unforeseen setback occurs), and the accounts shall be developed further 11. Meanwhile, on the EU KLEMS platform, a September 2017 release has been posted, just before the final publication of this paper, with 2014 or 2015 (depending on the country) as the last covered year. The results for Poland can therefore be compared with those of the EU KLEMS countries. They are quite similar to those of Gradzewicz et al. (2014) but based on a methodology more in line with KLEMS, thanks to the data operations presented in this paper. The final data for Poland cover both labour quality understood as labour composition and labour quality understood as hourly remuneration change. They also cover capital stocks both including and excluding residential capital. This gives four available combinations. Thus, it was proven that the KLEMS economic productivity accounts for Poland can be carried out and possibly extended.
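As a closing illustration of the Törnqvist-based aggregation referred to throughout the accounts, the following minimal sketch aggregates log growth rates of several components using value shares averaged over two periods, v̄ = (v_{t−1} + v_t)/2, as in the formulas above. All input figures are hypothetical.

# Törnqvist-style aggregation of component growth with period-averaged value shares.
import math

def tornqvist_growth(levels_prev, levels_curr, values_prev, values_curr):
    tot_prev, tot_curr = sum(values_prev.values()), sum(values_curr.values())
    growth = 0.0
    for k in levels_prev:
        v_bar = 0.5 * (values_prev[k] / tot_prev + values_curr[k] / tot_curr)
        growth += v_bar * math.log(levels_curr[k] / levels_prev[k])
    return growth

# Example: aggregate capital-services growth over ICT and non-ICT assets.
q_prev = {"ICT": 100.0, "nonICT": 900.0}   # quantity indices, year t-1
q_curr = {"ICT": 115.0, "nonICT": 918.0}   # quantity indices, year t
w_prev = {"ICT": 20.0, "nonICT": 180.0}    # capital compensation, year t-1
w_curr = {"ICT": 24.0, "nonICT": 184.0}    # capital compensation, year t
print(tornqvist_growth(q_prev, q_curr, w_prev, w_curr))  # aggregate Δln K ≈ 0.033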
4,435.2
2018-02-28T00:00:00.000
[ "Economics" ]
Private and Secure Secret Shared MapReduce (Extended Abstract) . Data outsourcing allows data owners to keep their data in public clouds, which do not ensure the privacy of data and computations. One fundamental and useful framework for processing data in a distributed fashion is MapReduce. In this paper, we investigate and present techniques for executing MapReduce computations in the public cloud while preserving privacy. Specifi-cally, we propose a technique to outsource a database using Shamir secret-sharing scheme to public clouds, and then, provide privacy-preserving algorithms for performing search and fetch, equijoin, and range queries using MapReduce. Consequently, in our proposed algorithms, the public cloud cannot learn the database or computations. All the proposed algorithms eliminate the role of the database owner, which only creates and distributes secret-shares once, and minimize the role of the user, which only needs to perform a simple operation for result reconstructing. We evaluate the efficiency by ( i ) the number of communication rounds (between a user and a cloud), ( ii ) the total amount of bit flow (between a user and a cloud), and ( iii ) the computational load at the user-side and the cloud-side. Introduction Data and computation outsourcing move databases and computations from a private cloud to a public cloud, which is not under the control of a single user. Thus, the outsourcing results in less burden on a private cloud in terms of the maintenance of databases, infrastructures, and executions of queries. Unfortunately, the ease in storing data and executing computations in the public clouds implies a risk of violating security and privacy of the databases and the computations. MapReduce [4] provides efficient and fault tolerant parallel processing of largescale data without dealing with security and privacy of data and computations. The main obstacle for providing privacy-preserving framework for MapReduce in the adversarial (public) clouds is computational and storage efficiency. An adversarial cloud may breach the privacy of data and computations. In this paper, we present techniques for executing MapReduce computations in public cloud while preserving privacy. Motivating examples. We present an example of equijoin to show the need for security and privacy of data and query execution using MapReduce in the public cloud. Consider that the relations X and Y belong to two organizations, e.g., a company and a hospital, while a third user wants to perform the equijoin. However, both the two organizations want to provide results while maintaining the privacy of their databases, i.e., without revealing the whole database to the other organization or the user. Hence, it is required to perform the equijoin in a secure and privacy-preserving manner. Our contributions. We are interested in making a secure and privacy-preserving computation execution and storage-efficient technique for MapReduce computations in the public clouds. Hence, our focus is on information-theoretically secure data and computation outsourcing technique and query execution using MapReduce. Specifically, we use Shamir secret-sharing (SSS) [14] for making secret-shares of each tuple of a relation and send them to the clouds. A user can execute her queries using accumulatingautomata (AA) [5] on these secret-shares without revealing queries/data to the cloud. We can perform count (Section 4.1), search and fetch operations (Section 4.2) in a privacy-preserving manner. 
Due to the space limitation, we omit details of privacy-preserving range selection and equijoin, which may be found in [7]. Related work. PRISM [2], PIRMAP [12], EPiC [1], MrCrypt [16], and Crypsis [15] provide privacy-preserving MapReduce execution in the cloud on encrypted data. However, all these protocols increase computation time due to their dependency on encryption and decryption of data. The authors of [8] provide a privacy-preserving join operation using secret-sharing. However, the approach of [8] requires that two different data owners share some information for constructing an identical share for identical values in their relations. The authors of [9] provide a technique for data outsourcing using a variation of SSS. However, the approach of [9] suffers from two major disadvantages, as follows: (i) in order to produce an answer to a query, the data owner has to work on all the shares, and hence, the data owner performs a lot of work instead of the cloud; and (ii) a third party cannot directly issue any query on secret-shares, and it has to contact the data owner. In [9], the authors provide a way of constructing polynomials that can maintain the order of the secrets. However, this kind of polynomial is based on an integer ring (no modular reduction) rather than a finite field; thus, it has a potential security risk. There are some other works [11,10,3] that provide searching operations on secret-shares. In [11], a data owner builds a Merkle hash tree [13] according to a query. In [10], a user knows the addresses of the desired tuples, so they can fetch all those tuples obliviously from the clouds without performing a search operation in the cloud. To the best of our knowledge, there is no algorithm that (i) eliminates the need for a database owner except for one-time creation and distribution of secret-shares, (ii) minimizes the overhead at the user-side, and (iii) provides information-theoretically secure MapReduce computations in the cloud. [Table 1. Comparison of different algorithms with our algorithms. Notations: Online: perform string matching in the cloud; Offline: perform string matching at the user-side; E: encryption-decryption based; SS: secret-sharing based; vSS: a variant of SS; n: # tuples; m: # attributes; w: bit-length of a pattern.] In this paper, we build a technique for data and computation outsourcing based on SSS and AA [5]. In addition, our algorithms can perform a string matching operation on secret-shares in the cloud, without downloading the whole database in the form of secret-shares. However, most of the existing secret-sharing based privacy-preserving algorithms are unable to do string matching operations in the cloud; see Table 1. The proposed technique overcomes all the disadvantages of the existing secret-sharing based data outsourcing techniques [8,9,11,10,3]. Thus, there is no need for (i) sharing information among different data owners, (ii) work at the database owner, except the creation and distribution of secret-shares, or (iii) having an identical share for multiple occurrences of a value; moreover, (iv) a third party can directly execute her queries in the clouds without revealing her queries to the clouds.
System and Adversarial Settings We consider, for the first time, data and computation outsourcing of the form of secretshares to c non-communicating clouds that they do not exchange data with each other, only exchange data with the user or the database owner. The system architecture. The architecture is simple but powerful and assumes the following: STEP 1. A data owner outsources her databases of the form of secret-shares to c (noncommunicating) clouds only once; see STEP 1 in Fig. 1. We use c clouds to provide privacy-preserving computations. Note that a single cloud cannot provide privacypreserving computations using secret-sharing. STEP 2. A preliminary step is carried out at the user-side who wants to perform a MapReduce computation. The user sends a query of the form of secret-shares to all c clouds to find the desired result of the form of secret-shares; see STEP 2 in Fig. 1. The query must be sent to at least c < c number of clouds, where c is the threshold of SSS. The clouds deploy a master process that executes the computation by assigning the map tasks and the reduce tasks; see STEP 3 in Fig. 1. The user interacts only with the master process in the cloud, and the master process provides the addresses of the outputs to the user. It must be noted that the communication between the user and the clouds is presumed to be the same as the communication between the user and the master process. STEP 4. The user fetches the outputs from the clouds and performs interpolation (with the help of reducers) for obtaining the secret-values; see STEP 4 in Fig. 1. Adversarial Settings. We assume, on one hand, that an adversary cannot launch any attack against the data owner. Also, the adversary cannot access the secret-sharing algorithm and machines at the database owner side. On the other hand, an adversary can access public clouds and data stored therein. A user who wants to perform a computation on the data stored in public clouds may also behave as an adversary. Moreover, the cloud itself can behave as an adversary, since it has complete privileges to all the machines and storage. Both the user and the cloud can launch any attack for compromising the privacy of data or computations. We consider an honest-but-curious adversary, which performs assigned computations correctly, but tries to breach the privacy of data or MapReduce computations. However, such an adversary does not modify or delete information from the data. We assume that an adversary can know less than c < c clouds locations that store databases and execute queries. In addition, the adversary cannot eavesdrop all the c or c channels (between the database owner and the clouds, and between the user and the clouds). Hence, we do not impose private communication channels. Under such an adversarial setting, we provide a guaranteed solution so that an adversary cannot learn the data or computations. It is important to mention that an adversary can break our protocols by colluding c clouds, which is the threshold for which the secret sharing scheme is designed for. Parameters for analysis. We analyze our privacy-preserving algorithms on the following parameters: (i) communication cost: is the sum of all the bits that are required to transfer between a user and a cloud; (ii) computational cost: is the sum of all the bits over which a cloud or a user works; and (iii) number of rounds: shows how many times a user communicates with a cloud for obtaining the results. 
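To make STEP 1 and STEP 4 of the architecture concrete, the following is a minimal sketch (not the paper's code) in which the owner splits a secret value into c Shamir secret-shares using a random polynomial whose degree is one less than the threshold, and the user reconstructs the value from any threshold-sized subset of shares by Lagrange interpolation at x = 0. The prime modulus, the number of clouds and the threshold are illustrative assumptions.

# Shamir share creation (owner side) and reconstruction (user side), illustrative only.
import random

P = 2_147_483_647  # prime field modulus chosen for the example

def make_shares(secret, threshold, num_clouds):
    # f is a random polynomial of degree threshold-1 with f(0) = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, num_clouds + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over GF(P).
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=42, threshold=3, num_clouds=5)   # 5 clouds, threshold 3
print(reconstruct(shares[:3]))   # any 3 shares recover 42; fewer reveal nothing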
Creation and Distribution of Secret-Shares of a Relation

Assume that a database contains only English words. Since the English alphabet consists of 26 letters, each letter can be represented as a unary vector with 26 bits. Hence, the letter 'A' is represented as (1_1, 0_2, 0_3, ..., 0_26), where the subscript denotes the position in the vector; since 'A' is the first letter, the first value in the vector is one and the others are zero. Similarly, 'B' is (0_1, 1_2, 0_3, ..., 0_26), and so on. The reason for using the unary representation here is that it makes verifying whether two letters are identical very easy. The expression S = Σ_{i=0}^{r} u_i × v_i compares two letters, where (u_0, u_1, ..., u_r) and (v_0, v_1, ..., v_r) are two unary representations. It is clear that whenever two letters are identical, S equals one; otherwise, S equals zero. A binary representation can also be used, but the comparison function differs from the one used for the unary representation [6].

A secure way for creating secret-shares. When outsourcing a vector to the clouds, we use SSS and make secret-shares of every bit by selecting different polynomials of an identical degree. For example, we create secret-shares of the vector of 'A' ((1_1, 0_2, 0_3, ..., 0_26)) by using 26 polynomials of an identical degree, since the length of the vector is 26. Following that, we can create secret-shares for all the other letters and distribute them to different clouds. Since we use SSS, a cloud cannot infer a secret. Moreover, it is important to emphasize that we use different polynomials for creating the secret-shares of each letter, so that multiple occurrences of a word in a database have different secret-shares. Therefore, a cloud is also unable to learn the total number of occurrences of a word in the whole database.

Secret-shares of numeral values. We follow a similar approach for creating secret-shares of numeral values as for alphabets. In particular, we create a unary vector of length 10 and set all the values to 0 except for a single 1 at the position corresponding to the digit. For example, '1' becomes (1_1, 0_2, ..., 0_10). After that, we use SSS to make secret-shares of every bit in each vector by selecting different polynomials of an identical degree for each number, and send them to multiple clouds.

Privacy-Preserving Query Processing on Secret-Shares using MapReduce in the Clouds

Count Query

We present a privacy-preserving algorithm for counting the number of occurrences of a pattern, p, in the cloud; throughout this section, we denote a pattern by p. This algorithm is divided into two phases, as: PHASE 1: Privacy-preserving counting in the clouds, and PHASE 2: Result reconstruction at the user-side. In short, we apply a string matching algorithm, implemented using AA, that compares each value of a relation with p. If a value and p match, the comparison results in 1; otherwise, it results in 0. We apply the same comparison to each value and accumulate all the ones, which gives the number of occurrences of p. Note that all the values of a relation, the pattern, and the results, i.e., 0 or 1, are in the form of secret-shares.

Working at the user-side. A user creates unary vectors for each letter of p. In order to hide the vectors of p, the user creates secret-shares of each vector of p, as suggested in Section 3, and sends them to the c clouds. In addition, the user sends the length x of p and the attribute of the relation in which to count p to the c clouds.

Working in the cloud.
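To make the encoding and sharing steps concrete, the following Python sketch (our illustration, not code from the paper) builds the 26-bit unary vector of a letter, Shamir-shares every bit with its own fresh random polynomial, and evaluates the comparison S = Σ u_i × v_i on plaintext vectors. The field prime, the helper names, and the parameters num_clouds and threshold are assumptions made for the example; in the actual protocol the comparison is carried out on shares in the cloud.

```python
# Minimal sketch of the encoding and per-bit sharing described above (illustrative only).
import random
import string

PRIME = 2_147_483_647  # an assumed field prime; any prime larger than the secrets works

def unary_vector(letter: str) -> list:
    """'A' -> [1,0,...,0], 'B' -> [0,1,0,...,0], ... (26 positions)."""
    vec = [0] * 26
    vec[string.ascii_uppercase.index(letter.upper())] = 1
    return vec

def share_bit(bit: int, num_clouds: int, threshold: int) -> list:
    """Shamir-share one bit: f(0) = bit, random polynomial of degree threshold - 1."""
    coeffs = [bit] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    evaluate = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [evaluate(cloud_id) for cloud_id in range(1, num_clouds + 1)]  # cloud i holds f(i)

def share_letter(letter: str, num_clouds: int, threshold: int) -> list:
    """A fresh polynomial per bit, so repeated letters yield unrelated shares."""
    return [share_bit(b, num_clouds, threshold) for b in unary_vector(letter)]

def compare_plain(u: list, v: list) -> int:
    """S = sum_i u_i * v_i: 1 iff both unary vectors encode the same letter."""
    return sum(ui * vi for ui, vi in zip(u, v))

# Identical letters compare to 1, different letters to 0.
assert compare_plain(unary_vector('A'), unary_vector('A')) == 1
assert compare_plain(unary_vector('A'), unary_vector('B')) == 0
shares_of_A = share_letter('A', num_clouds=4, threshold=2)  # 26 bits x 4 shares each
```

Because every bit of every letter is shared under an independent polynomial, two shares of the same letter look unrelated to a cloud, which is what prevents the cloud from counting how often a word occurs in the outsourced relation.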
Now, a cloud has two things: (i) a relation in the form of secret-shares, and (ii) a search pattern in the form of secret-shares together with its length, x. In order to count the number of occurrences of p, the mapper in the cloud performs x + 1 steps (see Table 2) for comparing the pattern with each value of the specified attribute of the relation; an entry in Table 2 shows that node j is executing a step in iteration i. The final value of the node N_{x+1}, which is sent to the user, is the number of occurrences of the pattern. At this time, the mapper is unable to know the value of the node N_{x+1} in each iteration, and it sends the final value of N_{x+1} to the user in the form of a ⟨key, value⟩ pair, where the key is the identity of the input split over which the operation was performed, and the corresponding value is the final value of the node N_{x+1} in the form of secret-shares. The user collects ⟨key, value⟩ pairs from all the clouds, or from a sufficient number of clouds such that the secret can be reconstructed from those shares.

Result reconstruction at the user-side. We need to reconstruct the final value of the node N_{x+1}. The user has ⟨key, value⟩ pairs from all the clouds. All the values corresponding to a key are assigned to a reducer that performs Lagrange interpolation and provides the final value of the node N_{x+1}. If there is more than one reducer, then after the interpolation the sum of the final values gives the total number of occurrences of p.

Aside. If a user searches for John in a database containing names like 'John' and 'Johnson,' then our algorithm will report two occurrences of John. However, this is a problem inherent to string matching. In order to search for a pattern precisely, we may use a terminating symbol for indicating the end of the pattern.

Search and Fetch Queries

In this section, we provide a privacy-preserving algorithm for fetching all the tuples containing p. The proposed algorithms first count the number of tuples containing p, and then fetch all the tuples after obtaining their addresses. Specifically, we provide 2-phased algorithms, where: PHASE 1: Finding addresses of tuples containing p, and PHASE 2: Fetching all the tuples containing p.

Unary occurrence of a pattern. When only one tuple contains p, there is no need to obtain the address of the tuple, and hence, we fetch the whole tuple in a privacy-preserving manner. Here, we explain how to fetch a single tuple containing p.

Fetching the tuple. The user sends secret-shares of p. The cloud executes a map function on a specific attribute, and the map function matches p with the i-th value of the attribute. Consequently, the map function results in either 0 or 1 in the form of secret-shares; if p matches the i-th value of the attribute, the result is 1. After that, the map function multiplies the result (0 or 1) with all the m values of the i-th tuple. In this manner, the map function creates a relation of n tuples and m attributes. When the map function finishes over all the n tuples, it adds the secret-shares of each attribute and sends them, as S_1 || S_2 || ... || S_m, to the user, where S_i is the sum of the secret-shares of the i-th attribute. The user, on receiving shares from all the clouds, executes a reduce function that performs interpolation and provides the desired tuple containing p.

Aside. When we multiply the output of the string matching operation, which is in the form of secret-shares, with all the values in a tuple, every value of the tuple is multiplied by either 0 or 1, in the form of secret-shares. Thus, the sum of all the secret-shares of an attribute yields only the value of that attribute in the tuple containing p. By performing identical operations on each tuple and finally adding all the secret-shares of each attribute, the cloud is unable to know which tuple is fetched.
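The arithmetic behind this single-tuple fetch, together with the Lagrange interpolation applied by the user-side reducer, can be sketched as follows. This is a simplified illustration under stated assumptions (plaintext match bits instead of secret-shared ones, the same illustrative field prime as above, and hypothetical helper names); in the protocol itself both the match bits and the tuple values are secret-shares, so the multiplications and column sums S_1, ..., S_m are computed share-wise in the cloud and only those sums reach the user.

```python
# Sketch of (1) the multiply-and-sum fetch of the unique tuple containing the
# pattern and (2) Lagrange interpolation at x = 0, which reconstructs a secret
# from the shares returned by the clouds (illustrative, not the paper's code).
PRIME = 2_147_483_647  # assumed field prime, as in the previous sketch

def oblivious_fetch(match_bits, relation):
    """match_bits[i] is 1 iff tuple i contains the pattern (assumed unique).
    Summing column-wise collapses S_j to the matching tuple's j-th value."""
    n, m = len(relation), len(relation[0])
    return [sum(match_bits[i] * relation[i][j] for i in range(n)) for j in range(m)]

def interpolate_at_zero(points):
    """Recover f(0) from share points (x_i, f(x_i)) over the field."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * (-xj)) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Tuple 2 (0-indexed) matches, so the fetch returns exactly that tuple's values.
relation = [[5, 10], [7, 20], [9, 30]]  # n = 3 tuples, m = 2 attributes
assert oblivious_fetch([0, 0, 1], relation) == [9, 30]
# Reconstructing a secret 42 shared as f(x) = 42 + 7x over the field:
assert interpolate_at_zero([(1, 49), (2, 56)]) == 42
```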
Multiple occurrences of a pattern. When multiple tuples contain p, we cannot fetch all those tuples obliviously without obtaining their addresses. Therefore, we first need to perform a pattern search algorithm to obtain the addresses of all the tuples containing p, and then fetch the tuples in a privacy-preserving manner. Throughout this section, we consider that multiple tuples contain p. This algorithm has two phases, as follows: PHASE 1: Finding the addresses of the desired tuples, and PHASE 2: Fetching all the tuples.

Tree-based algorithm. We propose a search-tree-based keyword search algorithm that consists of two phases: finding the addresses of the desired tuples in multiple rounds, and then fetching all the tuples in one more round. We can also obtain the addresses (or line numbers) in a privacy-preserving manner if only a single tuple contains p. Thus, for the case of finding addresses of tuples containing p, we divide the whole relation into blocks such that each block belongs to one of the following cases:
1. A block contains no occurrence of p, and hence, no fetch operation is needed.
2. A block contains one or multiple tuples, but only a single tuple contains p.
3. A block contains h tuples, and all the h tuples contain p.
4. A block contains multiple tuples, of which more than one, but not all, contain p.

Finding addresses. We follow the idea of partitioning the database and counting the occurrences of p in the partitions, until each partition satisfies one of the above-mentioned cases. Specifically, we initiate a sequence of Query & Answer (Q&A) rounds. In the first round of Q&A, we count the occurrences of p in the whole database (or in the input split assigned to a mapper) and then partition the database into blocks, since we assumed that multiple tuples contain p. In the second round, we again count the occurrences of p in each block and focus on the blocks satisfying Case 4. There is no need to consider the blocks satisfying Case 2 or 3, since we can apply the algorithm given for unary occurrence of a pattern in both cases. However, if multiple, but not all, tuples of a block in the second round contain p, i.e., Case 4, we partition such a block again until it satisfies Case 1, 2, or 3. After that, we can obtain the addresses of the related tuples using a method similar to the algorithm given for unary occurrence of a pattern.

Fetching tuples. We use the approach described in the naive algorithm for fetching multiple tuples after obtaining the addresses of the tuples.
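A rough sketch of the Q&A partitioning logic is shown below. It is our own simplification that operates on a plaintext 0/1 match vector; in the protocol the per-block counts are themselves returned as secret-shares and become known to the user only after interpolation. The function name and the halving strategy are illustrative assumptions.

```python
# Recursive partition-and-count sketch of the tree-based address search
# (illustrative only): blocks in Case 1 are dropped, Case 2/3 blocks are
# returned for the single-occurrence style fetch, Case 4 blocks are split again.
def find_blocks(matches, lo, hi):
    """matches is a 0/1 vector; return half-open blocks [lo, hi) in Case 2 or 3."""
    occurrences = sum(matches[lo:hi])
    size = hi - lo
    if occurrences == 0:                         # Case 1: nothing to fetch
        return []
    if occurrences == 1 or occurrences == size:  # Case 2 or Case 3
        return [(lo, hi)]
    mid = (lo + hi) // 2                         # Case 4: one more Q&A round
    return find_blocks(matches, lo, mid) + find_blocks(matches, mid, hi)

# Tuples 1 and 4 contain the pattern; each half ends up as a Case-2 block.
assert find_blocks([0, 1, 0, 0, 1, 0], 0, 6) == [(0, 3), (3, 6)]
```

Each level of the recursion corresponds to one Q&A round of count queries over the blocks that still satisfy Case 4.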
4,809
2016-07-18T00:00:00.000
[ "Computer Science" ]
Saccharomyces boulardii promoters for control of gene expression in vivo Background Interest in the use of engineered microbes to deliver therapeutic activities has increased in recent years. The probiotic yeast Saccharomyces boulardii has been investigated for production of therapeutics in the gastrointestinal tract. Well-characterised promoters are a prerequisite for robust therapeutic expression in the gut; however, S. boulardii promoters have not yet been thoroughly characterised in vitro and in vivo. Results We present a thorough characterisation of the expression activities of 12 S. boulardii promoters in vitro in glucose, fructose, sucrose, inulin and acetate, under both aerobic and anaerobic conditions, as well as in the murine gastrointestinal tract. Green fluorescent protein was used to report on promoter activity. Promoter expression was found to be carbon-source dependent, with inulin emerging as a favourable carbon source. Furthermore, relative promoter expression in vivo was highly correlated with expression in sucrose (R = 0.99). Conclusions These findings provide insights into S. boulardii promoter activity and aid in promoter selection in future studies utilising S. boulardii to produce therapeutics in the gut. Supplementary Information The online version contains supplementary material available at 10.1186/s12934-023-02288-8. Background The microbiome has increasingly been identified as a potential target for therapeutic interventions, due to its role in the development of a range of diseases [1][2][3].One approach to modify the microbiome involves the application of synthetic biology tools to engineer microbes for therapeutic applications.These living, engineered microbes, termed advanced microbiome therapeutics (AMTs), can produce peptides or small molecules with therapeutic activity directly in the gastrointestinal tract.The yeast Saccharomyces boulardii has had a long history of safe use as a probiotic [4] and many of genetic tools originally developed for S. cerevisiae have been successfully adapted for use in S. boulardii [5][6][7][8][9].Furthermore, as a eukaryotic organism, S. boulardii has the ability to perform more complex post-translational modifications on peptides and proteins [10], making it an attractive AMT chassis. In the context of developing novel AMTs, the presence of a consistent and reliable expression system becomes crucial [11].In particular, the expression of biosynthetic pathways and sensing circuits rely on the ability to balance pathway components using promoters of different strengths [12].Therefore, a library of well-characterised promoters is needed for the establishment of complex therapeutic production in S. boulardii.The activities of select S. cerevisiae promoters have become well-characterised under laboratory conditions [13][14][15][16] as well as under industrially relevant conditions, such as oxygen limitation [17] and heat stress [18,19].While there are high levels of correlation between promoter activities in S. cerevisiae and S. boulardii; there is some divergence in certain cases.For instance, P ALD6 is significantly stronger in S. boulardii than in S. cerevisiae [6].The markedly different expression under laboratory conditions and industrially relevant ones demonstrate the importance of environmental context in performing promoter characterisations.Studies with E. 
coli have identified promoters behaving similar in vitro and in vivo [20], yet expression of some promoters can vary greatly between in vitro and in vivo conditions [20] as well as along the gastrointestinal tract [21] as a result of the changing conditions across the GI tract [22][23][24].Despite this, the use of synthetic biology tools often relies heavily on genetic parts that have only been characterised under optimal laboratory conditions, not those found in the gastrointestinal tract.Currently, only one study has characterised promoter expression in S. boulardii in vitro and used these promoters to successfully express complex biosynthetic pathways in vivo [6].Yet, detailed characterisation of S. boulardii promoters in the gastrointestinal tract is needed to establish promoters that can be reliably used for in situ expression of complex biosynthetic pathways. Direct characterisation of promoters in the rodent gastrointestinal tract is the gold standard when it comes to understanding how promoters behave in vivo; however, animal work comes with economic, technical and ethical barriers [25].In vitro approaches, where gastrointestinal conditions are replicated in the lab, present a more accessible but potentially less accurate alternative [25].That said, the translatability of in vitro approaches for promoter characterisation has not yet been investigated for S. boulardii.In this study we develop in vivo and in vitro promoter characterisation protocols, using GFP as a tool to evaluate the performance of 12 S. boulardii promoters, to compare the translatability of various in vitro conditions.Our in vitro characterisation compares promoter expression under conventional conditions (aerobically, with glucose as a carbon source) and with conditions more relevant to the gastrointestinal tract (namely, carbon sources found in gut or diet, and anaerobic conditions).Finally, this work aims to provide researchers working with S. boulardii a library of promoters that have been thoroughly characterised in various carbon sources, and in the murine gastrointestinal tract. Promoter selection Suitable candidate promoters were identified via a literature search.We focused on literature of S. cerevisiae promoters, due to the lack of literature on S. boulardii, knowing that there should be a high level of transferability between the two species [6].Promoters were selected that met at least one of the following two criteria: it is in widespread use in synthetic biology applications, or its expression is dependent on conditions present in the gastrointestinal tract (such as low glucose or oxygen levels).On this basis, we selected twelve S. boulardii promoters for this study: P ALD6 , P CYC1 , P CYC7 , P DAN1 , P HSP26 , P HXT7 , P JEN1 , P SSA1 , P SUC2 , P TDH3 , P TEF1 and P TPI1 .All the sequences used had > 97% identity to those of S. cerevisiae S288C as determined by BLAST search (Table 1).The rationale for selecting each promoter is summarised in Table 1.The selected promoters are involved in diverse areas of cellular metabolism; however, those involved in carbon metabolism are overrepresented (Fig. 1A; Table 1).Due to the lack of available glucose in the lower sections of the gastrointestinal tract [26] we aimed to select promoters that do not require high levels of glucose for activity, hence the over representation of promoters involved in carbon metabolism. 
Characterisation of promoters under aerobic and anaerobic conditions using yEGFP as reporter We began by characterising the promoters in glucose under aerobic and anaerobic conditions.This allowed us to test if the S. boulardii promoters behave similarly to their homologues in S. cerevisiae and to test if our protocol could be used to characterise expression under anaerobic conditions.We chose yeast enhanced green fluorescence protein (yEGFP) [27] as our measure of expression as it has previously been shown to be a good reporter for gene expression [13].Although yEGFP requires molecular oxygen to fluoresce [28], it has a short maturation period once exposed to oxygen [29].While this precludes continuous fluorescence measurements, defined time points can be analysed by including an aerobic incubation step [20].This is desirable as yEGFP is brighter than anaerobic variants, which have primarily been designed for fluorescence imaging [30].It is therefore better at resolving small differences in expression, particularly for low expression promoters.We integrated the promoter-yEGFP expression cassettes at the XII-5 locus [31], which has previously been used for the expression of therapeutic peptides [8].Additionally, we generated a control strain with a negative integration (P null ) at the same site.In total, we generated 13 strains for characterisation. To determine which time points to analyse in depth, we first measured growth and fluorescence of biological triplicates continuously for 48 h aerobically with a microplate reader.The P HXT7 strain was the only strain to have a statistically significant difference in growth rate relative to all other strains, with a reduced maximum growth rate (Fig. 1B and Additional file 2: Table S1).Promoters P ALD6 , P CYC1 , P HXT7 , P SSA1 , P TDH3 , P TEF1 and P TPI1 , accumulated fluorescence over the course of the exponential phase.P HSP26 , had a steady level of fluorescence over the exponential phase (Fig. 1C).From here, we decided to focus on 8 h and 24 h.These time points cover exponential and stationary phase respectively, as well as significantly different fluorescence levels. For analysis of the 8-and 24 h time points, we used flow cytometry to measure the relative fluorescence intensity.Cycloheximide was added to samples to prevent translation of yEGFP [32] once the sample has been taken, and a 20 min aerobic incubation step was included for all samples to allow yEGFP to mature before measurement.Aerobically at 8 h promoter expression can be ranked from highest to lowest as follows: P TDH 3 > P TEF1 > P HXT7 > P SSA1 > P TPI1 > P ALD6 > P HSP26 > P CYC1 > P SUC2 > P JEN 1 > P CYC7 > P DAN1 > P null .Promoter expression was overall lower in anaerobic conditions than in aerobic conditions.Anaerobically at 8 h promoter expression can be ranked from highest to lowest as follows: P TEF 1 > P TDH3 > P HSP26 > P SSA1 > P HXT7 > P TPI1 > P DAN1 > P ADL6 > P SUC2 > P CYC7 > P CYC1 > P JEN1 > P null .P DAN1 was the only promoter with statistically significant higher expression (15-fold) in anaerobic conditions; P HSP26 was the only promoter with no statistical difference in expression between oxygen conditions (Fig. 1D).Expression was overall lower at 24 h than it was at 8 h; P HSP26 was the only promoter with higher expression at 24 h than 8 h (Fig. 1E).At 24 h, anaerobic expression was further reduced for most promoters; P DAN1 was no longer expressed beyond the level of the P null control strain (Fig. 1E). 
Overall, the relative expression levels correlate with previously published findings [13,14,16].Additionally, the induction of P DAN1 under anaerobic conditions serves to confirm that our method for anaerobic characterisation is functional.With this confirmation, we decided to move forward to characterising our selected promoters under conditions more relevant to the gastrointestinal tract. Carbon source-dependent regulation of S. boulardii promoters To represent the conditions of the gastrointestinal tract we prepared defined media with acetate, fructose, sucrose and inulin as carbon source.Acetate was chosen as the most common short chain fatty acid present in the colon [33], fructose and sucrose were chosen as common dietary carbon sources [34] and inulin was chosen as it is a common prebiotic that supports S. boulardii growth [35]. All the characterised promoters experienced carbonsource dependant changes in expression (Fig. 2A).At 8 h, under aerobic conditions, P JEN1 was induced 24-fold in acetate and 15-fold in inulin (relative to its expression in glucose), making it the promoter with the highest induction by a non-glucose carbon source.P SUC2 was induced 15-fold in inulin but not induced in sucrose; this is possibly due to the 8 h time point falling relatively late in the exponential phase, and most of the sucrose having been consumed by this point (Fig. 2B).Under anaerobic conditions, P TDH3 , P TEF1 and P TPI1 were all significantly repressed in acetate and sucrose, relative to their expression in glucose (Fig. 2B).Additionally, P DAN1 had significantly lower expression in acetate, inulin and sucrose compared to glucose (Fig. 2B).Conversely, P CYC7 , P HXT7 and P SUC2 showed significant induction in inulin relative to glucose.P HXT7 was induced 11-fold in inulin relative to glucose, producing a normalised RFI of 534 and thus the highest anaerobic expression measured in this study (Fig. 2B). At 24 h, expression was generally similar or slightly lower than expression at 8 h, except for P HSP26 , which has higher expression (Additional file 1: Fig S1).P JEN1 was the only promoter to experience significant carbonsource related induction under aerobic conditions, with 41 and 38-fold increases in expression in acetate and inulin respectively (Additional file 1: Fig S1).Under anaerobic conditions, P CYC1 , P CYC7 , P HSP26 , P HXT7 , P SSA1 , P SUC2 , P TDH3 , P TEF1 and P TPI1 were all significantly induced in inulin.P HSP26 , P HXT7 , and P TEF1 had the greatest change with 53, 65 and 79-fold induction compared to glucose (Additional file 1: Fig S1).Overall, promoter expression is similar in glucose and fructose, while sucrose appears to have a predominately negative effect on expression.Inulin has a predominately positive effect on expression levels, while acetate's effect on expression is closely oxygen-dependent.Thus, carbon source has a significant effect on promoter expression, with even promoters considered constitutive being induced and/or repressed in some of the carbon sources considered here.To ensure that all the promoters we characterise in vivo have sufficient signal to be detected over the background fluorescence of the mouse gut content, we selected the strongest seven promoters, as well as P JEN1 , the most carbon-inducible promoter. 
Characterisation of promoters in the murine gastrointestinal tract To assess promoter expression levels in the murine gastrointestinal tract eight promoters (P ALD6 , P HSP26 , P HXT7 , P JEN1 , P SSA1 , P TDH3 , P TEF1 and P TPI1 ) and the control strain were selected.Previous studies have highlighted the difficulty in isolating target microbes from mouse gut content using flow cytometry, due to high levels of green background fluorescence [20].Hence, we integrated an mKate2 [36] expression cassette at the XI-3 locus to allow target cells to be identified via the red channel, before green fluorescence was measured (Fig. 3A). For the study, 27 mice were started on an antibiotic cocktail 5 days prior to the study starting [37].This reduces their microbiome and improves S. boulardii colonisation in the gastrointestinal tract [37].The dual red/green promoter strains were administered daily via oral gavage for a period of 5 days.A 48 h washout period was included prior to sacrificing the mice, during which S. boulardii was not administered.The washout period reduced the likelihood of measuring yEGFP from the gavage material and ensures the cells have sufficient time in the gastrointestinal tract to begin to adapt to their surroundings.Faeces were collected on days 1, 4 and 6 of the study; small intestine, caecum and large intestine samples were collected during dissection on day 6 (Fig. 3A).Dissection samples were homogenised in a solution containing 10 mg/mL cycloheximide, immediately after collection to ensure no new yEGFP was translated following sample collection. The addition of cycloheximide (a fungicide) to the sample solution meant colonisation could not be estimated via colony forming units (CFUs).As an alternative to CFU, we decided to use the event count from the flow cytometer as a proxy for CFU.We considered every mKate2-positive event captured in the singlets gate (Fig. 3A) as equivalent to a single CFU, to reduce the likelihood of counting debris or double cells collected in the mKate2 + as a single CFU.Using this method, we found colonisation was highest in the faeces and lowest in the small intestine with 23.7 × 10 6 and 1.1 × 10 6 cells per gram of sample, respectively (Fig. 3B). Once we identified our cells of interest in the red channel, we measured the green fluorescence of those cells to determine promoter expression levels (Fig. 3A).We set an minimum threshold of 500 events, below which we would not consider a sample for analysis.This threshold ensured we only considered samples where enough data was collected in our analysis.Due to the reduced level of colonisation in the small intestine, only samples from one or two mice per promoter met the threshold for analysis for that section.However, promoter expression was quite consistent across the gastrointestinal tract (Fig. 3C and Additional file 1: Fig S2 ), including across the small intestine samples that could be analysed. Promoters P HXT7 and P TDH3 were found to be the strongest across the gastrointestinal tract (Fig. 3C and Additional file 1: Fig. S3); however, both experienced high levels of inter-animal variability (Fig. 3C).Promoter P TEF1 had lower mean normalised RFI, but their expression was less variable between animals.Similarly, promoters P SSA1 and P HSP26 had similar mean expression; however, P SSA1 exhibited much lower variation between animals (Fig. 
3C).P ALD6 and P TPI1 had lower overall expression but it is relatively consistent across the gastrointestinal tract, as well as having a low level of variability between animals (Fig. 3C). Correlation between in vitro and in vivo results Lastly, we wanted to identify which of the in vitro characterisation set-ups best described the conditions in vivo.To do this we performed a correlation analysis comparing the normalised expression under each in vitro condition to the normalised expression in the large intestine. The correlation analysis showed that there was a high degree of correlation between the in vitro and in vivo conditions.Overall aerobic conditions showed a greater degree of correlation than anaerobic ones (average R-value of 0.819 and 0.798, respectively) and the 8 h time point showed a greater degree of correlation than the 24 h time point (average R-value of 0.828 and 0.765, respectively).As for the effect of the carbon source, fructose had the strongest correlation across time points and oxygen conditions (Fig. 4 and Additional file 1: Fig S3) and acetate had the lowest (average R-value of 0.908 and 0.535, respectively).However, the in vitro condition that best described the in vivo expression levels was aerobic characterisation in sucrose, at 8 h (R-value of 0.99). Discussion S. boulardii has been identified as a prospective AMT chassis [6,8,38]; however a lack of knowledge on the translatability of in vitro promoter characterisations to in vivo applications is limiting the development of more complex S. boulardii-based AMTs.In this study, we characterise S. boulardii promoters under a variety of in vitro conditions, as well as in vivo and compare the translatability of each set of in vitro conditions. We first characterised promoters in vitro with conventional characterisation conditions.We found that the relative expression levels of the selected promoters correlated with previously published findings [13,14,16].One exception was P ALD6 ; Durmusoglu et al. [6] previously reported higher than expected expression from this promoter when they characterised the S. cerevisiae sequence in S. boulardii.However, our study found expression from the S. boulardii P ALD6 sequence to be in line with previous studies on expression in S. cerevisiae.Furthermore, we developed a protocol for anaerobic characterisation and found P DAN1 to be induced under anaerobic conditions, supporting that our protocol worked.While the level of induction is significantly lower than what has previously been reported (15-fold, as opposed to 300fold) [39], we suspect it is a consequence of the differences in cultivation and characterisation method used in the respective studies. Next, we characterised the promoters in alternative carbon sources associated with the gastrointestinal tract, diet or probiotic formulations.We found that carbon source has a significant effect on promoter expression, even for promoters generally considered constitutive.Nonetheless, we found that relative promoter expression was somewhat consistent, with strong promoters tending towards being strong in all conditions.This reflects previous findings that showed expression of S. cerevisiae promoters scales across conditions [19]. 
Our in vivo characterisation showed that colonisation is lowest in the small intestine, confirming previous findings [37].Additionally, it showed that the same promoters express at similar levels across the gastrointestinal tract; however, promoters P HXT7 , P TDH3 and P HSP26 had high levels of inter-mouse variation.Promoters P TEF1 and P SSA1 may be better choices than P HXT7 , P TDH3 and P HSP26 due to their lower inter-animal variability in this study. Comparing the promoter expression in vivo to each of the in vitro expression profiles we found that with the exception of acetate conditions, there was generally a high degree of correlation between the relative expressions of promoters in vitro and in vivo.We found that the correlation was generally stronger under aerobic conditions than anaerobic ones.This may seem contradictory given the specific inclusion of anaerobic conditions in this study due to the low-oxygen nature of the gastrointestinal tract; however the gastrointestinal tract has a radial oxygen gradient, with higher oxygen levels at the tract walls [23,40].Furthermore, a previous study characterising S. cerevisiae promoters under microaerobic conditions found a distinct microaerobic-specific expression profile [17].Taken together, these factors could explain why anaerobic characterisation did not result in significantly greater correlation. A limitation of this study is that our method determines the degree of correlation between relative promoter strengths.This makes our method ideal for determining which promoters to select when balancing the expression of components in complex biosynthetic pathways or biosensor circuits, but not necessarily for predicting in vivo therapeutic production levels.Furthermore, while we encourage others to characterise their promoters of interest under the in vitro conditions we have laid out, we accept that these characterisations are unable to predict the level inter-animal variability that can be determined by performing the in vivo characterisation.Finally, we selected the most common mouse strain and diet combination for our in vivo characterisation.Other mouse Fig. 4 Correlation of in vitro and in vivo normalised median RFI values.Pearson correlation was used to analyse the data.P-values are adjusted for 8-and 24 h comparisons using the false discovery rate method strains and diet combinations may not exhibit the same levels of correlation; however, due to the number of animal and diet combinations that may be of interest it is outside the scope of this study to test them all. Conclusion We characterized twelve S. boulardii promoters and found a high degree of similarity to their S. cerevisiae counterparts.Our study revealed carbon-source-dependent changes in promoter expression, with P JEN1 showing particularly strong carbon induction.In addition, promoter expression was generally lower in anaerobic conditions, except for P DAN1 , which exhibited higher expression under anaerobic conditions.Notably, inulin emerged as a favourable carbon source, promoting predominantly positive effects on S. boulardii promoter expression and may be a relevant prebiotic to investigate in future S. boulardii AMT tests.Overall, we hope that this study provides valuable insights into S. boulardii promoter characteristics, aiding in the selection of reliable promoters for targeted therapeutic applications. 
Promoter library selection Promoters were selected following a literature search.Selected promoters have been broadly utilised for synthetic biology, or have properties that suggest their suitability for in vivo gene expression.A full list of promoters and the rationale behind their selection can be found in Table 1. Media For transformations S. boulardii was cultured in yeastpeptone-dextrose (YPD) medium (10 g/L yeast extract, 20 g/L casein peptone and 20 g/L glucose (Sigma Aldrich)).Transformations were streaked on synthetic complete medium without uracil plates (SC U-; 6.7 g/L yeast nitrogen base without amino acids, 1.92 g/L Yeast Synthetic Drop-out Medium without uracil, 10 g/L agar and 20 g/L glucose (Sigma Aldrich)) or SC U-supplemented with geneticin (1.7 g/L yeast nitrogen base without amino acids and ammonium sulphate, 1 g/L monosodium glutamate, 1.92 g/L Yeast Synthetic Dropout Medium without uracil, 200 mg/L geneticin (G418), 10 g/L agar and 20 g/L glucose (Sigma Aldrich)).In vitro characterisations were cultured in synthetic complete (6.7 g/L yeast nitrogen base without amino acids, 1.6 g/L Yeast Synthetic Drop-out Medium without leucine and 76 mg/L leucine) with one of the following carbon sources as appropriate 20 g/L glucose, 20 g/L fructose, 19 g/L sucrose, 16 g/L inulin or 32.4 g/L sodium acetate (Sigma Aldrich).E. coli was cultured in LB supplemented with 100 mg/L ampicillin sodium salt (Sigma Aldrich).All cultures were incubated at 37 °C and 250 rpm. Strain and plasmid construction Plasmids, strains, primers, and sequences used in this study are listed in Additional file 2: Tables S2, S3, S4 and S5.All oligonucleotides and double-stranded DNA fragments (gBlocks) were ordered from Integrated DNA Technologies (IDT).yEGFP was assembled into p2909 by User cloning [41], further assemblies were performed with Gibson Assembly [42] and both transformed into One Shot ® TOP10 Escherichia coli (Thermo Fisher Scientific). S. boulardii with a uracil auxotrophy was used for as a base for all strains and obtained from previous work [37].S. boulardii was transformed according to the protocol in Durmusoglu et al. [6].Genomic integration cassettes were digested with restriction enzyme NotI (FastDgiest Enzyme, Thermo Scientific ™ ) prior to transformation.Markerless plasmids where co-transformed with pCfB6920, into strains previously transformed with pCfB2312.Genomic integration was confirmed using colony-PCR with Taq polymerase (Ampliqon).Primers flanking the integration were used to confirm the integration.Genomic DNA was extracted by boiling cells at 95 °C for 20 min in 20 mM NaOH.One single amplification band, ~ 4000 bp, on gel electrophoresis indicated a successful integration into both chromosomes.Where necessary, strains were cured for pCfB2312 and pCfB6920 after genome integration. In vitro characterisation Strains were streaked from − 80 °C cryostocks on to YPD plates and incubated aerobically at 37 °C for 48 h.This was repeated once, incubating anaerobically where appropriate.Experimental and pre-cultures were grown in 250 µL media in a 96-deep-well plate.Pre-cultures were inoculated with a single colony, in triplicate and incubated overnight.Experiment cultures were inoculated at an OD600 nm of 0.2.For plate reader experiments, cells were incubated in a Synergy H1 Microplate Reader (BioTek) for 48 h at 37 °C with continuous shaking and the following setting for measuring yEGFP: excitation 485/20, emission 528/20, gain 80. 
Growth rate analysis was performed using the Qurve app [43].For flow cytometry experiments, at the relevant time points, 20 µL of experiment culture was diluted in 180 µL experiment solution (1X filtered PBS, 100 µg/mL cycloheximide and 2.5% DMSO) in a clear, flat-bottom microplate and incubated aerobically on the benchtop for 20 min.Flow cytometry was performed using a Novocyte Quanteon ™ (Agilent).The following settings were used: FSC and SSC were measured with gain 400; GFP was measured using a blue laser at 525 nm and with gain 470; an FSC-H threshold of 8000 was used.Fluorescence data was collected from 2000 cells falling in the yeast gate for each sample and analysed using FlowJo software.All anaerobic work was performed using a Whitley A95 anaerobic workbench and anaerobic conditions were maintained for incubation using the BD GasPak system. In vivo characterisation All animal experiments were conducted according to the Danish guidelines for experimental animal welfare, and the study protocols were approved by the Danish Animal Experiment Inspectorate (license number 2020-15-0201-00405).The study was carried out in accordance with the ARRIVE guidelines [44]. All in vivo experiments were performed on male C57BL/6NTac mice (6 weeks old; Taconic Bioscience).All mice were housed at room temperature on a 12 h light/dark cycle and given ad libitum access to water and a standard chow diet (Safe Diets, A30).All mice had 1 week of acclimatisation prior to antibiotic treatment and randomised according to body weight.The researchers were blinded in all mouse experiments. Drinking water was supplemented with an antibiotic cocktail containing 0.3 g/L ampicillin sodium salt, 0.3 g/L kanamycin sulfate, 0.3 g/L metronidazole, and 0.15 g/L vancomycin hydrochloride for the duration of the study [37].Following 5 days antibiotic treatment, mice were administered ~ 10 8 CFU S. boulardii via intragastric gavage for 5 days.Mice were divided into 9 groups (n = 3), receiving either the control strain or a yEGFP expressing strain.Mice were euthanized by cervical dislocation following a 48 h wash-out period, followed by collection of gut content.Gut content was collected in pre-weighed 1.5 mL Eppendorf tubes containing 1 mL of 1 × PBS, 50% glycerol and 10 µg/mL cycloheximide and weighed again to determine content weight.All samples were kept on ice prior to treatment.The samples were homogenised by vortexing at ~ 2400 rpm for 20 min, then spun down at 100 g for 30 s. 20 µL of supernatant was added to 180 µL 1X PBS in a clear, flat-bottom microplate and transferred to the flow cytometer within 45 min of dissection.Flow cytometry gates were determined as follows: one gate was created in the red channel based on the red population present in faeces samples collected on day 1 and 4 of the study.The mKate2 + population was subsequently gated for single cells by comparing the front scatter area vs front scatter height, in order to exclude potential debris collected in the mKate2 + gate, as well as exclude double cells which could produce a higher yEGFP level than equivalent single cells.The median fluorescence of the singlets population was taken as the fluorescence read that sample (Additional file 1: Fig S4).The same gates were applied to every sample regardless of gut section. 
Statistical testing Statistical analyses were performed in RStudio version 2023.06.0 with the tidyverse, rstatix and DescTools packages. The false discovery rate method was used to correct for multiple comparisons and the statistical significance level was set at p < 0.05.

Fig. 1 Promoter function and characterisation in glucose. A Schematic overview of native roles of promoter genes. B Aerobic growth of the promoter strains over time. C Aerobic relative fluorescence intensity (RFI) of yEGFP produced by the promoter strains over time. Dashed lines in (B) and (C) indicate 8 and 24 h, the time points that were selected for further investigation by flow cytometry. D Mean of the normalised median relative fluorescence intensity of yEGFP from the 8 h time point and (E) from the 24 h time point of the aerobic (blue) and anaerobic (green) flow cytometry experiments. All plots display the mean of three biological replicates, from independent pre-cultures. Bar plots are displayed as the mean ± SD. Points represent individual replicates. * p < 0.05 and ** p < 0.005. (D) and (E) were analysed with t-tests and p-values were adjusted for false discovery rate.

Fig. 2 Promoter expression in simulated gut conditions. A Normalised median relative fluorescence intensity of yEGFP from the 8 h time point. Points represent the individual replicates and error bars represent the standard deviation. B Log2 of the fold change in normalised RFI relative to the level in glucose. All plots show the mean of three biological replicates from independent pre-cultures. Data was analysed with a one-way ANOVA and Dunnett's post hoc test, with Control from the same Carbon_Oxygen condition as a reference. P-values were adjusted for all comparisons. * p < 0.05.

Fig. 3 Characterisation of promoters in the murine gastrointestinal tract. A Schematic overview of the in vivo characterisation, representing the strain design, the animal study and the flow cytometry analysis. 27 mice were divided into 9 groups, with three mice per strain of S. boulardii. B Colonisation across the gastrointestinal tract, as determined from flow cytometry. C Normalised median relative fluorescence intensity of yEGFP across the gastrointestinal tract, from mKate2+ cells. Bar plots are presented as the mean + SD. Points represent the mean of two technical replicates. Faeces data shown here come from samples collected on day 6 of the study.

Table 1 Summary of native promoter functions and rationale for inclusion in the study. TPI1: triose phosphate isomerase, involved in glycolysis; a common promoter for protein production; medium strength; 99.86% sequence identity.
7,047.6
2024-01-07T00:00:00.000
[ "Biology", "Medicine" ]
Do Genomic Factors Play a Role in Diabetic Retinopathy? Although there is strong clinical evidence that the control of blood glucose, blood pressure, and lipid level can prevent and slow down the progression of diabetic retinopathy (DR) as shown by landmark clinical trials, it has been shown that these factors only account for 10% of the risk for developing this disease. This suggests that other factors, such as genetics, may play a role in the development and progression of DR. Clinical evidence shows that some diabetics, despite the long duration of their diabetes (25 years or more) do not show any sign of DR or show minimal non-proliferative diabetic retinopathy (NPDR). Similarly, not all diabetics develop proliferative diabetic retinopathy (PDR). So far, linkage analysis, candidate gene studies, and genome-wide association studies (GWAS) have not produced any statistically significant results. We recently initiated a genomics study, the Diabetic Retinopathy Genetics (DRGen) Study, to examine the contribution of rare and common variants in the development of different phenotypes of DR, as well as their responsiveness to anti-VEGF treatment in diabetic macular edema (DME). Our preliminary findings reveal a novel set of genetic variants involved in the angiogenesis and inflammatory pathways that contribute to DR progression or protection. Further investigation of variants can help to develop novel biomarkers and lead to new therapeutic targets in DR. Introduction Diabetic retinopathy (DR) is a microvascular complication of diabetes that involves blood-retinal barrier alteration, inflammation, and neuronal dysfunction [1][2][3]. According to the 2017 International Diabetes Federation Atlas, about 425 million people have diabetes mellitus in the world, and by 2045 this number is projected to reach 629 million [4]. With 35% of the diabetic population afflicted, DR is the most common cause of blindness among middle-aged working adults [5]. Duration of diabetes is the strongest predictor for the progression of DR. Interestingly, some diabetics do not develop DR at all, or only develop mild DR (few microaneurysms), in spite of a long duration of diabetes [6]. Similarly, not all diabetics develop the sight-threatening phenotype of diabetic macular edema (DME) or proliferative diabetic retinopathy (PDR) [7][8][9][10][11]. Furthermore, the response to anti-VEGF (vascular endothelial growth factor) drugs in DME patients is variable, with only 27-45% patients responding well (>15 letters of vision improvement) [12]. The variability in phenotype of DR and anti-VEGF treatment responsiveness in DME suggests a potential role for other factors in the development of DR. Several risk factors have been associated with the prevalence of DR. Large-scale epidemiological studies revealed that duration of diabetes, hyperglycemia, hypertension and hyperlipidemia are the major risk factors associated with this disease (see Table 1) [7,[13][14][15][16][17][18]. Although there is strong clinical evidence supporting the role of blood glucose, blood pressure, and lipid level in controlling the slow progression of DR (Diabetes Control and Complications Trial [DCCT], United Kingdom Prospective Diabetes Study [UKPDS], Action to Control Cardiovascular Risk in Diabetes [ACCORD]) [14][15][16][17][18][19][20][21] the Wisconsin Epidemiologic Study of Diabetic Retinopathy (WESDR) showed that hemoglobin A1C, cholesterol and blood pressure only account for 10% of the risk for developing retinopathy [22]. 
Furthermore, a follow-up statistical analysis of the Diabetes Control and Complications Trial (DCCT) revealed that the glycemic exposure (duration of diabetes, HbA1C level) explains only 11% of the decrease in retinopathy risk [23]. Other factors, genetic and/or environmental, may explain the remaining 89% of the variation in retinopathy risk [23]. Together, these observations suggest that genetic factors may play a role in the development and progression of DR.

Table 1 summarizes the landmark studies of these risk factors:
- Study [7]: no association between glycemic control and prevalence of DR.
- Wisconsin Epidemiologic Study of Diabetic Retinopathy (WESDR) [8,13]: nearly all type 1 diabetic persons and ~80% of type 2 diabetics develop some retinopathy after 20 years of diabetes; proposed mechanisms include advanced glycation end-product combinations, high plasma carboxyethyl-lysine, and pentosidine.
- Hyperglycemia, WESDR [8,13]: the incidence of diabetic macular edema (DME) over a 10-year period was associated with a higher concentration of glycosylated hemoglobin.
- Diabetes Control and Complications Trial (DCCT) [16]: tight glucose control (HbA1c < 6.05%) in type 1 diabetics prevented the development of DR by 76% and slowed progression by 54%; worsening of retinopathy occurred in ~10% of DR patients with too tight glucose control (HbA1c < 6.05%), resulting in cotton-wool spots and blot hemorrhages, attributed to increased levels of IGF-1 or insulin that can further upregulate VEGF.
- Epidemiology of Diabetes Interventions and Complications (EDIC) [21]: 10 years after the end of the DCCT study, the benefit of early tight control persisted, with the risk of retinopathy progression reduced by 53%; a proposed mechanism is histone post-translational modification by acetylation or methylation.
- Action to Control Cardiovascular Risk in Diabetes (ACCORD) Eye Study [19]: in type 2 diabetic persons (HbA1c level of 6.4% in the intensive group vs. 7.5% in the conventional group), DR progression was reduced by 35% over a 4-year span; the study was discontinued after 3.7 years due to mortality in the tight glucose control group.
- Study [14]: targeted a systolic blood pressure of <150 mmHg vs. <180 mmHg with standard control.
- ACCORD Eye Study [19]: no benefit of tight blood pressure control observed.
- Action in Diabetes and Vascular Disease: Preterax and Diamicron Modified Release Controlled Evaluation (ADVANCE) [18]: no benefit of tight blood pressure control observed.

Mechanistic studies have identified four major hyperglycemia-induced biochemical pathways associated with DR (polyol, advanced glycation end products, protein kinase C, and hexosamine). These pathways have been shown to lead to detrimental downstream cascading events (oxidative stress, inflammation, and vascular dysfunction) [24]. Despite the success in identifying these biochemical mechanisms and the ability of pharmacological interventions to block these pathways in animal models, these therapeutic strategies have not proven to be efficacious in human clinical trials. Based on these experimental and clinical outcomes, it has become critical to explore alternative factors which may be involved in the development of DR.

Role of Genetics in DR

Since Gregor Mendel's introduction of the concept of hereditary transmission, the field of genetics has flourished tremendously [25]. Rapid developments in technology have set the stage for gene mapping and the investigation of numerous disease-associated genetic variants. What started as simple observations in pea plants has evolved into sophisticated methods for investigating the genetic components of complex, multifactorial diseases such as DR [26,27].
Familial clustering studies have consistently shown the involvement of genetics in DR. Early observations of non-insulin-dependent diabetic twins revealed 95% agreement in the degree of severity of this disease [28,29]. Additionally, the DCCT study has shown that diabetic first-degree family members of study subjects who progressed to severe non-proliferative diabetic retinopathy (NPDR) or PDR had a risk ratio of 3.1 for progression compared with those study subjects who did not have such progression [30]. Furthermore, differences in the frequency and severity of DR have long been observed among different ethnic populations [31][32][33]. Together these studies provide evidence in support of the role of genetics in DR. Thus, in this review, we discuss (1) the current understanding of DR genetics and (2) assess recent key studies. Lastly, we propose strategies to address the challenges of previous studies with the goal of furthering insight into the underlying genetic architecture of DR. Although the role of genetics in DR is well recognized, the precise gene variant(s) underlying this disease remain elusive. While studies have identified many DR-associated genetic variants, only a few have been replicated. However, these confirmatory studies have all resulted in weak associations. Thus, it is likely that these results are indicative of the elaborate disease mechanisms underlying DR. Revisiting previous studies may help in understanding the pitfalls as well developing new strategies to further understand the genetics of DR. Heritability and Linkage Analysis Early studies of sibling pairs have long established the role of genetics in DR. Linkage analyses have provided a foundational method that relies on the physical proximity of non-random associations of alleles of chromosomal mutations to identify disease-associated links [34][35][36]. Thus far, this method has had a long history of success in the identification of variants in monogenic diseases [37][38][39]. However, success in elucidating the role of genetics in complex diseases, including DR, has been arduous [40,41]. To date, cohorts of Pima Indians and Mexican Americans have been studied for DR-associated linkages [42][43][44]. However, these studies have yielded varying results. Interestingly, no common linkage regions were identified in two separate analyses of Pima Indians, despite examination of the same cohort. [42,43]. While one study with Pima Indians demonstrated linkage in the same chromosomal region (1p36) as the Mexican American cohort, the threshold suggestive of linkage by conventional criteria (Logarithm of Odds score > 3.3) was not met. [42][43][44]. While these studies provided strong evidence for genetic contribution in this disease, the lack of reproducibility of the identified DR-associated linkages may be indicative of the involvement of additional factors. The genetic understanding of DR presents a unique challenge because of the etiological mechanisms involved. While DR is recognized as a complex multifactorial disease, understanding disease pathogenesis is further complicated by virtue of retinopathy being a mere, but detrimental, complication of another complex disease (diabetes) [45,46]. To address this challenge, several approaches have been utilized to understand the underlying role of genetics in DR. Candidate Gene Association Studies Candidate gene association is an epidemiologic approach frequently used to understand the pathological processesinvolved in disease [47]. 
In contrast to gene mapping methods, in which the precise location of genes on the chromosome can be linked to disease, candidate gene association relies on hypothesis-driven inferences with an emphasis on pathological observations [48,49]. Supported, but limited, by clinical observations and biochemical pathway knowledge, this approach provides a practical method for the identification of genetic variants. To date, many candidate genes have been associated with DR: vascular endothelial growth factor (VEGF), hypoxia-inducible factor 1-alpha (HIF1A), and erythropoietin (EPO) genes [50][51][52][53]. Additionally, several glucose metabolism, vascular tone, blood pressure regulation, and inflammatory-associated genes have been identified (receptor for advanced glycation end product (RAGE), aldose reductase (AKR1B1), glucose transporter 1 (SLC2A1), angiotensin-1 converting enzyme (ACE), nitric oxide synthase 3 (NOS3), and intracellular adhesion molecule-1 (ICAM1)) [51,53,54]. However, these studies have yielded variable results. These candidate genes have been previously reviewed elsewhere [55,56]. Importantly, the Candidate gene Association Resource (CARe) study showed that, among 39 genes known to be associated with DR or diabetes, three single nucleotide polymorphisms in P-selectin were associated with DR [57]. None of the genes reported in the candidate gene studies have been replicated in other cohorts. Here, we highlight the vascular endothelial growth factor (VEGF) gene because of its therapeutic success. Vascular Endothelial Growth Factor The importance of VEGF in ocular neovascularization was first established in studies of laser-induced hypoxia in non-human primate models [58]. These studies revealed elevated levels of VEGF in aqueous fluid that correlated with the severity of neovascularization. This key finding has since made VEGF a strong proponent in the development of DR [59]. The role of VEGF was further confirmed when clinical observations revealed elevated VEGF levels in the vitreous and aqueous fluid of DR patients [60]. To date, several genetic studies have identified various VEGF polymorphisms associated with DR [50,61,62]. However, these studies have produced variable results [51,57,63]. The putative role of VEGF in ocular neovascularization led to adaptation of anti-VEGF therapies for the treatment of DR [64]. Interestingly, despite the success of anti-VEGF in restoring visual acuity in PDR patients, success has been limited in DR patients with diabetic macular edema (DME), [65] indicating the possible influence of genetic polymorphisms. In a recent study, VEGF polymorphism C634G was identified as a genetic risk factor for DME and its presence resulted in a 'good response' outcome to anti-VEGF therapy [66]. However, VEGF polymorphism C634G as a pharmacogenetic marker has yet to be confirmed in follow-up studies. Additionally, it should be noted that VEGF polymorphism C634G has yielded varying results among different population groups [67][68][69]. While the candidate gene association approach has provided valuable genetic and mechanistic insight, strategies that address study variability and lack of reproducibility are yet emerge. Nevertheless, with rapid growth and advancement in the field comes the promise of ever-evolving approaches that can aid in expanding the current understanding of the role of genetics in DR. 
Genome-Wide Association Studies (GWAS)
Genome-wide association study (GWAS) approaches have enabled the identification of hundreds of genetic variants associated with complex diseases by screening single nucleotide polymorphisms (SNPs) across the complete genome for disease associations [70]. The first successful GWAS identified disease-associated SNPs in three independent studies of age-related macular degeneration (AMD) [71][72][73]. The success of these studies is commonly attributed to the strong heritability of the disease, with the identified disease-associated variants accounting for roughly 50% of the heritability of AMD [74,75]. However, this level of explained heritability is not shared by other complex diseases, including DR; for example, identified variants explain only about 20% and 6% of the heritability of Crohn's disease and type 2 diabetes, respectively [76,77]. Despite this case-to-case variability, GWAS has proven to be a powerful tool for the identification of SNPs in numerous complex diseases [78]. To date, GWAS has been used to identify DR-associated risk genes in various populations: Texan Mexican-Americans, American Caucasians, Taiwanese, Chinese, Japanese, and Australians (see Table 2) [79][80][81][82][83][84][85]. These studies have been extensively reviewed elsewhere [55,56,86,87]. Recently, GWAS identified genetic variation near the GRB2 gene (downstream of rs9896052, on chromosome 17q25.1) to be associated with sight-threatening DR [85]. To date, these results are the first to be confirmed with reproducible results in independent cohorts. Previously, all DR GWAS had yielded variable results and lacked reproducibility. One possible explanation for the varying outcomes is the inconsistency in the definitions of DR cases and controls used in these studies. Additionally, a unique challenge of GWAS is the high probability of false positives, owing to the vast number of variants tested across the whole genome. A further complication is that variants identified by this method are often located in non-coding regions of the genome [78]. Since non-coding regions are often presumed to lack direct functional relevance [74], it has been hypothesized that exome-focused approaches may yield a better understanding of genetic associations in DR.

Whole Exome Sequencing
Whole exome sequencing (WES) methods rely on genome mapping restricted to the protein-coding (exome) regions [88]. Although the exome comprises only ~1% of the human genome, it has been speculated to harbor ~85% of disease-associated variants [74]. Thus, WES has emerged as a novel and efficient method to identify gene variants that could help explain the role of genetics in complex diseases such as DR. Recently, the WES approach has been used to identify genetic variants associated with DR in two independent studies (see Table 3) [89,90]. Shtir and colleagues based their study on an 'extreme' phenotype design to search for 'protective' gene variants in a Saudi population, hypothesizing that using stringent criteria for study controls would enhance the probability of yielding robust candidate variants [89]. Thus, individuals with at least 10 years of diabetes and no sign of retinopathy served as controls, after excluding those with high myopia, advanced glaucoma, and ocular ischemic syndrome, conditions previously shown to offer protection from DR. The DR phenotypes studied were NPDR and PDR of varying severity. Variants in three genes (NME3, LOC728699, and FASTK) were identified as protective.
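Analytically, such WES designs come down to reducing annotated variant calls to rare, protein-altering candidates before cases and controls are compared, often as a per-gene burden. The sketch below illustrates that filtering step; the variant records, column names, and thresholds are hypothetical examples for illustration, not the pipelines used by the studies discussed here.

```python
# Illustrative rare-variant filtering step of a WES analysis.
# The variant records and thresholds are hypothetical; real pipelines start
# from annotated VCF output rather than a hand-built table.
import pandas as pd

variants = pd.DataFrame([
    {"gene": "GENE_A", "consequence": "missense_variant",   "maf": 0.0004, "cases": 3, "controls": 0},
    {"gene": "GENE_A", "consequence": "synonymous_variant", "maf": 0.0400, "cases": 5, "controls": 6},
    {"gene": "GENE_B", "consequence": "frameshift_variant", "maf": 0.0002, "cases": 2, "controls": 0},
    {"gene": "GENE_C", "consequence": "missense_variant",   "maf": 0.1200, "cases": 4, "controls": 5},
])

damaging = {"missense_variant", "frameshift_variant", "stop_gained", "splice_donor_variant"}
rare_coding = variants[(variants["maf"] < 0.01) & (variants["consequence"].isin(damaging))]

# Simple per-gene burden of qualifying alleles in cases versus controls.
burden = rare_coding.groupby("gene")[["cases", "controls"]].sum()
print(burden)
```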
More recently, Ung and colleagues used a similar approach to analyze an African American (AA) Type 2 diabetic cohort from the African American Proliferative Diabetic Retinopathy Study and a mixed ethnicity (ME) cohort that included Type 1 and Type 2 diabetic participants of African American, Caucasian, and Hispanic backgrounds [90]. The DR phenotype under study was PDR, and these cases were compared to the AA Type 2 diabetic control cohort, which had a duration of diabetes of at least 10 years. Together, the AA and ME cohorts revealed a potential role for 25 novel variants in 19 genes associated with DR. Furthermore, expression-level validation studies supported a potential role in DR pathogenesis for six of the identified candidate genes. However, one major drawback of this study was the use of the AA cohort as a control for both the AA and ME cohorts. To our knowledge, these have been the only DR WES studies to date. While both studies revealed novel DR-associated gene variants, these independent studies yielded variable results. The discrepancy between these results may be due to population heterogeneity and varying case definitions for DR phenotypes. However, one major limitation shared by these studies is the definition of controls as having no retinopathy despite only 10 years of diabetes. As it may take up to 15 years to develop some features of DR, as shown in the WESDR study, controls with no DR should ideally be chosen from patients with a longer duration of diabetes (at least 20 years). Despite the success in the identification of DR-associated variants, these studies must be replicated, and further functionally validated, before meaningful biological conclusions can be drawn. Nevertheless, these studies have provided valuable insight into the role of genetics in DR.

Lessons Learned and Road Ahead
At present, the genetic understanding of DR remains convoluted. Efforts to identify the genetic factors responsible for DR have used traditional linkage analysis, candidate gene studies, GWAS, and WES analysis [42][43][44][45][50][51][52][53][54]57,[79][80][81][82][83][84][85]89,90]. Out of three linkage studies done in Pima Indians and Mexican Americans, only one study showed a logarithm of odds score of 3.01 for single-point and 2.58 for multipoint analysis at 1p36 in Pima Indians (the LOD statistic behind these thresholds is sketched at the end of this section). Additionally, association studies for numerous candidate genes, including VEGF, have yielded variable results [57]. Reasons for the lack of overall success with the candidate gene studies include failure to comprehensively capture variation in the genes of interest and incorrect hypotheses about which candidate genes are involved in the disease. Further, GWAS and WES studies for DR have also not produced genome-wide statistically significant results [79][80][81][82][83][84][85]89,90]. These studies have lacked success due to (1) variability in case definitions, with different DR phenotypes (NPDR, PDR, and DME) examined within and between studies, (2) inconsistently defined controls with regard to the duration of diabetes, and (3) population heterogeneity (e.g., discovery and replication samples coming from completely different ethnic populations). With such heterogeneity in phenotype definitions, let alone population heterogeneity, it is not surprising that the findings have varied or have not been successful at all. If identified, the genetic factors that contribute to DR could be of added clinical value in determining a person's risk of DR.
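For reference, the LOD score invoked above compares the likelihood of the observed family data under linkage at recombination fraction θ with the likelihood under free recombination (θ = 1/2); this is the standard textbook formulation, not anything specific to the cited studies:

\mathrm{LOD}(\theta) \;=\; \log_{10}\frac{L(\text{data} \mid \theta)}{L(\text{data} \mid \theta = 1/2)}

On this scale, the conventional threshold of 3.3 corresponds to odds of roughly 2000:1 in favor of linkage (since 10^{3.3} is approximately 2000), which is why the 1p36 scores of 3.01 and 2.58 remain only suggestive rather than conclusive evidence.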
Against this background, we have recently initiated a genomics study, the Diabetic Retinopathy Genetics (DRGen) Study, in an effort to address the challenges of previous studies, to further understand the contribution of rare variants and environmental factors, to examine responsiveness to anti-VEGF treatment, and to determine whether there are variants that protect against the initiation of DR.

Diabetic Retinopathy Genetics (DRGen) Study Approach
The DRGen Study is a collaboration between the UNM School of Medicine and Harvard's Joslin Diabetes Center. Using a well-defined, clinically supported phenotypic strategy, we seek to better understand the role of rare variants in DR progression, or protection, and in the anti-VEGF response in DME. We propose for the first time a comprehensive genetic study of the genes and genetic variations involved in the inflammatory and angiogenesis pathways (Figure 1). Using whole exome sequencing (WES) technology, we aim to test the coding regions of all human genes for associations with DR. In addition to WES, all samples will be genotyped for ancestry-informative markers for purposes related to admixture mapping. Our interest lies in genes known to be involved in inflammatory and angiogenesis pathways, as both processes are known to play a role in DR pathology but have previously shown only weak associations (in heterogeneous sample collections) with DR [91,92]. To date, two studies, as described above, have utilized the WES technique in extreme DR phenotype patients. However, both studies used a very loose definition of the 'extreme' phenotype (i.e., no diabetic retinopathy despite at least 10 years of diabetes) [89,90]. The inclusion of participants with such a short duration of diabetes may lead to the misclassification of controls, given the results of Harvard's Joslin Medalist Study [7]. Thus, the DRGen study places an emphasis on the phenotypic heterogeneity of DR.

Phenotypic Heterogeneity in DR
After a period of no DR (clinical absence of vascular lesions in the retina) for a variable period of time (7-10 years), retinopathy develops. DR is classically divided into a non-proliferative (NPDR) and a proliferative (PDR) stage (Figure 2) [6,93,94]. The earliest clinical signs are microaneurysms, followed by dot and blot intraretinal hemorrhages. With the leakage of lipid (hard exudates) and plasma (edema), diabetic macular edema (DME) develops in some NPDR patients. Classically, the natural course of DR is thought to be no DR, then mild-moderate NPDR, followed by PDR with longer duration of diabetes. However, DR appears to be a heterogeneous disease in which not every patient goes through the same sequence of events. Furthermore, not all diabetics develop DME or PDR [6]. New vessels grow in a subset of advanced DR patients, resulting in pre-retinal and vitreous hemorrhage, and eventually traction retinal detachment may occur in some patients. The WESDR Study has clearly shown that only about 50% of type 1 diabetics will develop PDR during their lifetime in spite of long durations of diabetes [8]. Currently, it is not known what protects the other 50% of diabetics from PDR.

'Extreme' Phenotype
Furthermore, some diabetics, despite long durations of diabetes (20 years or longer), do not show any sign of DR, or show only minimal NPDR (a few microaneurysms).
Harvard's Joslin Medalist Study reported this "extreme phenotype" in about 40% of their diabetics with a duration of diabetes of 50 years or longer [7]. In fact, diabetics who did not develop advanced retinopathy over long durations of diabetes are unlikely to experience further worsening of retinopathy once they have had 17 or more years of follow-up. However, it is unknown what factors "protect" these patients from developing DR. Thus, the DRGen study aims to harmonize clinical data to reduce phenotypic heterogeneity across the main DR phenotypes: NPDR, PDR, and DME (Figure 2). Additionally, cases and controls will be defined consistently for the different samples studied. Important co-variates such as the duration of diabetes, hemoglobin A1C levels, and ancestry will also be included in the harmonization. Studying extreme phenotypes will further reduce issues related to the phenotypic heterogeneity seen in previous studies.

Figure 2. Based on our clinical observations and large epidemiological studies, the hypothesis proposed is that not every diabetic retinopathy (DR) patient goes through the same sequence of events. After a period of diabetes, patients can develop mild non-proliferative diabetic retinopathy (NPDR), followed by moderate NPDR. 20% of type 2 diabetes patients develop proliferative diabetic retinopathy (PDR), while 50% of type 1 diabetes patients develop PDR. Among PDR patients, only 15% develop concurrent diabetic macular edema (DME), and the other 85% never develop any macular edema. Interestingly, 5% of diabetic patients never develop DR or have only mild NPDR (1 or 2 microaneurysms, as indicated by the white arrow), in spite of 20 or more years of diabetes ("Extreme Phenotype"). Images of mild NPDR and moderate NPDR phenotypes courtesy of the ETDRS Diabetic Retinopathy Severity Scale, Ophthalmology (1991).

DME and PDR: Two Distinct Disease Processes
Interestingly, our preliminary findings revealed that DME and PDR may be two distinct disease processes [95]. In a retrospective cross-sectional study at UNM, we examined a sample of 165 eyes (majority Hispanics and Native Americans) with a new diagnosis of PDR with active neovascularization and 166 eyes with a new diagnosis of DME. Among the PDR eyes, only 15.7% (95% CI 9.5-21.8%) had DME by clinical examination or optical coherence tomography measurements of central retinal thickness (Figure 3). Thus, the majority of PDR patients did not have concurrent DME, leading us to ask: why do not all patients with PDR show concurrent vascular leakage (DME) in spite of high VEGF levels? Similarly, among the eyes with DME, only 20.3% (95% CI 13.5-27.1%) had concurrent PDR. Thus, the majority of DME patients did not have concurrent neovascularization or PDR.
Stratified risk factor assessment demonstrated that neither gender, age, type of diabetes, HbA1C, mean arterial pressure (MAP), nor LDL control was significantly associated with the development of DME in the PDR patients, or of PDR in the DME patients. Therefore, PDR and DME appear to represent two distinct disease processes of the same spectrum, possibly driven by distinct molecular mediators and possibly distinct genetic factors (Figure 3). Furthermore, the response to anti-VEGF injections is variable in DME and PDR patients. New retinal vessels in PDR regress completely with one or two anti-VEGF injections in most patients [96], whereas such a robust effect is hardly seen in DME patients [97]. Interestingly, intravitreal anti-VEGF injection is the first line of treatment in DME patients, although the response is suboptimal in many. The variability in treatment responsiveness or differential efficacy of anti-VEGF drugs in DME and PDR suggests that there may be separate molecular pathways and genetic risk factors for the development of DME and PDR. To date, all three major clinical trials (DRCR, RIDE/RISE, VISTA) with anti-VEGF drugs have shown that only 27-45% of DME patients show a three-line visual acuity improvement [98,99].
A post hoc analysis of the DRCR Protocol I data revealed that another 30-40% of DME patients do not respond completely to anti-VEGF therapy [12]. Based on our preliminary findings and clinical evidence, we hypothesize that genetic factors may play a significant role in susceptibility to DR, as well as in the response to anti-VEGF therapeutics. DME and PDR appear to be driven by separate molecular mediators, and inter-individual variation in responsiveness ("good responders" vs. "poor responders") to anti-VEGF therapy in DME may be attributable, in part, to genetic variants. Thus, VEGF polymorphisms may prove more useful as pharmacogenetic markers than as disease-identifying variants.

Admixture Mapping
The heterogeneity of the populations studied previously may explain the lack of reproducibility of DR genetics studies. It is well known that DR, as with diabetes generally, varies across ethnicities [100,101]. Interestingly, studies of the same ethnic group have also failed to reproduce similar results. Previous studies have relied on self-reported ancestry, which can be problematic if cases and controls are unintentionally drawn from different ethnic groups: if one group has a higher disease prevalence than another, it will be overrepresented among cases and underrepresented among controls, yielding a high probability of false-positive results [102,103]. Thus, to overcome this limitation, admixture mapping has been used to identify the genetic factors associated with a phenotype in heavily admixed populations [104]. Admixture-based association analyses rely on methods to quantify the degree of ancestry both across the genome as a whole and within defined genomic regions [105]. To this end, the DRGen study will use the Infinium Multi-Ethnic Global SNP Array, not to classify individuals by their ancestry, but rather to quantify degrees of admixture in particular genomic regions, which can then be correlated with the phenotype to identify chromosomal regions harboring variants likely to be associated with the DR phenotype [106].

Preliminary Findings
Using the aforementioned study design, two cohorts of patients were selected from the DRGen study population established at the UNM School of Medicine [107]. Briefly, we analyzed an 'extreme' phenotype cohort (no DR despite >25 years of diabetes; n = 6) and an 'advanced' DR cohort (PDR within 15 years of diabetes; n = 6). All subjects were matched for gender and age. After obtaining informed consent, DNA was isolated from white blood cells and WES was performed using the SureSelect All Human XT v5 exome kit and the Illumina NovaSeq platform, followed by an in-house downstream analysis pipeline to align the sequence reads and complete variant calling and annotation. We tested for enrichment of "risk" alleles in cases (PDR group) with MAF <0.05% and identified four heterozygous missense variants and a frameshift mutation in the PDR group. The analysis of rare coding variants revealed a novel set of genetic variants involved in the angiogenesis and inflammatory pathways that may contribute to DR progression (KLF17, ZNF395, CD33, PLEKHG5, and COL18A1) or protection (NKX2.3). These variants are of particular interest, as KLF17 and ZNF395 have been postulated to promote the downstream activation of VEGF [108,109]. Similarly, CD33, a transmembrane receptor expressed on cells of myeloid lineage such as monocytes, has also been suggested to play a role in VEGF expression and inflammation [110].
Furthermore, PLEKHG5 encodes a protein that activates the NFKB1 signaling pathway, which is known to be involved in DR pathogenesis [111]. Additionally, it is well known that COL18A1 encodes the precursor of endostatin, a potent endogenous angiogenesis inhibitor [112]. NKX2.3, a member of the NKX transcription factor family, has been shown to regulate genes involved in the immune and inflammatory response, cell proliferation, and angiogenesis [113]. While these variants have not been well studied in the context of DR, our preliminary analysis of mRNA isolated from human retinal endothelial cells treated with high glucose has shown increased expression of COL18A1, ZNF395, and PLEKHG5 (p < 0.0001). Further validation of these variants is necessary to confirm our findings. At present, the DRGen study is actively enrolling patients with selected DR phenotypes.

Future Perspectives
Although clinical evidence indicates that genetic factors are implicated in DR, their precise role remains elusive. We recognize that our preliminary findings represent the "tip of the iceberg", and therefore future plans include acquiring a larger study cohort and collecting additional biospecimens (blood and vitreous). Furthermore, we acknowledge that DR is not a homogeneous phenotype; thus, we will continue to harmonize the clinical data as described herein. The rare variant hypothesis represents the beginning of our vision of a comprehensive strategy involving clinical, genomic, and molecular data coupled with traditional statistical analyses and higher-dimensional data analyses (e.g., deep learning). Furthermore, we hope to better understand the role of genetics in the variable anti-VEGF response observed in DME patients. We believe that the approach of harmonizing phenotypes and applying stringent patient cohort criteria may lead to the identification of novel genetics-based drug targets for DR. We recognize that such results can help advance personalized medicine, potentially improving diagnosis and the choice of more efficacious treatments for patients. Immediate plans towards these future directions include creating a repository of blood samples with extracted genetic material and clinical phenotype information as a resource for the research community.
FAS-Based Cell Depletion Facilitates the Selective Isolation of Mouse Induced Pluripotent Stem Cells Cellular reprogramming of somatic cells into induced pluripotent stem cells (iPSC) opens up new avenues for basic research and regenerative medicine. However, the low efficiency of the procedure remains a major limitation. To identify iPSC, many studies to date relied on the activation of pluripotency-associated transcription factors. Such strategies are either retrospective or depend on genetically modified reporter cells. We aimed at identifying naturally occurring surface proteins in a systematic approach, focusing on antibody-targeted markers to enable live-cell identification and selective isolation. We tested 170 antibodies for differential expression between mouse embryonic fibroblasts (MEF) and mouse pluripotent stem cells (PSC). Differentially expressed markers were evaluated for their ability to identify and isolate iPSC in reprogramming cultures. Epithelial cell adhesion molecule (EPCAM) and stage-specific embryonic antigen 1 (SSEA1) were upregulated early during reprogramming and enabled enrichment of OCT4 expressing cells by magnetic cell sorting. Downregulation of somatic marker FAS was equally suitable to enrich OCT4 expressing cells, which has not been described so far. Furthermore, FAS downregulation correlated with viral transgene silencing. Finally, using the marker SSEA-1 we exemplified that magnetic separation enables the establishment of bona fide iPSC and propose strategies to enrich iPSC from a variety of human source tissues.

Introduction
Pluripotent stem cells have long been considered a potent source for cell-based therapies. In 2006, Shinya Yamanaka's groundbreaking study paved the way to convert somatic cells into the so-called induced pluripotent stem cells (iPSC) [1], opening up new avenues for disease-specific drug modeling and patient-specific therapies. Rapidly, iPSC technology was proven to be a versatile tool for derivation of iPSC from healthy [2;3] and diseased [4;5] individuals, and a proof-of-principle study demonstrated successful treatment of a genetic disorder via an iPSC intermediate stage [6]. Reprogramming initiation was shown to be driven by a mesenchymal-to-epithelial transition, followed by a maturation phase before reaching a stably reprogrammed state [7][8][9]. An elaborate study investigating changes in mRNA and miRNA levels, histone modifications, and DNA methylation revealed that respective changes preferentially occur in two distinct waves [10]. An associated proteome analysis likewise observed bi-phasic expression changes and identified functional classes of proteins being differentially expressed in distinct phases [10]. Downregulation of fibroblast and mesenchymal markers was detected early in reprogramming and upregulation of epithelial markers shortly after [9;10]. Re-activation of several pluripotency-associated transcription factors (e.g. OCT4, NANOG, SOX2) is typically observed at intermediate or late stages of reprogramming, displaying some degree of variability in the predictability of single markers for bona fide reprogrammed cells [10][11][12][13][14]. The first studies succeeding in induction of mouse iPSC took advantage of transgenic reporter systems linking reactivation of such pluripotency-associated gene promoters to either drug selection [1;15-17] or expression of fluorescent proteins [11;12] to identify the reprogrammed cells.
While iPSC generated from a Fbx15-based reporter system failed to produce adult chimeras, Oct4- and Nanog-based systems allowed the successful generation of germline-competent iPSC [1;15-17]. However, transgenic systems are labor-intensive to generate and cannot be employed when producing human iPSC for clinical purposes, rendering naturally expressed surface proteins an attractive alternative. Despite growing insight into gene expression changes in general and proteome changes in particular, only a limited number of surface protein-based strategies have successfully been implemented that allow the discrimination of cellular subsets in reprogramming cultures. To date, no systematic investigation aiming at the identification of antibodies with the ability to discriminate reprogramming stages has been reported. MEFs undergoing reprogramming were shown to phenotypically progress from a THY1+ to a THY1−/SSEA1− subpopulation, followed by a THY1−/SSEA1+ stage, ultimately achieving an SSEA1+/Oct4-GFP+ phenotype [10;12]. Accordingly, SSEA1 was successfully used to enrich for cells that had acquired pluripotency [10][11][12], as were EPCAM, E-CADHERIN [18] and combinations of PECAM1 with various other markers [19]. Likewise, a recent publication demonstrated the suitability of PGP-1 (CD44) and ICAM1 when combined with a Nanog-GFP reporter [20]. SSEA1 and EPCAM were also successfully employed during directed differentiation to deprive iPS-derived neuronal cells of remaining pluripotent cells of mouse and human origin, respectively [21;22]. In our study we sought to test a comprehensive library of 170 antibodies to identify surface proteins that are differentially expressed between MEF and PSC. The differentially expressed proteins were then to be examined with regard to their dynamic expression changes in the course of reprogramming and their ability to enrich cells that are poised to become iPSC. Ultimately, we aimed to bypass low reprogramming efficiencies, thereby easing the generation of iPSC lines.

Results
Twelve surface markers are differentially expressed between PSC and MEF
An antibody screening experiment was performed to identify surface markers that are expressed mutually exclusively on either mouse PSC or MEFs, thereby potentially allowing the discrimination of PSC in the heterogeneous cell mixture of reprogramming cultures. The screening was based on a library of 170 antibodies directed against mouse surface proteins (Table S1). The library consisted of antibodies that had been generated in house and commercially available antibodies, some of which were selected due to their potentially differential expression previously reported in the literature. We compared expression by MEFs with expression by mouse ESC and iPSC lines. Surface proteins were considered as potential reprogramming markers if expression frequencies exceeded 90% on the positive cell types and remained below 30% on the other cell types. Furthermore, markers were designated as either "MEF associated" or "pluripotency associated" markers with respect to their expression characteristics in our screening system. Besides the well-described pluripotency associated marker SSEA1 and the MEF associated marker ITGAV (Fig. 1A and S.K., unpublished data), 12 candidate markers were identified. CEACAM1, ENG, C-KIT, DDR2, as well as the previously described proteins E-CADHERIN and EPCAM, were identified as potential pluripotency associated markers.
Six surface proteins were categorized as potential MEF associated markers (PGP-1, SELP, THY1.1, FAS, ALCAM and SCA-1) (Fig. 1B, C). Of note, the THY1 genotype is strain-dependent in mice. While CF1-MEFs expressed THY1.1, the Oct4-GFP (OG2)-MEFs (C57Bl/6J x C3H/HeN background) expressed THY1.2 (Fig. S1A). Though not meeting the aforementioned criteria, SELP was included for further evaluation because a high standard deviation hampered interpretation. Interestingly, SELP was not expressed on OG2-MEFs, while no unexpected expression of the potential pluripotency associated markers was observed (Fig. S1A, B). We point out that a lack of protein detection in the given screen might also result from suboptimal antibody titers or specificity of the employed antibody clones. Hence, it might still be possible to detect some of the negatively tested proteins with different staining protocols. In conclusion, we identified 12 potential reprogramming markers classifiable into MEF-associated and pluripotency-associated markers.

Definition of reprogramming stages by activation and silencing of two combined reporter systems
A reliable system to identify reprogrammed cells was needed before the expression characteristics of the candidate markers could be investigated in the reprogramming process. Therefore, a well-described Oct4-GFP pluripotency reporter mouse strain [23][24][25] was employed that had previously been shown to be activated simultaneously with or after silencing of lentiviral transgenes during reprogramming [26]. We observed an Oct4-GFP signal in most established iPSC that expressed OCT4 protein, but rarely in the absence of OCT4 protein. However, in both standard culture (Fig. 2A) and differentiation-inducing conditions (Fig. 2B), many cells were observed in which OCT4 protein was detectable despite the absence of an Oct4-promoter dependent GFP signal. During reprogramming progression, Oct4-GFP expressing cells exclusively arose as a subfraction of the OCT4 protein containing compartment (Fig. 3A). Flow cytometric analysis of established iPSC lines finally demonstrated co-expression of Oct4-GFP with OCT4 protein and SSEA1, respectively (Fig. 3C). Altogether, these observations indicate that live-cell detection of Oct4-GFP likely underestimates the number of OCT4 expressing cells, most pronounced during early reprogramming, thus representing a very conservative marker of pluripotency induction in live-cell imaging approaches. Reprogramming was achieved by lentiviral transduction of hOct4, hKlf4, hSox2 and hc-Myc (hOKSM), all co-expressed from a single transgenic construct in which reprogramming factor expression is linked by intergenic 2A peptides. In addition, a terminally IRES-linked coding sequence of dimeric Tomato (Tom) fluorescent protein enables tracking of reprogramming factor expression [26]. At early time points (day 4 p.t.), most of the OCT4 protein expressing cells co-expressed the dTOMATO reporter, while from day 9 p.t. the majority of OCT4-positive cells had silenced the transgenes, as indicated by loss of dTOMATO expression (Fig. 3D), suggesting reactivation of endogenous OCT4 synthesis. Combining both reporter systems, we found that dTOMATO was strongly expressed in transduced cells. The first Oct4-GFP positive cells arose from this Tom+ fraction at day 4 p.t. (Fig. 3D). The mean fluorescence intensity pattern of dTOMATO altered over time, discriminating a Tom-high and a Tom-low subpopulation which could clearly be distinguished from day 12 p.t. onwards.
Importantly, the Oct4-GFP+ compartment was entirely Tom-low at that time point and subsequently further downregulated dTOMATO, indicating that this reporter combination represents a valuable tool to follow temporal reprogramming progression. Thus, a classification of the different reprogramming stages could be implemented that features (1) a Tom+ (single positive) early phase of reprogramming, (2) a Tom+/GFP+ double positive intermediate phase, and (3) an Oct4-GFP+ single positive late reprogramming stage. Since from day 9 on far more cells expressed OCT4 protein than Oct4-GFP or dTOMATO (Fig. 3A, B), an alternative intermediate phase (4), Oct4-GFP−/Tom− double negative, was inferred, reflecting that Oct4 promoter-dependent GFP detection followed the transcriptional activation of endogenous OCT4 expression. However, it is important to note that reprogramming cultures also contained non-transduced cells. Thus, the Oct4-GFP−/Tom− compartment consists of intermediate-phase reprogrammed cells and untransduced cells. Due to a massive reduction of autofluorescence in the reprogrammed cell fraction compared to the MEF population, the gating strategy for Oct4-GFP and dTOMATO expressing cells was performed rather strictly, leaving out doubtful areas (the total frequency of Oct4-GFP+ cells is 4.3% at day 15 p.t.).

Figure 1. A) Flow cytometric analysis is shown to exemplify SSEA1 and ITGAV expression properties (n.d. = not determined), both of which were previously shown to be differentially expressed between MEF and PSC. B) Expression frequencies of antibody-targeted surface markers were tested by flow cytometry comparing MEFs (CF1), ESC line HM1 and iPSC line LV1-7b (n = 4 for MEFs: mean +/− SD; n = 2 each for ESC/iPSC). Given are the percentages of positive cells for identified candidate markers (6 potential pluripotency associated markers on the left-hand side and 6 potential MEF associated markers on the right-hand side). Expression data of all antibodies tested in the screen can be found in Table S1; additional expression characteristics on OG2-MEFs are shown in Figure S1. C) Representative histograms are shown for selected markers. doi:10.1371/journal.pone.0102171.g001

Altogether, we were able to distinguish four distinct stages of reprogramming based on expression characteristics of the reprogramming factors and the Oct4-GFP reporter signal (a minimal sketch of this gating logic is given below, after the marker kinetics).

EPCAM, SSEA1 and FAS expression changes reflect the reprogramming stages
In order to investigate whether expression characteristics of the 12 candidate markers correlate with distinct stages of the reprogramming process, the expression of all markers was examined over time in the reprogramming stages defined above (Fig. 3D). We found that the potential pluripotency associated markers SSEA1 and EPCAM were gradually upregulated in the course of reprogramming, leading to partial expression in the Tom+/GFP+ intermediate stage and high frequencies in the Tom−/GFP+ late stage (Fig. 4A). While CEACAM1 was upregulated only transiently, ENG failed to be upregulated at all (Fig. S2). C-KIT and DDR2 were expressed by only a marginal fraction of the Oct4-GFP+ cells. The MEF associated markers FAS and THY1.2 were highly expressed by untransduced OG2 cells, while striking downregulation could be observed in reprogramming stages as early as day 4 p.t., ultimately resulting in expression frequencies below 14% and 4% in the Tom−/GFP+ fraction, respectively (Fig. 4A and Fig. S2).
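The stage assignment referred to above reduces, conceptually, to a simple gate on the two reporters. The following sketch captures that logic in code; the intensity threshold and function name are illustrative assumptions and do not correspond to the actual gating values used in the experiments.

```python
# Minimal sketch of the two-reporter stage classification described above.
# The dTOMATO positivity cutoff is an illustrative assumption, not the gating
# value actually used in the study.
def reprogramming_stage(tom_intensity: float, oct4_gfp_positive: bool,
                        tom_positive_cutoff: float = 100.0) -> str:
    tom_positive = tom_intensity >= tom_positive_cutoff
    if tom_positive and not oct4_gfp_positive:
        return "early (Tom+ single positive)"
    if tom_positive and oct4_gfp_positive:
        return "intermediate (Tom+/GFP+ double positive)"
    if not tom_positive and oct4_gfp_positive:
        return "late (Oct4-GFP+ single positive)"
    # Tom-/GFP- cells are a mixture of intermediate-stage and untransduced cells.
    return "intermediate or untransduced (Tom-/GFP- double negative)"


print(reprogramming_stage(5000.0, False))  # early (Tom+ single positive)
print(reprogramming_stage(10.0, True))     # late (Oct4-GFP+ single positive)
```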
PGP-1 downregulation occurred rapidly and, notably, not only in reprogramming cells but also in untransduced cells, lacking any clear correlation with reprogramming status (Fig. 4A). SCA-1 downregulation was incomplete, as indicated by a dramatic drop in mean fluorescence intensity (not shown) that had little effect on the population frequency. ALCAM was not downregulated in any fraction or at any point in time (data not shown). ITGAV expression characteristics were difficult to interpret because of the low dynamic range of ITGAV expression in day 12 reprogramming cultures. Furthermore, the negative correlation of ITGAV with OCT4 protein was incomplete (Fig. 4B). Consequently, ITGAV was not considered for subsequent experiments. FAS downregulation occurred in all Oct4-GFP and most OCT4 protein expressing cells 12 days p.t., demonstrating a negative correlation between FAS and OCT4 protein (Fig. 4B, C). In contrast, the EPCAM+ subfraction predominantly correlated positively with expression of OCT4 protein. Although the SSEA1+ subfraction arose entirely from the population of OCT4 protein expressing cells, the majority of OCT4 protein expressing cells did not yet express SSEA1. This might indicate that detection of SSEA1 in our system lagged behind expression of OCT4 protein. Interestingly, only a minority of Oct4-GFP expressing cells co-expressed SSEA1 and vice versa. This might indicate that SSEA1 and Oct4-GFP independently lag behind the expression of OCT4 protein (as detailed above). We concluded that the expression characteristics of SSEA1, EPCAM, FAS and THY1.2 are able to reflect reprogramming progression, with EPCAM upregulation and FAS downregulation preceding the upregulation of SSEA1. We omitted ITGAV from further analysis due to its insufficient dynamic range in expression changes.

Establishment of pluripotent stem cell lines from magnetically separated cells
In a proof-of-principle study we aimed to establish pluripotent stem cell lines from magnetically separated cells. We chose a positive selection strategy based on the well-described marker SSEA1 to demonstrate that iPSC can be established from a particle-bearing cell fraction. Cells were therefore reprogrammed by lentiviral transduction 1 day after seeding, transferred onto feeder cells 12 days p.t. and separated at day 18 p.t. (Fig. 5A). After magnetic enrichment for SSEA1, cells were seeded in limiting dilution assays. Colonies that grew to sufficient size within 6 days and displayed ESC morphology were chosen for expansion (Fig. 5B). At this stage the clones also exhibited a homogeneous Oct4-GFP signal. Marker expression was quantified by flow cytometry, demonstrating that all clones consisted of at least 75% SSEA1+ and >90% Oct4-GFP+ cells (data not shown). Six clones were subcloned in a second round of limiting dilutions. Flow cytometric analysis was used to assess expression of SSEA1, Oct4-GFP, OCT4 protein, and remaining expression of transgenic dTOMATO in a screening experiment to preselect subclones for further investigation (Fig. 5C). Some subclones demonstrated residual dTOMATO expression (e.g. the derivatives of R24.23), and differences in Oct4-GFP expression levels became obvious. All subclones expressed high levels of SSEA1 and demonstrated robust expression of intracellular OCT4 protein. Four subclones were chosen, each originating from a different parental clone, and tested in teratoma assays. Three out of four subclones formed tumors consisting of differentiated tissues originating from the 3 germ layers (Fig. 5D).
Importantly, the subclone (R24.23.4) that failed to produce a tumor exhibited residual transgene expression at high levels. In line with previous reports [27], we demonstrated that the procedure of magnetic cell sorting is well suited for the establishment of pluripotent stem cell lines.

Separation of SSEA1+, EPCAM+ or FAS− cells enriches cells committed to become iPSC
We next sought to examine whether separation by alternative markers, i.e. the identified reprogramming markers EPCAM and FAS, is able to improve the procedure in terms of cell yield or phenotype of the target fraction. Twelve days after induction of reprogramming in OG2-MEFs, magnetic separations were performed comparing the enrichment of SSEA1+ or EPCAM+ cells and the depletion of FAS+ cells (Fig. 6A). A representative example of the efficiency of each separation strategy is given in Figure S3. The desired target fractions (SSEA1+, EPCAM+, FAS−) typically achieved purities of 89%, 98% and 98%, respectively (Fig. S3). Each fraction was seeded on top of feeder cells and subsequently cultured for another 6 days. Flow cytometric analysis directly after separation demonstrated that each target fraction was entirely composed of EPCAM+/FAS− cells, independent of the separation strategy applied (Fig. 6B). Importantly, the SSEA1+ cells represented a subfraction of the EPCAM+ (FAS−) cells. Consequently, SSEA1 enrichment yielded 6-fold fewer cells (data not shown), but led to an accordingly higher population frequency of SSEA1+ cells. Remarkably, only FAS depletion completely eliminated the Tom-high subpopulation in the target fraction, thereby removing cells with diminished transgene silencing. Six days after separation, all three separation strategies (FAS, SSEA1 and EPCAM) had led to a similar and significant enrichment of Oct4-GFP expressing cells (85%, 84% and 65%, respectively), confirming that any of the given strategies is suitable to enrich cells poised to become iPSC (Fig. 6C). Notably, though the respective magnetic isolation protocols were mainly optimized to yield highly pure target fractions (i.e. the column flow-through or "negative" fraction for FAS, and the eluted or "positive" fraction for SSEA-1 and EPCAM, Fig. S3), reduced (though statistically insignificant) ratios of GFP+ cells were observed in the non-target fractions of FAS and EPCAM when compared to the unseparated cells 6 days after separation (data not shown). In summary, both EPCAM enrichment and FAS depletion were characterized by an enhanced cell yield compared to SSEA1 enrichment. FAS depletion represented the only strategy that enabled the complete removal of the transgene-expressing Tom-high cell population.

Tissue distribution of potential reprogramming markers on various human cell types
The tissue distribution of some of the candidate markers on various human cell types was investigated using the Genevestigator software tool [28], which utilizes publicly available microarray data sets. To estimate whether the markers might also be suitable to selectively enrich reprogrammed cells from alternative human source tissues, we examined mRNA expression of the candidate genes in different tissues and cell lines. The excerpt given in Figure 7 focuses on cell types that are easily accessible and had repeatedly been reprogrammed in previous reports (fibroblasts, blood cells, skin, adipocytes). mRNA of the pluripotency-associated marker EPCAM was not detected in human fibroblasts, adipose tissue and various blood cell subtypes (Fig. 7).
In line with these data, we found EPCAM protein to be expressed by hiPSC lines, but not by human foreskin fibroblasts (hFF) or human umbilical vein endothelial cells (hUVEC) (Fig. 8A, B). It might thus be possible to also employ EPCAM enrichment for the isolation of iPSC in the human system if fibroblasts or endothelial cells are to be reprogrammed. Since EPCAM is a well-known epithelial surface protein, its mRNA was, as expected, expressed in epithelial cell sources (Fig. 7), arguing against hiPSC enrichment via EPCAM from this cell type. In contrast, mRNA of the somatic marker FAS appeared to be expressed not only by fibroblasts and blood cells but also by epithelial cells (Fig. 7), thereby potentially enabling the isolation of hiPSC from epithelial tissues. While FAS protein was indeed found to be expressed at significantly lower levels in hiPSC than in hFF (Fig. 8A, B), a subpopulation of these hiPSC expressed low amounts of FAS protein. Of note, FAS protein was only weakly expressed in hUVEC. Future investigations are therefore required to assess the suitability of FAS (and also EPCAM) for the isolation of iPSC from human tissues. Altogether, we demonstrated the suitability of SSEA1, EPCAM and FAS for the enrichment of iPSC from mouse embryonic fibroblasts. Moreover, we hypothesize that EPCAM and FAS selection strategies might also be useful for the isolation of iPSC from various human cell types.

Discussion
Usage of surface markers to identify reprogramming stages can be readily applied to unmodified cells and allows for live-cell imaging and antibody-based separation strategies. Accordingly, some differentially regulated mouse surface markers have already been utilized for selective isolation of reprogramming subpopulations. These markers include SSEA1, which was previously shown to be upregulated as one of the earliest markers in reprogramming [11;12]. EPCAM, which has been shown to actively promote cellular reprogramming [29], also allowed for successful enrichment of NANOG-expressing cells [18]. Combinations of PECAM1 with SSEA1, ITGA6, PVRL2 and EPCAM, respectively, were shown to enrich the fraction of pluripotency factor expressing cells [19]. Also, the combination of PGP-1/ICAM1/Nanog-GFP was recently suggested to provide high-resolution information during late pluripotency gene upregulation [20], although without providing a generic marker code enabling the isolation of iPSC generated from wild-type cells. Likewise, though with a different intention, the surface marker SSEA1 was used to deprive iPS-derived neuronal cells of remaining pluripotent cells [22]. Nonetheless, only limited numbers of antibodies had so far been tested for their potential to discriminate distinct stages of reprogramming. We therefore aimed to employ a panel of 170 antibodies to detect expression of surface proteins on MEFs and PSC and were able to identify 12 differentially expressed proteins. We investigated expression changes of these respective markers in distinct reprogramming subpopulations.

Figure 4. Reprogramming subpopulations were defined as shown in Fig. 3D (also see Figure S2). B) Correlation of ITGAV, SSEA1, EPCAM and FAS with expression of OCT4 protein as analyzed by flow cytometry at day 12 p.t. C) Likewise, correlation of the selected candidate markers with the Oct4-GFP reporter system is shown at day 12 of reprogramming. doi:10.1371/journal.pone.0102171.g004

Combining a traceable lentiviral expression system with a transgenic Oct4-GFP reporter system, we were able to define four distinct reprogramming stages.
Figure 5. Most colonies already demonstrated a strong and homogeneous GFP signal; a representative colony is depicted. Expanded clones were subcloned in a second limiting dilution and expanded as described above. C) To pre-select subclones with the most promising potential for pluripotency, several subclones of each clone were screened for expression levels of Oct4-GFP and SSEA1 (left plot) as well as expression of intracellular OCT4 and silencing of transgenic dTOMATO (right plot, n = 1). D) When injected into immunodeficient mice, 3 out of 4 subclones gave rise to teratomas with differentiation into derivatives of the 3 germ layers. Depicted is representative subclone R24.16.5, which formed keratinizing epithelium (ectoderm), cartilage (mesoderm) and pancreas-like glandular structures (endoderm). doi:10.1371/journal.pone.0102171.g005

These stages were characterized by silencing of vector transgenes, which was previously described as a hallmark of reprogramming [12;15-17;26], and reactivation of the Oct4-GFP reporter. Previous reports demonstrated the suitability of combined approaches of transgene silencing and pluripotency factor (NANOG, SOX2) reactivation that were shown to reflect progression through the different stages of reprogramming [14;30]. We found that reporter signals from our transgenic Oct4-GFP cassette followed the endogenous reactivation of OCT4 protein during reprogramming but established a full correlation in established iPSC, representing a conservative indicator of OCT4 expression. Despite the delayed expression characteristics of the Oct4-GFP signal, it correlated entirely with the expression intensity of the reprogramming factors, i.e. first with the Tom-low and later with the Tom− subpopulation. We concluded that our combined system is thus suitable to define sequential reprogramming stages. Employing this two-reporter system, we were able to investigate the kinetic changes of our potential reprogramming markers in the course of reprogramming. We confirmed the sequential upregulation of first EPCAM and then SSEA1 [10-12;31]. Previous studies had revealed a function of EPCAM in maintenance of the undifferentiated phenotype in mouse and human ESC [32;33]. Importantly, this function is exerted by regulation of pluripotency-associated factors, such as OCT4, SOX2, KLF4, C-MYC and NANOG [33]. In line with these findings, we could observe that EPCAM expression correlated with expression of OCT4 protein even in cells that had already silenced the transgenic reprogramming factors, as indicated by loss of Tom expression. Of note, a recent study suggested EPCAM to be as informative for predicting the iPSC state as NANOG and LIN28 [31]. Since upregulation of EPCAM precedes the expression of SSEA1, the latter marker was shown to detect only a subpopulation of the OCT4 expressing (EPCAM+) cells. Interestingly, besides SSEA1, the Oct4-GFP reporter also lagged behind OCT4 protein expression. However, the two markers did not correlate with one another in early reprogramming, suggesting independent induction of the two markers. Robust co-expression was finally observed after the establishment of iPSC lines. While our antibody screen had revealed expression of C-KIT by PSC, we observed only marginal expression in the Oct4-GFP single positive stage until day 12 of reprogramming. Recent data suggest that this protein might represent an intermediate marker of reprogramming progression [31] and would thus also be an interesting candidate for separation of iPSC.
Likewise, DDR2, which demonstrated expression kinetics similar to C-KIT in our study, might be an additional candidate for further experiments. Interestingly, our data do not corroborate the recently reported ICAM1+/PGP-1− signature of late reprogramming stages [20]. We found ICAM1 to be broadly expressed on the MEF population as well as on iPSC (Table S1). PGP-1 was excluded from isolation studies due to the lacking correlation with mature reprogramming stages defined by our dual reporter system. To our knowledge, we are the first to characterize the expression dynamics of FAS in the course of reprogramming of MEF into iPSC. FAS was quickly downregulated on OG2-MEFs upon induction of reprogramming. This is well in accordance with previous reports suggesting the absence of FAS expression in naïve mouse ES cells [34;35]. Furthermore, the expression of FAS was reported to be regulated by cooperating transcription factors, e.g. repression of FAS expression by OCT4 [36;37]. This is in line with our data demonstrating anti-correlation of FAS and OCT4 protein expression. In addition, activated tumor suppressor protein P53 positively correlates with FAS expression levels [37][38][39][40] and vice versa. Importantly, inhibition of P53 has been shown to enhance reprogramming efficiencies [41]. It would thus be interesting to investigate whether FAS downregulation actively promotes reprogramming or is a consequence of negative regulation by OCT4 and lacking upregulation by P53. In our study we were able to confirm that not only pluripotency marker upregulation but also downregulation of a somatic marker can be applied when isolating iPSC [20]. We suggest that downregulation of FAS, which is strongly expressed on MEFs, enables the enrichment of cells poised to become iPSC. A direct comparison of EPCAM- or SSEA1-enrichment with FAS-depletion demonstrated similar frequencies of Oct4-GFP+ cells in the SSEA1+ and FAS− target fractions after a subsequent culture period. Though all sorting paradigms led to a robust enrichment of Oct4-GFP+ cells, minor fractions of Oct4-GFP− cells were detectable 6 days after isolation. These Oct4-GFP− cells could represent either a) spontaneously differentiated iPSC, b) reprogramming cells with retarded kinetics or c) cells which have not fully matured after having initiated the reprogramming process [42]. Isolation of the EPCAM+ fraction resulted in a slightly lower frequency of Oct4-GFP+ cells. This is in line with our observation that SSEA1+ cells arose as a subfraction of the EPCAM+ population, reflecting that EPCAM is upregulated even before SSEA1. Although the SSEA1+ cells also represent a subpopulation of the FAS− cells, FAS depletion worked equally well as SSEA1 enrichment. In consequence, selection strategies based on EPCAM as well as FAS yield higher cell numbers with potential comparable to that of SSEA1+ cells. Considering that only depletion of FAS expressing cells was able to completely abolish the fraction of immature, transgene-dependent Tom-high cells from the reprogramming culture, it is tempting to hypothesize that lack of FAS indicates a more mature reprogramming stage with a higher degree of epigenetic remodeling. It would thus be interesting to address the functional relevance of FAS for cellular reprogramming in future studies. By enrichment of SSEA1+ cells we have exemplified that the procedure of magnetic separation is well suited for the establishment of pluripotent stem cell lines even from a particle-bearing cell fraction.
Established iPSC lines demonstrated pluripotency to the level of teratoma formation with differentiation into tissues derived from the 3 germ layers. Likewise, Dick and colleagues observed no adverse effects of magnetic particles when transducing cells with magnetic-particle-bearing lentiviruses and were able to establish hiPSC lines from positively selected human cells [27]. Importantly, magnetically separated cells are frequently reported to result in stable engraftment and survival of transplanted cells in vivo, e.g. using ES-derived cells [43;44] or in cancer treatment [45][46][47][48][49]. Thus, magnetic separation is suitable for rapid enrichment of pluripotent cells and represents a robust method for clinical therapies. To date, we have demonstrated the selective enrichment of iPSC derived from mouse embryonic fibroblasts. However, for future clinical applications, selection strategies for human iPSC are needed. In addition, it might be of interest to reprogram cell types that are more easily accessible and can be obtained in sufficient amounts. We thus inspected the mRNA expression patterns observed in various human tissues based on numerous publicly available microarray data sets. These data suggest that FAS might serve to isolate hiPSC from epithelial cells, whereas EPCAM, as a known epithelial marker, cannot be employed for this cell type. This example highlights the need to take the cellular background into account when selecting a suitable marker for separation. Protein data moreover demonstrated significant expression differences of FAS between fibroblasts and the hiPSC derived from them. Furthermore, EPCAM expression differed significantly between hFF and hFF-derived iPSC and between hUVEC and hCBEC-derived iPSC, respectively. Although the collective data support the notion that FAS and EPCAM might also be suitable to isolate iPSC in the human system, expression kinetics need to be investigated and separations tested to draw definite conclusions. Of note, in contrast to its expression on mouse pluripotent stem cells, SSEA1 is only expressed on a differentiating subpopulation of human PSC cultures [50;51]. Therefore, it cannot be used to positively select for potential iPSC from human reprogramming cultures. Instead, alternative markers have been employed in proof-of-principle studies, including SSEA4, TRA-1-60, TRA-1-81 and TNFRSF8 (CD30) [27;52-56]. Additionally, combinations of the positive markers SSEA4 and TRA-1-60 with the negative marker aminopeptidase N (CD13) have been used for the selection of human pluripotent stem cells [56]. In conclusion, we reported 12 naturally occurring surface proteins that are differentially expressed between mouse embryonic fibroblasts and pluripotent stem cells. SSEA1, EPCAM and FAS allow the selective enrichment of cells poised to become iPSC from reprogramming MEF cultures by magnetic separation, thereby overcoming low efficiencies and easing the generation of iPSC lines. We hypothesize that some of these separation strategies can also be used to enrich iPSC from human source tissues and that they can aid the generation of patient- and disease-specific iPSC for research and clinical applications.

Ethical Statement
This study was conducted in accordance with the German animal protection law and with the European Communities Council Directive 86/609/EEC for the protection of animals used for experimental purposes.
All experiments were approved by the Hannover Medical School Institutional Animal Care and Research Advisory Committee and permitted by the local government (LAVES, permit number 10/0209) in accordance with the German animal protection law and with the European Communities Council Directive 86/609/EEC for the protection of animals used for experimental purposes. Flow cytometry For the screening assay cells were harvested using 0.25% trypsin-EDTA. Reprogramming cultures were harvested as detailed in the ''Reprogramming'' paragraph of the Material and Methods section. For surface marker stains, primary antibody staining was performed in PEB buffer (PBS/2 mM EDTA/0.5% BSA) for 10 min at 4°C, if not stated otherwise. Antibodies and staining conditions of the antibody screening are listed in Table S1. Moreover, anti-mSSEA1, anti-mITGAV, anti-hCD95 and anti-hEPCAM were used according to the manufacturer's instructions (all Miltenyi Biotec). Cells were washed once and, if required, secondary staining was also performed for 10 min at 4°C. Virally transduced cells were additionally fixed in 1.85% formaldehyde (Miltenyi Biotec) for 20 min at room temperature before flow cytometric analysis. Staining for intracellular OCT4 was conducted after surface marker staining. According to the manufacturer's instructions (BD, Heidelberg, Germany), cells were fixed in a 1:1 mixture of Cytofix and Cytoperm for 20 min at 4°C and subsequently washed in Perm/Wash solution. The OCT4 intracellular stain was conducted using anti-Oct4 Alexa Fluor 647 (BD, Heidelberg, Germany) for 30 min at 4°C and cells were again washed in Perm/Wash. For flow cytometric analysis cells were resuspended in PEB buffer. Data were acquired using the MACSQuant Analyzer or MACSQuant VYB and analyzed with the MACSQuantify Software. Stain indices (SI) were calculated as follows: (Median of labeled cells − Median of unlabeled cells)/(2 × standard deviation of unlabeled cells).
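The stain index defined above can be computed directly from the recorded fluorescence intensities. The short Python sketch below is our own illustration of that formula; the function name and the example values are hypothetical and not taken from the study.

```python
import numpy as np

def stain_index(labeled: np.ndarray, unlabeled: np.ndarray) -> float:
    """Stain index as defined above:
    (median of labeled - median of unlabeled) / (2 * SD of unlabeled)."""
    return (np.median(labeled) - np.median(unlabeled)) / (2.0 * np.std(unlabeled, ddof=1))

# Hypothetical fluorescence intensities for one surface marker
rng = np.random.default_rng(0)
unlabeled = rng.normal(100.0, 20.0, size=5000)   # unstained control events
labeled = rng.normal(900.0, 150.0, size=5000)    # stained sample events
print(f"SI = {stain_index(labeled, unlabeled):.1f}")
```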
Immunofluorescence Cells grown in standard culture dishes were rinsed with PBS, fixed with 4% paraformaldehyde (Merck), permeabilized with 0.1% Triton X-100 (Sigma-Aldrich) and blocked with 10% FCS in PBS. Cells were incubated with anti-Oct4 antibody (Santa Cruz, Heidelberg, Germany; sc-5279, 1:50) in blocking solution for 1 h at 4°C. Secondary antibody staining was performed for 45 min (goat anti-mouse Alexa Fluor 594, Invitrogen), followed by DAPI staining (Sigma) for 5 min. Cells were covered with mounting medium (Invitrogen) and analyzed using a Nikon Eclipse TS 100. Reprogramming For reprogramming, 6.5 × 10^3 MEFs per cm^2 were seeded on gelatin-coated dishes one day prior to transduction. MEFs were transduced virally using a multiplicity of infection of 4-7. Medium was changed to MEF medium 8-16 h p.t. Cells were further cultured in MEF medium until day 4 p.t. and in iPSC medium thereafter. Medium was exchanged every other day and supplemented with 2 mM valproic acid (Merck, Darmstadt, Germany) from day 2 onwards, 25 µg/ml vitamin C (Sigma-Aldrich) from day 2-8 and the ''3i'' cocktail from day 8 onwards. To harvest the cells, dishes were washed once in PBS, pre-treated with 1 mg/ml Dispase (Roche, Penzberg, Germany) for 7 min at 37°C, washed and dissociated in 0.25% trypsin-EDTA (Sigma-Aldrich) for 5 min at 37°C. Magnetic cell separation Reprogrammed cells were harvested as described above. Cell suspensions were filtered through 30 µm pre-separation filters (Miltenyi Biotec). Magnetic separations were performed according to the manufacturer's protocol. In brief, 5 × 10^6 cells were magnetically labeled as follows. For FAS separation (indirect separation) cells were first stained with 1 µg/ml primary antibody (Anti-FAS-Biotin) for 10 min at 4°C in 0.1 ml of PEB buffer, washed by addition of 2 ml buffer, centrifuged and resuspended in buffer. Magnetic labeling in general was performed for 15 min at 4°C in 0.1 ml of a 1:5 dilution of the corresponding MicroBeads in PEB (Anti-SSEA-1 (CD15)-MicroBeads (Miltenyi Biotec 130-094-530), Anti-Biotin-MicroBeads (Miltenyi Biotec 130-090-485) and Anti-EpCAM-MicroBeads, respectively). Cell suspensions were then washed in PEB and resuspended in 0.5 ml iPS medium without LIF. Columns were pre-equilibrated and placed in appropriate magnets. Cell suspensions were applied and the columns were afterwards washed with medium by gravity flow. The entire flow-through was collected as the negative fraction. Positive fractions were eluted in the required volumes using the provided plunger. EPCAM and SSEA1 separations were carried out using MS columns, FAS separations using LD columns. After separation, cells were investigated by flow cytometry or seeded on top of CellTrace Violet dye pre-stained (Invitrogen), gamma-irradiated feeder cells (1 × 10^5 cells per well of a 12-well plate). Seeded cells were further cultured in ''3i'' conditions for 6 days with media changes every other day. Establishment of iPSC lines after magnetic separation Limiting dilutions were performed to isolate single cells after separation. Separation based on SSEA1 was performed at day 18 p.t. and the SSEA1-positive fraction was diluted in ''3i'' conditions to a cell density of 5 cells/ml. 0.1 ml of this cell suspension was seeded per well of a 96-well plate containing gamma-irradiated CF1-MEFs and further cultured for 6-8 days. Single colonies were expanded and analyzed by flow cytometry. Lines that expressed the highest levels of SSEA1 were subcloned by a second round of limiting dilutions to ensure clonality of the derived iPSC lines. Teratoma assay iPSC were harvested, 1 × 10^6 cells were resuspended in 0.2 ml PBS and injected subcutaneously into the flanks of six NOD.Cg-Rag1tm1Mom Il2rgtm1Wjl/Sz (NRG) mice. Teratoma formation occurred 4-8 weeks after injection. During this time frequent checking of the teratoma formation ensured the termination of the experiment at the approved state. The guidelines issued by the GV-Solas (Society for Laboratory Animal Science) and TVT (Veterinary Association for Animal Welfare, Germany) served as the basis for defining the humane endpoints. After anesthesia using carbon dioxide, animals were sacrificed by cervical dislocation. Teratomas were fixed in 4% neutral buffered formalin (pH 7.2), embedded in paraffin and 2 µm sections were Hematoxylin & Eosin stained. Microarray meta-analysis Publicly available microarray datasets were analyzed using the Genevestigator anatomy tool [28]. This tool displays expression levels of genes of interest in various tissues and cell lines. The expression level within a tissue type is the average expression across all samples that were annotated with that particular cell type. Data sets analyzed in this study were derived on the Affymetrix microarray ''Human133_2: Human Genome 47k array''.
8,491
2014-07-16T00:00:00.000
[ "Biology", "Medicine" ]
Numerical and Experimental Research on Single-Valve and Multi-Valve Resonant Systems Multiple valves in a pipeline system form an obvious periodic structure. When a high-speed airstream flows through a pipeline valve, it produces obvious aerodynamic noise and acoustic resonance. Acoustic resonant systems with a single pipe valve and with six pipe valves were investigated to understand the flow and acoustic characteristics using a numerical simulation method and a testing method. The strongest acoustic resonance occurred at a specific flow velocity with a corresponding Strouhal number of 0.47 for the geometric parameters considered in the paper. Moreover, acoustic resonance occurred in a certain velocity range, rather than increasing monotonically with the velocity in the pipeline. This regularity provided an important theoretical basis for the prediction of the acoustic resonance and the ultimate acoustic load of a single-valve system. When the pipeline was fitted with multiple valves and the physical dimension was large, the conventional aero-acoustics calculation results were seriously attenuated at high frequency; a calculation method involving a cut-off frequency is presented in this paper and can be used to explain the excellent agreement of the sound pressure level (SPL) below the cut-off frequency and the poor agreement above the cut-off frequency. A new method involving steady flow and stochastic noise generation and radiation (SNGR) was proposed to obtain better results for the SPL at the middle and high frequencies. The comparison results indicated that the traditional method of Lighthill analogy and unsteady flow could accurately acquire aerodynamic noise below the cut-off frequency, while the new method involving steady flow and SNGR could quickly acquire aerodynamic noise above the cut-off frequency. INTRODUCTION Periodic structures are usually used in the design of acoustic metamaterials, and have a large number of applications in low-frequency sound absorption and sound insulation (Ma et al., 2021a). Low-frequency sound absorption and sound insulation performance are often improved by changing the layout of periodic structures and the structural characteristics of single cells (Zhu et al., 2014;Pelat et al., 2020;Ma et al., 2021b). When high-velocity air flows through variable cross-section pipes, strong resonant noise is generated if the frequency of vortex shedding coincides with the resonant frequency of the bypass pipe. A substantial flow rate causes strong acoustic energy to occur at the junction of the main steam line valve. The amplified sound pressure wave propagates in the main steam line at the speed of sound and acts on the structure surface. When the acoustic resonant frequency of the pipeline valve is close to the frequency of the structure, the structural vibration increases significantly, leading to severe damage (Shiro et al., 2008). Mechanical noise, aerodynamic noise (usually called aeroacoustic noise), and cavitation noise represent the primary sound sources affecting a pipeline valve and are investigated using a method that combines theoretical and experimental aspects. In 2005, Ryu et al. (2005) studied the relationship between the valve spool opening and the noise level in engine intake and exhaust pipelines via tests, revealing the influence of different valve spool openings on the noise level of pipelines. Alber et al.
investigated the characteristics of valve noise sources propagating through structures and air (Alber et al., 2009;Alber et al., 2011). An equivalent analytical model for structural sound propagation analysis was established that could effectively and quickly predict the propagation of valve noise in the structure. The flow velocity, cavity shape, and Strouhal number all had a significant influence on the magnitude of the aerodynamic noise. In 2010, Du and Ouyang (2010) used experimental methods to study the mechanism of howling in the compressor pipeline, and found that the howling was most pronounced at a Strouhal number of 0.51. Ziada and Shine (1999) summarized the acoustic resonance laws of various valve types and multiple valve combinations, and obtained the relationship between the acoustic resonance frequency and the acoustic resonance order, the corresponding flow velocity, and the diameter of the pipe valve. Oldham et al. adopted a theoretical method (Oldham and Waddington, 2001) to study the influence of factors such as the pipeline cut-off frequency and the flow velocity on sound propagation in the pipeline, and calculated the aerodynamic noise in the pipeline system, while Sanjose et al. (Alice et al., 2014;Charlebois-Menard et al., 2015;Marsan et al., 2016) built airflow generation devices and test benches. Noise testing of the constructed butterfly valve for aviation was performed using the wall pulsation pressure test method, and the relationship between the Strouhal number and the aerodynamic noise of the valve was studied. Refs. (Durgin and Graf, 1992; Uchiyama and Morita, 2015) summarized the mechanism of acoustic resonance of single-valve and multi-valve systems and its relationship with the Strouhal number; the conclusions can be used to summarize the typical acoustic resonance with different valve numbers. Most previous studies regarding pipeline valve noise are based on experimental tests, and it is apparent from the few simulation studies that do exist that there is a big difference between simulation and testing. The resonance frequency does not match well, and a high-precision resonant sound pressure cannot be directly obtained via the simulation. With the development of computers and numerical methods, numerical simulation has been used to examine the pipeline resonance cavity in recent years. The Lighthill acoustic analogy method (Lighthill, 1952;Lighthill, 1954), in which aerodynamic sound generation and propagation are treated separately, making it possible for tailored algorithms to be used for both tasks, is used nowadays by computational aero-acoustics tools. This is a straightforward way of arbitrarily combining a sound generation method with another sound transport technique. Its accuracy is limited by the cut-off frequency of the computational fluid dynamics (CFD) results. When the characteristic frequency is high, a small grid must be used, which often requires substantial computing resources. The cavity resonance frequency of the pipe valve can be obtained according to the quarter-wavelength tube formula, and it is inversely proportional to the height of the pipe valve. When the primary pipe size is relatively large and the flow velocity is high, the characteristic frequency is high. It is difficult to predict the acoustic resonance with limited computing resources and a low cut-off frequency. To overcome the expensive computation of an unsteady flow field, a model that synthesizes the flow fluctuations can present an interesting alternative.
The SNGR method (Bechara et al., 1994;Bailly and Juve, 2012) has established itself as a complementary module, generating a turbulent velocity field that respects the experimental and theoretical characteristics of the turbulence. Although this method compensates for the expensive cost of unsteady computation, it requires a profound knowledge of turbulence statistics. The source generation process of SNGR is based on two steps. One involves turbulent velocity synthesis, while the other is concerned with source computation based on the synthetic velocity, using the Lighthill or the Möhring analogy. The SNGR method is applied for the rapid prediction of external aerodynamic noise, pipeline jet noise, landing gear noise, and vehicle wind noise (Paolo et al., 2013; Paolo et al., 2015), and is characterized by the ability to quickly obtain broadband noise magnitudes based on steady-state CFD results. However, since the SNGR method is based on steady-state CFD results, it is difficult for it to accurately predict low- and mid-frequency noise. In this paper, the aerodynamic noise of a resonance cavity system with a single pipe valve, as well as with multiple pipe valves, is studied at high flow velocity. Furthermore, the influence of the geometrical size of the pipe valve, the airflow velocity, and the Strouhal number on the frequency of the aerodynamic noise is investigated. The SNGR method based on a steady flow field is combined with the acoustic analogy method based on a transient flow field to obtain the full-frequency-band aerodynamic noise. The accuracy of the simulation method is compared to the experimental data, deriving the method for defining the cut-off frequency. The remainder of the paper is structured as follows: In Section 2, the methodology of the resonant pipeline-cavity system is introduced. In the next section, the aerodynamic noise of a single pipe valve, as well as of six pipe valves, is illustrated and discussed. Finally, the conclusions are presented. Unsteady CFD Simulation An aerodynamic noise test and a simulation of the resonant pipeline-cavity system with a single pipe valve, as well as with six pipe valves, were performed at different flow velocities. Both steady and unsteady flow fields were simulated using the realizable k-epsilon model and large eddy simulation (LES). The aerodynamic noise was predicted using the Lighthill analogy methods. The realizable k-epsilon two-layer model was chosen to predict the steady flow field of the resonant cavity. It has been used effectively for a wide variety of flow simulations, with excellent applicability in free flows with jets and mixed flows or flows with large separations (Shih et al., 1995). The transport equations for k and ε take the standard form; here, G_k is the production term of the turbulent kinetic energy k caused by the mean velocity gradient; G_b is the production term of k caused by buoyancy; Y_M represents the contribution of the fluctuating dilatation in compressible turbulence; C_ε1, C_ε2, and C_ε3 are empirical constants; σ_k and σ_ε are the turbulent Prandtl numbers corresponding to the kinetic energy k and the dissipation rate ε; S_k is a source term. LES (Lesieur and Metais, 1996) was performed to gain better insight into the noise analysis. During LES, the energy-containing eddies were resolved, while the small-scale structures in the dissipation range were modeled via the subgrid-scale stress term.
The governing equations employed for LES were the filtered Navier-Stokes and continuity equations. Here, u_i and u_j are the filtered velocity components (i and j index the coordinate directions), t is time, ρ is density, p is the filtered pressure, ν is the kinematic viscosity coefficient, and τ_ij is the subgrid stress. The Smagorinsky-Lilly model is an eddy-viscosity subgrid model, proposed by Lilly (Lilly, 1962), and is used to model the subgrid stress. To overcome the issue that the Smagorinsky-Lilly model constant is significantly too large in some turbulent flow problems, Germano proposed the dynamic Smagorinsky model used in the current study, which is based on the idea of eddy viscosity coefficients in Kraichnan spectral space (Germano et al., 1991). Lighthill Analogy for Aero-Acoustic Simulation Lighthill first proposed a hybrid method during the study of nozzle aerodynamic noise in 1952 and 1954, triggering the change from the N-S equations to the classic Lighthill equation, and marking the birth of modern aeroacoustics. To achieve compatibility with the formulation used in the paper, the alternative equation used for Lighthill's analogy in the frequency domain was Eq. 6. The spatial derivatives were partially integrated using Green's theorem to obtain the weak variational form, as shown in Eq. 7. This approach to treating the aerodynamic noise problem was intended to be used in low Mach number configurations (below 0.3), neglecting the convection and refraction effects in the propagation. This method needed to convert the velocity and density of the sound source area obtained by CFD into a sound source and then use Lighthill's analogy to obtain the sound propagation characteristics. The accuracy of the Lighthill analogy was dependent on the fluid and acoustic grids. For broadband noise, a coarse grid led to low accuracy in the middle and high frequencies, but too fine a grid would not be satisfactory if the computing resources or time were limited. Consequently, SNGR could be an option to solve this problem. AERO-ACOUSTIC ANALYSIS OF THE SINGLE PIPE VALVE Testing of the resonant pipeline-cavity system with a single pipe valve was performed at different flow velocities, as shown in Figure 3. For the single-valve system, the effective length and the inner diameter of the straight pipe were 4,000 mm and 110 mm, respectively, while the pipe valve had an inner diameter of 60 mm and a length of 120 mm. CFD Simulation Analysis The numerical model of the straight pipe with a single pipe valve was created consistently with the test. Considering that the current study primarily focuses on the aerodynamic noise characteristics of the pipe valve, the length of the computational domain was 1800 mm, about 30 D (D represents the diameter of the pipe valve), as shown in Figure 1A. The domain extended 600 mm upstream of the pipe valve and 1,200 mm downstream. In addition to the same seven acoustic measuring points on the wall as in the test, five static pressure measuring points were located in the flow. The position of point1 corresponded with V1. Point2 was located where the center lines of the pipe valve and straight pipe intersected, while point3 and point4 were located at the starting point and midpoint of the centerline of the pipe valve. Point5 was located downstream, 600 mm away from the pipe valve. The surface and volume meshes of the computational domain were created using ICEM meshing tools. The mesh size on the straight pipe and pipe valve was 4 mm.
Furthermore, the height of the first boundary layer mesh was 0.05 mm to ensure that y+ ≈ 4.0 at an inlet velocity of 80 m/s. The growth rate and the number of layers were 1.2 and 15, respectively. The boundary layer mesh was applied to all the surfaces, and its quality was acceptable. The total number of grid cells was about 3.9 million, and the middle plane of the grid is depicted in Figure 1B. The incoming velocity at the computational domain inlet was 80 m/s, while the pressure at the computational domain outlet was 0 Pa. Wall boundary conditions were used on the surfaces of the straight pipe and pipe valve. A compressible model and a pressure-based solver were used to carry out the aerodynamic calculations. The discretization of pressure, momentum, and energy was second-order upwind for the steady calculations, but the momentum discretization was changed to bounded central differencing for the unsteady calculations. First, the realizable k-epsilon model was run to initialize the flow field over 5,000 iterations, helping to obtain a quick and robust convergence of the unsteady simulation. Then, the computation was switched to an unsteady state. LES was run with a time-step of 0.0001 s and 25 iterations per time-step. The duration of 0.3 s was roughly 13 times the flow-through time from inlet to outlet. Data sampling began after the unsteady flow field was fully established. Data sampling was conducted for 1,000 time-steps, while no universal criterion was available for judging the convergence. Consequently, during this investigation, the calculation was considered convergent when each variable met the convergence criterion, which was about 10^-4. Furthermore, the pressure and velocity were monitored to confirm that the flow field variables did not change after multiple iterations. Over time, the static pressures at the five measuring points in the flow field displayed a typical periodicity. Figure 1C shows the time history of the static pressure at measuring point1, whose period was about 0.0016 s. The power spectral density analysis of each measuring point revealed a significant peak near 625 Hz in Figure 1D, which was significantly higher than at the other frequencies. Point4 was located in the pipe valve and exhibited the largest peak value, while point1 was upstream of the pipe valve and displayed the smallest peak value. For the current straight pipe with a pipe valve, an acoustic resonance phenomenon occurred, and the resonance frequency was calculated using Eq. 8. The theoretical frequency was consistent with the frequency obtained from the current unsteady flow-field simulation, indicating the accuracy of the unsteady flow-field simulation method. Herein, n is the resonance order; L is the length of the pipe valve; r is the radius of the cavity. Furthermore, to clearly reflect the changes in the flow field during a specific period, the velocity contours over three periods in the middle plane were chosen; they are shown in Figure 2. Due to the vortex impact, a feedback compression wave propagating upstream was generated at the trailing edge of the cavity. The feedback compression wave propagated upstream and finally reached the leading edge of the cavity. Consequently, a noise disturbance was induced, the shear layer at the leading edge was excited again, and the resonance period was closed. The periodic intermittent changes in velocity were indicative of such a flow mechanism.
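Eq. 8 mentioned above is the quarter-wavelength resonator formula for the side cavity. Its exact end-correction term is not reproduced in this text, so the sketch below assumes a common correction of about 0.6 times the cavity radius; with c ≈ 343 m/s, L = 120 mm and r = 30 mm this lands close to the 625 Hz peak discussed above. The function name and the 0.6 coefficient are our own assumptions, not the paper's.

```python
def quarter_wave_frequency(n: int, L: float, r: float, c: float = 343.0,
                           end_correction: float = 0.6) -> float:
    """n-th quarter-wave resonance of a closed side branch of length L and radius r,
    using an assumed end-correction coefficient (a stand-in for the paper's Eq. 8)."""
    L_eff = L + end_correction * r
    return (2 * n - 1) * c / (4.0 * L_eff)

# Single-valve geometry from the text: L = 0.12 m, r = 0.03 m
for n in (1, 2):
    print(f"n = {n}: f ~ {quarter_wave_frequency(n, 0.12, 0.03):.0f} Hz")
```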
Aero-Acoustic Simulation and Experiment Part of the fluid model was changed into an acoustic model to analyze the resonant cavity system containing a single pipe valve. The grid size of the acoustic model was 10 mm, ensuring that the number of points per wavelength exceeded 8 at a calculation frequency of 4,000 Hz. This meets the points-per-wavelength requirement of about 6-8. The velocity and density of the fluid model were converted into the sound source of the acoustic model by interpolation. Furthermore, sound propagation was computed using the Lighthill analogy in the frequency domain. The wall surface of the pipeline was treated as fully reflective, while the end surfaces at both ends of the pipeline were defined as modal boundaries of the pipeline. When the simulated sound wave was transmitted to the end surface of the pipeline, it propagated down the pipeline without reflection, simulating the true acoustic impedance of the cross-section of the pipeline. Since both the sound source area and the sound propagation area solved the sound wave equation, the acoustic measuring points could be arranged in these locations. The noise test of the resonant pipeline system with a single pipe valve was performed at different flow velocities, as shown in Figure 3. The test facility included an airflow generation system, the test piping, and the test system. The airflow system was composed of an air storage tank, flow control valve, straight pipe, front-stage reducer, rear exhaust port, and muffler room, and could produce an airflow of up to 100 m/s. As shown in Figure 3B, to explore the influence of different shapes and sizes of the valves on the acoustic results, multiple types of valve resonators with different sizes were selected in the experiment, including but not limited to an L11 model (with valve size ∅ 6 * 12 cm), an L12 model (with valve size ∅ 6 * 24 cm), and an L13 model (with valve size ∅ 4 * 12 cm). Finally, the L11 model was selected for the research described in this paper. The test pipeline system consisted of the main pipeline and the resonant cavity. Due to the high airflow velocity in the resonant cavity, the front end of the microphone was placed as close to the inner wall of the straight pipe and pipe valve as possible to reduce the impact of the airflow on the microphone and to truly reflect the airflow noise in both locations. Moreover, a small amount of porous sound-absorbing material was placed in front of the microphone, acting as a windproof ball. For the single-valve system, seven measuring points were arranged on the straight pipe and the pipe valve, as shown in Figure 3C. P1 and P2 were arranged upstream on the straight pipe, while P5-P7 were located downstream on the straight pipe. The distance between two adjacent measuring points was 150 mm. P3 was located in the center of the side of the pipe valve, and P4 was arranged on top of the pipe valve. A 1/4 inch MP401 pressure field microphone was used to collect the sound pressure at different airflow velocities, while the velocity in the straight pipe was measured using a Testo 512 differential pressure measuring instrument. An SQlab multichannel real-time analyzer was used to collect and assess the sound pressure. It should be noted that no interference was evident from other strong sound sources in the test environment. All the test data can be used to establish the relationship between the pipe resonance cavity and the sound field, verifying the hybrid simulation method.
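The points-per-wavelength check quoted above (a 10 mm acoustic mesh at 4,000 Hz) is easy to verify; the one-liner below is our own sanity check, assuming a speed of sound of about 343 m/s.

```python
def points_per_wavelength(freq_hz: float, element_size_m: float, c: float = 343.0) -> float:
    """Acoustic points per wavelength = wavelength / element size."""
    return (c / freq_hz) / element_size_m

print(f"{points_per_wavelength(4000.0, 0.010):.1f} points per wavelength")  # about 8.6 > 8
```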
The simulation and test results at the upstream and downstream acoustic measuring points of the pipe valve were selected and compared, as shown in Figure 4A. For the upstream measuring point P2, the simulated value of the first-order resonance frequency and magnitude was smaller than the test value of 130 dB at about 625 Hz. However, the difference in the second-order resonance frequency and magnitude exhibited an increase, while the differences at the third-order resonance were even more significant. Overall, the simulation of the sound pressure level (SPL) in the frequency band below 1800 Hz corresponded well with the experiment, but the agreement at higher frequencies was poor. For the downstream measuring point P5 in Figure 4B, the first-order resonance frequency did not differ much, but the magnitude displayed substantial differences. Except for frequencies exceeding 1800 Hz, the simulated SPL values differed little from the experimental values, and the overall agreement was good. The SPL contours at the resonant frequency and at two other frequencies are shown in Figure 4C, indicating that when acoustic resonance occurred, the sound pressure in the pipe was much more substantial than at other frequencies, and the sound pressure in the pipe valve exceeded that in the straight pipe. Furthermore, the downstream sound pressure in the pipe was higher than the upstream one. The frequency characteristics and acoustic resonance intensity could be improved by changing the form and position of the pipe valve, such as by rounding the connection. The following section analyzes the resonance characteristics at different velocities based on the test results. Figure 5 shows that the overall sound pressure level (OSPL) of the test point inside the pipe valve at the same airflow velocity was significantly higher than that of the other test points in the straight pipe. The OSPL increased rapidly as the velocity increased from 15 m/s to 45 m/s, and showed a gradual rise as the velocity increased from 60 m/s to 80 m/s. Due to the acoustic cavity resonance, the OSPL depended on the resonance peak. The OSPL of the same test point at a velocity of 45 m/s was higher than at 70 m/s. The small difference between the resonance peaks from 60 m/s to 70 m/s resulted in a small OSPL difference. The Strouhal number defined by Eq. 9 (St = f·d/v) was 0.83 at a velocity of 45 m/s and decreased to 0.47 at a velocity of 80 m/s, so the corresponding Strouhal number range was [0.47, 0.83]. Here, f is the acoustic resonance frequency, d is the diameter of the pipe valve, and v is the velocity.
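The Strouhal numbers quoted above follow directly from Eq. 9 with the roughly 625 Hz resonance frequency and the 60 mm valve diameter; the short check below (our own) reproduces the [0.47, 0.83] range.

```python
def strouhal(f_hz: float, d_m: float, v_ms: float) -> float:
    """Strouhal number St = f * d / v (Eq. 9)."""
    return f_hz * d_m / v_ms

f, d = 625.0, 0.060  # resonance frequency and pipe-valve diameter from the text
for v in (45.0, 80.0):
    print(f"v = {v:.0f} m/s -> St = {strouhal(f, d, v):.2f}")  # 0.83 and 0.47
```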
Model Introduction In this application, airflow through a single valve or branch duct system produced significant aerodynamic noise, which was used as a sound source in subsequent experiments. The inner diameter of the main pipe was 110 mm, and the diameters of the different branch ducts were 60 and 90 mm (with chamfering). As shown in Figure 6, the airflow was generated by an air tank, and the pressure in the main pipe was adjusted by the pressure control valve. When the airflow passed through the valve/side branch within a certain velocity range, obvious acoustic resonance was generated in the pipe. The flow rate of the airflow in the main pipeline was controlled by the flow control valve, and the flow finally exited to the external free field through the exhaust muffler. The adjustment range of the pressure control valve was 0.3-0.7 MPa; the adjustable valve could control the flow velocity in the main pipe from 20 m/s to 80 m/s. Simulation and Regular Models The straight pipeline and valve section used in the experiment were taken as the analysis object in the CFD simulation. As shown in Figure 7, the time-domain curve was processed by discrete Fourier transform (DFT) to obtain the pressure fluctuation in the frequency domain, as shown by the red line. There were several obvious characteristic peaks in the red line, corresponding to 300 Hz and its harmonic frequencies. These frequencies and peaks corresponded to the frequency (Hz) and pressure fluctuations (dB) of loadcase_5 in the 'monitors in branch duct' column in the following Table 1. As shown in Table 1, there was no acoustic resonance in loadcase_1 and loadcase_4 and there was no obvious characteristic frequency. In loadcase_2 and loadcase_5 obvious acoustic resonance occurred, with three distinct characteristic frequencies and characteristic peaks, respectively. The resonance frequency was concentrated at 300 Hz and its harmonic frequencies, and the pressure fluctuation amplitude was much higher than in the other working conditions. Obviously, loadcase_5 was the working condition with the strongest SPL in the main pipe, and loadcase_2 was the working condition with the strongest SPL in the resonant cavity/branch pipe. We imported the CFD data into the CAA program for the aeroacoustic calculation, and the simulation results were compared with and verified against the experimental results. The branch duct model without a chamfer and with an inner diameter of 60 mm and a height of 250 mm was selected for description. As shown in Table 2, there were 3 obvious characteristic peaks in the experimental and simulation results of the acoustic resonance phenomena for the different flow velocity conditions, and the results of the numerical simulation were in good agreement with the experimental data. When the flow velocity was 50 m/s, the maximum SPL of the acoustic resonance reached 179 dB, which was much larger than the SPL at velocities of 25 m/s and 75 m/s.
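The DFT post-processing described above turns a monitored pressure time history into a frequency-domain spectrum. The sketch below is our own minimal illustration of that step with NumPy, using a synthetic 300 Hz signal in place of the monitored data and the standard 20 µPa reference pressure; none of the names or values are taken from the paper.

```python
import numpy as np

def spl_spectrum(p: np.ndarray, fs: float, p_ref: float = 2e-5):
    """Single-sided SPL spectrum (dB re 20 uPa) of a pressure time series."""
    n = len(p)
    window = np.hanning(n)
    amp = 2.0 * np.abs(np.fft.rfft(p * window)) / np.sum(window)  # peak amplitude per bin
    p_rms = amp / np.sqrt(2.0)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spl = 20.0 * np.log10(np.maximum(p_rms, 1e-12) / p_ref)
    return freqs, spl

# Synthetic stand-in for a monitored point: a 300 Hz tone plus its first harmonic
fs = 10_000.0
t = np.arange(0, 0.3, 1.0 / fs)
p = 50.0 * np.sin(2 * np.pi * 300.0 * t) + 10.0 * np.sin(2 * np.pi * 600.0 * t)
freqs, spl = spl_spectrum(p, fs)
print(f"dominant peak at {freqs[np.argmax(spl)]:.0f} Hz, {spl.max():.1f} dB")
```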
APPLICATION 2: THE AERO-ACOUSTICS ANALYSIS OF THE MULTI-VALVES 5.1 Numerical Model and Sound Field Prediction The acoustic test of the resonant pipeline-cavity system with six pipe valves was performed at different flow velocities, as shown in Figure 8. For the multiple-valve system, the effective length and inner diameter of the straight pipe were 9,000 mm and 305 mm, respectively, while each pipe valve had an inner diameter of 64 mm and a length of 500 mm. The pipe valve spacing was about 1,000 mm. For the multiple-valve system, twelve measuring points were arranged on the six pipe valves, as shown in Figure 8A. Two measuring points were arranged on each pipe valve at a spacing of 80 mm. A flow velocity measuring point was established in the pipeline system, 300 mm in front of the resonant cavity. The numerical model of the straight pipe containing the six pipe valves was created consistently with the test. The length of the computational domain was 9,000 mm, about 30 D (D here being the diameter of the straight pipe). The domain extended 2,000 mm upstream of the pipe valves and 2,000 mm downstream. All the acoustic measuring point positions of the six pipe valves were the same as in the test. The surface and volume meshes of the computational domain were created via ICEM. The average mesh sizes on the straight pipe and pipe valves were 8 mm. Furthermore, the height of the first boundary layer mesh was 0.03 mm to ensure y+ ≈ 2.0 when the inlet velocity was 65 m/s. The growth rate and the number of layers were 1.2 and 15, respectively. The boundary layer mesh was applied to all the surfaces, and the mesh quality was acceptable. The total number of grid cells was about 26.6 million, and the middle plane of the grid is depicted in Figure 9. The velocity at the inlet of the computational domain was 65 m/s, at which the strongest acoustic resonance was displayed. The boundary conditions and calculation process were consistent with those of the single pipe valve, and will not be repeated here. Part of the fluid model was changed into an acoustic model to analyze the resonance cavity system with six pipe valves. The grid size of the acoustic model was 20 mm, ensuring that the number of points per wavelength exceeded 8 at a calculation frequency of 2,000 Hz. The sound field was predicted using the transient flow and the Lighthill analogy, while the boundary conditions and calculation process were also consistent with those of the single-pipe-valve system, and will not be repeated here. P3 and P5 were chosen to compare the differences between the sound pressure levels of the simulation and test results, as shown in Figure 10. The first-order resonance frequency of 480 Hz and the second-order resonance frequency of 820 Hz were found in the SPL spectra of both the simulation and the test. However, their magnitudes were distinctly different. As for P3, the simulated values of the first-order resonance peak and the second-order resonance peak differed by about 10 and 9 dB from the test values. Smaller differences were evident for P5, where only a 3 dB difference was apparent. Overall, the simulation of the SPL in the frequency band below 1,250 Hz corresponded well with the test, matching the cut-off frequency. The SPL contours of the first-order and second-order resonance frequencies are shown in Figure 11, indicating that when acoustic resonance occurred, the sound pressure in the pipe valves exceeded that in the straight pipe. The SPL at the second-order resonance frequency was higher than that at the first-order resonance frequency; at the first-order resonance frequency the SPL of the third pipe valve was the lowest, whereas at the second-order resonance frequency the lowest SPL shifted to the sixth pipe valve. The Sound Field Prediction via SNGR and Steady Flow The stochastic noise generation and radiation (SNGR) method resynthesizes flow field data containing the time term by adding random perturbations, based on the time-averaged velocity and turbulent kinetic energy obtained from the Reynolds-Averaged Navier-Stokes calculation results. The turbulent fluctuation velocity in Eq. 10 and Eq. 11 is synthesized using a stochastic model approach, constructed as a sum of N Fourier modes. SNGR produces sound sources equivalent to the volume Lighthill analogy sources in the frequency domain, as shown in Eq. 12. SNGR is based on stochastic isotropic turbulence theory, which is suitable for noise generated by small-scale eddies. It is challenging for SNGR to predict noise generated by middle- and large-scale eddies; however, this is exactly what traditional CFD and the Lighthill analogy can do. Therefore, a hybrid simulation method combining LES, the Lighthill analogy, and SNGR was used to predict the aerodynamic noise generated by the resonant cavities of pipeline valves.
u_i^t(x_j, t) = 2 Σ_{n=1}^{N} ũ_n cos(k_j^n x_j + φ_n + ω_n t) σ_i^n (10), with the mode amplitudes ũ_n determined from the turbulent energy spectrum (Eq. 11), typically ũ_n = sqrt(E(k_n) Δk_n). Here, E(k_n) is the turbulent energy density spectrum and Δk_n is the wavenumber step; ω_n is the angular turbulent frequency associated with the n-th velocity mode; φ_n is the random phase associated with the n-th velocity mode; k_n is the turbulent wavenumber associated with the n-th velocity mode; k_j^n is the random orientation of the turbulent wave vector associated with the n-th velocity mode; σ_i^n corresponds to the direction of the n-th velocity mode and is restricted to a plane perpendicular to k_j^n to ensure mass conservation; ⊗ denotes the convolution product; and f is the maximum frequency deduced from the Kolmogorov wavenumber. A requirement for aero-acoustic simulations based on unsteady CFD data is that the CFD mesh supports the maximum frequency targeted by the user, which is called the mesh cut-off frequency and depends on the turbulent quantities and the cell size present in the CFD mesh. The cut-off frequency was defined according to Eq. 13 and Eq. 14. Due to the significant turbulence dissipation rate and large grid size, the cut-off frequency in the main stream of the pipe was about 1800 Hz, which was consistent with the distortion frequency shown in Figure 12. One challenge pertained to what could be done in cases where higher-frequency noise might be important. By following the Buckingham π theorem (Buckingham, 1914), the cut-off frequency can be expressed in terms of the local turbulent quantities and the cell size. Here, k is the turbulent kinetic energy, ε is the turbulent dissipation rate, and Δx is the element size. SNGR can be used as a method for rapidly predicting the turbulent noise at middle and high frequencies. It generates several realizations of the turbulent velocity field, respecting the experimental and theoretical characteristics of the turbulence. Only the velocity, turbulent kinetic energy, and turbulent dissipation rate of the steady-state calculation were exported into the aero-acoustics computation procedure as sound sources. The sound propagation was performed using the Lighthill analogy and the same boundary conditions to obtain the sound pressures of both the near-field and the far-field. Furthermore, the SNGR and steady flow method was adopted to obtain the sound pressure at higher frequencies, and the comparison results are shown in Figure 13. The results of the transient flow and the Lighthill analogy corresponded well with the test results below 1,250 Hz, while their differences gradually increased as the frequency became higher. The blue curve shows that the result of the steady flow and SNGR method from 1,250 Hz to 5,000 Hz corresponded well with the test. Considering the calculation time and grid complexity of the two calculation methods, it is recommended again that the method involving transient flow and the Lighthill analogy be used for low frequencies while applying the technique involving steady flow and SNGR to middle and high frequencies for similar problems.
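To make the stochastic synthesis of Eq. 10 concrete, the sketch below builds a sample of synthetic turbulent velocity from a sum of random Fourier modes (each mode direction perpendicular to its wave vector, as required for mass conservation), and also shows a rough mesh cut-off estimate. The von Kármán–Pao spectrum, the mode-frequency model and the cut-off expression sqrt(2k/3)/(2Δx) are our own assumptions for illustration; they are not the paper's Eqs. 11–14.

```python
import numpy as np

rng = np.random.default_rng(1)

def sngr_velocity(x, t, k_turb, eps, n_modes=100, nu=1.5e-5):
    """Synthetic turbulent velocity at points x (shape (M, 3)) and time t,
    built as a sum of random Fourier modes in the spirit of Eq. 10.
    Spectrum and sampling choices are illustrative assumptions."""
    u_rms = np.sqrt(2.0 * k_turb / 3.0)
    k_e = eps / u_rms**3                      # inverse integral length scale (assumed)
    k_eta = (eps / nu**3) ** 0.25             # Kolmogorov wavenumber
    kn = np.logspace(np.log10(k_e / 10.0), np.log10(k_eta), n_modes)
    dk = np.gradient(kn)
    E = (1.453 * u_rms**2 / k_e) * (kn / k_e) ** 4 \
        / (1.0 + (kn / k_e) ** 2) ** (17.0 / 6.0) * np.exp(-2.0 * (kn / k_eta) ** 2)
    u_amp = np.sqrt(E * dk)                   # mode amplitudes, cf. Eq. 11
    phases = rng.uniform(0.0, 2.0 * np.pi, n_modes)
    omega = u_rms * kn                        # mode angular frequencies (assumed model)
    khat = rng.normal(size=(n_modes, 3))
    khat /= np.linalg.norm(khat, axis=1, keepdims=True)
    sigma = rng.normal(size=(n_modes, 3))
    sigma -= np.sum(sigma * khat, axis=1, keepdims=True) * khat   # sigma perpendicular to k
    sigma /= np.linalg.norm(sigma, axis=1, keepdims=True)
    phase = (x @ khat.T) * kn[None, :] + phases[None, :] + omega[None, :] * t
    return 2.0 * np.cos(phase) @ (u_amp[:, None] * sigma)

def mesh_cutoff_frequency(k_turb, dx):
    """Rough mesh cut-off estimate f_c ~ sqrt(2k/3) / (2*dx); an assumption,
    not the paper's Eqs. 13-14."""
    return np.sqrt(2.0 * k_turb / 3.0) / (2.0 * dx)

u = sngr_velocity(np.zeros((1, 3)), t=0.0, k_turb=150.0, eps=2.0e3)
print("synthetic velocity sample (m/s):", np.round(u[0], 2))
print(f"cut-off for illustrative k=150 m^2/s^2, dx=4 mm: {mesh_cutoff_frequency(150.0, 0.004):.0f} Hz")
```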
CONCLUSION The resonance cavity system with either a single pipe valve or six pipe valves was investigated via numerical simulation and testing. The traditional method involving unsteady flow and the Lighthill analogy was used to understand the fluid and acoustic characteristics of the resonant cavity system. The acoustic resonance phenomenon occurred within a specific velocity range between 45 m/s and 80 m/s, and the strongest acoustic resonance appeared at a velocity of 80 m/s; the corresponding Strouhal number range was [0.47, 0.83]. The energy of the acoustic resonance was primarily concentrated in the pipe valve. The traditional method allows the SPL below the cut-off frequency to be acquired with excellent consistency between the simulation and the test. However, more substantial differences become evident as the frequency increases. A new method involving steady flow and SNGR is proposed to resolve the differences encountered at middle and high frequencies. The consistency over the entire frequency band shows that combining the traditional method with this new technique is the ideal choice when confronted with limited time and computing resources. Therefore, it is recommended that the traditional method involving transient flow and the Lighthill analogy be used for low frequencies while applying this new technique involving steady flow and SNGR to middle and high frequencies for similar problems.
7,971.6
2021-09-27T00:00:00.000
[ "Physics", "Engineering" ]
Research of vibration resistance of non-rigid shafts turning with various technological set-ups The article considers the determination of the stability range of a dynamic system in the turning of non-rigid shafts with different technological set-ups: the standard one and the ones developed and improved as a result of this research. The topicality of the study is due to the fact that machining such parts involves significant difficulties caused by deformation of the workpiece under the cutting force and by the occurrence of vibration of the part during machining; these effects are so intense that in practice they force a significant reduction of the cutting regime and a resort to multiple-pass operation, lead to premature wear of the cutter and, as a result, reduce the productivity of machining shafts on metal-cutting machines. In this connection, the purpose of the present research is to determine the boundaries of the stability regions for intensive turning of non-rigid shafts. In the article, the basic theoretical principles for constructing a mathematical model of the dynamic machine system, oriented to the process of non-free cutting, are justified. By means of the developed mathematical model, interrelations are established and regularities of the influence of various technological set-ups on the stability of the dynamic machine-device-tool-blank system are revealed. The conducted research allows a more objective representation of the complex processes that occur in the closed dynamic system of a machine tool. Introduction Components comprising bodies of rotation account for 30% of all machine components. The most labour-consuming to manufacture are parts characterized by low rigidity: various shafts, torsion bars, axles, stems, leading cylinders, non-rigid shafts, etc. Non-rigid shafts are those whose length-to-diameter (L/D) ratio is over 12 (L/D > 12). Due to the low rigidity of the machined non-rigid shaft, the technological system machine-device-tool-blank is compliant to external transverse forces and to the dynamic factors of the cutting process [1]. Machining such components is quite difficult because the component deforms under the cutting force and vibrates during machining. These factors can be so strong that the cutting regime has to be reduced or multiple-pass operation has to be used, and they decrease the durability and operational life of the cutting tool [2][3][4][5][6][7]. Vibrations are especially unwanted in finishing operations with cutting at shallow depth, when even small vibratory movement of the component and the cutter in the cutting zone can lead to defective parts. The problem of vibration is topical in metal processing on CNC machines, as it leads not only to a decrease in machining accuracy: vibrations in the cutting zone can also lead to accelerated wear of the machine. At the same time, uncontrolled mechanical oscillations with a comparatively large amplitude are a limiting factor for increasing the productivity of the cutting process [8,9]. Such oscillations appear due to the presence and mutual influence of the technological cutting conditions, external perturbing forces and the parameters of the elastic system of the lathe. That is why the efficiency of non-rigid shaft machining depends mainly on providing the conditions of processing stability. At present, stability is judged mainly according to the experience of the technologist.
The existing methods do not provide sufficient accuracy, as the mathematical models used for the calculations are oversimplified. Successfully coping with this research issue depends not only on the traditional approaches, but also on the availability of mathematical models able to describe the interdependence between the vibrations of the elastic system of the machine and the dynamic cutting process. Thus the aim of the research is to work out a mathematical model by means of which it is possible to calculate the limits of the stability zones in linear turning of non-rigid shafts for different kinds of technological equipment of a turning lathe. The tasks are the following: 1. To work out the methods of researching the mathematical model with respect to vibration resistance in linear turning of non-rigid shafts; 2. To obtain a characteristic equation in order to realize the mathematical model of vibration resistance; 3. To determine the stability zones of the dynamic system under research in lathe work on non-rigid shafts with different technological equipment. Materials and Methods The research used the materials and methods of cutting theory, strength of materials, and mathematical modeling. The influence of various technological equipment of turning lathes on vibration resistance was studied by the method of successive identification of the parameters of the developed mathematical model. Mathematical modeling of vibration resistance of linear turning of non-rigid shafts To calculate the stability of the dynamic system in question, let us consider the analytical model (Pic. 1). Let us assume that the shaft vibrates in the plane that passes through the symmetry axis of the machined shaft. In the cutting area of the analytical model there is a support attachment group of the machined shaft (Table 1). When researching the first and the fourth kinds of technological set-up, support 3 is not taken into account. Below, the method of researching the stability of the mathematical model is described. The equation of free oscillations of a non-rigid shaft is the standard beam equation EJ ∂⁴U/∂x⁴ + m ∂²U/∂t² = 0, where EJ is the flexural rigidity of the machined shaft, U is the deflection of the machined shaft, and m is the mass per unit length of the shaft. In the cutting process the cutter excites a perturbing force which is applied in some intermediate section of the shaft, dividing its length into two parts. Thus it is necessary to determine eight constants from the four conditions at the ends of the shaft and four conditions of connection. The conditions of connection are the continuity conditions at the section where the force is applied. The cutting force is expressed through a time-delay relation, where τ is the rotation period of the shaft (lag time) and K(1), K(2) are the dynamic cutting characteristics of the first and second type. Applying the boundary conditions and substituting into equation (6), U0(p) is obtained, and F(p) is determined in terms of Δ(p, x0), the characteristic equation of the open-loop system. The research of stability zones for different kinds of equipment of a turning lathe In lathe work with non-rigid shafts, in order to research the influence of different kinds of technological equipment of a turning lathe on vibration resistance, it is most convenient and economical to use the method of successive identification of the parameters of the developed mathematical model, which is limited to the process of restricted oblique cutting. Choosing W = -1/H as the complex parameter on the basis of which the curve of D-division is drawn, it is possible to find the limiting cutting depth for different outfits. The calculations presented were made with the help of a suite of programs in which the method of stability calculation "in the small" is used, according to the following algorithm. 1. For each value of the angular rate the transfer function is calculated, and the hodograph of the transfer function of the researched system is drawn on the complex plane (Re W, Im W) (Pic. 2). 2. All the crossing points of the hodograph with the real axis Re W are found. 3. Among these points, the one most remote from the origin of coordinates is chosen; it is the boundary of the stability area "in the small", and the corresponding value of the cyclic frequency is the frequency of excitation of self-oscillations.
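The boundary-search procedure just described (draw the open-loop hodograph W(jω), find its crossings with the real axis, and take the crossing most remote from the origin as the stability boundary "in the small") is generic enough to sketch in a few lines. In the Python sketch below the transfer function is a deliberately simple placeholder (a lightly damped oscillator combined with a cutting delay), not the shaft model of the article; only the search logic mirrors the described algorithm.

```python
import numpy as np

def hodograph(transfer, omegas):
    """Evaluate W(j*omega) over a frequency grid."""
    return np.array([transfer(1j * w) for w in omegas])

def stability_boundary(transfer, omegas):
    """Find real-axis crossings of the hodograph and return the one most remote
    from the origin, as in the described 'in the small' procedure."""
    W = hodograph(transfer, omegas)
    crossings = []
    for i in range(len(W) - 1):
        a, b = W[i], W[i + 1]
        if a.imag == 0.0 or a.imag * b.imag < 0.0:          # sign change of Im W
            t = abs(a.imag) / (abs(a.imag) + abs(b.imag) + 1e-30)
            crossings.append(((1 - t) * a.real + t * b.real,
                              (1 - t) * omegas[i] + t * omegas[i + 1]))
    re, freq = max(crossings, key=lambda c: abs(c[0]))      # most remote crossing
    return re, freq

# Placeholder open-loop model: elastic system plus a regenerative cutting delay tau
def transfer(p, wn=2 * np.pi * 120.0, zeta=0.03, tau=0.01):
    elastic = wn**2 / (p**2 + 2 * zeta * wn * p + wn**2)
    return elastic * (1.0 - np.exp(-p * tau))

omegas = np.linspace(1.0, 2 * np.pi * 400.0, 20000)
re, freq = stability_boundary(transfer, omegas)
print(f"boundary point Re W = {re:.3f} at {freq / (2 * np.pi):.1f} Hz")
```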
The value of the limiting cutting depth for stability "in the small" was calculated for all the kinds of outfit under research (Table 1). In the table: 1 means processing in a lathe self-centering chuck with clamping by the rear turning centre; 2 means processing in the centres with driving equipment; 3 means processing in a lathe self-centering chuck with cam readjustment and clamping by the rear turning centre with readjustment; 4 means processing in the leading riffled centre with clamping by the rear rotating centre with readjustment; 5 means the same as 4 but using a steady-rest vibration suppressor moving along the axis of the machined component. Conclusion The following results were obtained in the research: 1. Methods of determining the stability zone of a dynamic system in turning non-rigid shafts with different outfits were worked out. 2. A characteristic equation for calculating the limiting cutting depth by means of a computer was obtained. 3. By means of the developed mathematical model, the boundaries of the stability zones for different types of outfits of a lathe were determined. The methods of researching the vibration resistance of linear turning of non-rigid shafts make it possible to reflect the complex dynamic processes in the closed-loop system of a lathe more objectively. With the help of the mathematical model it is possible to determine the boundaries of the stability zones in intensive lathe machining of non-rigid shafts, and to find the connection between the stability zones, the technological system machine-device-tool-blank, and the limiting cutting depth for different outfits of a lathe. On the basis of the analysis it is possible to give important practical recommendations.
2,073.4
2017-01-01T00:00:00.000
[ "Materials Science" ]
Restoring electronic coherence/decoherence for a trajectory-based nonadiabatic molecular dynamics By utilizing the time-independent semiclassical phase integral, we obtained modified coupled time-dependent Schrödinger equations that restore coherences and induce decoherences within original simple trajectory-based nonadiabatic molecular dynamic algorithms. Nonadiabatic transition probabilities simulated from both Tully's fewest switches and semiclassical Ehrenfest algorithms follow exact quantum electronic oscillations and amplitudes for three out of the four well-known model systems. Within the present theory, nonadiabatic transitions estimated from a statistical ensemble of trajectories accurately follow those of the modified electronic wave functions. The present theory can be immediately applied to the molecular dynamic simulations of photochemical and photophysical processes involving electronic excited states. The electronic equations of motion are written in terms of the density matrix ρ_kj (eq. (1)), where R is an N-dimensional vector of nuclear coordinates and an overdot denotes a time derivative (N-dimensional velocity), with the sum over all adiabatic electronic states. Tully's fewest switches 1 and the semiclassical Ehrenfest 2 algorithms, which are the two representative methods, provide a powerful tool to perform nonadiabatic molecular dynamics simulations with simple trajectory-based approaches. These trajectory-based nonadiabatic molecular dynamics methods with various modified versions have been successfully applied to photochemistry- and photophysics-related molecular spectra and reaction dynamics for large systems [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17]. Tully's fewest switches method propagates a nonadiabatic trajectory on each adiabatic electronic state with trajectory surface hopping from one adiabatic potential energy surface to another. The semiclassical Ehrenfest method propagates a nonadiabatic trajectory on averaged adiabatic electronic states, in which a single mean-field potential energy surface governs nuclear motion. Both methods suffer from an overcoherence problem in the electronic wavefunction (or density matrix ρ_kj in eq. (1)) when the nonadiabatic trajectory passes through a non-zero region of the nonadiabatic coupling vector. In order to reproduce the exact coherent motion of the quantum wavefunction propagating on multiple adiabatic electronic states, the semiclassical density matrix described by eq. (1) should decohere. Numerous algorithmic coherence/decoherence schemes in the literature have been designed for nonadiabatic trajectories coupled with electronic motion [18][19][20][21][22][23][24][25][26][27][28][29][30][31][32]. All of these approaches can improve the decoherence/coherence behaviour of the electronic wavefunction to some extent, but at either high computational cost or through the use of complicated algorithms. The aim of the present work is to restore electronic coherence/decoherence in a unified way for both Tully's fewest switches and the semiclassical Ehrenfest methods. Let us examine the solution of eq. (1) in the regions where the nonadiabatic coupling vector is equal to zero; there the off-diagonal density matrix element evolves as ρ_kj(t) = ρ_kj(0) exp[−(i/ħ)∫(U_k − U_j) dt′] (eq. (2)). Let us compare the phase integral in eq. (2) with the conventional time-independent Jeffreys-Wentzel-Kramers-Brillouin 33 phase integral, ∫√(2μ(E − U(R))) dR/ħ (eq. (3)), where μ is the reduced mass, E is the total energy and T = (E − U) is the kinetic energy. From the trajectory-based point of view, R in eq. (3) can be considered as a classical trajectory propagating along a one-dimensional curved space. Alternatively, we can obtain the same relation as eq. (4), in which U(R) is an effective potential energy surface for the nonadiabatic trajectory.
For Tully's fewest switches approach, U(R) is a single adiabatic potential energy surface U_j(R) or U_k(R) on which the trajectory is propagating, and for the semiclassical Ehrenfest approach, U(R) is defined as an average potential energy surface (eq. (5)). Finally, we obtain the modified coupled time-dependent Schrödinger equations (eq. (6)). Equation (4) shows that the time-dependent and time-independent approaches have the same limit in the high-energy regime. We can therefore expect that both the original and modified coupled Schrödinger equations should agree with each other in the high-energy regime, namely for E − U ≫ |U_k(R) − U_j(R)|. Using a similar approach to the semiclassical Ehrenfest method, the symmetrical windowing quasi-classical approach 27 (which is as simple as the present theory) restores coherence/decoherence quite well with the windowing parameter γ = 0.366. However, numerical tests in Supplementary Note 1 show that the present modified Ehrenfest approach is slightly better than the symmetrical windowing quasi-classical approach. Using a similar approach to Tully's fewest switches trajectory surface hopping method, the Gaussian wavepacket phase correlation method 28 (which is more complicated than the present theory) restores coherence/decoherence in a similar way as the present one, and the two are actually the same in the one-dimensional case. However, the two are different in the multidimensional case: their diagonal element in the Hamiltonian does not approach the limit U_k(R) − U_j(R) in the high-energy regime, and their decoherence term (p_1 · p_2/m) requires calculation of the classical momentum on two adiabatic potential energy surfaces simultaneously. Numerical tests in Supplementary Note 3 show that the present modified Tully's fewest switches approach can nicely reproduce the quantum oscillation for a certain two-dimensional model system. We select four well-known model problems in the following to perform both semiclassical Ehrenfest and Tully's fewest switches calculations from the original eq. (1) and the modified eq. (6). Model 1 describes the electronic transitions in the non-crossing case of adiabatic potential energy surfaces initially developed by Rosen and Zener 34. Model 2 and Model 3 describe the electronic transitions in the avoided-crossing case of adiabatic potential energy surfaces initially developed by Landau 35, Zener 36 and Stückelberg 37. Model 4 describes the electronic transitions in the crossing case with the peculiar degeneracy of adiabatic potential energy surfaces initially developed by Renner 37,38. All these pioneering studies focused on developing analytical formulas for the nonadiabatic transition probability using various mathematical methods, nicely documented by Child 39. The present study focuses on simulating the nonadiabatic transition probability for all cases by numerically solving the original eq. (1) and the modified eq. (6). We compute the overall nonadiabatic transition probability defined as the probability of starting on the lower adiabatic potential energy surface at x = −∞ and finishing on the upper adiabatic potential energy surface at x = +∞. Accurate quantum mechanical calculations for the four one-dimensional two-state models are performed using the conventional time-independent close-coupling method. Semiclassical calculations are performed by using the fourth-order Runge-Kutta method for numerically integrating the trajectories as well as the coupled time-dependent Schrödinger equations.
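As a concrete picture of the trajectory-coupled electronic propagation referred to above, the sketch below integrates the conventional two-state coupled equations (i.e. the unmodified eq. (1), written in the adiabatic representation) along a prescribed constant-velocity trajectory with a fourth-order Runge-Kutta step. The potential gap, the nonadiabatic coupling function and the trajectory are hypothetical placeholders chosen only for illustration; the paper's modified eq. (6) is not reproduced here.

```python
import numpy as np

HBAR = 1.0  # atomic units

def deriv(c, x, v, U, d12):
    """Right-hand side of the conventional two-state coupled equations:
    dc_k/dt = -(i/hbar) U_k(x) c_k - v * sum_j d_kj(x) c_j  (adiabatic representation)."""
    U1, U2 = U(x)
    d = d12(x)                      # nonadiabatic coupling, d_21 = -d_12
    dc1 = (-1j / HBAR) * U1 * c[0] - v * d * c[1]
    dc2 = (-1j / HBAR) * U2 * c[1] + v * d * c[0]
    return np.array([dc1, dc2])

def propagate(c0, x0, v, t_end, dt, U, d12):
    """RK4 propagation of the electronic amplitudes along x(t) = x0 + v*t
    (a hypothetical prescribed trajectory, for illustration only)."""
    c, x = np.array(c0, dtype=complex), x0
    for _ in range(int(t_end / dt)):
        k1 = deriv(c, x, v, U, d12)
        k2 = deriv(c + 0.5 * dt * k1, x + 0.5 * dt * v, v, U, d12)
        k3 = deriv(c + 0.5 * dt * k2, x + 0.5 * dt * v, v, U, d12)
        k4 = deriv(c + dt * k3, x + dt * v, v, U, d12)
        c += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += dt * v
    return c

# Placeholder adiabatic surfaces and a Lorentzian-like coupling (not the paper's models)
U = lambda x: (-0.01, 0.01)
d12 = lambda x: 1.0 / (1.0 + (4.0 * x) ** 2)
c = propagate([1.0 + 0j, 0.0 + 0j], x0=-10.0, v=0.02, t_end=1000.0, dt=0.05, U=U, d12=d12)
print("populations:", abs(c[0]) ** 2, abs(c[1]) ** 2, "sum:", (abs(c) ** 2).sum())
```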
Rosen-Zener non-crossing. For a given total energy, both the Tully's fewest switches and the semiclassical Ehrenfest calculations are performed for Model 1 with the parameters chosen to be A = B = 0.025 Hartree, x_0 = 3.0 Bohr, and C = 0.7 Bohr−2. This model can induce electronic coherence from the overlap of two semiclassical wavepackets in between the two non-crossing peaks at x = ±3.0 Bohr, as shown in Fig. 1(a). The results simulated by Tully's fewest switches (see Fig. 2(a)) and semiclassical Ehrenfest (see Fig. 2(b)) within the modified semiclassical eq. (6) follow the exact quantum oscillations and amplitudes of the overall nonadiabatic transition probabilities very well. The results simulated from the original semiclassical eq. (1) cannot reproduce the exact quantum results, and moreover the Tully's fewest switches and semiclassical Ehrenfest methods do not agree with each other in the oscillations and amplitudes of the overall nonadiabatic transition probabilities.

Dual Landau-Zener-Stückelberg avoided-crossings. Model 2 is defined by two diabatic potentials having dual crossings in the diabatic representation. This model can induce a different electronic coherence from the overlap of two semiclassical wavepackets in between the two avoided crossings at x = ±2.07 Bohr, as shown in Fig. 1(b). The results simulated by Tully's fewest switches (see Fig. 3(a)) and semiclassical Ehrenfest (see Fig. 3(b)) within the modified semiclassical eq. (6) follow the exact quantum oscillations and amplitudes of the overall nonadiabatic transition probabilities very well. The results simulated from the original semiclassical eq. (1) cannot reproduce the exact quantum results, and moreover the Tully's fewest switches and semiclassical Ehrenfest methods show very different oscillations and amplitudes of the overall nonadiabatic transition probabilities. This model system has been well studied in the literature.

Simple avoided crossing. Model 3 is defined by two diabatic potentials having one simple crossing in the diabatic representation (eq. (9)), with A = C = 0.01 Hartree, B = 1.6 Bohr−1, and D = 1.0 Bohr−2. A single diabatic crossing appears at x = 0, as shown in Fig. 1(c). However, the strong Gaussian-type diabatic coupling can produce two weak avoided crossings around x = ±1.0 Bohr, and thus it can induce a small electronic coherence at low energies, as shown in Fig. 4. The results simulated by Tully's fewest switches (see Fig. 4(a)) and semiclassical Ehrenfest (see Fig. 4(b)) within the modified semiclassical eq. (6) reproduce this small oscillation and the amplitudes of the nonadiabatic transition probabilities well. The results simulated from the original semiclassical eq. (1) cannot reproduce this small oscillation well; although the Tully's fewest switches and semiclassical Ehrenfest methods almost agree with the exact quantum amplitudes of the overall nonadiabatic transition probabilities at high energies, the two are not exactly the same.

Renner-Teller crossing. Model 4 is defined by two diabatic potentials with the same V_11(x) and V_22(x) as in eq. (9); however, the diabatic coupling is changed to a Renner-Teller form. The modified methods cannot follow the exact calculation of the overall nonadiabatic transition probability, as shown in Fig. 5. This means that the restoring term in eq. (6) is still not good enough for describing electronic transitions in the significantly degenerate case; this term basically comes from an approximation of the kinetic energy operator in the nuclear degrees of freedom. Further study should be carried out in the near future.
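For readers who wish to reproduce Model 3 (and the V_11, V_22 shared by Model 4), the diabatic form below is a hedged reconstruction rather than a quotation of eq. (9): the single crossing at x = 0, the Gaussian-type diabatic coupling and the quoted parameters A, B, C and D all match Tully's original simple-avoided-crossing model, so that standard form is assumed here; the Renner-Teller coupling used in Model 4 is not reconstructed.

V_{11}(x) = \begin{cases} A\,\big(1-e^{-Bx}\big), & x \ge 0,\\ -A\,\big(1-e^{\,Bx}\big), & x < 0,\end{cases}
\qquad V_{22}(x) = -V_{11}(x),
\qquad V_{12}(x) = V_{21}(x) = C\,e^{-Dx^{2}} .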
Concluding remarks. In three out of the four model systems given above, we have shown that the results simulated from the modified semiclassical eq. (6) follow the exact electronic coherence as well as the amplitude of the overall nonadiabatic transition probabilities, and the semiclassical Ehrenfest and Tully's fewest switches methods agree with each other, even for the small oscillation in Model 3. On the other hand, the results simulated from the original semiclassical eq. (1) cannot follow the exact electronic coherence, and moreover the semiclassical Ehrenfest (Tully's fewest switches) method shows slightly greater (smaller) amplitudes of the overall nonadiabatic transition probabilities than the exact quantum results. This can be seen clearly for Model 3 in the region where the overall nonadiabatic transition probability increases monotonically with energy. More tests have been performed with changes of the potential parameters, and the conclusion for the modified coupled Schrödinger equations is the same. The present modified eq. (6) only modifies the diagonal elements of eq. (1), while preserving the original simplicity of the Tully's fewest switches and semiclassical Ehrenfest algorithms. For instance, the detailed balance behavior of the present Tully's fewest switches method should follow that of the original Tully's fewest switches and global flux surface hopping 41, while the present semiclassical Ehrenfest method should follow the detailed balance of the symmetrical quasi-classical treatment of the Meyer-Miller (MM) model 42. In multi-state nonadiabatic molecular dynamics simulations, global flux surface hopping 43 shows promising accuracy in comparison with Tully's state-to-state surface hopping; it can be immediately applied to the present coupled electronic Schrödinger eq. (6) to perform multi-state nonadiabatic molecular dynamics simulations. However, it should be noted that when two adiabatic potential energy surfaces are significantly degenerate in the crossing zone with Renner-Teller coupling, the present modified method still fails. Otherwise, we conclude that within the present theory both the semiclassical Ehrenfest and Tully's fewest switches algorithms work equally well, and both follow the electronic coherence/decoherence of the exact quantum results. The present theory can be immediately applied to nonadiabatic molecular dynamics simulations of photochemical and photophysical processes involving electronic excited states.
2,546.6
2016-04-11T00:00:00.000
[ "Physics" ]
INCREASING METAPHOR AWARENESS IN LEGAL ENGLISH TEACHING In legal language, metaphors are a fundamental way to express and apprehend abstract notions. For instance, responsibility is perceived as WEIGHT (“the burden of proof”), falsehood or unacceptability as a DECAYING LIVING BEING (“the fruit of the rotten tree”) or the law can be used as a WEAPON (“take the law into your own hands”, “use the law as a sword and not as a shield”). This has now been accepted by the academic community, which not only recognizes the value of metaphors in legal language, but has started to pay due attention to the way they operate, as witness the interesting contributions made in the field. However, this has not yet led to metaphors being incorporated as an important component of legal ESP. In our paper, we shall argue for the inclusion of metaphors in the teaching of Legal English and suggest a few sample exercises based on our teaching practice. Our intention is to prove the usefulness of this component within a complicated variety of ESP, where metaphors may both provide relief from other more intricate areas and also help learners to understand the concepts underlying such metaphors. And he doesn't have any capital other than the fungus that grows between his toes.And if his feet are teeming with microbes, his mouth is as fresh as a head of lettuce and his tongue more tangled than a pile of seaweed. Antonio Skarmeta, The Postman METAPHOR IN ESP Back in 1991, an issue of English for Specific Purposes contained an article by Seth Lindstromberg entitled "Metaphor in ESP: A Ghost in the Machine".This title, which probably summarizes a whole attitude towards figurative language, takes us back to an era where the presence of metaphor outside literary texts was felt as a rare occurrence, and even, as we shall see later, an undesirable one.The author complained that the growing interest in metaphor in the 1980s had not been reflected in TESOL and ESP books (Lindstromberg, 1991: 208). 
Plenty of studies have proved the pervasiveness of metaphor in specialized languages, and thanks to the efforts of cognitive linguists (especially after Lakoff & Johnson, 1980), it is now recognized that, rather than an aberration or an extraordinary occurrence in language, metaphors are basic for our apprehension of the world, especially when it comes to abstract concepts (Gibbs, 1996). The lexicon of economics contains frequent metaphorical extensions of common words (Fuertes-Olivera & Velasco-Sacristán, 2001) and features both lexicalized or "dead" metaphors alongside metaphors which never cease to appear, such as MONEY IS SOLID 1 (Silaški & Kilyeni, 2014), animal imagery in the conceptualization of inflation (Silaški & Đurović, 2010) or neologisms based on the metaphor THE ECONOMY IS BIOLOGY (Resche, 2002). Many authors have also pointed out the need for the analysis of metaphors from a contrastive point of view, either for language learning, for translation purposes, or simply to point out the cultural differences faced by learners. Regarding comparative studies, contributions have been made, for example, on metaphors in Spanish and English financial reporting (Charteris-Black & Ennis, 2001), or more specifically, on the MONEY IS A LIQUID metaphor in English, Serbian and Romanian (Silaški & Kilyeni, 2011). Even within varieties of English there are works, for example, contrasting colour metaphors in Hong Kong vs British business corpora (Lan & MacGregor, 2009: 11-15).

As regards language teaching, many have argued for the inclusion of metaphors in both general and ESP courses. Concerning the former, the interest in metaphor spurred by Lakoff and Johnson's Metaphors We Live By (1980) led to studies on the effectiveness of metaphors; Deignan, Gabryś, and Solska (1997) have pointed out their ability to promote autonomous learning among advanced students, while Lazar (1996, 2003) provides various examples of exercises facilitating inferencing at various levels, Beréndi, Csábi, and Kövecses (2008) have empirically proved how conceptual metaphors and metonymies facilitate the learning of figurative idioms, and Boers (2000b) has shown how a structured awareness of source domains helps towards the retention of unfamiliar idiomatic expressions. The main argument supporting the introduction of metaphor in syllabi is put forward by Danesi (1993), who argues that foreign learners tend to lack "conceptual fluency", i.e. while they may master the formal structures of the target language, they usually continue to "think in terms of their native conceptual system" (Danesi, 1993: 491).

All this research has led to metaphor being made a part of general English learning materials (and, in general, of any type of teaching; see Low, 2008: 216). As a result, great progress has been made since the times when teaching materials seemed to "shy away from any kind of utilization of metaphor" (Danesi, 1993: 197). On the one hand, specific language teaching handbooks have been designed based on figurative language (Lazar, 2003); on the other, vocabulary learning 1

1 In this paper, we have used the traditional typographical conventions for metaphors, i.e.
small capitals (e.g. RESPONSIBILITY IS WEIGHT). In-text references to words corresponding to metaphorical expressions are between inverted commas if they are in English ("under") or in italics if they are in other languages (bewijslast). In the exercises proposed, bold type is used for emphasis, instead of italics, which may appear naturally in legal texts (for instance, ordre public, which appears in English legal instruments in italics), or inverted commas, which are also found when a legal text refers to another source.

As for Languages for Specific Purposes (LSP), Boers (2000a) pointed out the general need to enhance metaphor awareness in specialized reading, while the inclusion in ESP economics courses has been justified by Charteris-Black (2000), who provides corpus-based support (see also Silaški, 2011; White, 2003). Beyond economics, other practitioners have proved the need to include metaphors in engineering (Roldán-Riejos & Úbeda Mansilla, 2005) or even the specific language of wine tasting, or "winespeak" (Caballero & Suárez-Toste, 2008). However, in legal language teaching, probably because it is a less developed field (in August 2016, a Google Scholar search with "legal English" + ELT returned 507 results, as compared to 4,110 for "business English" + ELT), there seem to be no specific studies regarding figurative language in legal English, nor was there any paper on legal English metaphors in the 2009 special issue of Ibérica, the journal of the European Association of Languages for Specific Purposes. Such is the status quo in which our proposal is put forward: we shall argue the case for the inclusion of metaphor in the teaching of legal English. In the following section, a brief review of the literature on metaphor and legal language will be provided; then, a number of suggestions and potential exercises will be presented in order to increase awareness of figurative language in legal language teaching.

METAPHOR IN LEGAL LANGUAGE

A review of the literature on metaphor and legal language seems to echo the situation we described above on whether metaphor was present (at all), and if so, was acceptable in legal language; in fact, one can come across categorical statements such as Tiersma's (1999: 128) "because of the seriousness of the topic [law], we can safely assume that humor, irony, figurative usage, and similar literary devices will be avoided". However, in the face of the undeniable presence of metaphors in legal language, the debate rather seems to focus on whether they are desirable or not. On the one hand, as has been the case in other areas (for opponents of metaphor in medical discourse see Gotti, 2015: 11), there are those who considered it "undesirable", such as Judge Cardozo of the US Supreme Court, who explicitly said in 1927 that "Metaphors in law are to be narrowly watched, for starting as devices to liberate thought, they end often by enslaving it" (Berkey v. Third Avenue Railway Co 244 N.Y. 602), or, for instance, Anderson (1991: 1214-1215):
It [Metaphor] is useful because it is evocative, but it may evoke different ideas in different readers. It liberates the author from some of the rigidity of exposition, but also from the demands of precision and clarity. The subtlety that makes metaphor the poet's boon can be the lawyer's bane; while poetry aims to stir a personal and individual response, the law instead strives for the universal or at least the general. Legal metaphors are invaluable when they are not too imprecise or ambiguous for the task at hand. When they convey different messages to different people, however, they produce confusion, misunderstanding, and frustration.

On the other hand, most legal scholars have come to accept metaphors as an important component of legal reasoning, simply because of the all-pervasive presence of metaphor in any form of human communication (see, for instance, Murray, 1984; Winter, 2008). Indeed, metaphors say a lot about how we approach legal concepts, especially considering that law, as aptly pointed out by Orts (2015: 30), is "an ideological artefact". When it is said, for instance, that "fundamental rights and fundamental legal principles are enshrined in Article 6 of the Treaty" (the emphasis is ours), it is said about such rights and principles: (a) that they are sacred, comparable to deities, since a 'shrine' is a holy place consecrated to a deity, and, etymologically, (b) that society is a building whose foundations include such rights and principles. In fact, the word "foundations" is one of those cases which one might no longer perceive as a metaphor, but it has been argued that most of legal language is metaphorical, at least etymologically, including apparently non-metaphorical terms like "appeal", "prove" or "case" (Watt, 2013). Indeed, the use of the preposition "under" with legal instruments (e.g. "under Section XXX of the Act", "under the present regulations") is of a metaphorical nature, as aptly recalled by Larsson (2014), and even those who oppose metaphor in legal language inadvertently use metaphors: Larsson (2013: 366) points out that Judge Cardozo, while warning against metaphor, uses images such as "liberate" and "enslave" as applied to thought. In general, those who criticize the use of metaphor do so only in certain contexts, or regarding specific metaphors; e.g. Oldfather (1994) criticizes the use of baseball metaphors in judicial opinions.

Once the importance of metaphor for law, its language and its translation has been established, the following section shall explore our experience with the teaching of legal metaphors and present some exercises to enhance learners' awareness of metaphorical language in legal vocabulary.
A PILOT TEST: LEGAL METAPHORS IN EUROPEAN LANGUAGES

In order to obtain initial information on the real linguistic difficulties and implications of legal metaphors, a brief, informal test was conducted on a group of 13 European non-English-speaking judges and prosecutors during a course in English for Criminal Cooperation organized by the European Judicial Training Network (EJTN) in Lisbon, including, amongst others, speakers of French, German, Dutch, Italian, Portuguese and Spanish. During the test, the participants were given the sentences below, all of them containing a metaphorical expression, and asked to translate them into their respective languages, the initial purpose being to see whether the equivalent metaphor in their languages was similar to the one in English. One of the purposes of this test was to gauge the possibility of negative transfer in metaphors (see, for instance, James, 2010). The contexts were the following (the metaphorical expressions are shown in bold type here, but were not revealed as such to the participants):

1. In criminal cases, the burden of proof is placed on the prosecution.
2. The prosecutor must prove the defendant's guilt beyond a reasonable doubt.
3. The 1962 Convention provides that if an offense is time-barred in the Requested Party, extradition shall not be granted.
4. Visas are allowed under the Schengen agreement but under certain conditions.
5. Prosecutors had unlawfully threatened him with a heavier sentence unless he agreed to surrender himself for trial in the US.
6. There is a new legal framework for extradition.
7. The person concerned should be heard on the arguments which he invokes against his extradition.
8. The principle of speciality is one of the traditional tools in the extradition framework included in the European Convention on Extradition.
9. He was not informed about the charges against him, which was a reason not to extradite him.
10. Under the EU Arrest Warrant mechanism, pending a decision, the executing authority hears the person concerned.

In this respect, similar results have been obtained in other experiments (e.g. Crawford Camiciottoli, 2005, who found that most economic metaphors are shared by audiences in Britain and Italy). In our opinion, this supports our case for the inclusion of metaphors in our legal English ESP courses, since they constitute "familiar ground" and may act as confidence-boosters in what otherwise is a fairly demanding variety of the language. For this reason, in the following sections we shall propose a number of specific exercises in order to raise metaphor awareness and expand the lexical resources of legal professionals. As we shall explore in some of the exercises we will propose further on, "similar" does not mean "equal", and our vocabulary work must emphasize the need for accuracy and the avoidance of variability.

A FEW PROPOSALS TOWARDS INCREASING METAPHOR AWARENESS IN LEGAL ESP COURSES

Our first exercise was developed on the basis of those proposed for general English and Polish speakers by Deignan et al.
(1997: 356). Immediately after this exercise, or as an initial exercise if the group is a multilingual one, a similar fragment may be proposed containing metaphors. Ideally, the example chosen should be very familiar to the learners, which would serve a double purpose: on the one hand, it would prove relevant to their professional practice, and on the other, it would show them to what extent they have been exposed to, and using, figurative language without realizing. The following example was given to the judges and prosecutors attending a course on the language of human rights, based on the fact that most of the metaphors were common to their native languages, the purpose being to make them aware that both the English expressions and their counterparts in each language are of a figurative nature:

Exercise 2
What is special about the words and expressions in this fragment from the Tampere conclusions of October 1999 regarding immigration policy?
The European Union needs a comprehensive approach to migration addressing political, human rights and development issues in countries and regions of origin and transit. This requires combating poverty, improving living conditions and job opportunities, preventing conflicts and consolidating democratic states and ensuring respect for human rights, in particular rights of minorities, women and children. To that end, the Union as well as Member States are invited to contribute, within their respective competence under the Treaties, to a greater coherence of internal and external policies of the Union. Partnership with third countries concerned will also be a key element for the success of such a policy, with a view to promoting co-development.

Another awareness-creating exercise, which may also foster cross-cultural reflection, is the following one, modelled after Lazar (2003: 9):

Exercise 3
Working in pairs, discuss these questions, and compare the answers to what is said in your respective native languages.
a) How is to follow procedure the same as to follow someone in the street?
b) How is access to justice the same as access to a building?
c) If a judge is deaf, can he or she hear a case?
d) When jurors are given directions, do they go anywhere?
e) Why do claimants ask for a remedy, if they are not sick?
f) How can a statutory requirement be inflexible if it is an abstract concept?
g) When it is said that legalizing drugs would put us on a slippery slope, are we likely to have a fall or suffer a physical injury?
h) If the burden of proof is on the prosecution, do they need to be physically strong to carry it?
A problem for teaching metaphor is that the explanation may also be based on a metaphor, as pointed out by Deignan (2003); also, the same identification between source and target domain (e.g.A CORPORATION IS A HUMAN BEING) may lead to different metaphors, such as "legal person", "in the company's hands", etc.With this in mind, an initial type of exercise may be developed in which the purpose is not the specific wording of a metaphor, but the underlying identification of the abstract concepts, or in other words, the source domain.In general, and as we pointed out earlier, this task may boost the learners' confidence, since, at least in Western cultures, the legal metaphors are very similar (which eliminates one of the obstacles mentioned by Danesi [2003: 77], for whom asymmetry between conceptual frameworks is inimical to "naturalness" in student discourse).For this purpose, the following two exercises try to help learners to identify specific source domains: Exercise 4 The following sentences, all from US Supreme Court cases, contain words and expressions related to war, fighting and struggling.Which are they?The first one has been done for you.a) Our starting place was not the same as that of advocates seeking the aid of the courts in the struggle against race discrimination.b) Accepting a case for review includes the existence of a conflict between the decision of which review is sought and a decision of another appellate court.c) The defendant fought some of the land-use and trespass citations.d) Plaintiffs previously defeated in state court filed suit in a Federal District Court.e) In clashes of governmental authority there was small risk that the state courts would find for the Federal Government.f) The Court confronted Nebraska's argument that the procedure was safer.g) The defence attacked the verdicts on appeal as inconsistent and urged a reversal of the convictions.h) The plaintiff was armed with all the information that he needed to file a federal complaint.i) Guzek's defense rested in part upon an alibi.j) The measure that invades privacy is the subject to a Fourth Amendment challenge. Vol. 4(2)(2016): 165-183 Once an initial understanding of metaphor has been gained -trying to restrict technical language to a minimum -and after a brief explanation to learners of what metaphors are and the fact that they are all based on identifications between two fields (e.g.REASONING IS MOVEMENT), the following exercise explicitly asks them to classify expressions into metaphorical patterns: , and immediately clothe him with all the privileges of a citizen in every other State?e) Furthermore, no "clear notice" prop is needed in this case given the twin pillars on which the Court's judgment securely rests.f) It does not follow that the rights can be disregarded so long as the trial is, on the whole, fair.g) Oregon v. Elstad, 470 U. S. 298, reflects a balanced and pragmatic approach to enforcing the Miranda warning.h) Regulations approved under Montana all flow from these limited interests.i) Such "wilful misconduct" is best read to be included within the realm of conduct that may constitute an "accident" under Article 17. 
j) The CDC will have the burden of demonstrating that its policy is narrowly tailored with regard to new inmates as well as transferees.k) The challenge lies in ensuring that the flood of non-meritorious claims does not submerge and effectively preclude consideration of the allegations with merit.l) The DSL, by placing sentence-elevating factfinding within the judge's province, violates a defendant's right to trial by jury.m) The Government insists that Jenkins found paralegal fees recoverable under the guise of "attorney's fee[s]".n) The issue is whether the sentencing jury had been unable to give effect to [Cole's] mitigating evidence within the confines of the statutory 'special issues.'o) The judgment does not constitute a forbidden intrusion on the field of free expression.p) The starting point in discerning congressional intent, however, is the existing statutory text.q) The structure of the statute also suggests that subsection (iii) is not limited to the intentional discharge of a firearm.r) Thus cloaked in the "purpose" of the Commerce Clause, the rule against discrimination that the Court applies to decide this case exists untethered from the written Constitution.s) To resolve these challenges a hearing officer must make a decision based on whether the child "received a free appropriate public education."t) Turning a blind eye to federal constitutional error that benefits criminal defendants, allowing it to permeate in varying fashion each state Supreme Court's jurisprudence, would change the uniform "law of the land". Vol. Once the metaphorical domains have been established, it is explained to the students that the identification made by the metaphor results in a given expression or collocation.With this information, a further exercise may concentrate on the specific lexical embodiment of each metaphor: Vol. 4(2)(2016): 165-183 Exercise 6 Please fill in the gaps in the following sentences with the words given, and then match the metaphorical identification to the expression resulting from it.The first one has been done for you. Missing words: burden, clash, core, distance, fuel, gratuitous, outweigh, propagate, proportionality, undermine Metaphors: AN (CONFLICTING LAWS ARE VEHICLES COLLIDING WITH EACH OTHER) b) It was held that it was not clear that the need to satisfy the public's concern to know the truth may ______________ the need to protect national security.(____________________________________) c) …it thus enables everyone to participate in the free political debate which is at the very _________ of the concept of a democratic society.(____________________________________) d) The purpose of the report could not objectively be regarded as having been to ___________ racist ideas and opinions.(___________________) e) There was no proof that the description of events given in the articles was totally untrue or calculated to ________ a defamation campaign.(____________________________________) f) The suspect, a known right-wing extremist, was also suspected of attempts to ___________ democratic society.(___________________________) g) In the Court's view the editorial could be considered polemical but did not constitute a ___________ personal attack, as the author gave an objective explanation. 
(____________________________________) h) A journalist had been convicted of failing to impart fair information by quoting excerpts from an article that questioned the honesty of a body of civil servants, where the journalist did not __________ himself from the comments.(____________________________________).i) A judgment that Article 10 had not been violated was delivered in the McVicar case, concerning the _________ of proof placed on a journalist and his conviction of defaming a sportsman by accusing him of using illegal performance-enhancing drugs.(___________________________) j) As the case involved a restriction of freedom of expression in a matter of public interest, the Court carefully considered the ____________ of the measures imposed.(____________________________________) However, it is also necessary to address the specific wording of each metaphor, given the highly idiomatic (and therefore, often invariable) nature of some metaphorical expressions.It is true that there are rare exceptions where there may be lexical flexibility in metaphorical expressions, e.g. it is possible to call evidence illegally gathered "the fruit of the rotten tree", but also of the "poison tree" and even of the "poisonous tree", but in general, metaphorical expressions, like many Vol.4(2)(2016): [165][166][167][168][169][170][171][172][173][174][175][176][177][178][179][180][181][182][183] idioms in general, often do not allow for lexical flexibility (see, for instance, Gibbs, Nayak, Bolton, & Keppel, 1989;Glucksberg, 1993: 19-23); for instance, "under" may not be replaced by "beneath" in *beneath Section XX of the Act, or "barred" may not be replaced by "prohibited" or "banned" in *time-prohibited or *time-banned.This is where two types of exercises may be prepared: the first type would be simple matching or gap-filling tasks, but also specific error prevention exercises.The matching task could take the following shape: Regarding the error prevention exercise, a similar task may be proposed where the word may be based on the underlying metaphor, but the lexical embodiment is wrong.Both this one and the previous task help learners to learn the specific form used, but also in general to understand that these are fixed idiomatic expressions Vol.4(2)(2016): [165][166][167][168][169][170][171][172][173][174][175][176][177][178][179][180][181][182][183] (the students are told that words may not be replaced by synonyms, etc.), and as such may not be modified lexically or syntactically: Exercise 8 In the following sentences, the words in bold type are wrong, although these words are based on the same comparison.Replace the word by the correct one.The first one has been done for you. 
a) When you use the law in order to make life miserable for other people and not to protect yourself, you use law as a(n) gun _sword_, rather than a(n) armour __________.b) Someone who does not trust legal procedures and decides to act against other people who may have caused them harm are wielding __________ the law into their own arms __________.c) If a jury is completely sure that a defendant is guilty, they consider that he/she is guilty outside __________ a reasonable doubt d) He has been loaded __________ with aggravated murder.e) The original decision was inverted __________ by the appeal court.f) The victims agreed not to push _________ charges if offered compensation.g) Sentencing directions ___________ help judges decide the appropriate sentence for a criminal offence.h) The European Courts have prohibited discrimination on lands ________ of sexual orientation. Exercise 9 Rewrite the following sentences using the words given, considering the metaphors in each case.The first one has been done for you.Another issue which greatly influences the approach is the type of audience at which the exercises (and the materials) are addressed.Given the great specialization of the legal profession, a careful selection of the metaphors might be performed, since, for instance, some metaphors might be irrelevant to lawyers specializing in family law. Vol. 4(2)(2016): 165-183 However, even in those cases, a two-level approach may be used, in which introductory exercises might show general legal metaphorical expressions which might be known to all legal professionals regardless of the area (e.g."the long arm of the law", "take the law into your own hands"), and then a specialized exercise might deal with those specific to each area.For instance, a course addressed at lawyers specializing in copyright law might present the metaphors pointed out by Larsson (2013), or another dealing with cybercrime might work on metaphors dealing with THE INTERNET IS A PHYSICAL SPACE ("cyberspace", "domain", "deep web", "dumpster diving"), CYBERCRIME IS PHYSICAL AGGRESSION ("cyberattack", "cyberbullying", "brute force attack", "logic bomb", "mail bombing"), etc. CONCLUSIONS The starting point of our paper was the need to translate theory into practice: if it is now accepted by both linguists and legal scholars that legal language is metaphorical and that metaphors are basic to understanding law and language, it naturally follows that metaphors must be integrated into LSP courses and language learners must be aware of the figurative component of legal language.In our case, the inclusion of metaphors in the syllabus of intensive courses in English for legal cooperation was perceived as beneficial in all cases: where they coincided in different languages, they provided "familiar ground" making learners more comfortable with legal English; where they did not coincide, it ensured correct acquisition with reduced negative transfer; and in both cases, an awareness of figurative language helped the learners to structure their input in comparison with their native languages. 
In the case of legal metaphors, both our pilot study and the usage of the above activities with legal practitioners from various European countries seem to show that the conceptual basis of Western law is largely multilingual, and that most usual metaphors seem to coincide: therefore, language work should concentrate on non-variability and specific wording, e.g. ensuring correct word choices ("burden", not "charge" or "weight") and correct collocations ("burden" collocates with "heavy", "carry", "bear").

Also, in those courses whose purpose is twofold, i.e. language and content, reflection on metaphor contributes to a better understanding of legal concepts, and also to dialogue and discussion in multinational classes. We tend to agree with Danesi (1993), for whom metaphorical encoding is largely unconscious, and therefore it is necessary to create an awareness of such content. Thus, it has been found that learners enjoy becoming aware of lexicalized metaphors, such as the traditional imagery underlying the English legal system, including identifications such as LAW IS A PERSON, ACTIONS ARE MOTIONS, CONTROL IS UP/THINGS CONTROLLED ARE DOWN, RIGHTS (AND OTHER LEGAL RULES) ARE PATHS, RATIONAL ARGUMENT IS WAR and RIGHTS ARE POSSESSIONS (Winter, 1989), or contrasts between languages (TRYING CASES IS HEARING in English vs. TRYING CASES IS SEEING in other languages). Judges and prosecutors in the aforementioned courses rapidly became familiar, through the exercises proposed, with the correct metaphors in English, but also commented with each other on their respective national metaphors, which supported one of the aims of these courses entitled "English for judicial cooperation" (for more discussion on these courses and English as a lingua franca in European legal cooperation, see Campos, 2010).

Regarding difficulties encountered and further research avenues, it must be emphasized that the selection of metaphors is a potentially problematic issue, since some metaphors may pertain to general or argumentative discourse, and not specifically to legal language (e.g. IMPORTANCE IS WEIGHT). Thus, when preparing English for Legal Purposes materials, time and space constraints should be considered, and specific criteria might be applied to metaphor selection. For instance, in general law courses, lexicographical repertoires might be the guiding criterion ("is the expression included in legal dictionaries?"), whereas corpora and/or native legal experts could be used in order to decide inclusion in specialized courses. Also, new insights might be gained by expanding the source materials for metaphors to more informal materials, such as academic journals, in order to check for potential variation, which would in turn be relevant for training purposes (e.g. if the exercises are addressed at academics desiring to write papers in English), or by exploring metaphor quantitatively in legal corpora (following the methodology developed, for instance, by Breeze [2015]).

Another interesting area for further research which has emerged during classroom sessions, half-way between ESP and comparative lexicography, is the prescriptivism vs. descriptivism debate, i.e.
whether the "correct" metaphor in some languages is the "genuine" one, or the one that has become usual because of the influence of English.For instance, in English doubt is conceived as a BOUNDARY, and hence the expression "beyond a reasonable doubt".In some Western languages, a coexistence can be observed of this notion (Port.além da dúvida razoável, It. oltri ogni ragionevole dubbio, Sp. más allá de cualquier duda razonable) with the more "traditional" DOUBT IS A THREE-DIMENSIONAL SPACE, as seen in Port.fora de qualquer dúvida razoável, It. fuori da ogni ragionevole dubbio, Sp. fuera de cualquier duda razonable.In this area, it might be interesting, through the development of diachronic corpora, to see what the evolution of the expressions has been in these languages, and to what extent the English metaphor has become the prevailing one. , all from Supreme Court opinions, contain metaphorical expressions.Classify them according to the underlying imagery.The first ones in each category have been done for you.a) 28 U. S. C. §1254(1)'s grant of appellate jurisdiction does not give this Court license to depart from an established review standard.b) A statute dealing with a narrow, precise, and specific subject is not submerged by a later enacted statute covering a more generalized spectrum.Because the reasoning of Cooley and State Freight Tax has been rejected entirely, they provide no foundation for today's decision.d) Does the constitution of the United States act upon him whenever he shall be made free under the laws of a State […] IDEA IS A (FLESHY) FRUIT WITH SEEDS AN IDEA IS A LIVING BEING WHICH CAN REPRODUCE ITSELF CONFLICTING LAWS ARE VEHICLES COLLIDING WITH EACH OTHER FINDING EVIDENCE IS HEAVY HUMAN ACTIONS ARE VEHICLES IMPORTANT THINGS ARE HEAVIER THAN LESS IMPORTANT THINGS SOCIETY IS A BUILDING UNNECESSARY THINGS HAVE NO MONETARY VALUE TO AGREE IS TO BE TOGETHER, TO DISAGREE IS TO BE SEPARATED WHEN SOMETHING IS SUITABLE, IT HAS THE RIGHT SIZE a) The existence of regulations relating specifically to publications of foreign origin would seem, in the Court's view, to clash head on with the wording of paragraph 1 of Article 10 of the Convention. the sentences, taken from the Council of Europe Convention on Cybercrime, choose the correct word from the options given, considering the metaphorical identification in brackets in each case.Only one answer is correct.1.A Party may reserve the right not to impose criminal liability under paragraphs 1 and 2 of this article in limited circumstances, provided that other effective _____________ (CRIME IS A DISEASE) are available.a) cures b) medicines c) remedies 2. 
Each Party shall adopt such legislative and other measures as may be necessary to ensure that legal __________ (CORPORATIONS ARE HUMAN BEINGS) can be held liable for a criminal offence established in accordance with this Convention.Party shall ______ (LEGAL MEASURES ARE CHILDREN) such legislative and other measures as may be necessary to establish as criminal offences under its domestic law.other measures as may be necessary to ensure that the criminal offences established in accordance with Articles 2 through 11 are punishable by effective, ______________ (FAIRNESS IS APPROPRIATE SIZE) and dissuasive sanctions, which include deprivation of liberty.each Party shall consider the _______ (EFFECTS ARE PHYSICAL BLOWS) of the powers and procedures in this section upon the rights, responsibilities and legitimate interests of third parties.Party shall adopt such measures as may be necessary to establish jurisdiction _________ (CONTROL IS UP, THINGS CONTROLLED ARE DOWN) the offences referred to in Article 24the receiving Party accepts the information subject to the conditions, it shall be _______ (OBLIGATIONS ARE PHYSICAL RESTRAINTS) by them.to the provisions of paragraph 2, this assistance shall be governed by the conditions and procedures provided for __________ (CONTROL IS UP, THINGS CONTROLLED ARE DOWN) domestic law. a) The law treats everybody equally.eyes (THE LAW IS A HUMAN BEING + JUDGING IS LOOKING) ____We are all equal in the eyes of the law _____________________ b) A legislative or executive act can be challenged because it is unconstitutional.grounds (ACTIONS ARE BUILDINGS, ARGUMENTS AND IDEAS ARE LAND) __________________________________________________________ c) The Constitutional court has assessed the importance of all the arguments.weighed (IMPORTANCE IS WEIGHT) __________________________________________________________ d) The current human rights doctrine is the result of the decisions of the European courts.shaped (IDEAS ARE TRIDIMENSIONAL OBJECTS) __________________________________________________________ e) The court's ruling does not coincide with the previous case law.departs (LEGAL TRADITIONS ARE JOURNEYS) __________________________________________________________ f) The judgment was considered not valid by the Court of Appeal.set aside (LEGAL DECISIONS ARE PHYSICAL OBJECTS) __________________________________________________________ Exercise 1 Read the following text, extracted from a ruling from the Spanish Supreme Court (Tribunal Supremo). Is there something in common among the words and expressions in bold type? Las personas jurídicas de Derecho público no son titulares del derecho al honor que garantiza el art.18.1 de la C.E. Respecto de ellas, se predican otros valores que pueden ser tutelados por el legislador, como la dignidad, el prestigio y la autoridad moral.No obstante, las personas jurídicas privadas en un sentido amplio, que abarca a asociaciones, partidos políticos, sindicatos y fundaciones, sí gozarían de este derecho. 
English translation of the text, with figurative expressions in bold type: Legal persons in Public Law do not possess a right to honour as provided (literally: "guaranteed") by Article 18.1 of the Spanish Constitution. Concerning such persons, other values are provided (literally: "preached") which may be protected (literally: "guarded") by the legislator, such as dignity, good name and moral authority. However, private legal persons in a broad sense, including (literally: "covering") associations, political parties, trade unions and foundations, do possess (literally: "enjoy") such right.
8,176
2016-12-01T00:00:00.000
[ "Law", "Linguistics" ]
Targeting Angiogenesis in Squamous Cell Carcinoma of the Head and Neck: Opportunities in the Immunotherapy Era Simple Summary Therapies for squamous cell carcinomas of the head and neck (SCCHN) have been rapidly evolving, initially with the inclusion of immunotherapy, but more recently with the consideration of anti-angiogenic therapies. Recent preclinical and clinical data reveal a strong correlation between vascular endothelial growth factor (VEGF) and the progression of SCCHN, with nearly 90% of these malignancies expressing VEGF. Our review article not only elaborates on the utility of anti-VEGF therapies on SCCHN but also its interaction with the immune environment. Furthermore, we detailed the current data on immunotherapies targeting SCCHN and how this could be coupled with anti-angiogenics therapies. Abstract Despite the lack of approved anti-angiogenic therapies in squamous cell carcinoma of the head and neck (SCCHN), preclinical and more recent clinical evidence support the role of targeting the vascular endothelial growth factor (VEGF) in this disease. Targeting VEGF has gained even greater interest following the recent evidence supporting the role of immunotherapy in the management of advanced SCCHN. Preclinical evidence strongly suggests that VEGF plays a role in promoting the growth and progression of SCCHN, and clinical evidence exists as to the value of combining this strategy with immunotherapeutic agents. Close to 90% of SCCHNs express VEGF, which has been correlated with a worse clinical prognosis and an increased resistance to chemotherapeutic agents. As immunotherapy is currently at the forefront of the management of advanced SCCHN, revisiting the rationale for targeting angiogenesis in this disease has become an even more attractive proposition. Clinical Evidence for Targeting VEGF in SCCHN Anti-angiogenic agents have gained significant importance as therapeutic options for various malignancies [1][2][3]. Biologically, tumor proliferation and growth depend on nutrient and blood delivery, mediated through new vessel formation, which is the process of angiogenesis [4]. Increased vascular density has been reported to be associated with tumor progression and metastases [2,5]. Therefore, therapies targeting pro-angiogenic factors have been a focus of interest in oncology over the past 2 decades [6,7]. Angiogenesis is a multi-step process involving the protease breakdown of basement membrane allowing for the migration and proliferation of endothelial cells, leading to the formation of a new lumen with a basement membrane, pericytes, a remodeled extracellular matrix, and ultimately anastomoses with blood flow [8]. These intricate processes and their inhibition likely play a major role in impacting the tumor microenvironment where immune cells often reside. In addition, tumor cells heavily depend on this mechanism for their own development and are unable to expand past 2-3 mm 3 given diffusion-dependent resources [5]. Since both immune-mediated factors and those that promote or inhibit angiogenesis coexist in the tumor microenvironment, exploring possible anti-tumor synergistic mechanisms targeting these two cancer-related processes (immunity and angiogenesis) seems attractive. Angiogenesis is largely instigated by the activation of tyrosine kinase receptors, notably vascular endothelial growth factor (VEGF), epidermal growth factor (EGF), platelet derived growth factor (PDGF), and fibroblast growth factor (FGF) [1,[9][10][11]. 
The upregulation of these angiogenic factors typically corresponds to increased vascularity, lymph node metastasis, inadequate response to cytotoxic chemotherapy, and advanced disease with poor prognosis [1,10]. Up to 90% of SCCHNs have been shown to express VEGF, which promotes immunosuppression in different ways, namely by reducing T-cell extravasation across vessel walls, enhancing regulatory T-cell differentiation, stimulating dendritic cell PD-L1 expression which decreases T-cell activation, and finally, by directly inhibiting the differentiation of myeloid stem cells to mature immune regulators by binding their VEGF receptor 1 [4]. This ultimately raises the question of VEGF's role in tumorigenesis and its possible influence on prognosis in SCCHN. Several observational reports have attempted to correlate VEGF with clinical or pathologic findings in SCCHN. Tanigaki et al. examined the expression of VEGF-A and -C, and their receptors, Flt-1 and Flt-4, in biopsy specimens taken from 73 patients with tongue carcinoma by immunohistochemistry [12]. Multivariate analyses revealed VEGF-C expression to be an independent factor predicting lymph node metastasis [12]. There were notable differences between VEGF-C-positive and VEGF-C-negative cases in terms of 5-year overall survival (51.7% vs. 94.2%, respectively) [12]. Cheng et al. similarly applied immunohistochemistry to examine the expression of VEGF in 100 specimens of oral cavity carcinoma, as well as 66 oral epithelial dysplasias and 36 normal mucosae [13]. There was a gradual increase in VEGF through the different dysplasia grades from normal mucosa to invasive carcinoma, indicating that VEGF expression is at least a possible predictor of tumor progression [13]. They also showed a correlation between VEGF levels and lymph node metastases (p = 0.022) as well as worse survival (p = 0.016) and advanced clinical stage (p = 0.046) [13]. In a similar fashion, Seibold et al. investigated VEGF and its receptor tyrosine kinase 1 (FLT-1) in patients with locally advanced squamous cell carcinoma who had been treated with adjuvant radiotherapy or chemoradiotherapy; their study showed a correlation between VEGF expression and loco-regional control (LRC), metastasis-free survival, and overall survival (OS) [14]. However, other studies have shown mixed results in terms of outcomes and survival. One 30-patient study of patients with laryngeal cancer showed an association of VEGF expression with lymph node involvement but not with treatment outcomes [15]. Similarly, a 40-patient analysis of SCCHN correlated VEGF expression with staging, but no statistically significant connection existed with disease-free survival or OS [16]. Notably, a meta-analysis evaluating five different biomarkers, including VEGF, in oral tongue squamous cell carcinoma with regard to their prognostic significance on OS yielded insufficient and inconclusive results [17]. There are four general categories of anti-angiogenic agents: ligand-directed antibodies, receptor-directed antibodies, small molecule inhibitors, and immunomodulatory agents [4]. While there are no Food and Drug Administration (FDA)-approved anti-angiogenic agents for SCCHN, several studies have used VEGF and VEGFR inhibitors in the treatment of SCCHN.
While certain tyrosine kinase inhibitors (TKIs) have shown some activity against angiogenesis in preclinical studies, this did not consistently translate into meaningful clinical activity in SCCHN. Sorafenib and sunitinib are TKIs with activity against multiple receptors and have shown moderate response to SCCHN in phase II trials [18][19][20][21]. However, many adverse side effects, most commonly fatigue (32%) and grade 3-5 bleeding (16%), were commonly seen with sunitinib [20,21]. Axitinib was studied in a phase II trial with an overall low response rate (6.7%) but an encouraging disease-control rate of 77% and an OS of 10.9 months [22]. It is important to clarify that while anti-angiogenic agents can treat malignancy, they have rarely been associated with curative potential as single agents. A combinatorial approach with cytotoxic therapy has yielded improved responses and disease control with these agents [23]. A combination approach with chemotherapy has been tested in advanced SCCHN in a phase III clinical trial E1305 comparing platinum therapy (cisplatin or carboplatin) plus either docetaxel or 5-fluorouracil (5-FU) with or without bevacizumab for patients with recurrent or metastatic SCCHN [24]. In this 403-patient cohort, the addition of bevacizumab led to an improved median progression-free survival (PFS) from 4.3 months to 6.0 months (HR 0.71; p = 0.0012) and an improved overall response rate (ORR) from 24.5 to 35.5% (p = 0.013) [24]. However, the median OS was 12.6 months with chemotherapy + bevacizumab versus 11 months with chemotherapy alone, without a statistically significant difference (HR 0.87, 95% CI 0.70-1.0, p = 0.22), but with higher observed treatment-associated toxicities in the bevacizumab arm, most notably grade 3-5 bleeding [24]. Despite the fact that the study did not meet its primary endpoint, it did show the clinical activity of anti-angiogenesis in SCCHN, namely in its ability to prolong PFS, and opened the door for further investigation of this approach. Something to note is that E1305 preceded the era of immunotherapy. Anti-angiogenic agents have also been investigated in combination with radiotherapy and epidermal growth factor inhibitors such as cetuximab [4,[25][26][27]]. The Immune Correlation with Anti-Angiogenesis Before investigating the effects of angiogenesis inhibition on the immune system, we must consider the consequences that powerful angiogenic regulators such as VEGF have in order to create an immunosuppressive environment by downregulating immune effector cells [9]. An example is the effect on natural killer cells (NKs), where VEGF causes reduced NK cytotoxicity leading to immunosuppression [28]. VEGF also inhibits dendritic cell (DC) maturation [9], notably by binding with VEGFR-2, attaching to the surface of DCs, and directly impeding nuclear factor-kB signaling [29]. Aside from preventing DC differentiation, VEGF obstructs DCs from presenting antigens to T cells by upregulating programmed death ligand-1 (PD-L1) expression, which in turn exerts an effect on T-cell activation [30]. Consequentially, VEGF blockade relieves the restrictions on DC migration and immune capacity via increased antigen presentation and may be a promoter of anti-tumor immunity [31]. In mouse models with glioblastoma, anti-VEGF resulted in the increased co-stimulatory expression of B7-1, B7-2, and MHC class II molecules, creating more advanced dendritic cell identity [32]. 
Similarly, pro-angiogenic molecules directly act on T lymphocytes by binding with VEGFR-2 and upregulating immune checkpoints such as PD-1 and cytotoxic T-lymphocyteassociated protein 4 (CTLA-4) [33]. This ultimately leads to the upregulation of regulatory T cells (Tregs) [33]. VEGF has also been shown to act directly on T lymphocytes, with the most notable effect being its inhibition of hematopoietic stem cell differentiation to CD8+ and CD4+ T cells [34][35][36]. This was effective in causing T-cell deficiency and atrophy of the thymus when examined in cancer patients and animal models with tumors [34]. In oral squamous cell carcinoma specifically, VEGF has been shown to enhance the secretion of prostaglandin E2, which interrupts T-cell activation [37]. Besides preventing T-cell adhesion to vessel wall and subsequent extravasation to the tumor site [30], VEGF also inhibits helper T-cell recruitment to the tumor site [38] and promotes immunosuppressive cells such as Tregs [30,39] and myeloid-derived suppressor cells (MDSCs) [40] by binding with VEGFR-2. It also achieves this upregulation of Tregs by combining with the co-receptor neuropilin 1 [41]. Pro-angiogenic molecules repress adhesion factors and chemokines such as CXC chemokine ligands 10/11, vascular cell adhesion molecule-1 (VCAM-1), intracellular adhesion molecule-1 (ICAM-1), and endothelial leukocyte adhesion molecule 1 (ELAM-1), which would normally attract NK and CD8+ T cells [42,43]. Therefore, anti-angiogenesis improves T-cell infiltration of the tumor environment by upregulating adhesion molecules on nearby vessels [44]. Overall, anti-angiogenic therapy therefore reprograms the microenvironment, favoring an upregulation of immunomodulators and more potent anti-tumor response. These findings provide sufficient evidence to support the hypothesis that the co-targeting of tumor-mediated angiogenesis through VEGF with pro-tumor-mediated immune factors may be a winning strategy in anti-cancer care, particularly in SCCHN, which relies on both elements. The Prospects of Combination of VEGF Inhibitors with Immunotherapy in SCCHN Immunotherapy has surfaced as a breakthrough in cancer treatment, notably for patients with recurrent or metastatic SCCHN [45,46]. Tumors typically express different immune checkpoint receptors as a means for immune evasion. By targeting these receptors, cancer immune evasion is reversed. In metastatic SCCHN, prior to the integration of immunotherapy, first-line systemic therapies consisted of a combination of cytotoxic agents with cetuximab, a chimeric IgG1 monoclonal antibody targeting human EGFR [25,47]. Even though the introduction of immune checkpoint inhibitors (ICIs) has revolutionized the treatment of recurrent or metastatic disease [48], most patients still succumb to their disease, and novel therapeutic combinatorial approaches are urgently needed. The tumor microenvironment (TME) is an essential factor for tumor survival and growth, showcasing the importance of therapies that threaten this environment [9]. Closely regulated by immune and inflammatory cells, cytokines, and the surrounding tissue and vessels as illustrated in Figure 1 [49], the TME is promoted by increased vascularity, resistance to host immune cells, and the ability to combat hypoxia [50,51]. Therefore, therapies targeting both the immune system and angiogenesis are appealing. 
These novel approaches may help promote the normalization of vasculature and an immune-boosting rather than immunosuppressive environment [9], as delineated in Figure 2 [44]. There are multiple reasons for the potential of this dual-modality approach. First, tumor cells actively promote angiogenic factors, which stimulate not only an abnormal vascular structure but also certain chemokines and adhesion molecules that selectively impede the infiltration of immune cells [52]. This setting restricts the effectiveness of immunotherapy [52]. Second, existing therapies for SCCHN that target one driver, such as ICIs and anti-angiogenic agents, may be limited when tumors paradoxically utilize other pro-tumor mediators that can counteract their efficacy [45,53]. For example, PD-L1 inhibition consequently increases the expression of other immune checkpoints such as TIM-3, potentially causing adaptive resistance [54,55]. Similarly, anti-VEGF treatments generate hypoxia and acidosis via abnormal vessel pruning [45]. This environment leads to an increased expression of CCL28 and SDF-1, which induces an immunosuppressed TME by promoting tolerance in T regulatory cells, MDSCs, and TAMs [56,57]. TME hypoxia also compromises antigen-presenting cells and the subsequent activation of the T-cell response. This not only impacts the efficacy of anti-cancer agents but also leads to the dysfunction of immune effector cells and the recruitment of tumor-enhancing cells [58][59][60][61].
Figure 1. A closer look at the cellular and molecular components of the tumor microenvironment, from the immune cells and chemokines to the extracellular matrix and their associated remodeling molecules, that shape the interplay between tumor cells and host immune cells along with pro-angiogenic factors, highlighting the potential targets for therapy [49].
Given the close interaction between angiogenic factors and the immune response, the combination of anti-angiogenic therapy and ICIs has become an attractive strategy to combat the resistance mechanisms that tumor cells utilize to evade the effects of therapy. By improving the penetration of concurrent therapies, targeting both the TME and the tumor itself, and helping counter immune-evading tumor strategies, anti-angiogenic therapy and immunotherapy provide a promising opportunity for the future of SCCHN therapy. Numerous active clinical trials exist to assess this presumed synergistic effect, as noted in Table 1. NCT03650764 is a prospective phase I/II trial studying pembrolizumab with ramucirumab, a VEGFR-2-targeting antibody, in recurrent/metastatic SCCHN [62]. Another similar phase II clinical trial, NCT04440917, is testing another PD-1 inhibitor, camrelizumab, with the VEGFR inhibitor apatinib in locally advanced SCCHN [63]. A phase II clinical trial, NCT03468218, is evaluating the combination of pembrolizumab and cabozantinib (a TKI targeting multiple receptors, including VEGFR2) in recurrent or metastatic SCCHN [64]. While the results have not yet been published, we are encouraged by the clinical trial results in renal cell carcinoma comparing nivolumab plus cabozantinib with sunitinib monotherapy. In this study, the median PFS for nivolumab plus cabozantinib was 16.6 months (95% CI, 12.5-24.9), while for sunitinib it was 8.3 months (95% CI, 7.0-9.7); this trend was consistently seen for OS and ORR [65]. Multiple other studies have examined the utility of bevacizumab in combination with immunotherapy and/or chemotherapy. NCT03818061 is a phase II multicenter study assessing the effect of atezolizumab (a PD-L1 inhibitor) and bevacizumab on ORR in recurrent or metastatic SCCHN [66]. Other trials combine bevacizumab with cetuximab, a combination that may work not only through receptor blockade but also via the immune-mediated activity of cetuximab [67]. A phase II trial, NCT00409565, evaluated the ORR of bevacizumab with cetuximab in patients with recurrent or metastatic SCCHN [68]. Published results revealed a significant reduction in tumor vascularization, with an ORR of 16%, a disease control rate of 73%, and a generally well-tolerated regimen with grade 3-4 adverse events in less than 10% of patients [69].
Three specific phase II trials are evaluating bevacizumab + cetuximab +/− chemoradiation: NCT00968435, combining bevacizumab, cisplatin, cetuximab, and intensity-modulated radiation therapy (IMRT) to determine 2-year PFS in locally or regionally advanced SCCHN [70]; NCT00703976, evaluating bevacizumab, cetuximab, pemetrexed, and radiation therapy (RT) for a similar outcome and disease population [71]; and NCT01588431, testing induction therapy with bevacizumab, cetuximab, and chemotherapy (docetaxel, cisplatin), followed by radiation, cisplatin, cetuximab, and bevacizumab +/− surgery depending on response [72]. Other trials, including the phase Ib/II trial NCT02501096 [73], are assessing lenvatinib (a multi-kinase inhibitor whose targets include VEGFR) in combination with pembrolizumab in a host of solid tumors, including SCCHN, both in the first-line setting and after immunotherapy failure. Along with assessing the potential benefits of combination therapy, we must consider the associated toxicities of anti-angiogenic therapy, which range from cardiovascular to thromboembolic [73]. Some known side effects include hypertension with associated proteinuria and reversible posterior leukoencephalopathy, endocrine dysfunction, and gastrointestinal perforation [74]. Anti-VEGF/VEGFR agents are also associated with thromboembolism in 5% of cases, as well as with hemorrhage in others [73]. As these agents are largely TKIs, some effects are secondary to off-target tyrosine kinase inhibition, namely hypothyroidism, diarrhea, and fatigue [75]. A meta-analysis published by Ranpura et al. detailed the increased risk of adverse events associated with bevacizumab compared with chemotherapy alone with regard to mortality (2.9% vs. 2.2%; RR = 1.33; 95% CI 1.02-1.73) and fatal events (3.3% vs. 1%; RR = 3.49; 95% CI 1.82-6.66) [76]. The most common fatal events were bleeding (23.5%), neutropenia (12.2%), and gastrointestinal perforation (7.1%), without a correlation between mortality and the type of cancer or bevacizumab dose [76,77].
Conclusions
While combination therapy forms an intriguing forefront for the treatment of recurrent and metastatic SCCHN, we have yet to understand how immunotherapy and anti-angiogenic therapy interact with each other to create an anti-tumor effect. Further investigations need to weigh both the benefits and the risks posed by inhibiting these alternative therapeutic pathways, and to clarify how they ultimately impact the TME. In conclusion, the approval of immunotherapy as an effective modality in the treatment of SCCHN has ushered in a new era of combinatorial therapeutic approaches for this disease. Very high on the list of candidate targeted agents are angiogenesis inhibitors. Here, we have attempted to provide a rationale for pursuing these combinations in SCCHN. Along those lines, results from the enrolling studies in recurrent or metastatic SCCHN are eagerly awaited and may provide more insight into refining these approaches for a wider patient population through better clinical as well as biomarker-based patient selection.
4,615.8
2022-02-25T00:00:00.000
[ "Biology" ]
Model-free optimization of power/efficiency tradeoffs in quantum thermal machines using reinforcement learning
Abstract
A quantum thermal machine is an open quantum system that enables the conversion between heat and work at the micro or nano-scale. Optimally controlling such out-of-equilibrium systems is a crucial yet challenging task with applications to quantum technologies and devices. We introduce a general model-free framework based on reinforcement learning to identify out-of-equilibrium thermodynamic cycles that are Pareto-optimal tradeoffs between power and efficiency for quantum heat engines and refrigerators. The method does not require any knowledge of the quantum thermal machine, nor of the system model, nor of the quantum state. Instead, it only observes the heat fluxes, so it is applicable both to simulations and to experimental devices. We test our method on a model of an experimentally realistic refrigerator based on a superconducting qubit, and on a heat engine based on a quantum harmonic oscillator. In both cases, we identify the Pareto-front representing optimal power-efficiency tradeoffs, and the corresponding cycles. Such solutions outperform previous proposals made in the literature, such as optimized Otto cycles, by reducing quantum friction.
However, the optimal control of such devices, necessary to reveal their maximum performance, is an extremely challenging task; methods to address it could find application in the control of quantum technologies and devices beyond quantum thermal machines (QTMs). The difficulties include: (i) having to operate in finite time, the state can be driven far from equilibrium, where the thermal properties of the system are model-specific; (ii) the optimization is a search over the space of all possible time-dependent controls, which increases exponentially with the number of time points describing the cycle; (iii) in experimental devices, often subject to undesired effects such as noise and decoherence [18], we may have only limited knowledge of the actual model describing the dynamics of the QTM. A further difficulty (iv) arises in QTMs, since the maximization of their performance requires a multi-objective optimization. Indeed, the two main quantities that describe the performance of a heat engine (refrigerator) are the extracted power (cooling power) and the efficiency (coefficient of performance). The optimal strategy to maximize the efficiency consists of performing reversible transformations [19], which are, however, infinitely slow and thus deliver vanishing power. Conversely, maximum power is typically reached at the expense of reduced efficiency. Therefore, one must seek optimal trade-offs between the two.
In general, aside from variational approaches, there is no guarantee that these regimes and cycles are optimal. Recently, reinforcement learning (RL) has been used to find cycles that maximize the power of QTMs without making assumptions on the cycle structure [69]. However, this approach requires a model of the system and knowledge of the quantum state of the system, which restricts its practical applicability. This calls for the development of robust and general strategies that overcome all the above-mentioned difficulties (i-iv).
We propose an RL-based method with the following properties: (i) it finds cycles yielding near Pareto-optimal trade-offs between power and efficiency, i.e.
the collection of cycles such that it is not possible to further improve either power or efficiency without decreasing the other one. (ii) It only requires the heat currents as input, and not the quantum state of the system. (iii) It is completely model-free. (iv) It does not make any assumption on the cycle structure, nor on the driving speed. The RL method is based on the Soft Actor-Critic algorithm [70,71], introduced in the context of robotics and videogames [72,73], generalized to combined discrete and continuous actions.
FIG. 1. Schematic representation of a quantum thermal machine controlled by a computer agent. A quantum system (gray circle) can be coupled to a hot (cold) bath at inverse temperature β_H (β_C), represented by the red (blue) square, enabling a heat flux J_H(t) (J_C(t)). The quantum system is controlled by the computer agent through a set of experimental control parameters ⃗u(t), such as an energy gap or an oscillator frequency, that control the power exchange P(t), and through a discrete control d(t) = {Hot, Cold, None} that determines which bath is coupled to the quantum system.
We prove the validity of our approach by finding the full Pareto-front, i.e. the collection of all Pareto-optimal cycles describing optimal power-efficiency tradeoffs, in two paradigmatic systems that have been well studied in the literature: a refrigerator based on an experimentally realistic superconducting qubit [6,49], and a heat engine based on a quantum harmonic oscillator [43]. In both cases we find elaborate cycles that outperform previous proposals by mitigating quantum friction [43,49,55,66,[93][94][95], i.e. the detrimental effect of the generation of coherence in the instantaneous eigenbasis during the cycle. Remarkably, we can also match the performance of cycles found with the RL method of Ref. [69] that, as opposed to our model-free approach, requires monitoring the full quantum state and only optimizes the power.
Setting: Black-box Quantum Thermal Machine
We describe a QTM by a quantum system, acting as a "working medium", that can exchange heat with a hot (H) or cold (C) thermal bath characterized by inverse temperatures β_H < β_C (Fig. 1). Our method can be readily generalized to multiple baths, but we focus the description on two baths here. We can control the evolution of the quantum system and exchange work with it through a set of time-dependent continuous control parameters ⃗u(t) that enter the Hamiltonian Ĥ[⃗u(t)] of the quantum system [96], and through a discrete control d(t) = {Hot, Cold, None} that determines which bath is coupled to the system. J_H(t) and J_C(t) denote the heat fluxes flowing out of the hot and cold bath, respectively, at time t. Our method only relies on the following two assumptions:
1. the RL agent can measure the heat fluxes J_C(t) and J_H(t) (or their averages over a time period ∆t);
2. the coupling of the quantum system to the thermal baths drives it towards a thermal state within some finite timescale T (see below).
In contrast to previous work [69], the RL optimization algorithm does not require any knowledge of the microscopic model of the inner workings of the quantum system, nor of its quantum state; it is only provided with the values of the heat fluxes J_C(t) and J_H(t). These can be either computed from a theoretical simulation of the QTM [69], or measured directly from an experimental device whenever the energy change in the heat bath can be monitored without influencing the energetics of the quantum system (see e.g. the experimental demonstrations [6][7][8][9]). In this sense, our quantum system is treated as a "black-box", and our RL method is "model-free". Any theoretical model or experimental device satisfying these requirements can be optimized by our method, including classical stochastic thermal machines. The timescale T is finite because of energy dissipation and naturally emerges by making the minimal assumption that the coupling of the quantum system to the thermal baths drives the system towards a thermal state within some timescale T. Such a timescale can be rigorously identified e.g. within the weak system-bath coupling regime, and in the reaction coordinate framework that can describe non-Markovian and strong-coupling effects [97]. In a Markovian setting, T is related to the inverse of the characteristic thermalization rate.
The thermal machines we consider are the heat engine and the refrigerator. Up to an internal energy contribution that vanishes after each repetition of the cycle, the instantaneous power of a heat engine equals the extracted heat:
P(t) = J_H(t) + J_C(t),     (1)
and the cooling power of a refrigerator is:
P_cool(t) = J_C(t).     (2)
The entropy production is given by
Σ(t) = −β_H J_H(t) − β_C J_C(t),     (3)
where we neglect the contribution of the quantum system's entropy since it vanishes after each cycle.
Machine Learning Problem
Our goal is to identify cycles, i.e. periodic functions ⃗u(t) and d(t), that maximize a trade-off between power and efficiency in the long run. Since power and efficiency cannot be simultaneously optimized, we use the concept of Pareto-optimality [98,99]. Pareto-optimal cycles are those where power or efficiency cannot be further increased without sacrificing the other one. The Pareto-front, defined as the collection of power-efficiency values delivered by all Pareto-optimal cycles, represents all possible optimal trade-offs. To find the Pareto-front, we define the reward function r_c(t) as:
r_c(t) = c P(t)/P_0 − (1 − c) Σ(t)/Σ_0,     (4)
where P(t) is the power of a heat engine (Eq. 1) or the cooling power of a refrigerator (Eq. 2), P_0 and Σ_0 are reference values used to normalize the power and the entropy production, and c ∈ [0, 1] is a weight that determines the trade-off between power and efficiency. As in Ref. [69], we are interested in cycles that maximize the long-term performance of QTMs; we thus maximize the return ⟨r_c⟩(t), where ⟨•⟩(t) indicates the exponential moving average of future values:
⟨r_c⟩(t) = κ ∫_t^∞ e^{−κ(t′−t)} r_c(t′) dt′.     (5)
Here κ is the inverse of the averaging timescale, which in practice is chosen much longer than the cycle period, such that ⟨r_c⟩(t) is approximately independent of t.
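The following minimal Python sketch illustrates how, under the definitions of Eqs. (1)-(5), the reward and its exponential moving average can be computed from measured heat fluxes; the function names, the default normalization constants, and the backward-recursion estimate of the time-discretized return are our own illustrative choices, not taken from the paper.

import numpy as np

def reward(J_H, J_C, c, beta_H, beta_C, P0=1.0, Sigma0=1.0, engine=True):
    # Eqs. (1)-(4): power (heat engine) or cooling power (refrigerator) and
    # entropy production from the instantaneous heat fluxes, combined with c.
    P = J_H + J_C if engine else J_C
    Sigma = -beta_H * J_H - beta_C * J_C
    return c * P / P0 - (1.0 - c) * Sigma / Sigma0

def ema_return(rewards, kappa, dt):
    # Discretized estimate of Eq. (5) over a recorded reward trace:
    # exponential moving average with discount gamma = exp(-kappa*dt),
    # accumulated backwards in time (the tail of the trace is truncated).
    gamma = np.exp(-kappa * dt)
    out = np.zeros(len(rewards))
    acc = 0.0
    for i in range(len(rewards) - 1, -1, -1):
        acc = rewards[i] + gamma * acc
        out[i] = (1.0 - gamma) * acc
    return out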
For c = 1, we are maximizing the average power ⟨r_1⟩ = ⟨P⟩/P_0. For c = 0, we are minimizing the average entropy production ⟨r_0⟩ = −⟨Σ⟩/Σ_0, which corresponds to maximizing the efficiency. For intermediate values of c, the maximization of ⟨r_c⟩ describes trade-offs between power and efficiency (see "Optimizing the entropy production" in Materials and Methods for details). Interestingly, if the Pareto-front is convex, it has been shown that it can be fully identified by repeating the optimization of ⟨r_c⟩ for many values of c [98,100].
Deep reinforcement learning for black-box quantum thermal machines
In RL, a computer agent must learn to master some task by repeated interactions with some environment. Here we develop an RL approach where the agent maximizes the return (5) and the environment is the QTM with its controls (Fig. 2A). To solve the RL problem computationally, we discretize time as t_i = i∆t. By time-discretizing the return (5), we obtain a discounted return whose discount factor γ = exp(−κ∆t) determines the averaging timescale and expresses how much we are interested in future or immediate rewards (see "Reinforcement Learning Implementation" in Materials and Methods for details). At each time step t_i, the agent employs a policy function π(a|s) to choose an action a_i = {⃗u(t_i), d(t_i)} based on the state s_i of the environment. Here, the policy function π(a|s) represents the probability of choosing action a, given that the environment is in state s, ⃗u(t) are the continuous controls over the quantum system, and d(t_i) ∈ {Hot, Cold, None} is a discrete control that selects the bath the system is coupled to. All controls are considered to be constant during a time step of duration ∆t. The aim of RL is to learn an optimal policy function π(a|s) that maximizes the return.
In order to represent a black-box quantum system whose inner mechanics are unknown, we define the control history during a time interval of length T as the observable state:
s_i = (a_{i−N}, . . ., a_{i−1}),     (6)
where N = T/∆t. Therefore, the state of the quantum system is implicitly defined by the sequence of the agent's N most recent actions.
To find an optimal policy we employ the soft actor-critic algorithm, which relies on learning also a value function Q(s, a), generalized to a combination of discrete and continuous actions [70][71][72][73]. The policy function π(a|s) plays the role of an "actor" that chooses the actions to perform, while the value function Q(s, a) plays the role of a "critic" that judges the choices made by the actor, thus providing feedback to improve the actor's behavior. We further optimize the method for a multi-objective setting by introducing a separate critic for each objective, i.e. one value function for the power, and one for the entropy production. This allows us to vary the weight c during training, thus enhancing convergence (see "Reinforcement Learning Implementation" in Materials and Methods for details).
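As an illustration of the observable state of Eq. (6), a rolling buffer of the last N actions can be kept as follows; the container layout, the one-hot encoding of d(t), and all names are our own illustrative choices.

from collections import deque
import numpy as np

N = 128                           # history length, i.e. T = N * dt
BATHS = ("Hot", "Cold", "None")   # possible values of the discrete control d(t)

history = deque(maxlen=N)

def push_action(u, d):
    # One action a_i = (u_i, d_i): continuous control plus one-hot discrete control.
    onehot = np.zeros(len(BATHS))
    onehot[BATHS.index(d)] = 1.0
    history.append(np.concatenate(([u], onehot)))

def observable_state():
    # Eq. (6): s_i = (a_{i-N}, ..., a_{i-1}), arranged as (channels, N)
    # so it can be fed to a 1D convolutional encoder.
    assert len(history) == N, "fill the buffer with N actions first"
    return np.stack(history, axis=1)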
We learn the functions π(a|s) and Q(s, a) using a deep NN architecture inspired by WaveNet, an architecture that was developed for processing audio signals [101] (see Figs. 2B-C). We introduce a "convolution block" to efficiently process the time-series of actions defining the state s_i. It consists of a 1D convolution with kernel size and stride of 2, such that it halves the length of the input. It is further equipped with a residual connection to improve trainability [102] (see "Reinforcement Learning Implementation" in Materials and Methods for details). The policy π(a_i|s_i) is described by a NN that takes the state s_i as input, and outputs parameters µ and σ describing the probability distribution from which action a_i is sampled (Fig. 2B). The value function Q(s_i, a_i) is computed by feeding (s_i, a_i) into a NN, and outputting Q(s_i, a_i) (Fig. 2C). Both π(a_i|s_i) and Q(s_i, a_i) process the state by feeding it through multiple convolution blocks (upper orange boxes in Figs. 2B and 2C), each one halving the length of the time-series, such that the number of blocks and of parameters in the NN is logarithmic in N. Then a series of fully-connected layers produces the final output.
The policy and value functions are determined by minimizing the loss functions in Eqs. (39) and (49) using the ADAM optimization algorithm [103]. The gradient of the loss functions is computed off-policy, over a batch of past experience recorded in a replay buffer, using backpropagation (see "Reinforcement Learning Implementation" in Materials and Methods for details).
Pareto-optimal cycles for a superconducting qubit refrigerator
We first consider a refrigerator based on an experimentally realistic system: a superconducting qubit coupled to two resonant circuits that behave as heat baths [49] (Fig. 3A). Such a system was experimentally studied in the steady state in Ref. [6]. The system Hamiltonian is given by [49,55,63]:
Ĥ[u(t)] = E_0 [∆ σ_x + u(t) σ_z],     (7)
where E_0 is a fixed energy scale, ∆ characterizes the minimum gap of the system, u(t) is our control parameter, and σ_x, σ_z are Pauli matrices. In this setup the coupling to the baths, described by the commonly employed Markovian master equation [104][105][106][107], is fixed, and cannot be controlled. However, the qubit is resonantly coupled to the baths at different energies. The u-dependent coupling strength to the cold (hot) bath is a resonant function of u (Fig. 3F). As in Ref. [63], the coupling strength is maximal at u = 0 (u = 1/2), respectively, with a resonance width determined by the "quality factor" Q_C (Q_H) (see "Physical model" in Materials and Methods for details). This allows us to choose which bath is coupled to the qubit by tuning u(t).
In Fig. 3 we show an example of our training procedure to optimize the return ⟨r_c⟩ at c = 0.6, using N = 128 steps to determine the RL state, and varying c during training from 1 to 0.6 (Fig. 3C). In the early stages of the training, the return ⟨r_c⟩_i, computed as in Eq. (28) but over past rewards, and the running averages of the cooling power ⟨P_cool⟩_i and of the negative entropy production −⟨Σ⟩_i all start off negative (Fig. 3B), and the corresponding actions are random (left panel of Fig. 3D). Indeed, initially the RL agent has no experience controlling the QTM, so random actions are performed, resulting in heating the cold bath, rather than cooling it, and in a large entropy production. However, with increasing steps, the chosen actions exhibit some structure (Fig. 3D), and the return ⟨r_c⟩_i increases (Fig. 3B).
While both the power and the negative entropy production initially increase together, around step 100k we see that −⟨Σ⟩_i begins to decrease. This is a manifestation of the fact that power and entropy production cannot be simultaneously optimized. Indeed, the agent learns that in order to further increase the return, it must "sacrifice" some entropy production to produce a positive and larger cooling power. In fact, the only way to achieve positive values of ⟨r_c⟩_i is to have a positive cooling power, which inevitably requires producing entropy. Eventually all quantities in Fig. 3B reach a maximum value, and the corresponding final deterministic cycle (i.e. the cycle generated by the policy with stochasticity switched off, see "Reinforcement Learning Implementation" in Materials and Methods for details) is shown in Fig. 3E as thick black dots.
For the same system, Ref. [63] proposed a smoothed trapezoidal cycle u(t) oscillating between the resonant peaks at u = 0 and u = 1/2 and optimized the cycle time (Fig. 3E, dashed line). While this choice outperformed a sine and a trapezoidal cycle [49], the cycle found by our RL agent produces a larger return (Fig. 3B). The optimal trapezoidal cycle found for c = 0.6 is shown in Fig. 3E.
Superconducting Qubit Refrigerator
Fig. 4 compares optimal cycles for different trade-offs between the cooling power and the coefficient of performance (COP) η_cool, the latter defined as the ratio between the average cooling power and the average input power. This is achieved by repeating the optimization for various values of c. To demonstrate the robustness of our method, the optimization of ⟨r_c⟩ was repeated 5 times for each choice of c (variability shown with error bars in Fig. 4A, and as separate points in Fig. 4B). The RL method substantially outperforms the trapezoidal cycle by producing larger final values of the return ⟨r_c⟩ at all values of c (Fig. 4A), and by producing a better Pareto front (Fig. 4B). The RL cycles simultaneously yield higher power, by more than a factor of 10, and a larger η_cool, for any choice of the power-efficiency trade-off. The model-free RL cycles can also deliver the same power at a substantially higher COP (roughly 10 times larger) as compared to the cycle found with the RL method of Ref. [69], which only optimizes the power. This is remarkable since, as opposed to the current model-free method, the method in Ref.
[69] has access to the full quantum state of the system, and not only to the heat currents (see "Comparing with other methods" in Materials and Methods for details). This also shows that a large efficiency improvement can be achieved by sacrificing very little power. As expected, the period of the RL cycles increases as c decreases and the priority shifts from high power to high η_cool (Figs. 4C-F, black dots). However, the period is much shorter than that of the corresponding optimized trapezoidal cycle (dashed line), and the optimal control sequence is quite unintuitive, even going beyond the resonant point at u = 1/2. As argued in [49,55,63], the generation of coherence in the instantaneous eigenbasis of the quantum system, occurring because [Ĥ(u_1), Ĥ(u_2)] ≠ 0 for u_1 ≠ u_2, causes power losses that increase with the speed of the cycle. We find that we can interpret the power enhancement achieved by our cycle as a mitigation of this detrimental effect: indeed, trapezoidal cycles operated at the same frequency as the RL cycle generate twice as much coherence as the RL cycles (see "Generation of coherence" in Materials and Methods for details). In either case, cycles with higher power tend to generate more coherence.
Given the stochastic nature of RL, we also compared the cycles obtained across the 5 independent training runs, finding that the cycles are typically quite robust, displaying only minor changes (see Fig. 8 of Methods for four cycles found in independent training runs corresponding to Figs. 4C-F).
Pareto-optimal cycles for a quantum harmonic oscillator engine
We now consider a heat engine based on a collection of non-interacting particles confined in a harmonic potential [43] (Fig. 5A). The Hamiltonian is given by
Ĥ[u(t)] = p̂²/(2m) + (1/2) m [ω_0 u(t)]² q̂²,     (8)
where m is the mass of the system, ω_0 is a reference frequency, and p̂ and q̂ are the momentum and position operators. The control parameter u(t) allows us to change the frequency of the oscillator. Here, at every time step we let the agent choose which bath (if any) to couple to the oscillator. The coupling to the baths, characterized by the thermalization rates Γ_α, is modeled using the Lindblad master equation as in Ref. [43] (see "Physical model" in Materials and Methods for details). In contrast to the superconducting qubit case, c is held constant during training. Fig. 5 reports the results on the optimal trade-offs between the extracted power and the efficiency η_heat, the latter defined as the ratio between the extracted power and the input heat, in the same style as Fig. 4. In this setup, we compare our RL-based results to the well-known Otto cycle. The authors of Ref. [43] study this system by optimizing the switching times of an Otto cycle, i.e. the duration of each of the 4 segments, shown as a dashed line in Figs. 5D-E (see Materials and Methods for details).
The RL method produces cycles with a larger return and with a better power-efficiency Pareto-front with respect to the Otto cycle (Fig. 5B,C). The cycles found by the RL method significantly outperform the Otto engine in terms of delivered power. For c = 1, a high-power cycle is found (Fig. 5D and corresponding blue dots in Figs. 5B-C), but at the cost of a lower efficiency than the Otto cycles. However, at c = 0.5, the RL method finds a cycle that matches the maximum efficiency of the Otto cycles, while delivering a ∼30% higher power (Fig. 5E and corresponding blue dots in Figs. 5B-C). Remarkably, our model-free RL method also finds cycles with nearly the same power as the RL method of Ref.
[69], but at almost twice the efficiency (see "Comparing with other methods" in Materials and Methods for details). As in Fig. 4, we see that a very small decrease in power can lead to a large efficiency increase.
Interestingly, as shown in Figs. 5D-E, the cycles found by the RL agent share many similarities with the Otto cycle: both alternate between the hot and cold bath (orange and blue portions) with a similar period. However, there are some differences: at c = 1, the RL cycle ramps the value of u while in contact with the bath, eliminating the unitary stroke (Fig. 5D). Instead, at c = 0.5, the RL agent employs a unitary stroke that is quite different from a linear ramping of u (Fig. 5E, green dots). As in the superconducting qubit case, the enhanced performance of the RL cycle may be interpreted as a mitigation of quantum friction [43,93]. Also in this setup, we verified that the discovered cycles are quite robust across the 5 independent training runs, displaying only minor changes (see Fig. 9 of Methods for two cycles found in independent training runs corresponding to Figs. 5D-E).
DISCUSSION
We introduced a model-free framework, based on Reinforcement Learning, to discover Pareto-optimal thermodynamic cycles that describe the best possible trade-off between power and efficiency of out-of-equilibrium quantum thermal machines (heat engines and refrigerators). Our algorithm only requires monitoring the heat fluxes of the QTM, making it a model-free approach. It can therefore be used both for the theoretical optimization of known systems, and potentially for the direct optimization of experimental devices for which no model is known, and in the absence of any measurement performed on the quantum system. Using state-of-the-art machine learning techniques, we demonstrate the validity of our method by applying it to two different prototypical setups. Our black-box method discovered elaborate cycles that outperform previously proposed cycles and are on par with a previous RL method that observes the full quantum state [69]. Up to minor details, the cycles found by our method are reproducible across independent training runs. Physically, we find that Otto cycles, commonly studied in the literature, are not generally optimal, and that optimal cycles balance a fast operation of the cycle with the mitigation of quantum friction.
Our method paves the way for a systematic use of RL in the field of quantum thermodynamics. Future directions include investigating larger systems to uncover the impact of quantum many-body effects on the performance of QTMs, optimizing systems in the presence of noise, and optimizing trade-offs that include power fluctuations [99,[108][109][110]].
METHODS
In this section we provide details on the optimization of the entropy production, on the reinforcement learning implementation, on the physical model used to describe the quantum thermal machines, on the training details, on the convergence of the method, on the comparison with other methods, and on the computation of the generation of coherence during the cycles. We also provide access to the full code that was used to generate the results presented in the manuscript, and the corresponding data.
Optimizing the entropy production
Here we discuss the relation between optimizing the power and the entropy production, or the power and the efficiency. We start by noticing that we can express the efficiency of a heat engine η_heat and the coefficient of performance of a refrigerator η_cool in terms of the averaged power and entropy production, i.e.
η_ν = η_ν^(C) / (1 + ⟨Σ⟩/(β_ν ⟨P_ν⟩)),     (9)
where ν = heat, cool, η_ν^(C) is the corresponding Carnot efficiency (coefficient of performance), and where we defined β_heat ≡ β_C and β_cool ≡ β_C − β_H. We now show that, thanks to this dependence of η_ν on ⟨P_ν⟩ and ⟨Σ⟩, if a cycle is a Pareto-optimal trade-off between high power and high efficiency, then it is also a Pareto-optimal trade-off between high power and low entropy production, up to a change of c. This means that if we find all optimal trade-offs between high power and low entropy production (as we do with our method if the Pareto-front is convex), we will have necessarily also found all Pareto-optimal trade-offs between high power and high efficiency.
Mathematically, we want to prove that the cycles that maximize for some value of c ∈ [0, 1] also maximize the return in Eq. (5) for some (possibly different) value of c ∈ [0, 1].
To simplify the proof and the notation, we consider the following two functions where P(θ) and η(θ) represent the power and efficiency of a cycle parameterized by a set of parameters θ, a > 0 and b > 0 are two scalar quantities, and Σ(P, η) is obtained by inverting Eq. (9). We wish to prove the following. Given some weights a_1 > 0 and b_1 > 0, let θ_1 be the value of θ that locally maximizes G(a_1, b_1, θ). Then, it is always possible to identify positive weights a_2 > 0, b_2 > 0 such that the same parameters θ_1 (i.e. the same cycle) are a local maximum for F(a_2, b_2, θ). In the following, we will use that, and that the Hessian H^(Σ) of Σ(P, η) is given by
Proof: by assumption, θ_1 is a local maximum for G(a_1, b_1, θ). Denoting with ∂_i the partial derivative in (θ)_i, we thus have Now, let us compute the derivative in θ of F(a_2, b_2, θ_1), where a_2 > 0 and b_2 > 0 are two arbitrary positive coefficients. We have Therefore, if we choose a_2 and b_2 such that, thanks to Eq. (15) we have that, meaning that the same parameters θ_1 that nullify the gradient of G also nullify the gradient of F at a different choice of the weights, given by Eq. (17). The invertibility of Eq. (17) (i.e. a non-null determinant of the matrix) is guaranteed by Eq. (13). We also have to make sure that if a_1 > 0 and b_1 > 0, then also a_2 > 0 and b_2 > 0. To do this, we invert Eq. (17), finding It is now easy to see that also the weights a_2 and b_2 are positive using Eq. (13).
To conclude the proof, we show that θ_1 is a local maximum for F(a_2, b_2, θ) by showing that its Hessian is negative semi-definite. Since, by hypothesis, θ_1 is a local maximum for G(a_1, b_1, θ), we have that the Hessian matrix is negative semi-definite. We now compute the Hessian, where and H^(Σ) is the Hessian of Σ(P, η) computed in P(θ_1) and η(θ_1). Since we are interested in studying the Hessian of F(a_2, b_2, θ_1) in the special point (a_2, b_2) previously identified, we substitute Eq. (19) into Eq. (21), yielding We now prove that H^(F)_ij is negative semi-definite since it is the sum of negative semi-definite matrices. By hypothesis H^(G)_ij is negative semi-definite. Recalling Eq. (13) and that b_1 > 0, we now need to show that Q_ij is positive semi-definite. Plugging Eq.
( 14) into Eq.( 22) yields where We now show that if R ij is positive semi-definite, then also Q ij is positive semi-definite.By definition, Q ij is positive semidefinite if, for any set of coefficient a i , we have that ij a i Q ij a j ≥ 0. Assuming R ij to be positive semi-definite, and using that [ν] , η > 0, we have where we define x i ≡ ∂ i η a i .We thus have to prove the positivity of R ij .We prove this showing that it is the sum of 3 positive semi-definite matrices.Indeed, the first term in Eq. ( 25), 2P/η, is proportional to a matrix with 1 in all entries.Trivially, this matrix has 1 positive eigenvalue, and all other ones are null, so it is positive semi-definite.At last, S ij and its transpose have the same positivity, so we focus only on S ij .S ij is a matrix with all equal columns.This means that it has all null eigenvalues, except for a single one that we denote with λ.Since the trace of a matrix is equal to the sum of the eigenvalues, we have λ = Tr[S] = i S ii .Using the optimality condition in Eq. ( 15), we see that each entry of S is positive, i.e. S ij > 0. Therefore λ > 0, thus S is positive semi-definite, concluding the proof that H (F ) ij is negative semi-definite. To conclude, we notice that we can always renormalize a 2 and b 2 , preserving the same exact optimization problem.This way, a value of c ∈ [0, 1] can be identified. Reinforcement Learning Implementation As discussed in the main text, our goal is to maximize the return ⟨r c ⟩ (t) defined in Eq. ( 5).To solve the problem within the RL framework, we discretize time as t i = i∆t.At every time-step t i , the aim of the agent is to learn an optimal policy that maximizes, in expectation, the time-discretized return ⟨r c ⟩ i .The time-discrete reward and return functions are given by: Eq. ( 28) is the time-discrete version of Eq. ( 5), where the discount factor γ = exp(−κ∆t) determines the averaging timescale and expresses how much we are interested in future or immediate rewards.To be precise, plugging Eq. ( 27) into Eq.( 28) gives ⟨r c ⟩(t) (up to an irrelevant constant prefactor) only in the limit of ∆t → 0. However, also for finite ∆t, both quantities are time-averages of the reward, so they are equally valid definitions to describe a long-term trade-off maximization. As in Ref. [69], we use a generalization of the soft-actor critic (SAC) method, first developed for continuous actions [70,71], to handle a combination of discrete and continuous actions [72,73].We further tune the method to stabilize the convergence in a multi-objective scenario.We here present an overview of our implementation of SAC putting special emphasis on the differences with respect to the standard implementation.However, we refer to [70][71][72][73] for additional details.Our method, implemented with PyTorch, is based on modifications and generalizations of the SAC implementation provided by Spinning Up from OpenAI [111].All code and data to reproduce the experiments is available online (see Data Availability and Code Availability sections). The SAC algorithm is based on policy iteration, i.e. it consists of iterating multiple times over two steps: a policy evaluation step, and a policy improvement step.In the policy evaluation step, the value function of the current policy is (partially) learned, whereas in the policy improvement step a better policy is learned by making use of the value function.We now describe these steps more in detail. 
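As a small illustration of the off-policy structure just outlined (policy evaluation and policy improvement steps computed on randomly sampled batches of stored transitions), a minimal first-in-first-out replay buffer could look as follows; the class and method names are our own, not those of the actual implementation.

import random
from collections import deque

class ReplayBuffer:
    # Stores one-step transitions (s, a, r, s_next) up to a fixed capacity
    # (oldest entries are discarded first) and returns random mini-batches
    # for the gradient updates of the critics and of the policy.
    def __init__(self, capacity=100_000):
        self.data = deque(maxlen=capacity)

    def store(self, s, a, r, s_next):
        self.data.append((s, a, r, s_next))

    def sample(self, batch_size):
        return random.sample(list(self.data), min(batch_size, len(self.data)))

    def __len__(self):
        return len(self.data)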
In typical RL problems, the optimal policy π * (s|a) is defined as the policy that maximizes the expected return defined in Eq. ( 28), i.e.: where E π denotes the expectation value choosing actions according to the policy π.The initial state s 0 = s is sampled from µ π , i.e. the steady-state distribution of states that are visited by π.In the SAC method, balance between exploration and exploitation [112] is achieved by introducing an Entropy-Regularized maximization objective.In this setting, the optimal policy π * is given by where α ≥ 0 is known as the "temperature" parameter that balances the trade-off between exploration and exploitation, and is the entropy of the probability distribution P .Notice that we replaced the unknown state distribution µ π with B, which is a replay buffer populated during training by storing the observed one-step transitions (s k , a k , r k+1 , s k+1 ).Developing on Ref. [69], we generalize such approach to a combination of discrete and continuous actions in the following way.Let us write an arbitrary action a as a = (u, d), where u is the continuous action and d is the discrete action (for simplicity, we describe the case of a single continuous action, though the generalization to multiple variables is straightforward).From now on, all functions of a are also to be considered as functions of u, d.We decompose the joint probability distribution of the policy as where π D (d|s) is the marginal probability of taking discrete action d, and π C (u|d, s) is the conditional probability density of choosing action u, given action d (D stands for "discrete", and C for "continuous").Notice that this decomposition is an exact identity, thus allowing us to describe correlations between the discrete and the continuous action.With this decomposition, we can write the entropy of a policy as where correspond respectively to the entropy contribution of the discrete (D) and continuous (C) part.These two entropies take on values in different ranges: while the entropy of a discrete distribution with |D| discrete actions is non-negative and upper bounded by log(|D|), the (differential) entropy of a continuous distribution can take on any value, including negative values (especially for peaked distributions).Therefore, we introduce a separate temperature for the discrete and continuous contributions replacing the definition of the optimal policy in Eq. ( 30) with where α C ≥ 0 and α D ≥ 0 are two distinct "temperature" parameters.This is one of the differences with respect to Refs.[69][70][71].Equation ( 35) defines our optimization objective.Accordingly, we define the value function Q π (s, a) of a given policy π as Its recursive Bellman equation therefore reads As in Ref. [70,71], we parameterize π C (u|d, s) as a squashed Gaussian policy, i.e. as the distribution of the variable ξ ∼ N (0, 1), (38) where µ(d, s) and σ(d, s) represent respectively the mean and standard deviation of the Gaussian distribution, N (0, 1) is the normal distribution with zero mean and unit variance, and where we assume that U = [u a , u b ].This is the so-called reparameterization trick. We now describe the policy evaluation step.In the SAC algorithm, we learn two value functions Q ϕi (s, a) described by the learnable parameters ϕ i , for i = 1, 2. Q ϕ (s, a) is a function approximator, e.g. a neural network.Since Q ϕi (s, a) should satisfy the Bellman Eq. ( 37), we define the loss function for Q ϕi (s, a) as the mean square difference between the left and right hand side of Eq. ( 37), i.e. 
where Notice that in Eq. ( 40) we replaced Q π with min j=1,2 Q ϕtarg,j , where ϕ targ,j , for j = 1, 2, are target parameters which are not updated when minimizing the loss function; instead, they are held fixed during backpropagation, and then they are updated according to Polyak averaging, i.e. where ρ polyak is a hyperparameter.This change was shown to improve learning [70,71].In order to evaluate the expectation value in Eq. ( 40), we use the decomposition in Eq. (32) to write where we denote a ′ = (u ′ , d ′ ).Plugging Eq. ( 42) into Eq.( 40) and writing the entropies explicitly as expectation values yields We then replace the expectation value over u ′ in Eq. ( 43) with a single sampling u ′ ∼ π C (•|d ′ , s ′ ) (therefore one sampling for each discrete action) performed using Eq.(38).This corresponds to performing a full average over the discrete action, and a single sampling of the continuous action. We now turn to the policy improvement step.Since we introduced two separate temperatures, we cannot use the loss function introduced in Refs.[70,71].Therefore, we proceed in two steps.Let us define the following function where Q π old (s, a) is the value function of some given "old policy" π old , and π is an arbitrary policy.First, we prove that if a policy π new satisfies for all values of s, then π new is a better policy than π old as defined in Eq. (35).Next, we will use this property to define a loss function that implements the policy improvement step.Equation (45) implies that We now use this inequality to show that π new is a better policy.Starting from the Bellmann equation ( 37) for Q π old , we have Eq.( 47).Using a strategy similar to that described in Refs.[70,112], in Eq. ( 47) we make a repeated use of inequality (46) and of the Bellmann equation for Q π old (s, a) to prove that the value function of π new is better or equal to the value function of π old . Let π θ (a|s) be a parameterization of the policy function that depends on a set of learnable parameters θ.We define the following loss function Thanks to Eqs. ( 44) and (45), this choice guarantees us to find a better policy by minimizing L π (θ) with respect to θ.In order to evaluate the expectation value in Eq. ( 48), as before we explicitly average over the discrete action and perform a single sample of the continuous action, and we replace Q π old with min j Q ϕj .Recalling the parameterization in Eq. (38), this yields We have defined and shown how to evaluate the loss functions L Q (ϕ) and L π (θ) that allow us to determine the value function and the policy [see Eqs. ( 39), ( 43) and (49)].Now, we discuss how to automatically tune the temperature hyperparameters α D and α C .Ref. [71] shows that constraining the average entropy of the policy to a certain value leads to the same exact SAC algorithm with the addition of an update rule to determine the temperatures.Let HD and HC be respectively the fixed average values of the entropy of the discrete and continuous part of the policy.We can then determine the corresponding temperatures α D and α C minimizing the following two loss functions As usual, we evaluate the entropies by explicitly taking the average over the discrete actions, and taking a single sample of the continuous action.To be more specific, we evaluate L D by computing and L C by computing and replacing the expectation value over u with a single sample. 
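To make the structure of the target described around Eqs. (40)-(43) concrete, the sketch below computes it with a full average over the discrete action, a single sampled continuous action per discrete branch, and the Polyak update of the target parameters; the policy/critic interfaces (what `policy` and the critics return) are assumptions made for illustration and do not reproduce the actual code.

import torch

@torch.no_grad()
def critic_target(r, s_next, gamma, alpha_c, alpha_d, policy, q_targ1, q_targ2):
    # policy(s') is assumed to return pi_D(d|s') of shape (batch, |D|) and the
    # Gaussian parameters (mu, log_sigma) of the continuous policy for every d.
    pi_d, mu, log_sigma = policy(s_next)
    # one continuous sample (and its log-probability) per discrete action
    u, logp_u = policy.sample_continuous(mu, log_sigma)
    q_min = torch.min(q_targ1(s_next, u), q_targ2(s_next, u))   # (batch, |D|)
    # explicit average over d', with discrete and continuous entropy bonuses
    v = (pi_d * (q_min - alpha_c * logp_u
                 - alpha_d * torch.log(pi_d + 1e-8))).sum(dim=-1)
    return r + gamma * v

def polyak_update(net, targ, rho_polyak):
    # phi_targ <- rho*phi_targ + (1 - rho)*phi, applied after each gradient step
    for p, p_t in zip(net.parameters(), targ.parameters()):
        p_t.data.mul_(rho_polyak).add_((1.0 - rho_polyak) * p.data)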
To summarize, the SAC algorithm consists of repeatedly alternating a policy evaluation step, a policy improvement step, and a step where the temperatures are updated. The policy evaluation step consists of a single optimization step to minimize the loss functions L_Q(ϕ_i) (for i = 1, 2), given in Eq. (39), where y(r, s′) is computed using Eq. (43). The policy improvement step consists of a single optimization step to minimize the loss function L_π(θ) given in Eq. (49). The temperatures are then updated by performing a single optimization step to minimize L_D(α_D) and L_C(α_C), given respectively in Eqs. (51) and (52). In all loss functions, the expectation value over the states is approximated with a batch of experience sampled randomly from the replay buffer B.
We now detail how we parameterize π(a|s) and Q(s, a). The idea is to develop an efficient way to process the state, which can potentially be a long time-series of actions. To this aim, we introduce a "convolution block" as a building element for our NN architecture. The convolution block, detailed in Fig. 6, takes an input of size (C_in, L_in), where C_in is the number of channels (i.e. the number of parameters determining an action at every time-step) and L_in is the length of the time-series, and produces an output of size (C_out, L_out = L_in/2), thus halving the length of the time-series. Notice that we include a skip connection (right branch in Fig. 6) to improve trainability [102]. Using the decomposition in Eq. (32) and the parameterization in Eq. (38), the quantities that need to be learned are the discrete probabilities π_D(d|s), the parameters µ(d, s) and σ(d, s) of the continuous policy, and the value function Q_ϕ(s, u, d).
FIG. 6. Schematic representation of the convolution block that takes as input a 1D time-series of size (C_in, L_in), where L_in is the length of the series and C_in is the number of channels, and produces an output of size (C_out, L_in/2). In this image L_in = 4. The output is produced by stacking a 1D convolution of kernel size and stride of 2, and a non-linearity (left branch). A residual connection (right branch), consisting only of linear operations, is added to improve trainability.
The architecture of the neural network that we use for the policy function is shown in Fig. 7A. The state, composed of the time-series s_i = (a_{i−N}, . . ., a_{i−1}), which has shape (C_in, L_in = N), is fed through a series of log_2(N) convolution blocks, which produce an output of size (C_out, L = 1). The number of input channels C_in is determined by stacking the components of ⃗u (which, for simplicity, is a single real number u in this appendix) and by using a one-hot encoding of the discrete actions. We then feed this output, together with the last action, which has a privileged position, to a series of fully connected NNs with ReLU activations. Finally, a linear layer outputs W(d|s), µ(d, s) and log(σ(d, s)), for all d = 1, . . ., |D|. The probabilities π_D(d|s) are then produced by applying the softmax operation to W(d|s). We parameterize the value function Q_ϕ(s, u, d) as in Fig. 7B. As for the policy function, the state s is fed through log_2(N) stacked convolution blocks, which reduce the size of the input to (C_out, L = 1). This output, together with the action u, is fed into a series of fully-connected layers with ReLU activations. We then add a linear layer that produces |D| outputs, corresponding to the value of Q(s, u, d) for each d = 1, . . ., |D|.
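A minimal PyTorch sketch of such a convolution block and of the stacked state encoder is given below; the residual branch here uses average pooling followed by a 1x1 convolution to match shapes, which is one possible reading of the "linear operations" of the skip branch, and all names are illustrative.

import math
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    # Left branch: 1D convolution with kernel size and stride 2 (halves the
    # length) followed by a non-linearity. Right branch: purely linear skip.
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv1d(c_in, c_out, kernel_size=2, stride=2)
        self.act = nn.ReLU()
        self.skip = nn.Sequential(
            nn.AvgPool1d(kernel_size=2, stride=2),
            nn.Conv1d(c_in, c_out, kernel_size=1),
        )

    def forward(self, x):            # x: (batch, c_in, L), L even
        return self.act(self.conv(x)) + self.skip(x)   # (batch, c_out, L/2)

def make_state_encoder(c_in, c_hidden, n_steps):
    # Stacks log2(n_steps) blocks, reducing an input of shape (c_in, n_steps)
    # to an output of shape (c_hidden, 1), as described for Figs. 7A-B.
    blocks, c = [], c_in
    for _ in range(int(math.log2(n_steps))):
        blocks.append(ConvBlock(c, c_hidden))
        c = c_hidden
    return nn.Sequential(*blocks)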
At last, we discuss a further change to the current method that we implemented in the superconducting qubit refrigerator case to improve the convergence. The idea is the following. The return ⟨r_c⟩ is a convex combination of the power and of the negative entropy production. The first term is positive when the system is delivering the desired power, while the second term is strictly negative. Therefore, for c close to 1, the optimal value of the return is some positive quantity. Instead, as c decreases, the optimal value of the return decreases, getting closer to zero (this can be seen explicitly in Figs. 4A and 5B). However, a null return can also be achieved by a trivial cycle that consists of doing nothing, i.e. of keeping the control constant in time. Indeed, this yields both zero power and zero entropy production. Therefore, as c decreases, it becomes harder and harder for the RL agent to distinguish good cycles from these trivial solutions. We thus modify our method to allow us to smoothly change the value of c during training from 1 to the desired final value, which allows us to tackle the optimization problem by "starting from an easier problem" (c = 1) and gradually increasing its difficulty. This required the following modifications to the previously described method.
We introduce two separate value functions, one for each objective (P for the power, and Σ for the entropy production), defined in terms of the rewards r^(P)_i and r^(Σ)_i, which represent respectively the normalized average power and average entropy production during each time-step. Since the value functions in Eq. (53) are identical to Eq. (36) up to a change of the reward, they separately satisfy the same Bellman equation as in Eq. (37), with r_{i+1} replaced respectively with r^(P)_{i+1} and r^(Σ)_{i+1}. Therefore, we learn each value function by minimizing the same loss function L_Q given in Eq. (39), with r_i replaced with r^(P)_i or r^(Σ)_i. Both value functions are parameterized using the same architecture, but with separate and independent parameters. We now turn to the determination of the policy. Comparing the definition of r_i given in the main text with Eq. (54), we see that r_{i+1} = c r^(P)_{i+1} − (1 − c) r^(Σ)_{i+1}. Using this property, and comparing Eq. (36) with Eq. (53), we see that the value function of the full reward can be written as the corresponding combination of the two objective-specific value functions (Eq. (55)). Therefore, we learn the policy by minimizing the same loss function as in Eq. (49), using Eq. (55) to compute the value function. To summarize, this method allows us to vary c dynamically during training. This requires learning two value functions, one for each objective, and storing in the replay buffer the two separate rewards r^(P)_i and r^(Σ)_i. At last, when we refer to the "final deterministic cycle", we sample from the policy function "switching off the stochasticity", i.e. choosing continuous actions u by setting ξ = 0 in Eq. (38), and choosing deterministically the discrete action with the highest probability.
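Schematically, and using our own names, the recombination that this trick relies on can be written as below; the exact treatment of the entropy-regularization terms in Eq. (55) is not reproduced here.

def combined_reward(r_power, r_entropy, c):
    # r_i = c * r_P,i - (1 - c) * r_Sigma,i : the two channels are stored
    # separately in the replay buffer so that c can be changed during training.
    return c * r_power - (1.0 - c) * r_entropy

def combined_value(q_power, q_entropy, c):
    # Corresponding recombination of the two critics used in the policy update.
    return lambda s, u: c * q_power(s, u) - (1.0 - c) * q_entropy(s, u)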
Physical model
As discussed in the main text, we describe the dynamics of the two analyzed QTMs employing the Lindblad master equation, which can also be derived for non-adiabatic drivings [107], in the weak system-bath coupling regime, performing the usual Born-Markov and secular approximations [104][105][106] and neglecting the Lamb-shift contribution. This approach describes the time evolution of the reduced density matrix of the quantum system, ρ(t), under the assumption of a weak system-bath interaction. Setting ℏ = 1, the master equation reads
dρ(t)/dt = −i [Ĥ[⃗u(t)], ρ(t)] + ∑_α D_α,t[ρ(t)],
where Ĥ[⃗u(t)] is the Hamiltonian of the quantum system that depends explicitly on time via the control parameters ⃗u(t), [•, •] denotes the commutator, and D_α,t[ρ(t)], known as the dissipator, describes the effect of the coupling between the quantum system and bath α = H, C. We notice that, since the RL agent produces piece-wise constant protocols, we are not impacted by possible inaccuracies of the master equation subject to fast parameter driving [113], provided that ∆t is not smaller than the bath timescale. Without loss of generality, the dissipators can be expressed as [105,106]
D_α,t[ρ(t)] = λ_α[d(t)] ∑_k γ^(α)_{k,⃗u(t)} ( Â^(α)_{k,⃗u(t)} ρ(t) Â^(α)†_{k,⃗u(t)} − (1/2){Â^(α)†_{k,⃗u(t)} Â^(α)_{k,⃗u(t)}, ρ(t)} ),
where λ_α[d(t)] ∈ {0, 1} are functions that determine which bath is coupled to the quantum system, Â^(α)_{k,⃗u(t)} are the Lindblad operators, and γ^(α)_{k,⃗u(t)} are the corresponding rates. In particular, λ_H(Hot) = 1, λ_C(Hot) = 0, while λ_H(Cold) = 0, λ_C(Cold) = 1, and λ_H(None) = λ_C(None) = 0. Notice that both the Lindblad operators and the rates can depend on time through the value of the control ⃗u(t). Their explicit form depends on the details of the system, i.e. on the Hamiltonian describing the dynamics of the overall system including the bath and the system-bath interaction. Below, we provide the explicit form of Â^(α)_{k,⃗u(t)} and γ^(α)_{k,⃗u(t)} used to model the two setups considered in the manuscript. We adopt the standard approach to compute the instantaneous power and heat currents [114] that guarantees the validity of the first law of thermodynamics ∂U(t)/∂t = −P(t) + ∑_α J_α(t), the internal energy being defined as U(t) = Tr[ρ(t) Ĥ[⃗u(t)]].
In the superconducting qubit refrigerator, we employ the model first put forward in Ref. [49], and further studied in Refs. [55,63]. In particular, we consider the Lindblad operators Â^(α)_{−,u(t)} = |g_u(t)⟩⟨e_u(t)| and Â^(α)_{+,u(t)} = |e_u(t)⟩⟨g_u(t)| (identifying k = ±), where |g_u(t)⟩ and |e_u(t)⟩ are, respectively, the instantaneous ground state and excited state of Eq. (7). The corresponding rates γ^(α)_{±,u(t)} are determined by the noise power spectrum S_α(ω) of bath α evaluated at ±∆ϵ_u(t), where ∆ϵ_u(t) is the instantaneous energy gap of the system. Here ω_α, Q_α and g_α are the base resonance frequency, quality factor and coupling strength of the resonant circuit acting as bath α = H, C (see Refs. [49,63] for details). As in Ref. [63], we choose ω_C = 2E_0∆ and ω_H = 2E_0√(∆² + 1/4), such that the C (H) bath is in resonance with the qubit when u = 0 (u = 1/2). The width of the resonance is governed by Q_α. The total coupling strength to bath α is plotted in Fig. 3F.
In the quantum harmonic oscillator based heat engine, following Ref. [43], we describe the coupling to the baths through the Lindblad operators Â^(α)_{−,u(t)} = â_u(t) and Â^(α)_{+,u(t)} = â†_u(t) (identifying k = ±), with corresponding rates set by Γ_α and by the thermal occupation of the baths. Here â_u(t) = (1/√2)[√(mω_0u(t)) q̂ + i p̂/√(mω_0u(t))] and â†_u(t) are respectively the (control-dependent) lowering and raising operators, Γ_α is a constant rate setting the thermalization timescale of the system coupled to bath α, and n(x) = [exp(x) − 1]^{−1} is the Bose-Einstein distribution.
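For concreteness, a minimal numerical sketch of one integration step of such a master equation for the qubit refrigerator is given below; it assumes the qubit Hamiltonian form quoted in Eq. (7), generic excitation/relaxation rates supplied by the caller, and a simple Euler integrator, so the parameter values and function names are illustrative rather than the paper's.

import numpy as np

sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)

def hamiltonian(u, E0=1.0, Delta=0.12):
    # Assumed qubit Hamiltonian H[u] = E0*(Delta*sigma_x + u*sigma_z), cf. Eq. (7).
    return E0 * (Delta * sx + u * sz)

def dissipator(rho, A, gamma):
    # gamma * ( A rho A^dag - (1/2){A^dag A, rho} )
    AdA = A.conj().T @ A
    return gamma * (A @ rho @ A.conj().T - 0.5 * (AdA @ rho + rho @ AdA))

def lindblad_step(rho, u, gamma_plus, gamma_minus, dt, E0=1.0, Delta=0.12):
    # One Euler step; returns the updated state and the heat current
    # J = Tr[ H D[rho] ] flowing from the currently coupled bath into the system.
    H = hamiltonian(u, E0, Delta)
    eps, V = np.linalg.eigh(H)              # instantaneous eigenbasis (eps[0] <= eps[1])
    g, e = V[:, 0], V[:, 1]
    A_minus = np.outer(g, e.conj())         # |g><e|: relaxation
    A_plus = np.outer(e, g.conj())          # |e><g|: excitation
    D = dissipator(rho, A_minus, gamma_minus) + dissipator(rho, A_plus, gamma_plus)
    J = float(np.real(np.trace(H @ D)))
    rho = rho + dt * (-1j * (H @ rho - rho @ H) + D)
    return 0.5 * (rho + rho.conj().T), J    # re-symmetrize to limit numerical drift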
Training details

We now provide additional practical details and the hyperparameters used to produce the results of this manuscript.

In order to enforce sufficient exploration in the early stage of training, we do the following. As in Ref. [111], for a fixed number of initial steps, we choose random actions, sampling them uniformly within their range. Furthermore, for another fixed number of initial steps, we do not update the parameters, to allow the replay buffer to accumulate enough transitions. B is a first-in-first-out buffer, of fixed dimension, from which batches of transitions (s_k, a_k, r_{k+1}, s_{k+1}, a_{k+1}) are randomly sampled to update the NN parameters. After this initial phase, we repeat a policy evaluation step, a policy improvement step and a temperature update step n_updates times every n_updates steps. This way, the overall number of updates coincides with the number of actions performed on the QTM. The optimization steps for the value function and the policy are performed using the ADAM optimizer with the standard values of β_1 and β_2. The temperature parameters α_D and α_C are instead determined using stochastic gradient descent with learning rate 0.001. To favor an exploratory behavior early in the training, and at the same time to end up with a policy that is approximately deterministic, we schedule the target entropies H̄_C and H̄_D. In particular, we vary them exponentially at each step according to

H̄_a(n_steps) = H̄_{a,end} + (H̄_{a,start} − H̄_{a,end}) exp(−n_steps/H̄_{a,decay}),   (62)

where a = C, D, n_steps is the current step number, and H̄_{a,start}, H̄_{a,end} and H̄_{a,decay} are hyperparameters. In the superconducting qubit refrigerator case, we schedule the weight c as a function of the training step according to a Fermi (logistic) function of n_steps, smoothly interpolating from c = 1 at the beginning of training to its final value (the resulting schedule is shown in Fig. 3C).

In the harmonic oscillator engine case, to improve stability while training for lower values of c, we do not vary c during training as we do in the superconducting qubit refrigerator case. Instead, we discourage the agent from never utilizing one of the two thermal baths by adding a negative reward if, within the last N = 128 actions describing the state, fewer than 25 describe a coupling to either bath. In particular, if the number of actions N_α with d = α, for α = Hot, Cold, is less than 25 in the state time-series, we sum to the reward a penalty term that depends on N_α. This penalty has no impact on the final cycles, where N_α is much larger than 25. All hyperparameters used to produce the results of the superconducting qubit refrigerator and of the harmonic oscillator heat engine are provided respectively in Tables I and II, where c refers to the weight at which we are optimizing the return.
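The two schedules described above can be summarized in a few lines. In the sketch below, Eq. (62) is implemented as written, while the Fermi-like schedule for c and every numerical value (start and end entropies, decay constant, midpoint, width) are illustrative assumptions, since the exact expression for c(n_steps) did not survive in the text above and the actual hyperparameters are reported in Tables I and II.

```python
# Sketch of the scheduling used during training: the exponential target-entropy
# schedule of Eq. (62) and an assumed Fermi (logistic) schedule for the weight c.
# All numerical values are illustrative placeholders.
import numpy as np

def target_entropy(n_steps, h_start, h_end, h_decay):
    """Eq. (62): exponential interpolation from h_start to h_end."""
    return h_end + (h_start - h_end) * np.exp(-n_steps / h_decay)

def c_schedule(n_steps, c_final, n_mid, width):
    """Assumed Fermi-like schedule: c goes smoothly from 1 to c_final."""
    return c_final + (1.0 - c_final) / (np.exp((n_steps - n_mid) / width) + 1.0)

steps = np.arange(0, 500_000, 10_000)
h_d = target_entropy(steps, h_start=0.6, h_end=0.01, h_decay=100_000)
c = c_schedule(steps, c_final=0.6, n_mid=200_000, width=20_000)
print(h_d[0], h_d[-1])   # starts near 0.6, decays toward 0.01
print(c[0], c[-1])       # starts near 1, settles near 0.6
```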
Convergence of the RL approach

The training process presents some degree of stochasticity, such as the initial random steps, the stochastic sampling of actions from the policy function, and the random sampling of a batch of experience from the replay buffer to compute an approximate gradient of the loss functions. We thus need to evaluate the reliability of our approach. As shown in the main text, specifically in Figs. 4 and 5, we ran the full optimization 5 times. Out of the 65 trainings in the superconducting qubit refrigerator case, only 4 failed, and out of the 55 in the harmonic oscillator engine case, only 2 failed, where by "failed" we mean that the final return was negative. In such cases, we ran the training an additional time. Figs. 4A and 5B display an error bar corresponding to the standard deviation, at each value of c, computed over the 5 repetitions. Instead, in Figs. 4B and 5C we display one black dot for each individual training. As we can see, the overall performance is quite stable and reliable.

Lastly, we discuss the variability of the discovered cycles. The cycles shown in Figs. 4C-F and 5D-E were chosen by selecting the largest return among the 5 repetitions. In Figs. 8 and 9 we display cycles discovered in the last of the 5 repetitions, i.e. chosen without any post-selection. They correspond to the same setups and parameters displayed in Figs. 4C-F and 5D-E. As we can see, 5 out of the 6 displayed cycles are very similar to the ones displayed in Figs. 4C-F and 5D-E, with only slight variability. The only exception is Fig. 8B, where the cycle has a visibly shorter period and amplitude than the one shown in Fig. 4D. Despite this visible difference in the cycle shape, the return of the cycle shown in Fig. 8B is 0.382, compared to 0.385 for the cycle shown in Fig. 4D. We therefore conclude that, up to minor changes, the cycles are generally quite stable across multiple trainings.

Comparing with other methods

In Figs. 4 and 5 we compare the performance of our method respectively against optimized trapezoidal cycles and optimized Otto cycles. In both cases, we also maximize the power using the RL method of Ref. [69]. We now detail how we perform this comparison. In the refrigerator based on a superconducting qubit, we consider the trapezoidal cycle proposed in Refs. [49,63]: we fix that protocol with a = 2, and we optimize ⟨r_c⟩ with respect to the frequency Ω. In the heat engine case based on a quantum harmonic oscillator, we fix an Otto cycle as described in Ref. [43], i.e. a trapezoidal cycle consisting of the 4 strokes shown in Figs. 5D-E as a dashed line, and we optimize over the duration of each of the 4 strokes. In particular, we first performed a grid search in the space of these four durations for c = 1. After identifying the largest power, we ran the Newton algorithm to further maximize the return. We then ran the Newton algorithm for all other values of c. The comparison with Ref. [69] was done using the source code provided in Ref. [69], and using exactly the same hyperparameters that were used in Ref. [69]. In particular, in the case of the refrigerator based on a superconducting qubit, we re-ran the code using the hyperparameters reported in Table 1, column "Figs. 3, 4", of the Methods section of Ref. [69], and we trained for the same number of steps (500k). We then evaluated its power and coefficient of performance using the deterministic policy (which typically has a better performance). In the heat engine case based on a quantum harmonic oscillator, we evaluated the performance of the cycle reported in Fig. 5a,c of Ref. [69], whose training hyperparameters are reported in Table 1, column "Fig. 5a", of the Methods section of Ref. [69].

Generation of coherence

In order to quantify the coherence generated in the instantaneous eigenbasis of the Hamiltonian in the refrigerator based on a superconducting qubit, we evaluated the time average of the relative entropy of coherence [115], defined as

C(ρ(t)) = S(ρ_diag.(t)) − S(ρ(t)),   (66)

where ρ_diag.(t) is the density matrix ρ(t) expressed in the instantaneous eigenbasis |g_{u(t)}⟩, |e_{u(t)}⟩ with the off-diagonal terms canceled out, and S(·) denotes the von Neumann entropy.
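Eq. (66) is straightforward to evaluate numerically. The sketch below computes the relative entropy of coherence of a single density matrix in a given basis; the example state and the use of the computational basis are illustrative assumptions (in the paper the basis is the instantaneous eigenbasis of the Hamiltonian and the quantity is then time-averaged over the final cycle).

```python
# Minimal sketch of the relative entropy of coherence in Eq. (66) for a qubit.
# The density matrix and the basis below are illustrative placeholders.
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log rho], computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

def relative_entropy_of_coherence(rho, basis):
    """C(rho) = S(rho_diag) - S(rho), with rho_diag the dephased state in `basis`.

    `basis` is a unitary whose columns are the basis vectors."""
    rho_b = basis.conj().T @ rho @ basis              # express rho in that basis
    rho_diag = np.diag(np.diag(rho_b))                # cancel the off-diagonal terms
    return von_neumann_entropy(rho_diag) - von_neumann_entropy(rho_b)

# Example: a partially coherent qubit state, evaluated in the computational basis.
rho = np.array([[0.7, 0.25], [0.25, 0.3]], dtype=complex)
basis = np.eye(2, dtype=complex)
print(relative_entropy_of_coherence(rho, basis))      # > 0 whenever coherences exist
```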
We compute the time-average of the relative entropy of coherence generated by the final deterministic cycle found by the RL agent, and compare it to the coherence generated by a trapezoidal cycle operated at the same speed, i.e. with the same period. As we can see in Table III, the trapezoidal cycles generate twice as much coherence as the RL cycles shown in Figs. 4C-F, i.e. those corresponding to c = 1, 0.8, 0.6, 0.4.

CODE AND DATA AVAILABILITY

The code used to generate all results is available on GitHub (https://github.com/PaoloAE/paper_rl_blackbox_thermal_machines). All raw data that was generated with the accompanying code and that was used to produce the results in the manuscript is available on Figshare (https://doi.org/10.6084/m9.figshare.19180907).

FIG. 3. Training of the superconducting qubit refrigerator model to optimize ⟨r_c⟩ at c = 0.6. (A): Schematic representation of the energy levels of the qubit (horizontal black lines) that are controlled by u(t). The gray arrow represents the input power, while the colored arrows represent the heat fluxes. (B): Return ⟨r_c⟩_i computed over past rewards (black curve), running average of the cooling power ⟨P_cool⟩_i/P_0 (green curve), and of the negative entropy production −⟨Σ⟩_i/Σ_0 (orange curve), as a function of the training step. The dashed line represents the value of the return found optimizing the period of a smoothed trapezoidal cycle. (C): Value of the weight c as a function of the step. It is varied during training from 1 to the final value 0.6 to improve convergence. (D): Actions chosen by the agent, represented by the value of u, as a function of the step, zoomed around the three black circles in panel (B). (E): Final deterministic cycle found by the agent (thick black dots) and smoothed trapezoidal cycle (thin dashed line), whose return is given by the dashed line in panel (B), as a function of time. (F): coupling strengths γ^(C)_u (blue curve) and γ^(H)_u (red curve) as a function of u (on the y-axis). The parameters used for training are N = 128, g_H = g_C = 1, β_H = 10/3, β_C = 2β_H, Q_H = Q_C = 4, E_0 = 1, ∆ = 0.12, ω_H = 1.028, ω_C = 0.24, U = [0, 0.75], ∆t = 0.98, γ = 0.997, P_0 = 6.62·10^(−4) and Σ_0 = 0.037.

as a dashed line (see "Comparing with other methods" in Materials and Methods for details).

FIG. 4. Results for the optimization of the superconducting qubit refrigerator model. (A): final value of the return ⟨r_c⟩, as a function of c, found using the RL method (black and blue points), and optimizing the period of a trapezoidal cycle (red dots). The error bars represent the standard deviation of the return computed over 5 independent training runs. (B): corresponding values of the final average cooling power ⟨P_cool⟩ and of the coefficient of performance η_cool found using the RL method (black and blue dots), optimizing the trapezoidal cycle (red dots), and using the RL method of Ref. [69] (purple cross). Results for each of the 5 repetitions are shown as separate points to visualize the variability across multiple trainings. (C-F): final deterministic cycles identified by the RL method (thick black dots), as a function of time, corresponding to the blue points in panels (A) and (B) (respectively for c = 1, 0.8, 0.6, 0.4, choosing the training run with the largest return). The dashed line represents the trapezoidal cycle that maximizes the return for the same value of c [not shown in panel (F) since no cycle yields a positive return]. The parameters used for training are chosen as in Fig. 3.
FIG. 5. Results for the optimization of the harmonic oscillator heat engine model. (A): Schematic representation of the energy levels of the particles (black horizontal lines) trapped in a harmonic potential (parabolic curve) whose amplitude is controlled by u(t). The gray arrow represents the extracted power, while the colored arrows represent the heat fluxes. (B): final value of ⟨r_c⟩, as a function of c, found using the RL method (black and blue dots), and optimizing the Otto cycle (red dots). The error bars represent the standard deviation of the return computed over 5 independent training runs. (C): corresponding values of the average power ⟨P_heat⟩/P_0 and of the efficiency η_heat found using the RL method (black and blue dots), optimizing the Otto cycle (red dots), and using the RL method of Ref. [69] (purple cross). Results for each of the 5 repetitions are shown as separate points to visualize the variability across multiple trainings. (D-E): final deterministic cycles identified by the RL method (thick dots), as a function of time, corresponding to the blue points in panels (B) and (C) (respectively c = 1, 0.5, choosing the training run with the largest return). The color corresponds to the discrete choice d = {Hot, Cold, None} (see legend). The dashed line represents the Otto cycle that maximizes the return for the same value of c. The parameters used for training are N = 128, Γ^(H) = Γ^(C) = 0.6, β_H = 0.2, β_C = 2, ω_0 = 2, U = [0.5, 1] (to enable a fair comparison with Ref. [43]), ∆t = 0.2, γ = 0.999, P_0 = 0.175 and Σ_0 = 0.525.

FIG. 7. Neural network architecture used to parameterize the policy π(⃗u, d|s) (panel A) and to parameterize the value function Q(s, ⃗u, d) (panel B).

TABLE III. Coherence generated by the final deterministic cycles identified by the RL method (RL column) and generated by a trapezoidal cycle operated at the same speed (Trapez. column), at the values of c shown in the first column. These values correspond to the cycles shown in Figs. 4C-F.

TABLE I. Hyperparameters used in numerical calculations relative to the superconducting qubit refrigerator that are not reported in the caption of Fig. 3.
14,511
2022-04-10T00:00:00.000
[ "Physics" ]
Intracellular Trafficking and Synaptic Function of APL-1 in Caenorhabditis elegans Background Alzheimer's disease (AD) is a neurodegenerative disorder primarily characterized by the deposition of β-amyloid plaques in the brain. Plaques are composed of the amyloid-β peptide derived from cleavage of the amyloid precursor protein (APP). Mutations in APP lead to the development of Familial Alzheimer's Disease (FAD), however, the normal function of this protein has proven elusive. The organism Caenorhabditis elegans is an attractive model as the amyloid precursor-like protein (APL-1) is the single ortholog of APP, and loss of apl-1 leads to a severe molting defect and early larval lethality. Methodology/Principal Findings We report here that lethality and molting can be rescued by full length APL-1, C-terminal mutations as well as a C-terminal truncation, suggesting that the extracellular region of the protein is essential for viability. RNAi knock-down of apl-1 followed by drug testing on the acetylcholinesterase inhibitor aldicarb showed that loss of apl-1 leads to aldicarb hypersensitivity, indicating a defect in synaptic function. The aldicarb hypersensitivity can be rescued by full length APL-1 in a dose dependent fashion. At the cellular level, kinesins UNC-104/KIF-1A and UNC-116/kinesin-1 are positive regulators of APL-1 expression in the neurons. Knock-down of the small GTPase rab-5 also leads to a dramatic decrease in the amount of apl-1 expression in neurons, suggesting that trafficking from the plasma membrane to the early endosome is important for apl-1 function. Loss of function of a different small GTPase, UNC-108, on the contrary, leads to the retention of APL-1 in the cell body. Conclusions/Significance Our results reveal novel insights into the intracellular trafficking of APL-1 and we report a functional role for APL-1 in synaptic transmission. Introduction Alzheimer's disease (AD) is a progressive neurodegenerative disorder characterized by the deposition of b-amyloid plaques, loss of cholinergic neurons and accumulation of neurofibrillary tangles within the brain. Plaques are primarily composed of the amyloid-b peptide derived from cleavage of the amyloid precursor protein (APP). Despite the discovery of dominant mutations in APP that lead to the development of Familial Alzheimer's Disease, the normal functional role of this protein within the neuron is still unclear. Past studies have implicated APP in cell adhesion, synaptogenesis, cell migration, signaling, apoptosis, axonal transport as well as development of the neuromuscular junction suggesting that APP is not restricted to a single function (See [1] for review). APP is a type I trans-membrane protein that is conserved from C. elegans to humans. APP knock-out mice are viable and fertile and have mild defects in locomotor activity, forelimb grip strength, behavior and long term potentiation (LTP) [2,3,4,5]. The subtlety of these phenotypes is thought to be due to functional redundancy with the two other members of the APP family, APLP1 and APLP2 as loss of APLP2 along with one of the other two APP homologs results in early postnatal lethality in mice [6,7]. Because of the redundancy of these homologs, using the mammalian system to study the function of APP has proven challenging. The C. elegans model offers a simplification of the mammalian system in that APL-1 is the only APP ortholog in the nematode and a null mutation leads to early larval lethality [8]. 
APL-1 is structurally similar to its mammalian counterpart and shares three major regions of homology: the N-terminal E1 and E2 domains and the highly conserved intracellular C-terminal domain [9]. APL-1 does not contain the amyloid-b sequence, similar to the functionally redundant mammalian APP homologs APLP1 and APLP2 [10,11,12]. In this study, we use C. elegans to investigate the normal functional role of APL-1. We report that APL-1 is necessary for viability, molting and regulation of neurotransmission. Full length rescue of the synaptic transmission defect is dose dependent, while the N-terminus of APL-1 is sufficient to rescue the molting and lethality phenotypes. At the cellular level, proper localization and protein levels of APL-1 throughout the neuron are dependent on the kinesin transporters UNC-104/KIF1A and UNC-116/kinesin-1 as well as the small GTPase RAB-5 and UNC-108/Rab2, indicating their likely role in APL-1 vesicle transport and endocytosis. RNAi RNAi was performed by feeding as previously described [14]. RNAi clones were isolated from the Ahringer RNAi library (Gene Service) by streaking clones onto plates containing 10 mg/ml tetracycline and 100 mg/ml ampicillin. Before use, all RNAi clones isolated from the library were validated by sequencing. Cultures were grown overnight in LB containing 100 mg/ml ampicillin and used to seed NGM plates containing 1 mM IPTG and 50 mg/ml ampicillin. Development and Brood Size Assays Ten to twelve young adult worms were placed on individual RNAi plates or NGM plates and allowed to lay eggs overnight. The eggs were collected onto new plates and counted. After 48 hours the worms were scored for their developmental stage. For brood size assays, ten L4 worms were placed on individual NGM plates. Every 24 hours the number of eggs the worms produced were counted and the mothers transferred to new plates until no more eggs were produced. Brood size assays were analyzed by one-way ANOVA with Bonferroni post-hoc test. Movement Assays Body bends per minute were obtained by placing late L4 worms individually on NGM plates and screening 24 hrs later. Body bends were counted over a 3 min time period and then divided to calculate the average number of body bends in one minute. One body bend is completed when the point behind the pharynx reaches the opposite apex of the sinusoidal curve. Measurements were statistically analyzed by one-way ANOVA with Bonferroni post-hoc test. Plasmid Construction All constructs were generated, unless otherwise described, by amplifying target sequences with Phusion High-Fidelity DNA Polymerase (Finnzymes) using primers with overhanging restriction sites. The PCR products were digested with their respective restriction enzymes and ligated to the destination vector backbone. For apl-1 expression studies and rescue experiments, worm apl-1 genomic coding DNA was amplified from cosmid C42D8 (Sanger Institute) including 4.4 kb of sequence upstream from the start codon. This fragment was ligated into L3781 (Addgene plasmid 1590; Fire Lab C. elegans Vector Kit, 1999 plate) in frame with the GFP sequence at the C-terminal end of apl-1 to generate papl-1::apl-1::gfp. The C-terminal truncation construct papl-1::apl-1DIC::gfp was generated by amplifying the coding region of apl-1 excluding the sequence for the last 36 amino acids. This PCR product was cloned into L3781 followed by cloning the apl-1 promoter upstream. 
Constructs containing papl-1::apl-1DYENP-TY::gfp, papl-1::apl-1::T658A::gfp, and papl-1::T658E::gfp were generated from the original papl-1::apl-1::gfp full length construct using site-directed mutagenesis to introduce the individual mutations. For human rescue studies, human APP cDNA containing an mRFP fusion was amplified from N1-APP-RFP and cloned into the L3781 expression vector that had been cut with XmaI and NheI to remove the coding region of GFP. The apl-1 promoter was amplified from C42D8 then added to the APP::RFP clone to generate papl-1::APP::RFP. APLP1 and APLP2 cDNA sequences were amplified from clones 3865417 and 2820109 (Open Biosystems) respectively and also cloned into the L3781 vector with the GFP tag on the 39 end of the gene followed by the insertion of the apl-1 promoter. Plasmids for colocalization purposes were made by amplifying mCherry from the vector pCFJ90 (Addgene plasmid 19327; [15]) and cloning the sequence upstream of the start codon of either rab-5 or unc-108 cDNA. For rab-5, the mCherry sequence was cloned into pBZ103 containing phsp16/2::rab-5 followed by cloning the apl-1 promoter to generate the final construct containing papl-1::mCherry::rab-5. The unc-108 construct was developed by inserting the amplified mCherry sequence into pAOLO174 containing punc-108::unc-108(Q65L). The Q65L mutation was removed using site-directed mutagenesis to generate the WT sequence. pBZ103 and pAOLO174 were generous donations from the Zhou lab at Baylor College of Medicine. All constructs were completely sequenced to verify accuracy of promoters and coding regions. All primers used for cloning are referenced in File S1. Transgenic Strains Transgenic strains were generated through microinjection of DNA constructs into the worm gonad as previously described [16]. For expression studies, papl-1::apl-1::gfp (40 ng/ml) was co-injected with the marker construct pRF4 (75 ng/ml), which contains rol-6(su1006). The rescue constructs described above were individually injected into the gonads of the apl-1(tm385)/lon-2(e678) strain. F1 progeny containing the array were individually cloned onto new plates and the F2 progeny of the lines were analyzed to determine if rescue of the L1 lethality occurred by absence of both long worms, indicating loss of the balancer, and arrested L1 progeny. Presence of the tm385 deletion was tested by PCR. The full length apl-1 10 ng/ml expressing strain was integrated by UV irradiation, outcrossed 3X with the lon-2(e678) strain to maintain the tm385 deletion then crossed to the rrf-3(pk1426) background for RNAi studies. For colocalization experiments, constructs were co-injected into N2 worms. Rescue strain construct concentrations are listed in Table 1 with each being co-injected with the pmyo-3::CFP marker (L4816; Fire Lab C. elegans Vector Kit, 1999 plate) (20 ng/ml) and pCR2.1 added to a final injection concentration of 100 ng/ml. For colocalization experiments the following injection concentrations were used: papl-1::apl-1::gfp (20 ng/ml), papl-1::mCherry::rab-5 (20 ng/ml), punc-108::mCherry::unc-108 (5 ng/ml); pRF4 (75 ng/ml). Strains created in this study are listed in File S1. Representative strains are listed where multiple strains were generated at the same concentration. Aldicarb Assay L4 hermaphrodites were placed on control (L4440) or apl-1 RNAi plates and allowed to mature and lay eggs overnight. 
Young adults from the F1 progeny were placed onto NGM plates containing 1 mM aldicarb (PS734, Sigma-Aldrich) in the presence of food then scored semi-blind for response to a harsh touch every 10 minutes for 2 hours. Animals unable to respond to the touch were scored as paralyzed. Adult RNAi aldicarb experiments were performed 48 hrs after young adult animals were placed on the control (L4440) or apl-1 RNAi plates. All rescue and mutant strains were tested as young adults. Strains were compared statistically by one-way ANOVA with Bonferroni post-hoc test. Protein Extraction and Western Blotting Worms were synchronized by bleaching as previously described [17] and three 10 cm plates containing L4 stage worms were harvested. Samples were washed with TE, pelleted by centrifugation then placed at 280uC for at least 15 min. The pellet was thawed and all liquid replaced with 50 ml RIPA buffer containing protease inhibitors (Roche). Samples were sonicated twice for 10 seconds each then centrifuged at 10,5006g for 10 min at 4uC. Supernatants were collected and protein concentrations were measured by plate reader using a detergent-compatible colorimetric protein assay (Bio-Rad). Samples were then combined with 2X loading buffer and boiled at 80uC for 10 min prior to loading. SDS-PAGE was performed by loading 20 mg of protein sample into a 5% SDSpolyacrylamide gel. Samples were transferred onto nitrocellulose membrane and then the membrane was blocked for 2 hrs in 5% milk diluted with PBST. Membranes were probed with antibodies against GFP 1:5000 (ab290; Abcam) or C. elegans c-tubulin 1:1000 (ab50721;Abcam) overnight in blocking solution. Following incubation with primary antibodies, membranes were washed 3X for 10 min in PBST, incubated with 2u anti-Rabbit HRP-conjugated antibody 1:5000 (Vector Laboratories) for 1 hr then washed again 3X for 10 min in PBST. Bands were detected using the Amersham ECL chemiluminescence reagent (GE Healthcare Life Sciences) and band density quantified using ImageJ software (National Institutes of Health). Band density was normalized to the loading control and compared using the Student's t-test. Fluorescence Microscopy and Quantification Imaging was performed by placing live animals anesthetized with 20 mM sodium azide on a 2% agarose pad. Images were obtained using a Zeiss Axioscope 2 Plus upright microscope equipped with an Axiocam MRm camera and Axiovision 4.1 software. Pictures were acquired with 100X or 63X lens. In the case of fluorescence quantification, head neurons of 10-20 worms per genotype were imaged with identical exposure times. Each neuronal cell body was imaged at its widest diameter in the plane of focus. Fluorescence was quantified using ImageJ. Control pictures were taken on the same day to account for differences in bulb intensity. Genotypes were compared using the Student's ttest. Colocalization experiments were performed by taking images on an Olympus IX50 inverted microscope. Images underwent deconvolution using Metamorph software and then overlayed using ImageJ. Colocalization quantification was performed as previously described [18] using intensity correlation analysis with the following modifications. The neuronal cell body was set as the region of interest and then the intensity correlation quotient (ICQ) was quantified using ImageJ as described by the McMaster Biophotonics Facility. 
In brief, an ICQ of 0 indicates random staining between the two fluorescent images, while 0 < ICQ ≤ 0.5 indicates colocalization and −0.5 ≤ ICQ < 0 occurs with segregated staining. For further description of the analysis refer to Li et al. (2004) [18]. ICQ values of the neurons were statistically analyzed using the one-sample t-test to compare the mean ICQ values against a random staining value of 0.

RNA Extraction and qRT-PCR

Worms were synchronized by bleaching and harvested at the L4 stage (~6,000 per sample) by rinsing worms off the plate using TE. Samples underwent a freeze/thaw cycle 4X between liquid nitrogen and a 37°C water bath, followed by RNA isolation using the Qiagen RNeasy Lipid Tissue kit method with the addition of the RNase-free DNase steps (Qiagen). cDNA was generated by reverse transcription using the Superscript III First Strand kit (Invitrogen) with the input of equal concentrations (1000 ng/µl) of RNA as measured by NanoDrop (NanoDrop Technologies). PCR primers to test apl-1 expression were designed using Primer Express 2.0 (Applied Biosystems) (apl-1 Fwd ACTCACAGTGTCAGACCGTACCA, apl-1 Rev GTGCGGGACTTGAAGAGCTT) and ama-1 was used as the endogenous control (ama-1 qPCR f1 CACGCGTTCAGTTTGGAATTC, ama-1 qPCR r1 AACTCGACATGAGCCACTGACA). Dilutions of the cDNA samples were mixed with SYBR Green following the Power SYBR Green PCR Master Mix protocol (Applied Biosystems). Quantitative real-time PCR (qRT-PCR) was performed using the ABI Prism 7000 and data collected using the 7000 System SDS Software (Applied Biosystems). Primer efficiencies were originally validated using the standard curve method, with all subsequent experiments using a 1:50 dilution of cDNA and results analyzed using the comparative Ct method. Bars represent the mean of a triplicate containing a single biological sample, with error bars calculated from the sample and endogenous control standard deviations (STD = √((Sample STD)² + (Housekeeping Gene STD)²)).

Table 1. Summary of injection constructs and their rescue of apl-1(tm385) lethality.

Characterization of apl-1 Expression

To determine the expression pattern of apl-1, transgenic worms were generated expressing an apl-1::gfp fusion protein driven by the endogenous apl-1 promoter. APL-1::GFP fluorescence is detected in the cell bodies and processes of nerve ring interneurons (Figure 1 A,e) and the ventral cord (Figure 1 A,g). apl-1::gfp is also expressed in socket cells and amphids present in the head. Strong expression is seen in junctional cells such as the pharyngeal intestinal valve (Figure 1 A,e), which tethers the pharynx to the intestine, and the uterine seam junction in adults (Figure 1 A,h), which provides the structural connection between the epidermis and the uterus. APL-1 can be weakly detected in many epidermal epithelia including hyp7 (Figure 1 A,f), the hypodermal syncytium surrounding the worm, as well as vulval cells, rectal valve cells, pharyngeal arcade cells, and tail hypodermis. Expression is prominent in the excretory cell (Figure 1 A,h), a long H-shaped cell implicated in fluid balance. APL-1 was notably absent from body wall muscle and intestine. These expression patterns indicate that apl-1 is active in cells with high levels of structural components such as synapses, junctional epithelial cells and cells with apical basal polarity.
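The intensity correlation quotient used in the colocalization analysis above lends itself to a compact numerical sketch. In the simplified example below (synthetic images, no background thresholding, illustrative variable names and sizes), the ICQ is the fraction of pixels whose intensities in the two channels vary in the same direction around their respective means, minus 0.5, reproducing the sign conventions quoted above.

```python
# Simplified sketch of the intensity correlation quotient (ICQ) of Li et al. (2004):
# for each pixel the product (A_i - mean(A)) * (B_i - mean(B)) is computed, and the
# ICQ is the fraction of pixels with a positive product minus 0.5, so that random
# staining gives ~0, colocalization gives 0 < ICQ <= 0.5, and segregation gives
# -0.5 <= ICQ < 0. The synthetic images are illustrative only.
import numpy as np

def icq(channel_a, channel_b):
    """Intensity correlation quotient of two equally sized intensity images."""
    a = channel_a.astype(float).ravel()
    b = channel_b.astype(float).ravel()
    products = (a - a.mean()) * (b - b.mean())
    return float(np.mean(products > 0) - 0.5)

rng = np.random.default_rng(1)
base = rng.uniform(0, 255, size=(64, 64))           # shared structure in both channels
noise_a = rng.normal(0, 20, size=(64, 64))
noise_b = rng.normal(0, 20, size=(64, 64))

colocalized = icq(base + noise_a, base + noise_b)    # both follow `base` -> ICQ > 0
random_pair = icq(base + noise_a, rng.uniform(0, 255, size=(64, 64)))  # ~0
print(colocalized, random_pair)
```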
apl-1 Loss of Function and Genetic Rescue Similar to the molting defect caused by apl-1 null mutations, knock-down of apl-1 using RNAi on the RNAi sensitive rrf-3(pk1426) strain led to defective molting starting at the L3/L4 molt and continuing in the L4/YA molt ( Figure 1B). A variety of molting phenotypes were seen which ranged from loose cuticle around the head and tail (7.5%, n = 173), internal pinching of the worm body at or just posterior to the head (4.6%), degradation around the mouth area (9.8%) or a cuticle plug around the mouth (38.1%) ( Figure S1A, B). All of the worms had a very transparent appearance that, when examined at higher magnification, appeared as empty spaces in the worm spanning the length of the body. apl-1 knock-down also led to delayed development, as most of the population after two days was in the L4 stage while the majority of the control worms had completed development to adulthood ( Figure S1C). Furthermore, we observed that worms on apl-1 RNAi exhibited sluggish movement, failing to move normally even when touched. The apl-1(tm385) allele is a deletion that removes 646 base pairs including exon 3, which deletes 42 amino acids leading to a frame shift of the downstream sequence, resulting in a premature stop PLoS ONE | www.plosone.org codon ( Figure S2A). Worms homozygous for the apl-1(tm385) deletion are L1 lethal and exhibit internal vacuolization, degradation and loose cuticle phenotypes ( Figure 1C) similar to previously reported null mutations [8], indicating that the tm385 lesion is also null. We attempted to rescue the lethality of this mutant using constructs containing either full length apl-1, mutations within the highly conserved C-terminal domain, a Cterminal truncation of apl-1, or human APP, APLP1 or APLP2 (summarized in Table 1). All constructs were driven by the apl-1 promoter and fused to a C-terminal GFP. Rescue constructs included a deletion of the highly conserved YENPTY motif which is known to bind to many different adaptor proteins or mutations of the conserved Thr668 residue (Thr658 in APL-1) which is a phosphorylation site that can regulate the localization and binding partners of APP [19,20,21,22,23]. The Thr658 site was mutated either to Ala (T658A) or Glu (T658E) to mimic the dephosphorylated and constitutively phosphorylated protein respectively. In addition, we created a C-terminal truncation construct (DIC) by removing the last 36 amino acids of the protein leaving the transmembrane sequences intact rather than expressing only the soluble ectodomain of APL-1, attempting to maintain proper membrane anchoring and correct processing of the protein. To avoid any potential complications associated with the C-terminal GFP fusion used to monitor APL-1 expression, we also performed rescue experiments in parallel using full-length apl-1 lacking the GFP tag. Previous genetic rescue studies showed that the full length APL-1, the N-terminal sequences of APL-1, as well as the nonoverlapping E1 or E2 fragments of APL-1 were sufficient to rescue apl-1 null lethality [8]. These strains were generated by injecting very high concentrations of the DNA constructs (50-100 ng/ml), which consequently led to strong over-expression of apl-1 and functional impairments including delayed development, sluggish movement and smaller brood sizes. This ectopic over-expression may bypass any intracellular trafficking or processing requirement needed for rescue under physiological conditions. 
We attempted to limit these potential over-expression side effects by injecting much lower concentrations of the expression vectors (ranging from 10-20 ng/ml). We found that the most highly expressing array on the N2 background displayed normal movement and brood sizes with only a slight developmental delay ( Figure S2), indicating that the effect of over-expression is minimal at the DNA concentrations we used for injection. When expressed on the apl-1(tm385) background, we found that full length apl-1 along with all of the C-terminal mutation constructs were able to rescue the lethality and molting defects caused by the tm385 deletion ( Figure 1D, Table 1). Interestingly, while injection of apl-1 DIC at 10 ng/ml rescued the lethality and molting, the construct was unable to rescue at the higher concentration of 20 ng/ml. The reason for this dose-dependent rescue is not clear as it is not seen in any of the other constructs. In order to determine if human APP or one of its mammalian homologs APLP1 or APLP2 could act as functional homologs to apl-1, each of these genes expressed by the apl-1 promoter were injected individually into apl-1(tm385) heterozygous worms to test for rescue of the apl-1 lethality. Similar to the previous report of APP being unable to rescue apl-1 null lethality [8], none of the human genes were able to rescue the tm385 lethality, either expressed separately or together (Table 1). apl-1 Deficiency Leads to Aldicarb Hypersensitivity Independent of the Molting Phenotype Mammalian studies have revealed that mice lacking both APP and APLP2 display impaired synaptic structure and function at the peripheral cholinergic neuromuscular junction [24]. To examine whether apl-1 knock-down has a similar effect on neurotransmission, worms were tested for their response to the acetylcholinesterase inhibitor aldicarb, which blocks the breakdown of acetylcholine in the synaptic cleft, leading to constant stimulation of postsynaptic receptors and paralysis over time [13,25]. Worm mutants with excess or depleted acetylcholine become hypersensitive or resistant to aldicarb respectively. We found that apl-1 RNAi treated worms exhibited hypersensitivity to aldicarb (Figure 2A, B). In order to address the question of whether this phenotype may be a secondary effect due to the molting defect also seen in these worms, we bypassed the molting stages by placing RNAi sensitive young adult worms that have completed the molt cycle on apl-1 RNAi and subjected them to aldicarb testing. These worms were also hypersensitive to aldicarb ( Figure 2C, D), indicating a direct effect of APL-1 on neurotransmission, independent of molting. Next, we examined whether the transgenically expressing full length APL-1 rescue strains were able to rescue this neurotransmission defect. We tested full length strains made by injecting different concentrations of DNA (20 ng/ml and 10 ng/ml), and therefore having different levels of transgenic protein expression confirmed by both qRT-PCR and Western blot ( Figure 3A, S4A). Interestingly, only the strain that expressed the higher concentration of APL-1 was able to rescue the aldicarb hypersensitive phenotype while the lower expressing strain did not ( Figure 3B, C). These results show a dosage dependent effect of APL-1. This effect was not due to the presence of the GFP tag, as rescue strains with a similar level of expression without the tag retained the aldicarb hypersensitivity ( Figure S3). 
Since the rescue of aldicarb hypersensitivity, but not molting or development, by full length APL-1 is dose dependent, this result further strengthens the notion that APL-1 directly mediates synaptic transmission independent of molting. To test whether the C-terminus of apl-1 is required for the regulation of neurotransmission, we tested the aldicarb sensitivity of the DIC and the DYENPTY rescue strains and found that the C-terminal mutants exhibited similar aldicarb hypersensitivity as compared to full length APL-1 expressed at comparable levels ( Figure S4). These data provide indirect support against a potent role of the highly conserved C-terminal domain in APL-1 mediated synaptic transmission. UNC-104/KIF1A, UNC-116/Kinesin 1 and RAB-5 Positively Regulate APL-1 Expression While the trafficking of APP has been extensively studied in neurons, the movement of APL-1 through the cell is still unknown. APP is normally trafficked in a kinesin-dependent anterograde fashion from the cell body to the nerve terminal [26]. Due to APL-1's strong expression in neurons, we decided to test whether APL-1 can be transported by two of the major kinesins involved in anterograde transport of synaptic proteins. The first kinesin we investigated is the worm homolog of KIF1A, UNC-104. UNC-104/KIF1A is responsible for transporting synaptic vesicles and dense core vesicles to sites of synaptic transmission [27,28,29]. In order to test if APL-1 transport depends on this neuronal kinesin, we crossed the APL-1::GFP transgenic rescue strain to the hypomorphic mutant unc-104(e1265). Interestingly, rather than observing an accumulation of the GFP fluorescence in the cell body, which traditionally results from the reduction of UNC-104 mediated vesicle transport [27], we found a dramatic decrease in the fluorescence from apl-1::gfp on the unc-104(e1265) background, as measured by fluorescence intensity in a set of three head interneurons that consistently expressed apl-1 ( Figure 4A, B) Addition-ally, APL-1 fluorescence was absent from the processes of the neurons, quantified from a specific dorsal process that is consistently visible on the N2 background ( Figure 4C). Western blotting from L4 worms also detects a drop in APL-1 protein expression on the unc-104(e1265) background ( Figure 4D, E). qRT-PCR comparing apl-1 expression between N2 and unc-104(e1265) backgrounds showed no differences, suggesting that the loss of APL-1::GFP is occurring at the protein level ( Figure 4F). Using a similar approach, we tested the ability of UNC-116/ kinesin-1 to transport APL-1 by crossing the APL-1::GFP transgenic strain to the hypomorphic unc-116(e2310) background. Kinesin-1 has been found to play a prominent role in the transport of APP along the axon [26]. Again we saw a reduction in the amount of APL-1::GFP expression on the unc-116 mutant background, although to a lesser extent as that on the unc-104 background ( Figure S5). However, there was a complete loss of APL-1 expression along the dorsal axon in every nematode analyzed on the unc-116 background. These results suggest that APL-1 localization is dependent on both kinesins. In addition to being transported by kinesins, APP has previously been shown to localize to the early endosome by its strong colocalization with Rab5 positive compartments in preparations of nerve terminals from rat forebrain and PC12 cells [30,31]. As a small GTPase, Rab5 regulates endosomal trafficking of vesicles from the plasma membrane to the early endosome [32]. 
To determine whether APL-1 was also present in RAB-5 compartments in the worm we generated strains co-expressing APL-1::GFP and mCherry::RAB-5. We found that these proteins colocalized within a subset of puncta ( Figure 5A). This was reconfirmed by observing consistently positive intensity correlation quotient (ICQ) values in the different neurons analyzed ( Figure 5B). To determine whether loss of rab-5 had an effect on the localization of APL-1, we used RNAi to knock-down rab-5 expression in an integrated APL-1::GFP expression strain on the RNAi sensitive rrf-3(pk1426) background. Loss of rab-5 led to a dramatic decrease in the amount of APL-1::GFP in neurons as well as a complete loss of APL-1 in the dorsal process ( Figure 5C, D). By contrast, knock-down of two other small GTPases, rab-7 or rab-10, did not affect APL-1 expression, suggesting that RAB-5 compartments specifically are important for the localization of APL-1 ( Figure 5E). unc-108 Mutations Lead to Altered Intracellular Localization of APL-1 UNC-108 is a small GTPase expressed in neurons and engulfing cells that localizes to the Golgi and early endosome [33,34]. UNC-108 has been found to be involved in the maturation of dense core vesicles (DCVs), a distinct vesicular population containing peptide hormones and neuropeptides [34,35,36]. APL-1 likely undergoes fast axonal transport in a vesicular population, therefore we wanted to investigate if UNC-108 is required for the packaging of APL-1 into vesicles destined for anterograde transport. To study this possibility we crossed the hypomorphic mutant unc-108(n3263) to the APL-1::GFP expressing strain. We found that one of the head inter-neurons appeared to have a back-up of protein in the cell body ( Figure 6A, B). This aggregation of protein in a distinct compartment was also seen in ventral cord neurons. We then performed colocalization experiments by generating strains expressing APL-1::GFP and mCherry::UNC-108. Similar to the colocalization with RAB-5, APL-1 is found in overlapping puncta with UNC-108 in neurons, demonstrating APL-1 localization to the same compartment ( Figure 6C). The ICQ values of the different neuronal populations were consistently positive, showing that the proteins colocalize together at a similar frequency as with RAB-5 ( Figure 6D). These data suggest that UNC-108 is required for the localization of APL-1 and ultimately its transport. N-terminus of APL-1 Is Required for Worm Survival and Molting We have shown that loss of apl-1 contributes to defects in at least two systems, one of which is molting. The L1 lethality seen in the apl-1(tm385) strain is likely due, at least in part, to the molting defect, which is recapitulated by apl-1 RNAi. Worms with apl-1 knocked-down share the loose cuticle and internal vacuolization phenotypes of apl-1(tm385) L1s but exhibit these phenotypes later in development. The clear appearance of these worms may be due to inappropriate release of proteases involved in the molting process, loss of proper adhesion between tissues, and/or loss of fat stores due to starvation from the loose cuticle blocking food intake. Whether the sluggish movement defect seen in adult apl-1 knockdown worms is due to the molting defect, starvation or a neurotransmission defect remains to be seen. The phenotypes shown in this RNAi study are more severe than those described previously [37], possibly due to our use of an RNAi sensitive strain to ensure neuronal knock-down of apl-1. 
The particular molting defect seen with apl-1 RNAi is indicative of a failure to undergo ecdysis, or shedding of the old exoskeleton [38]. Another single-pass trans-membrane protein possessing a similar loose cuticle molting defect is the LDL receptor-related protein (LRP-1), which is the C. elegans ortholog of LRP-2/megalin and likely functions in cholesterol uptake and homeostasis [38,39,40]. A null mutation in LRP-1, like APL-1, leads to arrest and lethality although at later larval stages [40]. These similarities suggest that lrp-1 and apl-1 may operate in the same or similar pathways to control the molting process. Since we found that APL-1 does not require its C-terminal domain for rescue of the molting defect and soluble, secreted APP has been shown to bind to an LRP homolog [41], an attractive hypothesis would be that the Nterminal domain of APL-1 is shed and released at regulated periods followed by binding to LRP-1 to mediate proper ecdysis at each of the four molts. Our rescue results and previous studies support the notion that only the N-terminus of APL-1 is required to rescue the lethality seen in the apl-1 null strain [8]. In mice, expression of APP that has been truncated either at the a-cleavage site or had the last 15 amino acids removed could ameliorate APP knock-out phenotypes such as reduced body and brain weight, defective LTP and spatial learning, and loss of grip strength [42]. These findings combined support an important function of the N-terminus of APP in the mammalian system as well as in C. elegans. While the behavior defects in the Drosophila APPL null can be rescued by expression of human APP [43], we were unable to rescue the apl-1 null lethality in C. elegans by expressing any of the human homologs of APP. Several notable differences between the fly and the worm homolog of APP could account for the differences in cross species rescue. Unlike APL-1, expression of the fly homolog is confined to neurons and loss of APPL does not affect Drosophila viability or fertility [43,44]. Furthermore, different domains in APL-1 and APPL are required for their respective functions. The entire APPL protein is required for its proper function in the fly, whereas in C. elegans the APL-1 N-terminus is the critical domain needed to rescue the lethal apl-1 null mutant. APPL over-expression induced synaptic bouton formation could be prevented by deletion of the Cterminal domain or the N-terminal E1 and E2 domains, showing that the holoprotein is needed to mediate this function [45]. That APP with a C-terminal truncation expressed in APPL deficient fly lines could no longer induce axonal arborization seen when expressing full length APP further highlights the importance of the C-terminus for proper function of the protein in flies [46]. Since APL-1 function requires the N-terminus, it is not surprising that the C. elegans system cannot use APP with its minimally conserved N- Loss of apl-1 Leads to Neurotransmission Defects C. elegans and mice share many homologs that are involved in synaptic structure and function. Therefore we predicted that C. elegans would be an excellent model to study the importance of APL-1 in neurotransmission. In mice, APP/APLP2 null animals have enhanced nerve sprouting, reduced numbers of synaptic vesicles, defects in neurotransmitter release as well as a large number of defective synapses [24]. Similar to the mammalian system, we reveal here that loss of apl-1 expression leads to defective neurotransmission. 
We did not observe any overt defect in general neuronal structure in apl-1(tm385) lethal L1s, therefore defects in the development of the neuronal network are not likely to contribute to the phenotype. Interestingly, we would predict that the defect on aldicarb would be resistance rather than hypersensitivity if the worms lacking APL-1 also have a reduction in synaptic vesicle number and decreased number of functional synapses. These differences may be due to the fact that the mammalian system uses purely cholinergic connections at the neuromuscular junction while worm movement is modulated by both GABAergic and cholinergic synapses. Aldicarb cannot distinguish between cholinergic or GABAergic defects, nor can we rule out contributions from dense core vesicles, which also modulate neurotransmission [47,48,49]. The hypersensitivity we see during apl-1 knock-down may be due to defects in some or all of these systems. Future studies will address whether the number and/or internal structure of the synapses in each of these systems are affected by loss of apl-1. However, we predict that defects found in synaptic number or structure, if any, will be subtle due to the lack of profound locomotor defects during apl-1 knock-down or in the apl-1 loss-of-function mutant. Another prospective pathway APL-1 may use to mediate synaptic transmission is the well studied EGL-30 G-protein coupled receptor pathway, which can modulate cholinergic signaling. Loss-of-function mutations in negative regulators of this G q a pathway including goa-1/G o a, eat-16/RGS7 and dgk-1, all lead to a hypersensitive phenotype on aldicarb [50]. In addition, APL-1 and APP share a conserved G o protein binding domain on its C-terminus [8,51]. While the C-terminus of APL-1 may not be required for performing proper molting, we cannot rule out an important role for this domain in neurotransmission since we could not create a strain that rescues the aldicarb hypersensitivity with the DIC construct. Also, the dosage dependent effect we see with full length APL-1 may be due to improper regulation of this pathway due to varying levels of interaction with G o . The fact that loss of apl-1 also leads to enhanced pharyngeal pumping [52] supports a regulatory role through the EGL-30 pathway as pharyngeal pumping is one of the many functions modulated by this G-protein [53]. We have found that the regulation of neurotransmission by APL-1 does not appear to be related to its regulation of molting. A molting defect was not seen in any of the rescue strains, or purposely avoided by performing apl-1 RNAi knock-down in adults, whereas the aldicarb hypersensitivity was present in these worms. This dual regulation could not be dissected by removing the C-terminal domain or YENPTY motif, since the full length APL-1 rescue of lethality at a lower expression level still could not rescue the aldicarb hypersensitivity. Together, these data support a model in which the function of APL-1 in molting is independent of its function in neurotransmission. Regulation of APL-1 Localization and Transport We have found that localization of APL-1 in neurons is regulated through the action of the kinesins UNC-104/KIF1A and UNC-116/kinesin-1 as well as the small GTPases RAB-5 and UNC-108/Rab2 (Figure 7). In mice, APP undergoes fast axonal transport to the nerve terminal through the action of the kinesin-1 transporter [26]. However, in worms, APL-1 localization is dependent on both UNC-104/KIF1A and UNC-116/Kinesin-1. 
Rather than causing a back-up of protein, loss of either of these kinesins led to a general loss of APL-1::GFP. This indicates that the protein is being broken down rather than being allowed to accumulate in the cell body. Neither of these hypomorphic mutants have a molting defect, possibly through compensation of APL-1 trafficking by the other transporter, or the decrease in function of the kinesin is not severe enough to prevent APL-1 from operating in the molting pathway. After being exposed to the cell surface, APP is rapidly internalized and sorted to the early endosome by the action of the small GTPase Rab5 [31]. Like APP, we have found that APL-1 movement through the endosomal pathway utilizes RAB-5 as loss of RAB-5 reduces the level of APL-1 within the neuron. With the loss of RAB-5 through RNAi, we speculate that APL-1 becomes trapped on the cell surface where it is subject to increased exposure to proteases on the plasma membrane, which may account for the diffusion of GFP signal. UNC-108/Rab2 is known for its role in COPI-mediated retrograde transport between the Golgi and ER [54]. However, in C. elegans, loss of unc-108 does not affect COPI transport, but rather leads to an accumulation of early endosomal compartments [55]. UNC-108 is also involved in the maturation of dense core vesicles by preventing loss of cargo to specific endosomal compartments [34,36]. We suspect that APL-1 accumulation in unc-108(n3263) mutants may be due to incorrect sorting of APL-1 into the proper vesicular population upstream of anterograde transport. It is still unknown whether APL-1 is present in dense core vesicles, although this subset of vesicles is primarily transported by UNC-104 [56], which also appears to be the transporter most involved in proper localization and expression levels of APL-1 (Figure 4). Since the localization of APL-1 is dependent upon the presence of functional UNC-108, and the two proteins colocalize, it is possible that APL-1 may play a role either within the DCVs, or actively operating with UNC-108 in the maturation of DCVs. This may be another plausible explanation for the ability of APL-1 to regulate synaptic transmission as dense core vesicle cargos have been found capable of modulating cholinergic signaling [49]. In summary, our results show that APL-1 regulates neurotransmission independently of its function in the molting process. APL-1 moves through the neuron in a similar fashion to APP, with the distinction that two kinesins are needed for anterograde transport and to maintain proper expression levels of APL-1. Like APP, this transport is followed by endocytosis through the action of RAB-5. The ability of UNC-108 to alter the localization of APL-1 points to a novel process by which APL-1 is regulated in the cell. Overall, we predict that transport of APL-1 within the neuron enables APL-1 to properly perform its multiple functions by introducing the protein to molecules that can cleave and regulate release of the critically important N-terminal portion of the protein. This has implications for the biology of APP and its homologs where the Ntermini of these proteins may also act as ligands to stimulate downstream pathways that modulate neurotransmission. Supporting Information File S1 Supplementary tables. . Model for APL-1 transport and function. As a transmembrane protein, APL-1 moves through the endoplasmic reticulum (ER) and Golgi followed by sorting into its appropriate vesicular population through the action of UNC-108. 
From there, proper APL-1 transport through the neuron depends on the motor proteins UNC-104 or UNC-116. Once APL-1 reaches the plasma membrane (PM) it can be endocytosed and transported to the early endosome through the action of RAB-5. APL-1 is then likely sent for degradation or recycled back to the plasma membrane. While APL-1 processing is unclear, release of the extracellular domain is necessary for survival and molting. Whether APL-1 is able to regulate neurotransmission at the plasma membrane or at some point upstream has yet to be determined.
9,048.2
2010-09-20T00:00:00.000
[ "Biology" ]
Sequential Learning-Based Energy Consumption Prediction Model for Residential and Commercial Sectors : The use of electrical energy is directly proportional to the increase in global population, both concerning growing industrialization and rising residential demand. The need to achieve a balance between electrical energy production and consumption inspires researchers to develop forecasting models for optimal and economical energy use. Mostly, the residential and industrial sectors use metering sensors that only measure the consumed energy but are unable to manage electricity. In this paper, we present a comparative analysis of a variety of deep features with several sequential learning models to select the optimized hybrid architecture for energy consumption prediction. The best results are achieved using convolutional long short-term memory (ConvLSTM) integrated with bidirectional long short-term memory (BiLSTM). The ConvLSTM initially extracts features from the input data to produce encoded sequences that are decoded by BiLSTM and then proceeds with a final dense layer for energy consumption prediction. The overall framework consists of preprocessing raw data, extracting features, training the sequential model, and then evaluating it. The proposed energy consumption prediction model outperforms existing models over publicly available datasets, including Household and Korean commercial building datasets. Introduction The precise prediction of energy consumption in residential and industrial sectors assists smart homes and grids to manage the demand of occupants efficiently and establish policies for energy preservation.Therefore, energy load forecasting for smart grids has become a hot research area and a top priority for smart city development [1].Smart grids are responsible for the distribution of power acquired from different sources at different levels depending on consumption and future demand [2].The overall chain of electrical energy consists of three stages-production at power plants, management/distribution at grids, and consumption in various sectors [3].Hence, the smart grid is the main hub acting as a supervisor to keep the balance or act as a bridge between production and consumption through using appropriate scheduling and management policies to avoid wasteful energy generation and financial loss [4].For this purpose, energy forecasting methods play a key role in maintaining stability and ensuring proper planning between producers and consumers [5].Similarly, the costs of unpredictability and noisy data acquired from metering devices sometimes result in wrong predictions, which cause severe economic damage.For instance, UK power authorities reported a 10-million-pound loss per year in 1984 due to a 1% increase in forecasting error [6].Therefore, numerous prediction models have been proposed that are mainly focused on reducing the prediction error rate and improving the quality of the power grids by optimizing energy use. 
Sustainable buildings and construction are making progress in terms of energy preservation, but developments remain out of step with the growth of the construction sector and the rising demand for energy services [7].Therefore, urban planners must adopt ambitious energy planning policies to ensure that future construction is carried out in a way that increases energy efficiency in buildings [8].In this regard, energy consumption prediction and demand response management play an important role in analyzing each influencing factor that leads to energy preservation and reduces its impact on the environment [9].Moreover, energy consumption prediction models can help in understanding the impact of energy retrofitting and energy supply programs because these models can be used to define energy requirements as a function of input parameters [10].These factors make the energy predictive models the most useful tool for energy managers, urban planners, and policymakers when establishing national or regional energy supply requirements.On a smaller scale, they can be used to determine changes in energy demand for specific buildings.Hence, policy decisions related to building-sector energy can be enhanced using these forecasting models in sustainable urban or smart city development projects [11]. Power consumption forecasting is a multivariate time series data analysis task that is affected by various factors such as weather and occupant behavior.These make it difficult for machine learning techniques to learn the data pattern sequences for energy forecasting [12].On the other hand, deep learning models have shown tremendous results in many complex domains such as image/video [13], audio [14], and text [15] processing applications and with prediction and estimation problems [16].During the last few years, researchers from these domains have developed hybrid deep models by integrating the features of multiple deep models or combing the architectures to achieve higher accuracy.Similarly, a number of different hybrid deep models have been developed for energy consumption prediction [17,18].However, there is still room for accuracy enhancement with minimum resource utilization.Therefore, in this study, we conducted a comparative analysis of sequential learning models to select the optimum proposed model.The key contributions of this study are summarized as follows: • A comparative study is conducted over sequential learning models to select the optimum combination with deep features for energy consumption prediction; The rest of the paper is organized as follows.Section 2 represents the related research for technical forecasting of energy consumption.Section 3 represents the technical details of the proposed framework, followed by experimental results in Section 4. Finally, the paper is concluded in Section 5 along with some future research directions. Literature Review Employed energy forecasting methods can be categorized into two classes-statistical and deep learning-based.Recently, comprehensive surveys on energy forecasting have been published by Fallah et al. [19], covering methods from 2001 to 2019, and Hussain et al. [20], covering the related methods from 2011 to 2020.However, in this paper, we explored only deep learning-based literature due to their tremendous contributions in forecasting models, especially for time series data.For instance, Kong et al. 
[21] analyzed resident behavior learning with long short-term memory (LSTM) to propose a short-term load forecasting (STLF) model.The basic theme of this paper was to overcome the challenging problem of variant behavior of the residential loads that hinder the precise prediction results.Similarly, Almalaq and Zhang [22] proposed a hybrid technique by integrated deep learning and genetic algorithm with LSTM for energy forecasting of residential buildings.Kim and Cho [23] presented a hybrid energy prediction model in which two layers of convolutional neural network (CNN) are used to extract the complex features and then a simple LSTM [24] for sequence learning is adopted followed by a dense layer for final prediction.This study is further improved by Khan et al. [17], who used an LSTM autoencoder (LSTM-AE) instead of a simple LSTM and reported that their model is more efficient in terms of time complexity. Another hybrid model is presented by Le et al. [25] in which deep features from CNN were forwarded to BiLSTM in both forward and backward directions.This study is further extended by Ullah et al. [26], who used a multi-layer BiLSTM for sequential learning.Wen et al. [27] integrated a deep recurrent neural network with LSTM for the forecasting of power load at solar-based microgrids.A swarm algorithm was then applied to the sequential data from LSTM for an optimized load dispatched by the connected grids.Kim and Cho [18] extracted features for energy consumption data using CNN and then forwarded these features to state expendable autoencoder for future consumption predictions based on 15-, 30-, 45-, and 60-min resolutions.Recently, Sajjad et al. [28] proposed a hybrid sequential learning model for energy forecasting by integrating CNN and gated recurrent units (GRU) into a unified framework for accurate energy consumption prediction. Energy forecasting has an important role in the formulation of successful policies to efficiently use natural resources.For instance, Rahman et al. [29] presented an approach for the prediction of the total energy consumption in India to assist the policymakers for energy management.Their proposed model is based on the simple regression model (SRM) and multiple linear regression (MLR) along with other techniques that give satisfying results.Similarly, Jain et al. [30] proposed a support vector regression (SVR) based machine learning approach for the energy prediction of the multi-family residential buildings in one of the dense city New York.Zheng et al. [31] presented a hybrid LSTM-based model along with the selection of similar days and empirical mode decomposition (EMD) for the short-term load prediction of the electricity.Chujai et al. [32] proposed autoregressive integrated moving average (ARIMA) and autoregressive moving average (ARMA) models for power consumption forecasting.The ARIMA model demonstrated efficient results for monthly power consumption forecasting, while the ARMA model has the advantages of daily and weekly forecasting.Kim et al. [23] combined CNN with LSTM and presented a hybrid CNN-LSTM neural network approach for energy prediction with a very small RMSE value. In real-time energy forecasting, a proper plan is needed to accomplish the demand of consumers and operate electrical appliances without any problems.For this management, Muralitharan et al. [33] proposed a model for the prediction of consumer demand based on CNN and genetic algorithm techniques, which reveal convincing results for short-term forecasting.Similarly, Aslam et al. 
[34] developed a trust-worthy energy management system by utilizing mixed-integer linear programming (MILP) and also established a friendly environment between consumers and energy generation.Bourhnane et al. [35] presented a model for energy forecasting and scheduling in smart buildings by integrating artificial neural network (ANN) and genetic algorithms.Further, they also tested the model in real-time, which produced incredible output for both short-and long-term forecasting.This study is further improved by Somu et al. [36], proposing a novel forecasting model by employing LSTM with a robust sine cosine algorithm for the prediction of heterogeneous data in an efficient way.Sometimes, smart sensor devices generated unusual data due to numerous weather conditions; therefore, Shao et al. [37] fine-tuned the support vector machine (SVM) by handling two extra parameters, including weather and air-conditioning system, to prove the model stability on critical input values.Another precise energy consumption prediction in real-time was achieved in a study by Ruiz et al. [38] in which clustering techniques were applied to select the optimal one for analyzing discriminative patterns from data.In addition, to extract temporal features from raw input data, Fang et al. [39] followed a hybrid approach by incorporating LSTM and domain adversarial neural network (DANN) that mainly focuses on relevant features.They verified the performance of transfer learning strategy and domain adaptability through various experiments.Short-and long-term energy forecasting strategies have a significant role in the energy sector because they meet the energy required on the consumer side.Therefore, Hu et al. [40] introduced a novel deep learning idea by combining non-linear and stacked hierarchy models to analyze and authenticate the model reliability.Summarizing the con and pros of the energy forecasting models in the literature, we conclude that in contrast to traditional machine learning approaches, the above-mentioned deep sequential learning models for energy show good performance in terms of reduced error rates.However, there still exist several sequential models that have not yet been explored.Hence, an optimum hybrid model is still in need to achieve better accuracy with a small amount of resource utilization. 
Proposed Framework
Precise forecasting of energy consumption in commercial and residential buildings assists smart grids to efficiently manage the demand of occupants and conserve energy for the future. Several traditional sequential learning models have been developed for energy consumption forecasting, but they show inadequate performance because they are trained on unclean data. These approaches also face various problems when learning parameters from scratch, such as overfitting and short-term memory limitations, which become more severe as the amount of data grows or the associations between variables become more complex [41]. These problems can be tackled by sequential learning models that capture spatial and temporal patterns from smart meter data at once. Based on this assumption, we developed a novel forecasting framework that provides a useful way to overcome the energy forecasting problem. The overall dataflow of the proposed framework is divided into three steps, as shown in Figure 1. First, the total consumed energy data are obtained from smart meters/sensors; these data contain abnormalities due to external influences. Next, data cleansing techniques are applied to the collected data in the preprocessing step to eliminate the abnormalities. In the final step, the preprocessed data are fed into the one-dimensional ConvLSTM for feature encoding, followed by the BiLSTM network that efficiently decodes the feature maps and learns the sequence patterns. The proposed framework is evaluated on various resolutions of data, i.e., minutely, hourly, daily, and weekly, for short- and long-term forecasting using common error evaluation metrics. A detailed description of each step of the proposed framework is provided in the following subsections.

Data Acquisition and Preprocessing
This section provides a detailed description of the data collection and preprocessing strategy. Recent studies have shown that the performance of trained artificial intelligence (AI) models depends on the input data. Therefore, if the smart meter data are well polished and organized, they can assist in training any AI model in a more convenient way. The consumed energy data obtained from meters installed on each floor of a residential building are stored in a raw, incomplete, and non-organized format. Moreover, the data sometimes contain abnormalities due to wire breaks, occupant behavior, and weather conditions. Hence, using these data directly for energy consumption forecasting degrades the overall performance of the model. Therefore, we first pass the obtained data through a preprocessing step in which missing values are replaced with the corresponding values from the preceding day. The pre- and post-processing data distributions are shown in Figure 2; we removed noise from the data and normalized them via a min-max process, while outliers were detected and removed using the standard deviation method. There are 1.25% missing values in the Household Dataset, which are filled with the corresponding values of the previous 24-h data.
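As an illustration of this cleansing stage, the sketch below shows one plausible implementation in pandas; the column name, the 24-hour (1440-minute) fill window, and the 3-sigma outlier threshold are assumptions for illustration rather than values taken from the paper.

```python
import pandas as pd

def preprocess_series(df, column="Global_active_power", sigma=3.0):
    """Sketch of the cleansing steps: previous-day fill, outlier removal, min-max scaling."""
    s = pd.to_numeric(df[column], errors="coerce")
    # Fill missing minutes with the value observed 24 hours (1440 minutes) earlier.
    s = s.fillna(s.shift(1440))
    s = s.interpolate()  # fallback for any gaps that remain
    # Detect and remove outliers with the standard-deviation method (clip to mean +/- sigma*std).
    mu, sd = s.mean(), s.std()
    s = s.clip(lower=mu - sigma * sd, upper=mu + sigma * sd)
    # Min-max normalization to the [0, 1] range.
    return (s - s.min()) / (s.max() - s.min())
```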
ConvLSTM for Data Encoding
Fully connected LSTM is one of the effective approaches to manage sequential correlations in data; however, it contains massive redundancy for spatial data and is not able to handle spatiotemporal information [42]. To tackle this problem, we utilized the extended version of fully connected LSTM called ConvLSTM [43], which has a convolutional structure in its input-to-state and state-to-state transitions and is therefore able to preserve the spatial characteristics of the data. In this study, we arranged multiple ConvLSTM layers to build a generalized encoding model that can be utilized for forecasting problems and for spatiotemporal, sequence-to-sequence prediction. For instance, fully connected LSTM handles spatiotemporal data by converting it into a 1D vector, which results in a vital loss of sequence information. In contrast, ConvLSTM takes input in a 3D format in which the spatial sequential data are kept in the last dimension. In addition, the next state of a specific cell depends on the previous and input states, which are obtained by convolutional operators for both state-to-state and input-to-state transitions. ConvLSTM mainly contains encoding and forecasting networks that are formed by stacking multiple ConvLSTM layers; mathematically, the whole process is represented in Equations (1)-(5), and the internal architectures of LSTM and ConvLSTM are depicted in Figure 3a,b, respectively.

i_t = σ(W_{xi} ∗ X_t + W_{hi} ∗ H_{t−1} + W_{ci} ∘ C_{t−1} + b_i)  (1)
f_t = σ(W_{xf} ∗ X_t + W_{hf} ∗ H_{t−1} + W_{cf} ∘ C_{t−1} + b_f)  (2)
C_t = f_t ∘ C_{t−1} + i_t ∘ tanh(W_{xc} ∗ X_t + W_{hc} ∗ H_{t−1} + b_c)  (3)
o_t = σ(W_{xo} ∗ X_t + W_{ho} ∗ H_{t−1} + W_{co} ∘ C_t + b_o)  (4)
H_t = o_t ∘ tanh(C_t)  (5)

where ∗ denotes the convolution operator, ∘ the Hadamard product, X_t the input, H_t the hidden state, C_t the cell state, and i_t, f_t, o_t the input, forget, and output gates. In the forecasting network, all the states have the same input dimensionality; therefore, all states are concatenated and passed into a 1 × 1 convolutional layer to produce the final results, similar to the concept followed in [44]. The function of the encoding LSTM is to condense the input sequence into a hidden state tensor, whereas the forecasting LSTM expands the hidden state to generate the final prediction. In ConvLSTM, the functionality and architecture are the same as in LSTM, but ConvLSTM takes its input as 3D tensors and preserves the spatial information [45]. This network has strong representation ability due to its multiple stacked ConvLSTM layers, which make it suitable for complex sequences.

BiLSTM for Data Decoding
While processing complex and long sequences in forward-to-backward form, recurrent neural networks (RNNs) usually face issues such as short-term memory and vanishing gradient problems [46,47]. In addition, this technique is not appropriate for processing long-term sequences because it ignores significant information from earlier input steps [48]. In backpropagation, the layers gradually stop learning due to the changes that occur in the gradient and the shrinking weight updates. To fix these concerns, Hochreiter and Schmidhuber [49] proposed an extended version of the RNN known as LSTM. The inner structure of LSTM contains various gates that properly handle and preserve crucial information. In each step of backpropagation, the weights are evaluated to either retain or erase the information in memory. Furthermore, all the cell states are interconnected, and they communicate when one cell updates its information, which can be mathematically presented using Equations (6)-(10).

f_t = σ(W_f · [h_{t−1}, x_t] + b_f)  (6)
i_t = σ(W_i · [h_{t−1}, x_t] + b_i)  (7)
c̃_t = tanh(W_c · [h_{t−1}, x_t] + b_c)  (8)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t  (9)
o_t = σ(W_o · [h_{t−1}, x_t] + b_o),  h_t = o_t ⊙ tanh(c_t)  (10)
where W_f, W_i, W_c, and W_o with their biases depict the weight matrices learned during training; i_t, o_t, and f_t represent the input, output, and forget gates; and c_t and h_t represent the latest cell output and hidden state, respectively. Another sequence learning model is BiLSTM, an advanced version of the RNN proposed by Paliwal and Schuster [50]. Two layers of the network concurrently process the input data, with each one performing a particular function. More precisely, the two layers operate on the sequence data in opposite directions, and in the last step the final outcomes of both layers are combined with an appropriate method [51]. In this study, a hybrid model is proposed by integrating ConvLSTM [43] with BiLSTM [50] for energy data forecasting, selected after extensive experiments and an ablation study of various sequence learning models.

One-Dimensional (1D) Convolutional Neural Network (CNN)
In computer vision, 2D CNN models have shown encouraging performance on both image and video data, such as facial expression analysis [52], action recognition [53], movie/video summarization [54], violence detection [55], etc. The 2D model accepts input in a two-dimensional format in which image pixels, together with their color channels, are processed simultaneously, a process known as feature learning [56]. The same process can be applied to 1D sequential data, with variations only in the input format. Therefore, 1D CNNs are considered an efficient approach for extracting fixed-length feature vectors from time series data. In the case of non-linear tasks such as energy consumption prediction/forecasting, the CNN utilizes the weight-sharing concept, which provides a minimum error rate in terms of MSE [57]. In this study, we use two 1D CNN layers and pooling layers to efficiently encode the sequences of energy data, as shown in Figure 4, where x_1, x_2, x_3, ..., x_n represent the input data, c_1, c_2, c_3, ..., c_n indicate the 1D convolutional layers that generate feature maps, and p_1, p_2 illustrate the pooling layers employed to reduce the feature map dimensions.

Experimental Evaluation
This section provides details of the evaluation of the proposed model, including the dataset description, evaluation metrics, ablation study, time complexity comparison, and comparative analysis with state-of-the-art models. Note that we use different resolutions of the data, such as minutely, hourly, daily, and weekly, for the comparative analysis of the proposed models. However, for comparison with the state-of-the-art, we only consider the commonly used resolution, i.e., hourly. We implemented the proposed approach in the Keras (2.3.1) library with TensorFlow (1.13.1) as a backend, using the Python language (3.5.5). A Windows 10 system with a GeForce RTX 2070 SUPER GPU was used to train the model for 50 epochs, using a batch size of 32 and the Adam optimization algorithm with an initial learning rate of 0.001.

Datasets Description
The proposed models are evaluated on two publicly available datasets. The household power energy consumption prediction dataset [58] is obtained from the University of California, Irvine (UCI) repository and was originally recorded during the years 2006-2010 from residential buildings in France. This dataset is available with one-minute samples, consisting of 2,075,259 instances with 1.25% missing values. Similarly, the second dataset [17] is collected from commercial buildings in South Korea and consists of 99,372 instances with 15-min samples. First, both datasets are passed through the preprocessing step for data cleansing and normalization. Next, these datasets are arranged into four resolutions, i.e., minutely, hourly, daily, and weekly, for both short- and long-term predictions. The common attributes along with their respective units for both datasets are listed in Table 1, while statistics of both datasets are provided in Table 2.
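To make the encoder-decoder design and the training settings above concrete, the following is a minimal tf.keras sketch; the window length, number of filters, and hidden sizes are assumptions for illustration rather than the exact configuration reported in the paper, and ConvLSTM2D with a dummy spatial axis stands in for the one-dimensional ConvLSTM encoder.

```python
from tensorflow.keras import layers, models, optimizers

def build_convlstm_bilstm(n_steps=24, n_features=1):
    """Sketch of the hybrid model: ConvLSTM encoding followed by BiLSTM decoding."""
    model = models.Sequential([
        # ConvLSTM2D expects (time, rows, cols, channels); a dummy row axis emulates a 1D ConvLSTM.
        layers.ConvLSTM2D(64, kernel_size=(1, 3), padding="same", activation="relu",
                          return_sequences=True,
                          input_shape=(n_steps, 1, n_features, 1)),
        layers.Reshape((n_steps, -1)),            # flatten the encoded feature maps per time step
        layers.Bidirectional(layers.LSTM(128)),   # BiLSTM decodes the encoded sequence
        layers.Dense(64, activation="relu"),
        layers.Dense(1),                          # final dense layer outputs the predicted consumption
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=0.001), loss="mse")
    return model

# Training would then follow the settings quoted above, e.g.:
# model.fit(x_train, y_train, epochs=50, batch_size=32)
```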
Table 1. Common attributes, units, and descriptions of the datasets used for the evaluation of the proposed model.
- Date (DD/MM/YYYY): the most important feature to indicate power consumption on specific days and months, where DD ranges from 1 to 31, MM from 1 to 12, and YYYY from 2006 to 2010.
- Time (HH/MM/SS): mostly used for short-term prediction, i.e., minutely and hourly, where HH ranges from 0 to 23 and MM and SS from 0 to 59.
- Global Active Power (GAP), kW: minute-averaged active power consumed by the whole household.
- Global Reactive Power (GRP), kW: minute-averaged reactive power of the whole building.
- Voltage (V), volts: minute-averaged voltage level.
- Global Intensity (GI), amperes: minute-averaged overall current intensity.

Evaluation Metrics
Four common evaluation metrics are used to evaluate the proposed models and for the comparative analysis. These four evaluation metrics are the mean squared error (MSE), mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE), which are mathematically expressed in Equations (11)-(14), respectively. MSE is basically the average squared difference between the estimated and actual values; it always gives a non-negative value, with values closer to zero considered better, while RMSE is the square root of MSE. MAE measures the errors between paired observations expressing the same phenomenon, while MAPE is a common measure of forecast error in time series analysis that reflects the percentage deviation of the forecasted variables.

MSE = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)²  (11)
MAE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i|  (12)
RMSE = sqrt((1/n) Σ_{i=1}^{n} (y_i − ŷ_i)²)  (13)
MAPE = (100/n) Σ_{i=1}^{n} |(y_i − ŷ_i) / y_i|  (14)

where ŷ_i and y_i are the predicted and actual values, respectively.

Comparison Based on Sequential Learning Models via Hold-Out Method
To evaluate the sequence learning models for short- and long-term prediction, we conducted experiments for different resolutions of the data, i.e., minutely, hourly, daily, and weekly. Table 3 presents the results based on the minute resolution, in which ConvLSTM-BiLSTM obtained the lowest error rate for both datasets. The lowest error is indicated in bold, and the runner-up is represented by underlined text. For the Household Dataset, ConvLSTM-BiLSTM obtained 0.035%, 0.187%, 0.075%, and 30.75% error rates for MSE, RMSE, MAE, and MAPE, respectively. On the other hand, the results for the Commercial Dataset are slightly better than for the Household Dataset, with 0.025%, 0.158%, 0.055%, and 28.55% values for MSE, RMSE, MAE, and MAPE, respectively. The runner-up model for each dataset is CNN-BiLSTM. Hence, it is evident that the features extracted by ConvLSTM perform better than those from CNN. Similarly, Table 4 presents the results based on the hourly resolution; here also, ConvLSTM-BiLSTM obtained the lowest error rate for both datasets except MAPE (38.06%) for the Commercial Dataset. The CNN-BiLSTM model is found to be the second-best model, and it beats the ConvLSTM-BiLSTM model in MAPE (32.44%) for the Commercial Dataset, while encoder-decoder-BiLSTM (ED-BiLSTM) obtained the second-lowest MAPE (36.48%) for the Commercial Dataset. Overall, the results of ConvLSTM-BiLSTM with the hourly resolution are still better than those of the rest of the sequential learning models. Next, the performance results for the day resolution are presented in Table 5. For all the metrics, ConvLSTM-BiLSTM obtained the lowest error rate on each dataset. For instance, for the Household Dataset, ConvLSTM-BiLSTM obtained 0.035, 0.187, 0.175, and 18.35 for MSE, RMSE, MAE, and MAPE, respectively, whereas CNN-BiLSTM obtained the second-lowest error on this dataset. Similarly, for the
Commercial Dataset, ConvLSTM-BiLSTM still remains the best in terms of the lowest error rate, while the runner-up models differ for each metric. For instance, ED-BiLSTM obtained 0.255 and 0.312 for MSE and MAE, BiLSTM obtained 0.425 for RMSE, and CNN-BiLSTM obtained 25.55 for MAPE. Finally, we performed experiments for long-term prediction using the weekly resolution, as shown in Table 6. The best prediction model on the weekly data is also ConvLSTM-BiLSTM, which obtains the lowest error rate for both datasets. For instance, for the Household Dataset, ConvLSTM-BiLSTM obtained 0.028, 0.167, 0.155, and 20.15 for MSE, RMSE, MAE, and MAPE, respectively, and 0.025, 0.158, 0.143, and 20.91 for the Commercial Dataset. In contrast, the second-lowest error is obtained by CNN-BiLSTM. To summarize all the results in one graph, we calculated the average of each resolution (i.e., minutely, hourly, daily, and weekly), as illustrated in Figure 5. The MAPE value is scaled to the range of zero to one instead of a percentage for better representation. It is clear from Figure 5 that ConvLSTM-BiLSTM leads on each dataset and metric in terms of the lowest error rate, followed by CNN-BiLSTM, ED-BiLSTM, and BiLSTM in the runner-up, third, and fourth places, respectively.

Comparison of the Sequential Learning Models Based on Cross-Validation Method
To further validate the proposed model in terms of learning and forecasting at the same time, we conducted experiments using a cross-validation method. In cross-validation, the overall dataset is divided into K equal segments (folds), and K iterations of training and testing are conducted in such a way that each segment is used for testing once, while the remaining K-1 segments are used for training. Finally, the average accuracy is calculated over all iterations. In our case, we selected K = 10 (i.e., 10-fold validation) for the experiments on the household power energy prediction dataset [58]. Table 7 presents the overall results for the different sequential models over various data resolutions. Here also, ConvLSTM-BiLSTM obtained the lowest error rates for each data resolution compared to the other sequential models, while CNN-BiLSTM remains the runner-up model, except for the weekly resolution. ED-BiLSTM is the runner-up for the weekly resolution in the MSE and RMSE metrics, obtaining 0.103 and 0.322 error rates, respectively. Hence, the reported results based on the cross-validation method provide evidence that ConvLSTM-BiLSTM is the most effective combination in terms of learning and forecasting among the compared models. Figure 6 illustrates the average results over all resolutions for each model, in which the MAPE value is presented in the range of zero to one instead of a percentage for better presentation.
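For reference, the four error metrics reported in these comparisons (as defined in the Evaluation Metrics subsection) can be computed with a few lines of NumPy; the sketch below assumes the target values are nonzero so that MAPE is well defined.

```python
import numpy as np

def error_metrics(y_true, y_pred):
    """MSE, MAE, RMSE, and MAPE between actual and predicted consumption values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    mae = np.mean(np.abs(y_true - y_pred))
    rmse = np.sqrt(mse)
    mape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))  # assumes y_true != 0
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "MAPE": mape}
```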
Comparative Analysis Based on Time Complexity of the Sequential Models
This section presents the time complexity analysis of the sequential learning models proposed in this study over two different platforms, i.e., a central processing unit (CPU) and a graphics processing unit (GPU). Table 8 reports the time complexity of the training and testing sessions in seconds (s) over the Household Dataset. For this comparison, we considered two data resolutions (i.e., day and week), from which it can be observed that lower-resolution data have comparatively lower time complexity and vice versa. It is clear from Table 8 that BiLSTM achieved the lowest overall time complexity, while ED-BiLSTM had the highest. However, ConvLSTM-BiLSTM achieved the best trade-off between time complexity and accuracy.

Next, we present a comparative analysis of the proposed prediction model with seven recent state-of-the-art hybrid models based on the hourly sampled data of the Household Dataset, as shown in Table 9. All the methods in this comparison extract features using a simple CNN and then forward the extracted features to different sequential learning models for energy consumption prediction: LSTM [23], autoencoder (AE) [18], multi-layer bidirectional LSTM [26], bidirectional LSTM [25], LSTM followed by AE [17], GRU [28], and CNN with a multilayer bidirectional gated recurrent unit (CNN-MB-GRU) [59]. For this comparison, we select the best-performing proposed model from Section 4.3, i.e., ConvLSTM-BiLSTM, which uses ConvLSTM as an encoder and bidirectional LSTM as a decoder. The proposed model outperforms the state-of-the-art models in MSE and RMSE with the lowest error rates of 0.10 and 0.32, respectively. The proposed model reduces the error rate by up to 0.08 and 0.1 points compared to the runner-up model CNN-MB-GRU [59], which has MSE and RMSE values of 0.18 and 0.42, respectively. However, the lowest error rate for MAE is achieved by CNN-MB-GRU [59] with 0.29, while the proposed and CNN-LSTM-AE models are the runners-up with a difference of 0.02. The proposed model achieved a 30.05 error rate for the MAPE metric and is the runner-up by a very small margin: the lowest error rate is achieved by CNN-MultiLayer-BiLSTM [26] with 29.10, and the difference from the proposed model is only 0.95. Hence, the overall results demonstrate the superiority of the proposed model over the state-of-the-art on the Household Dataset. Lastly, Figure 7 illustrates the prediction results of the proposed sequential learning models on hourly resolution data for both datasets.
Conclusions
In this paper, we provided a comparative analysis of various sequential learning models and selected the optimum one as the proposed model after extensive experimental findings. The proposed hybrid architecture for energy prediction is developed by integrating the ConvLSTM and BiLSTM models. In detail, the proposed framework consists of three main steps. First, the preprocessing step is applied to the input data for data cleansing, such as normalization and missing-value adjustment. Next, the preprocessed data are forwarded to the proposed hybrid model for training, in which ConvLSTM is used to extract and encode the spatial characteristics of the data, while BiLSTM is used to decode and learn the sequential patterns. Finally, the models are tested for both short- and long-term predictions using four resolutions, i.e., minutely, hourly, daily, and weekly, based on two datasets. In the comparative analysis, the proposed model achieved the lowest error rates against recent state-of-the-art energy prediction models. In the future, we aim to develop efficient prediction models that can be deployed on resource-constrained devices for smart metering and the energy management of smart home appliances.

Figure 1. The proposed framework for power energy consumption prediction comprises three main steps. Step 1: the smart microgrids generate power energy and supply it to residential buildings/smart factories, where smart meters measure the consumed energy; Step 2: smart meters' data are significantly influenced by environmental factors that generate abnormalities, so data cleansing schemes are applied as a preprocessing step; Step 3: the model is trained with the refined data, in which ConvLSTM and BiLSTM layers are used for encoding and decoding the various resolutions of data to obtain a minimum error rate.
Figure 2. Household Dataset representation (a) before and (b) after the preprocessing step.
Figure 5. Average of the resolution-based error rate with the hold-out validation method; (a) Household Dataset and (b) Commercial Dataset.
Figure 6. Average resolution-based error rate obtained using the cross-validation method on the Household Dataset.
Figure 7. Prediction results of the proposed sequential models on hourly resolution data; (a) Household Dataset and (b) Commercial Dataset.
Table 2. Statistics of the datasets including the max, min, standard deviation, and average values of the used features.
Table 3. Performance of the proposed models on the minutely resolution.
Table 4. Performance of the proposed models on the hourly resolution.
Table 5. Performance of the proposed models on the day resolution.
Table 6. Performance of the proposed models on the weekly resolution.
Table 7. Performance of the proposed models using the cross-validation method for various data resolutions.
Table 8. Comparative analysis of the sequential models based on time complexity in seconds (s) over the Household Dataset.
Table 9. Comparative analysis of the proposed model with state-of-the-art models based on the hourly data resolution of the Household Dataset.
6,976.2
2021-03-11T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
Position-Enhanced Multi-Head Self-Attention Based Bidirectional Gated Recurrent Unit for Aspect-Level Sentiment Classification Aspect-level sentiment classification (ASC) is an interesting and challenging research task to identify the sentiment polarities of aspect words in sentences. Previous attention-based methods rarely consider the position information of aspect and contextual words. For an aspect word in a sentence, its adjacent words should be given more attention than the long distant words. Based on this consideration, this article designs a position influence vector to represent the position information between an aspect word and the context. By combining the position influence vector, multi-head self-attention mechanism and bidirectional gated recurrent unit (BiGRU), a position-enhanced multi-head self-attention network based BiGRU (PMHSAT-BiGRU) model is proposed. To verify the effectiveness of the proposed model, this article makes a large number of experiments on SemEval2014 restaurant, SemEval2014 laptop, SemEval2015 restaurant, and SemEval2016 restaurant data sets. The experiment results show that the performance of the proposed PMHSAT-BiGRU model is obviously better than the baselines. Specially, compared with the original LSTM model, the Accuracy values of the proposed PMHSAT-BiGRU model on the four data sets are improved by 5.72, 6.06, 4.52, and 3.15%, respectively. INTRODUCTION In natural language processing (NLP), the purpose of sentiment analysis (Pang and Lee, 2008) is to divide the texts into two or more sentiment categories (such as positive, neutral, and negative) based on the meaningful information from some texts. The aspect-level sentiment classification (ASC) is an important fine-grained sentiment classification. Its aim is to predict sentiment polarities of different aspect terms in a sentence (Thet et al., 2010). For example, in the sentence: "The environment of this restaurant is beautiful and the food is delicious, but the service is terrible, " the sentiment polarities of the aspect terms "environment, " "food, " and "service" are positive, positive, and negative, respectively. Since the traditional sentiment analysis only consider the polarities of sentiment for sentences (Mullen and Collier, 2004), the ASC is more complicated than traditional sentiment classification. In machine learning models, a series of features, e.g., a set of words and sentiment dictionaries (Jiang et al., 2011;Zhang and Lan, 2015), were set up to train classifiers, such as SVM and KNN. Their classification effect heavily depended on the features' quality. Another more important models are deep learning models . Because they did not deliberately design feature engineering, they can be effectively applied to automatically achieve the task of the ASC (Tang et al., 2016b). In recent, the recurrent neural network (RNN) (Socher et al., 2011;Nguyen and Shirai, 2015; and its variant models have been widely used in ASC tasks. These models can capture the relationships between sequences. Lai et al. (2015) used a two-way loop structure to obtain text information. Compared with traditional window-based neural networks, their method reduced more noise. Their method also retained the word order in a large range when it learned text expressions. For targeted sentiment classification, Gan et al. (2020) put forward a sparse attention mechanism based on a separable dilated convolution network. Their method is superior to the existing methods. Tang et al. 
(2016a) proposed a target-dependent longterm short-term memory network (TD-LSTM). This network is modeled by the contexts before and after the target word. By combining the information of the two LSTM hidden layer states, they further achieved the ASC tasks. Compared with the RNN model, the performances of these RNN variant models have small improvements on the ASC task. For specific aspect terms in a sentence, the RNN model paid little attention to its contextual information. Based on visual attention (Mnih et al., 2014), the attention mechanism is extensively borrowed in neural networks (Luong et al., 2015;Yin et al., 2015;Liu and Lane, 2016). A lot of attention-based neural network models (Yin et al., 2015;Wang et al., 2016;Ma et al., 2017;Zeng et al., 2019) are proposed to solve ASC tasks. For a sentence, the attention mechanism makes the neural network model pay more attention to the sentiment descriptions of specific aspects, i.e., the sentiment polarities of aspect words, while ignoring other noise words that are not related to the aspect words. Xu et al. (2020) proposed a multi-attention network. They used the global and local attention modules to obtain the interactive information of different granularities between aspect words and contexts. Chen et al. (2017a) proposed a recurrent attention network model on memory for sentiment classification. Their model is established on cognition grounded data. The proposed cognition-based attention mechanism can be applied in sentence-level and document-level sentiment analysis. Based on the attention mechanism and LSTM networks, Ma et al. (2017) proposed an interactive attention network (IAN) model. Their model obtained good performance on SemEval 2014. When the aspect terms contain more than one word, their method may lead to the loss of useful information. The self-attention mechanism (Letarte et al., 2018) could make sentiment analysis models pay more attention to the useful information of aspect terms in the context and the internal structure of sentences. It improved the performance of neural network models. Xiao et al. (2020) used multi-head self-attention to get the semantic and interactive information in sentences. They further proposed a multi-head self-attention based gated graph convolutional network model. Their model can effectively achieve aspectbased sentiment classification. Leng et al. (2021) modified the transformer encoder to propose the enhanced multi-head selfattention. Through this attention, the inter-sentence information can be encoded. Combining with the enhanced multi-head selfattention and BiLSTM or BiGRU, they proposed a sentiment analysis model which performed better than some baselines in some evaluation indices. Therefore, the attention mechanism is becoming more and more important in the ASC task. In addition, the position information between the aspect terms and their contexts has been confirmed that it was capable of improving the accuracy of the ASC (Chen et al., 2017a;Gu et al., 2018). For the RNN model (Liu and Lane, 2016;, the calculation at the current moment depends on the result at the previous moment. This will result in a lack of contextual semantic information for aspect words. Zhou et al. (2019) used R-Transformer to get this semantic information. They further combined the self-attention mechanism and position relationship to propose the position and self-attention mechanism-based R-Transformer network (PSRTN) model for the ASC. Their experiment results are better than some baseline models. 
It is, thus, clear that the position information needs to consider in the context attention calculation. Based on the above observations, this article proposes a position-enhanced multi-head self-attention based BiGRU (PMHSAT-BiGRU) model which integrates the position influence vector, multi-head self-attention mechanism, and bidirectional gated recurrent unit (BiGRU). This model considers three influence factors for the ASC task: the keywords in aspect terms, the position relationship between aspect terms and context, and semantic information of the context. In order to avoid noise words and make better use of the keywords in the aspect, it uses a self-attention mechanism to calculate the attention scores of the aspect words and each word in the sentence. To better obtain the semantic information of the context, it also uses multi-head attention to learn the relevant information from different representation subspaces. Finally, the PMHSAT-BiGRU model will be evaluated on the SemEval2014 restaurant, SemEval2014 laptop, SemEval2015 restaurant, and SemEval2016 restaurant dataset. Abundant experiments will verify its effectiveness on the ASC task. In general, the main contributions of this article are as follows: (1) Based on the position information between the aspect terms and context, a positional information vector is designed. It uses the relative position method to participate in the calculation of the attention weight. (2) To get a contextual representation of the specific aspect terms, a self-attention mechanism is used to calculate the words' weights in aspect terms. The multihead attention mechanism is employed to represent the semantic information of the context in different representation subspaces. (3) A PMHSAT-BiGRU model is proposed. Considering that three main factors, including the keywords in aspect terms, the position relationship between aspect terms and context, and the semantic information of the context for a sentence, affect the ASC, the PMHSAT-BiGRU model integrates the position influence vector, multi-head selfattention mechanism, and BiGRU. (4) Extensive experiments on four datasets including SemEval2014 restaurant, SemEval2014 laptop, SemEval2015 restaurant, and SemEval2016 restaurant data sets are conducted. The performance of the PMHSAT-BiGRU model is evaluated by using the Accuracy (Acc) and Macro-Average F1 (Macro-F1). The rest of this article is organized as follows. Section 2 introduces the related work of the ASC. Section 3 elaborates the proposed PMHSAT-BiGRU model. In section 4, we carry out a large number of experiments to prove the validity of the proposed model. Finally, we make the summary and forecast to the full text in section 5. RELATED WORK The ASC focuses on the sentiment polarities of aspect terms in a sentence. Since neural network models (Santos and Gattit, 2014;Zhang et al., 2018;Chen and Huang, 2019) are superior to the machine learning methods (Mullen and Collier, 2004;Jiang et al., 2011;Zhang and Lan, 2015) in sentiment classification, many new research results are based on neural networks. On the basis of the RNN (Mikolov et al., 2010;Akhtar et al., 2020), Hochreiter et al. explored the long short-term memory network (LSTM) (Hochreiter and Schmidhuber, 1997) and the gated recurrent unit (GRU) (Dey and Salemt, 2017). These models could solve the gradient descent and explosion problems. Tang et al. (2016a) integrated the information of the target words and context words to establish the sentence semantically. 
They presented two improved LSTM models, i.e., the target-dependent LSTM and target-connection LSTM. These models are significantly superior to the original LSTM model. Jiang et al. (2011) took the content, sentiment lexicon and context into consideration to improve the target-dependent sentiment classification for Twitter. Tan et al. (2020) proposed an aligning aspect embedding method to train aspect embeddings for the ASC. The embeddings are applied to the gated convolutional neural networks (CNNs) and attention-based LSTM. Their experiment results showed that the model with the aspect embedding obtained better performance than other baseline models. Xue and Li (2018) proposed Gated Tanh-Rectified Linear Unit (ReLU) Units. They further built a new CNN model with this mechanism to predict the sentiment polarities of aspect terms. The training time of the model was faster than other baseline models. The attention mechanism and position information are also considered in different neural network models for the ASC. Wang et al. (2016) designed a novel attention mechanism to capture the vital part of sentences with different aspect terms. Based on this mechanism, they presented an ATAE-LSTM model to effectively achieve the binary and 3-class prediction problems in the ASC. Considering the explicit memory, position, and context attentions, Tang et al. (2016b) designed deep memory networks. To a certain extent, their models achieved good performance on the ASC tasks. Liu et al. (2015); Chen et al. (2017b) introduced position information into attention mechanism to handle tasks of question answering and machine translation. The performance of the two tasks was obviously improved. Although these models have provided a good performance on the ASC tasks, the neural network models with position relationships and multi-head self-attention mechanism have yet to be studied for the ASC. PMHSAT-BiGRU FOR THE ASC In this section, we will minutely describe the PMHSAT-BiGRU model (refer to Figure 1), including the task definition, position modeling, word representation, BiGRU, attention mechanism, sentiment classification, and model training. Task Definition For a sentence with N-words and M aspect terms, let aspect term i, sentence be the aspect-sentence pair for the aspect term i, i = 1, 2, · · · , M. Then, using aspect term i, sentence as an input of the ASC, the sentiment category positive, neural, negative will be predicted for the aspect term i in the sentence. For example, the sentence "Great food but the service was dreadful!" involves two aspect terms, namely [food] and [service]. The sentence will generate two aspect-sentence pairs including food, sentence and service, sentence as the inputs of the ASC, then expectation outputs of the aspect terms [food] and [service] are positive and negative, respectively. Position Modeling In the ASC task, the sentiment polarity of a particular aspect will be severely affected by adjacent context words in a sentence. Inspired by Shaw et al. (2018), we employ relative position to model the position information of the aspect words in the corresponding sentence. For a sentence with aspect terms, the position indices of the words contained in an aspect term are marked as "0, " and the position indices of other words will be expressed as the relative distances from the current aspect term. 
Therefore, the position index of a word in the sentence is computed as:

p_i = a_start − i,  if i < a_start
p_i = 0,            if a_start ≤ i ≤ a_end
p_i = i − a_end,    if i > a_end

where a_start and a_end respectively represent the start and end indices of the aspect term, and p_i represents the relative distance from the i-th word to the aspect term in the sentence. According to these indices, from the first word to the last word in the sentence, a position index sequence of length N is p = [p_1, p_2, ..., p_N] for an aspect term. For example, in the sentence "The seafood menu is interesting and quite reasonably priced.", there are two aspect terms, "seafood menu" and "priced." The position index sequences of "seafood menu" and "priced" are expressed as p = [1,0,0,1,2,3,4,5,6] and p = [8,7,6,5,4,3,2,1,0], respectively. By looking up the position embedding matrix P ∈ R^{d_p×N}, the corresponding position embeddings are obtained, where d_p is the dimension of the position embedding and N is the length of the sentence. The position embeddings are randomly initialized and updated during the training process. After transforming the position indices into position embeddings, the embeddings can model the different weights of words at different distances. In the example above, the sentiment word "interesting" is more important than the words "quite reasonably" for the aspect term "seafood menu." This implies that when the sentiment polarity of an aspect term is predicted, words at relatively small distances that carry sentiment polarity are more important than other words.

Word Representation
With word embedding technology, each word is embedded into a unique word vector that carries the information of the word itself in the vector space. To obtain the word embeddings, we apply GloVe (Pennington et al., 2014), pre-trained at Stanford University. In the following, all word embeddings are denoted by E ∈ R^{d_w×|V|}, where d_w represents the dimension of the word embeddings and |V| represents the size of the vocabulary. All aspect embeddings are expressed as A ∈ R^{d_a×|L|}, where d_a is the dimension of the aspect embeddings and |L| is the number of aspect terms. For a sentence with N words [w_1, w_2, ..., w_N] containing an aspect term [a_1, a_2, ..., a_M] with M words, the sentence embedding and the aspect embedding are obtained by looking up the embedding matrices E and A, respectively.

Bidirectional Gated Recurrent Unit
Recurrent neural networks have been successfully applied in the field of NLP. However, the standard RNN often suffers from vanishing or exploding gradients. As a special RNN, the LSTM adjusts the cell state through three gating mechanisms at each time step, better handling long-range dependencies. Compared with the one-way LSTM, BiLSTM can learn more contextual information: it establishes context dependence in both the forward and reverse directions. Concretely, the forward LSTM processes sentences from left to right, and the reverse LSTM processes sentences from right to left. From this, it obtains two hidden representations and then concatenates the forward and backward hidden states of each word as the final representation. In contrast with the LSTM, the GRU uses only two gating mechanisms to adjust the cell state; with fewer parameters and lower computational complexity, it achieves relatively better performance than the LSTM in NLP.
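As an illustration, the relative position indices described above can be generated with a few lines of Python; the 0-based word indices for the aspect span are an assumption of this sketch.

```python
def position_indices(n_words, a_start, a_end):
    """Relative position index p_i of each word with respect to one aspect term."""
    indices = []
    for i in range(n_words):
        if i < a_start:
            indices.append(a_start - i)   # words to the left of the aspect term
        elif i <= a_end:
            indices.append(0)             # words inside the aspect term
        else:
            indices.append(i - a_end)     # words to the right of the aspect term
    return indices

# "The seafood menu is interesting and quite reasonably priced" (9 words)
print(position_indices(9, 1, 2))  # aspect "seafood menu" -> [1, 0, 0, 1, 2, 3, 4, 5, 6]
print(position_indices(9, 8, 8))  # aspect "priced"       -> [8, 7, 6, 5, 4, 3, 2, 1, 0]
```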
Specifically, at time t, we obtain the embedding vector w_t ∈ R^{d_w} of the current input word from E and the aspect embedding vector v_a ∈ R^{d_a} from A; the current hidden layer vector h_t in the GRU is then updated as follows:

z_t = σ(W_z[w_t, v_a] + U_z h_{t−1} + b_z)
r_t = σ(W_r[w_t, v_a] + U_r h_{t−1} + b_r)
h̃_t = tanh(W_h[w_t, v_a] + U_h(r_t ⊙ h_{t−1}) + b_h)
h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t

where z and r are the update gate and reset gate, respectively; the sigmoid function σ(·) is used to control the retention of useful information and the discarding of useless information; W_z, W_r, W_h, U_z, U_r, U_h and b_z, b_r, b_h represent the weight matrices and biases learned in the GRU training process; ⊙ denotes element-wise multiplication; and [w_t, v_a] stands for the splicing (concatenation) of the word embedding w_t and the aspect embedding v_a. Then, the hidden vectors [h_1, h_2, ..., h_N] of the sentence with length N are regarded as the final context word representation. In the following, we adopt the BiGRU to obtain the contextual representation of a sentence. Compared with the one-way GRU, the BiGRU processes the sentence in both the forward and backward directions and concatenates the two directional hidden states of each word as its contextual representation.

Attention Mechanism
The attention mechanism can help the model focus on the important parts of a sentence in ASC tasks. In particular, the multi-head attention mechanism allows the model to learn relevant information in different representation subspaces. Furthermore, the self-attention mechanism can learn the word dependency relationships within the sentence and thereby capture the internal structure of the sentence. This mechanism can be processed in parallel, reducing the complexity of the calculations. In view of these advantages, the overall semantics of a sentence can be represented by the multi-head self-attention mechanism. Based on the hidden layer states h_i^t output by the BiGRU, the current context representation can be written as h_1^t, h_2^t, ..., h_N^t. Feeding them into the multi-head self-attention, a new representation s_t for the sentence is obtained as follows:

s_t = Concat(head_1, head_2, ..., head_k) W^o

Each head_i (i = 1, 2, ..., k) is calculated by the following formula:

head_i = Attention(Q_i, K_i, V_i) = softmax(Q_i K_i^T / sqrt(d_k)) V_i

where Q, K, and V represent the query, key, and value matrices, respectively. In these matrices, the vectors q, k_i, and v_i are calculated as follows:

q = W_q h^t,  k_i = W_k h_i^t,  v_i = W_v h_i^t

where W_q, W_k, and W_v are weight matrices whose values differ across the attention heads.

Sentiment Classification
For the multi-head self-attention representation s_t, we map it to the target space with C sentiment polarities through a non-linear layer:

x = tanh(W_r s_t + b_r)

where x = (x_1, x_2, ..., x_C), and W_r and b_r are the weight matrix and bias of the non-linear layer, respectively. Then, x is transformed into a conditional probability distribution through a Softmax layer. Therefore, the final distribution over the C sentiment polarities is obtained as:

y_c = exp(x_c) / Σ_{j=1}^{C} exp(x_j),  c = 1, 2, ..., C

From this result, the sentiment polarity corresponding to the maximum probability, i.e., max_{c=1..C} {y_c}, is chosen as the final sentiment classification.

Model Training
In the PMHSAT-BiGRU model, cross entropy with L_2 regularization is used as the loss function:

J(θ) = − Σ_{d∈D} Σ_{c=1}^{C} y_d^c log(g_d^c) + λ‖θ‖_2^2

where D denotes the data set consisting of samples d; y_d^c ∈ R^C represents the real sentiment polarity distribution of sample d; g_d^c ∈ R^C stands for the predicted sentiment polarity vector of sample d; λ is the L_2 regularization coefficient; and θ includes all model parameters. To optimize all model parameters, the loss function should be minimized as much as possible. By the back-propagation method, the parameters θ are updated by:

θ = θ − λ_l ∂J(θ)/∂θ

where λ_l is the learning rate.
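A compact PyTorch sketch of the pipeline described above (concatenated word, aspect, and position embeddings, a BiGRU, multi-head self-attention, and a softmax classifier) is given below; the layer sizes follow the parameter settings reported in the Parameters Setting subsection, but the mean pooling over attention outputs and the exact wiring are simplifying assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PMHSATBiGRU(nn.Module):
    """Sketch: position-enhanced multi-head self-attention over BiGRU states."""
    def __init__(self, vocab_size, n_aspects, d_w=300, d_a=300, d_p=100,
                 hidden=200, heads=8, max_len=80, n_classes=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_w)
        self.aspect_emb = nn.Embedding(n_aspects, d_a)
        self.pos_emb = nn.Embedding(max_len, d_p)        # relative position indices
        self.bigru = nn.GRU(d_w + d_a + d_p, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, words, aspect, pos):
        # words, pos: (batch, seq_len); aspect: (batch,)
        a = self.aspect_emb(aspect).unsqueeze(1).expand(-1, words.size(1), -1)
        x = torch.cat([self.word_emb(words), a, self.pos_emb(pos)], dim=-1)
        h, _ = self.bigru(x)                 # contextual word representations
        s, _ = self.attn(h, h, h)            # multi-head self-attention over the sequence
        logits = self.fc(s.mean(dim=1))      # pooled sentence representation -> C classes
        return torch.softmax(logits, dim=-1)
```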
In order to prevent overfitting during the training process, the dropout strategy is adopted to discard some learned features.

EXPERIMENTS
In this section, we conduct experiments with the proposed PMHSAT-BiGRU model and several baseline models on several large data sets. By comparing the results of these experiments, the effectiveness of the proposed PMHSAT-BiGRU model is verified. Then, several ablation experiments are set up to confirm the effectiveness of the modules in the proposed model. Finally, we visualize the dataset used in the experiments based on the proposed PMHSAT-BiGRU model.

Dataset
The ASC benchmark data sets officially published by SemEval, including SemEval 2014 Task 4, SemEval 2015 Task 12, and SemEval 2016 Task 5, are adopted. Among these, SemEval 2014 contains the Restaurant14 (R14) and Laptop14 (L14) datasets; SemEval 2015 provides the Restaurant15 (R15) dataset; and SemEval 2016 provides the Restaurant16 (R16) dataset. More specifically, each dataset contains a training set and a test set. In each dataset, every record is a single sentence, including the review text, the aspect terms, the sentiment labels corresponding to the aspect terms, and the starting positions of the aspect terms. There are four aspect-level sentiment polarities in these data sets, i.e., positive, negative, neutral, and conflict. To facilitate subsequent experiments, we only use the positive, negative, and neutral aspect-level sentiment polarities and remove the conflict polarity from these data sets, i.e., the number of sentiment polarity categories is C = 3. For all adopted datasets, the details of the training and test sets are shown in Table 1. In addition, we count the number of words in the aspect terms in Table 2. It is easy to see that more than one-fourth of the aspect terms in the datasets contain multiple words.

Parameters Setting
In our experiments, GloVe (Pennington et al., 2014) is used to initialize the aspect and contextual word embeddings, with the embedding dimension of each word set to 300. The weight matrices are initialized from the uniform distribution U(−µ, µ), where µ = 0.01, and all biases are set to 0. The aspect embedding dimension is also set to 300; the BiGRU hidden size is set to 200; and the position embedding dimension is set to 100. The maximum length of a sentence is 80; the batch size is 4; and the number of multi-head self-attention heads is 8. In our PMHSAT-BiGRU model, the dropout rate is set to 0.5; the L_2 regularization coefficient is set to 1e-5; the Adam optimizer is used to optimize the training parameters; and the learning rate is set to 1e-4. To implement our PMHSAT-BiGRU model, we employ PyTorch in the experiments.

Evaluation
In the experiments, we used two common evaluation indices for classification tasks, i.e., Acc and Macro-F1. In detail, Acc represents the proportion of correctly classified samples to the total number of samples, and it is calculated as follows:

Acc = (tp + tn) / (tp + tn + fp + fn)

where tp denotes the number of samples whose true labels and predicted sentiment labels are both positive categories, and tn represents the number of samples whose true labels and predicted sentiment labels are both negative categories.
Correspondingly, fp represents the number of samples whose true labels are negative categories but whose predicted sentiment labels are positive categories, and fn represents the number of samples whose true labels are positive categories but whose predicted sentiment labels are negative categories. Next, the Recall, Precision, and F1-score (RPF values) are calculated by the following:

Precision = tp / (tp + fp)
Recall = tp / (tp + fn)
F1 = 2 × Precision × Recall / (Precision + Recall)

In the experiments, we calculate the RPF values for the positive, negative, and neutral categories. Then, we obtain the Macro-F1 values by averaging the F1-score values of the three categories.

Baselines
In order to verify the effectiveness of the PMHSAT-BiGRU model, the experimental results are compared with the following baseline models:

Context word vectors average (ContextAvg): It averages the word embeddings and aspect vectors and then inputs the result into the softmax classifier; it was cited as a baseline model in Tang et al. (2016b).

Long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997): For a sentence, a one-way LSTM network is used to model the sentence; the last hidden layer vector is regarded as the final representation of the sentence and then sent to the Softmax classifier for the final classification.

Target-dependent long short-term memory (TD-LSTM) (Tang et al., 2016a): For a sentence with target words, the sentence is divided into two parts based on a target word, and two LSTMs are then used to model the context on the left and right sides of the target word, respectively. Finally, the representations of the two parts are connected as the classifier input to predict the sentiment polarity of the target word.

Target-connection long short-term memory (TC-LSTM) (Tang et al., 2016a): This model is similar to TD-LSTM. The difference is that TC-LSTM adds the aspect word information at the input: the word vector and the aspect vector are concatenated, explicitly integrating the correlation information between the aspect word and the context words.

Attention-based long short-term memory (AE-LSTM) (Wang et al., 2016): Based on the standard LSTM, aspect embeddings are designed to represent the aspect information, and the aspect embeddings are treated as part of the training parameters.

Attention-based long short-term memory with aspect embedding (ATAE-LSTM): On the basis of AE-LSTM, the aspect embedding is appended to each word embedding and hidden vector, and the attention mechanism is used to further strengthen the effect of the aspect embedding. This model was cited as a baseline model in Zhou et al. (2020).

Memory Network (MemNet): Using a deep memory network instead of an RNN-based method for sentence modeling, it repeatedly employs the attention mechanism to capture the connections between the context words and aspect words. This model was cited as a baseline model in Zhou et al. (2020).

Interactive attention network (IAN) (Ma et al., 2017): Two LSTMs are used to model the aspect terms and context words, respectively. Through interactive attention from the sentences to their corresponding aspects and from the aspects to the sentences, the sentence representations and aspect representations are generated. The two representations are then concatenated and input into the Softmax classifier for the classification.
Gated convolutional network with aspect embedding (GCAE) (Xue and Li, 2018): Many pairs of convolution kernels are used to extract local N-gram features, where each pair contains one aspect-independent and one aspect-dependent convolution kernel. The model then uses tanh and ReLU gating units to output the sentiment features of a given aspect. Attention-based long short-term memory with position context (PosATT-LSTM) (Zeng et al., 2019): On the basis of the one-way LSTM, the position relationships between the aspect words and the context are considered and applied to the calculation of the attention weights. Compared Methods Based on the proposed PMHSAT-BiGRU model and the baseline models, experiments are carried out on R14, L14, R15, and R16. The Acc and Macro-F1 values of these models are shown in Table 3, where bold values indicate the best result among the compared methods. From Table 3, the performance of the ContextAvg model is the worst among all models because it classifies directly from the averaged word embeddings and aspect embedding. Among the sequential models, the performance of the LSTM model is the worst because it uses neither the attention mechanism nor the aspect-word information, treating the aspect words and the other words in the sentence equally. Compared with the LSTM model, the AE-LSTM model embeds the aspects into the LSTM for training, so its Accuracy values are respectively 1.32, 1.75, 0.79, and 2.17% better than those of the LSTM model on R14, L14, R15, and R16. Although the TC-LSTM model considers the aspect-word information at the input, its performance is worse than that of the TD-LSTM model. Because the attention mechanism is used to model the relationships between the aspect words and the context in the MemNet model, its performance is better than that of the AE-LSTM model. Comparing MemNet with ATAE-LSTM, the Accuracy and Macro-F1 values of MemNet are slightly higher than those of ATAE-LSTM on part of the datasets. The performance of the IAN model is better than that of the ATAE-LSTM model because, in the IAN model, two LSTMs respectively model the aspect terms and the context, and an interactive attention mechanism is used to obtain the context related to the aspect terms. However, the IAN model does not consider the importance of the position relationships between the aspect words and the context; therefore, its performance is worse than that of the PosATT-LSTM model. Because the GCAE model uses a CNN and a gating mechanism to realize parallel computation, which makes the model insensitive to position information, its performance on R16 shows little improvement over the IAN model. For the PMHSAT-BiGRU model, the aspect embedding information and the position information between the aspect words and the context are applied in the calculation of the attention weights. Meanwhile, the multi-head attention mechanism is used to learn the dependency information in different contexts, and the self-attention mechanism is employed to capture the important words in the aspect terms. For the PosATT-LSTM model, only the semantic representation in a single context is captured, and each word in the aspect terms is treated equally.
Therefore, the performance of the PMHSAT-BiGRU model is clearly better than that of PosATT-LSTM. Overall, the performance of the PMHSAT-BiGRU model is superior to the above baseline models. In particular, compared with the original LSTM model, the Accuracy values of the PMHSAT-BiGRU model on R14, L14, R15, and R16 are improved by 5.72, 6.06, 4.52, and 3.15%, respectively. Model Analysis In this section, a series of model variants is designed to verify the effectiveness of the PMHSAT-BiGRU model. First of all, in order to verify the validity of the position information, the position information is removed from the PMHSAT-BiGRU model, yielding the MHSAT-BiGRU model. The MHSAT-BiGRU model keeps the representations of the aspect words and sentences and adopts the multi-head self-attention mechanism to model the relationships between aspect words and sentences. Second, in order to verify the effectiveness of the multi-head self-attention mechanism, it is replaced with an ordinary attention mechanism while the other parts of PMHSAT-BiGRU are kept unchanged, yielding the PAT-BiGRU model. The structure of the PAT-BiGRU model is very similar to that of the ATAE-LSTM model, except that PAT-BiGRU considers the position relationship between the aspect words and the context and uses a BiGRU instead of an LSTM. Finally, we also use a plain BiGRU model to verify the effectiveness of our multi-head self-attention mechanism. The experimental results of these models are shown in Table 4. From Table 4, the performance of the BiGRU model is the worst among all models. The reason is that this model treats every word in a sentence equally. In contrast, the multi-head self-attention mechanism can learn contextual information related to the aspect terms from different contexts, so the MHSAT-BiGRU model achieves better results than the BiGRU model. Because the PAT-BiGRU model uses both position embedding and aspect embedding to calculate the attention weights, while the MHSAT-BiGRU model only adopts the aspect embedding, the PAT-BiGRU model performs better than the MHSAT-BiGRU model. Compared with the PMHSAT-BiGRU model, the PAT-BiGRU model ignores the fact that aspect words carry different meanings in different contexts as well as the role of important words in the aspect terms, so its performance is lower than that of the PMHSAT-BiGRU model. On the basis of the above analysis, the PMHSAT-BiGRU model performs best among all models. The reason is that the model not only fully considers the position information of the aspect terms in the corresponding sentences but also models the relationships between the aspect terms and the sentences at multiple levels. Besides, the model pays more attention to the important words in the aspect terms, which is mainly realized by the multi-head self-attention mechanism. The multi-head self-attention mechanism is employed to learn the semantic information in different representation subspaces in the PMHSAT-BiGRU model, where the number of subspaces is controlled by the number of heads k of the multi-head attention mechanism. In the following, the influence of the parameter k on the Accuracy of the PMHSAT-BiGRU model is shown in Figure 2 (FIGURE 2 | The influence of the number of heads k of the multi-head attention mechanism on the accuracy of the model). It can be observed that, as k increases, the Accuracy values of the PMHSAT-BiGRU model change with similar trends on the four data sets.
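To make the role of k concrete, the sketch below sweeps the number of heads of a PyTorch multi-head self-attention layer over BiGRU-sized features (400 = 2 x 200, following the Parameters Setting section); the layer, tensor shapes, and head values are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

hidden = 400                      # assumed BiGRU output size: 2 x 200 hidden units
x = torch.randn(4, 80, hidden)    # (batch, max sentence length, features), hypothetical input

# Sweep the number of heads k as in Figure 2; each k must divide the feature dimension.
for k in [1, 2, 4, 5, 8, 10, 16]:
    attn = nn.MultiheadAttention(embed_dim=hidden, num_heads=k, batch_first=True)
    out, weights = attn(x, x, x)  # self-attention: queries, keys, and values are the same tensor
    print(k, tuple(out.shape), tuple(weights.shape))
```

The output shape stays fixed while k varies; only values of k that divide the feature dimension are valid, which is why sweeps such as the one in Figure 2 are restricted to divisors of the hidden size.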
Specifically, when k = 1, the multi-head self-attention mechanism is equivalent to an ordinary single-head self-attention mechanism. As k increases from 1 to 8, the performance of the model almost always improves, and beyond that the performance declines as k rises further. The main reason is that when the value of k is greater than 8, some heads learn the same attention weights, which introduces noise into the sentiment classification of the aspect terms. Evidently, when k = 8, the performance of the model on the four data sets is the best. Case Study In order to show the validity of the model intuitively, we take a sentence with aspect terms as an example and predict the sentiment polarities of its aspect terms with the PMHSAT-BiGRU model. For example, the sentiment polarities of the sentence "The wine list was extensive-though the staff did not seem knowledgeable about wine pairings." are predicted by the model. For this sentence, the attention weights between the aspect terms and the sentence are visualized in Figure 3; the darker the color of a word, the more important the word is for predicting the sentiment polarities of the aspect terms. It is easy to see that the model focuses on the words adjacent to the aspect terms. When the model predicts the sentiment polarity of the aspect term "wine list," the word "extensive" is close to the position of "wine list," so the model pays more attention to "extensive," which plays an important role in determining the sentiment polarity of the aspect term "wine list"; whereas the words "not" and "knowledgeable" are farther away from this aspect term and therefore receive less attention. Within the aspect term "wine list," the word "wine" receives more attention, which is mainly realized by the self-attention mechanism. So the model correctly predicts the sentiment polarity of the aspect term "wine list" as positive. Similarly, when the model predicts the sentiment polarity of the aspect term "staff," the words "knowledgeable" and "not" receive more attention than the other words. Since the positive polarity of the word "knowledgeable" for "staff" is eventually reversed by the word "not," the model correctly predicts the sentiment polarity of "staff" as negative. Therefore, the PMHSAT-BiGRU model accurately predicts the sentiment polarities of all aspect terms of the sentence. Thus, even if a given sentence contains multiple aspect terms, the PMHSAT-BiGRU model can find the sentiment descriptors relevant to each given aspect term and exactly predict the sentiment polarities of its aspect terms. CONCLUSION AND FUTURE STUDY In this article, a PMHSAT-BiGRU model based on the position influence vector, the multi-head self-attention mechanism, and the BiGRU is proposed for ASC. The PMHSAT-BiGRU model considers aspect terms composed of multiple words and the importance of each context word. The model also integrates the aspect words and their relative position information with respect to the context into the semantic model. First, the model establishes position vectors based on the position information between the aspect words and their context. Then the position vectors and aspect embeddings are added to the hidden representations of the BiGRU. Finally, the keywords in the aspect terms and the sentiment features related to the aspect terms are captured by the multi-head self-attention mechanism.
The experimental results on the SemEval 2014, 2015, and 2016 datasets show that the PMHSAT-BiGRU model can learn effective features and obtains better performance than the baseline models on the ASC tasks. In future work, the individual models and the different approaches for fusing the three important factors will be further improved. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
8,479.8
2022-01-25T00:00:00.000
[ "Computer Science" ]
Wireless Network Sensing of Urban Surface Water Environment Based on Clustering Algorithm School of Remote Sensing and Information Engineering, North China Institute of Aerospace Engineering, Langfang Hebei 065000, China Hebei Remote Sensing Information Processing and Application of Collaborative Innovation Center, Langfang Hebei 065000, China School of Electronic and Information Engineering, Hebei University of Technology, Tianjin 300131, China Beijing Insights Value Technology CO., LTD., Beijing 100071, China Introduction With the development of wireless sensing technology, especially the improvement of wireless sensing image processing level, wireless sensing is increasingly widely used in various social fields. In terms of urban planning, wireless sensing can realize dynamic monitoring of land use, supervision and control of air quality, urban ecological environment planning and construction, etc. [1]. Urban surface is a subsystem of urban ecosystem. As an important part of urban ecosystem, it plays an important role in improving urban ecological environment quality and improving residents' living standards [2]. In recent years, many cities at home and abroad have applied wireless sensing technology to surface information extraction to dynamically master the coverage area, optimize the spatial structure of green space, improve the potential of sustainable development of the city, and realize the overall planning [3]. Compared with the traditional way, the extraction of green space information by aerial wireless sensing image has the advantages of wide field of view, strong macroscopic view, clear and realistic image, large amount of information, short repetition cycle, and convenient data collection. It is very economical in terms of manpower, material resources, and financial resources and has a short time and high efficiency. Wireless sensing is a comprehensive earth detection technology developed in the century [4]. That is, collect information about an object without directly touching it. It usually refers to the acquisition of various ground object information from the air or space by some kind of sensor and the extraction and analysis of this information, so as to measure and judge the nature or characteristics of the ground object. With the development of space technology, optical technology, sensor technology, computer technology, and modern communication technology, wireless sensing technology has made great progress [5]. Since the rise of the century, the development of wireless sensing technology is increasingly rapid, on the basis of aerial photogrammetry, with the rapid development of modern science and technology such as space technology and electronic computer, as well as the needs of the development of geoscience, biology, and other disciplines; as an emerging technical discipline gradually developed, it has formed a relatively complete basic theoretical system and a series of technical support [6]. As a means of information acquisition, wireless sensing technology has penetrated into various fields of the national economy, such as agriculture, forestry, geology, meteorology, oceanography, environment, urban planning and land management, and other professional fields and departments [7]. Zhao et al. proposed the main methods of urban wireless sensing summarized as postclassification comparison method, multitemporal complex method, image difference ratio method, vegetation index method, principal component analysis, and transformation vector analysis [8]. Song et al. 
proposed a new urban land use classification method, which determines urban building density based on image texture to determine land use type. They used panchromatic spectral images to conduct experiments in Athens, Greece, which is higher than the traditional maximum likelihood method. This method can be used for urban wireless sensing monitoring such as urban land use change, urban expansion, and illegal building monitoring [9]. Awad et al. adopted the difference method, used spatial texture information and set certain constraints to eliminate the change information of agricultural land, and accurately extracted the annual urban expansion area [10]. Based on the current research, MFCM algorithm only considers the fixed neighborhood of each pixel to improve the robustness of the algorithm [11]. Most of the mature monitoring system instruments are imported. The imported instruments have high precision and many measuring indexes, but they are expensive and cumbersome to operate. The wireless frequency domain used in China needs to be specially applied to the radio department. There are also some independent research and development monitoring systems, but compared with foreign countries, there are still some problems, such as the number of monitoring stations and limited scope. In particular, in the wireless sensor network, the application of wireless sensor network technology is less. Real-world objects, however, are irregular neighborhood areas. On the basis of the current research, this paper proposes a regional FCM clustering method combined with water index, which calculates the normalized water index (NDWI) through the fusion of multispectral wireless sensing images. Combined with normalized water index, fuzzy clustering results were obtained by RFCM algorithm proposed in this paper. The optimal threshold was selected to defuzzify the fuzzy clustering results, and finally, the extraction results of urban surface water were obtained. The accuracy of the proposed algorithm was compared with that of the traditional surface water extraction algorithm. The experimental results showed that the size of different neighborhood regions affected the water extraction accuracy. In W city, the kappa coefficient of MFCM16 was 0.41% higher than that of MFCM8, and the overall classification accuracy of MFCM16 was 1.33% higher than that of MFCM. In G city area, the kappa coefficient of MFCM16 was 1.81% higher than that of MFCM8, and the overall classification accuracy of MFCM16 was 1.7% higher than that of MFCM. Comparing the RFCM algorithm with other algorithms, the RFCM algorithm obtained the best experimental results, to reduce the "saltand-pepper phenomenon" effect [12]. The innovation of sensor technology and communication technology makes wireless sensor network technology more perfect, which is often used in environmental monitoring. Using wireless sensor network to monitor water quality and obtain water quality data for storage and remote transmission can well solve the shortcomings of traditional monitoring methods and achieve real-time, low-cost, and long-term measurement of the environment. Wireless Sensing Data Preprocessing. Two-scene gF-2 domestic high-resolution wireless sensing images (Guangzhou and Wuhan) were used to extract urban surface water. 
(1) First, radiometric calibration and atmospheric correction are performed on the multispectral and panchromatic data, and a 1 m resolution multispectral wireless sensing image is then obtained using the NND (Nearest Neighbor Diffusion) image fusion algorithm. (2) The morphological shadow index is used to remove building shadows. (3) The normalized water index (NDWI) is computed from the fused multispectral wireless sensing image. (4) Combined with the normalized water index, fuzzy clustering results are obtained with the RFCM algorithm proposed in this paper. (5) The optimal threshold is selected to defuzzify the fuzzy clustering results, finally yielding the extraction results of urban surface water. (6) Manually vectorized real surface water data are used to verify the effectiveness of the algorithm, and its accuracy is compared with that of traditional surface water extraction algorithms [13]. Water Index. The water index method combines the single-band threshold method with multiband spectral logic operations so as to enhance the difference between water bodies and other ground objects and to effectively suppress the influence of background noise (shadow, dark impervious surfaces, ice and snow, etc.). It exploits the fact that the reflectance of water in the visible bands is generally low (less than 10%, typically 4%-5%) and gradually decreases with increasing wavelength, while in the near-infrared band water is almost fully absorbing. In order to enhance the difference between water bodies and other ground objects, a ratio of the visible and near-infrared bands is computed, or different weights are assigned to the bands based on the above principle. At present, the mainstream water indexes include the normalized water index (NDWI), the modified normalized water index (MNDWI), the enhanced water index (EWI), and the automated water extraction index (AWEI). However, the GF-2 domestic high-resolution wireless sensing image has only one panchromatic band and four multispectral bands, namely, red, green, blue, and near-infrared, without middle-infrared or thermal-infrared bands. Therefore, the normalized water index (NDWI) is adopted in this paper. The specific calculation is shown in Formula (1), NDWI = (ρGreen − ρNIR) / (ρGreen + ρNIR), where ρGreen is the surface reflectance of the green band and ρNIR is the surface reflectance of the near-infrared band. This water index has a good suppression effect on background noise [14]. 2.3. Regional FCM Clustering Algorithm 2.3.1. Principle of the Regional FCM Clustering Algorithm. The RFCM (regional fuzzy C-means) clustering algorithm is derived from the traditional FCM algorithm and improved FCM algorithms. The traditional FCM algorithm only considers the information of the pixel itself, not the spatial information of the pixel neighborhood. The improved FCM algorithm MFCM (modified FCM) considers the spatial information of the pixel neighborhood, but only within a fixed, regular window around the pixel. Real objects, however, correspond to irregular neighborhood areas (image objects). The regional FCM clustering algorithm therefore determines the size of the neighborhood region according to the spatial heterogeneity between the pixel and its neighboring pixels. The RFCM algorithm simply adds the spatial information of the pixel neighborhood region to the FCM algorithm and takes the membership degree constraint of the pixel neighborhood region into account.
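As a small illustration of Formula (1), the sketch below computes the NDWI pixel by pixel from green and near-infrared reflectance arrays with NumPy; the sample values are hypothetical and the function is not the authors' implementation.

```python
import numpy as np

def ndwi(green, nir, eps=1e-6):
    """Normalized water index from green and near-infrared surface reflectance."""
    green = np.asarray(green, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    return (green - nir) / (green + nir + eps)

# Hypothetical 2 x 2 reflectance patches; water pixels give NDWI values close to +1.
green = np.array([[0.08, 0.30], [0.07, 0.28]])
nir = np.array([[0.02, 0.45], [0.01, 0.40]])
print(ndwi(green, nir))
```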
Finally, the membership degrees are obtained by iterative optimization of the RFCM objective function with the neighborhood region constraint, and the category of each pixel is determined according to the optimal membership threshold. The objective function J_RFCM of the regional FCM can be written as J_RFCM = Σ_{i=1..C} Σ_j u_ij^m ||x_j − v_i||² + a Σ_{i=1..C} Σ_j u_ij^m ||x_{j,region} − v_i||², where u_ij represents the membership degree of pixel j in category i; m ∈ [0, 1] is a weighting coefficient; ||x_j − v_i||² is the feature-space distance between pixel j and the cluster center v_i; a is the constraint parameter of the neighborhood area; and ||x_{j,region} − v_i||² is the feature-space distance between the mean feature of the pixels in the neighborhood region of pixel j and the cluster center v_i. By introducing Lagrange theory to optimize the objective function of the regional FCM, a constrained function O_RFCM is constructed as O_RFCM = J_RFCM + λ(1 − Σ_{i=1..C} u_ij), where λ is the Lagrange multiplier and Σ_{i=1..C} u_ij is the sum of the membership degrees of the j-th pixel over all categories, with Σ_{i=1..C} u_ij = 1. Calculation of the Neighborhood Area. The key of the RFCM algorithm is the calculation of the neighborhood area. The size of the neighborhood area is determined by the spectral difference between the center pixel and its neighboring pixels. The PSI index method is adopted in this paper to determine the neighborhood area, which is more reasonable than a fixed-window neighborhood [15]. The basic idea of the algorithm is a series of direction lines diverging from the center pixel in different directions, as shown in Figure 1. The spectral heterogeneity between the pixels on each direction line and the central pixel is calculated. If the spectral heterogeneity is less than a threshold and the length of the direction line is less than a threshold, the pixel is assigned to the neighborhood area of the central pixel. The spectral heterogeneity measure is calculated as p_d(i, k) = Σ_{s=1..N} |p_s(i) − p_s(k)|, where p_d(i, k) is the heterogeneity measure between the current pixel i and the neighborhood pixel k on direction line d, p_s(i) and p_s(k) are the spectral values of the center pixel and the current neighborhood pixel in band s, respectively, and N is the number of bands. Under two threshold conditions, each direction line expands simultaneously from the center pixel. One threshold condition is that when the heterogeneity value of the current pixel is greater than the spectral constraint threshold T1, diffusion stops along this direction line. The other threshold condition is that the length of the direction line between the current pixel and the center pixel must remain less than the threshold T2; otherwise the search in this direction is stopped, preventing the neighborhood along the direction line from becoming too large. Finally, the neighborhood pixels that meet the conditions in each direction of the central pixel are collected, and the resulting pixel set is the neighborhood region of the pixel. Compared with the spectral features of the wireless sensing image alone, the water index improves the separability between water bodies and other ground objects and has a good suppression effect on background noise. Therefore, the normalized water index and the shadow index are stacked with the spectral features as the input features of the RFCM clustering algorithm in the experiment. At the same time, the RFCM clustering algorithm considers the spatial information of the homogeneous neighborhood region of each pixel.
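The sketch below is a minimal NumPy rendering of a fuzzy C-means loop with an added neighborhood-mean penalty in the spirit of J_RFCM above; the update rules, parameter values, and function name are illustrative assumptions rather than the exact derivation used in the paper, and the defuzzification here simply takes the largest membership instead of the optimal threshold.

```python
import numpy as np

def rfcm_like(x, x_nbr, c=2, m=2.0, a=0.5, iters=50, seed=0):
    """Fuzzy C-means with a neighborhood-mean penalty (illustrative sketch).

    x     : (n, d) per-pixel features, e.g. spectral bands stacked with NDWI
    x_nbr : (n, d) mean feature of each pixel's (irregular) neighborhood region
    a     : weight of the neighborhood term (the constraint parameter above)
    m     : fuzzification exponent; 2.0 is a common default, used here for illustration
    """
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=x.shape[0])        # memberships, rows sum to 1
    for _ in range(iters):
        um = u ** m
        # cluster centers pull on both the pixel and its neighborhood mean
        v = (um.T @ (x + a * x_nbr)) / ((1.0 + a) * um.sum(axis=0)[:, None])
        d = (np.linalg.norm(x[:, None, :] - v[None], axis=2) ** 2
             + a * np.linalg.norm(x_nbr[:, None, :] - v[None], axis=2) ** 2) + 1e-12
        inv = d ** (-1.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)          # enforces sum_i u_ij = 1
    return u, v

# Hypothetical toy data: 200 pixels, 5 features (4 bands + NDWI).
x = np.random.rand(200, 5)
x_nbr = x + 0.01 * np.random.randn(200, 5)                # stand-in for neighborhood means
u, v = rfcm_like(x, x_nbr)
labels = u.argmax(axis=1)                                 # simple defuzzification
print(np.bincount(labels))
```

In an actual run, x would hold the stacked spectral bands, NDWI, and shadow index, and x_nbr the mean feature of each pixel's PSI-derived neighborhood region.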
Based on the advantages of these two aspects, an urban surface water extraction algorithm integrating the water index and RFCM was designed and applied to surface water extraction from domestic GF-2 wireless sensing images in a complex urban environment. Design Objectives of the Water Environment Monitoring System Based on a Wireless Sensor Network (1) Build an intelligent system and apply wireless sensing technology to achieve the measurement and monitoring of water quality. (2) Achieve reliable, high-speed, and low-power wireless transmission of water monitoring data. (3) Collect and monitor data: the host computer and the web page display the locations of specific water areas and of the sensor nodes in the water and show the latest recorded values of the water quality parameters measured by the sensor nodes. (4) Under low power consumption, the system can work normally for a long time. Results and Analysis The RFCM algorithm proposed in this paper is compared with the improved MFCM algorithm (8 × 8 and 16 × 16 windows, denoted MFCM8 and MFCM16 in the experiment), the K-means clustering algorithm, the NDWI threshold algorithm (TH), and the object-oriented method (OBIA). In the experiment, the overall classification accuracy (OA) and the kappa coefficient were used for statistical and quantitative accuracy evaluation, and visual interpretation was used for qualitative evaluation [16]. The kappa coefficients and overall accuracies of the classifications for the two research areas in G city and W city are shown in Figures 2 and 3. The kappa coefficient in G city is 89.88%, the kappa coefficient in W city is 92.49%, the overall classification accuracy in G city is 89.14%, and the overall classification accuracy in W city is 92.58%. The classification accuracy of the methods considering regional spatial information (MFCM and RFCM) and of the object-oriented method (OBIA) is higher than that of the pixel-based method (K-means) and the water index threshold method (TH), with lower misclassification and omission error rates [17]. Among them, in W city, the kappa coefficient of RFCM was 1%, 1.5%, and 1.3% higher than that of OBIA, MFCM8, and MFCM16, respectively, and the OA of RFCM was 3.86%, 5.82%, and 4.48% higher than that of OBIA, MFCM8, and MFCM16, respectively. In G city, the kappa coefficient of RFCM was 0.2%, 2.3%, and 0.7% higher than that of OBIA, MFCM8, and MFCM16, respectively, and the overall classification accuracy (OA) of RFCM was 3.2%, 5.83%, and 4.12% higher than that of OBIA, MFCM8, and MFCM16, respectively [18]. The OBIA method maintains the integrity of ground objects better than the K-means and TH methods. The RFCM and MFCM algorithms not only maintain the integrity of ground objects but also better retain their local details. Compared with the OBIA, K-means, TH, MFCM8, and MFCM16 algorithms, the RFCM algorithm can both maintain the integrity of ground objects and better retain local details: fine surface water bodies are effectively identified, the boundary information of surface water bodies is well preserved, and the influence of urban building shadows is eliminated at the same time [19]. The improved FCM algorithm was also evaluated with regular neighborhood windows of different sizes. The experimental results showed that the size of the neighborhood region affects the water extraction accuracy.
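Before turning to the window-size comparison, the snippet below shows one way the OA and kappa coefficient used here can be computed from per-pixel labels with scikit-learn; the label arrays are hypothetical placeholders, not the study's validation data.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical per-pixel labels: 1 = water, 0 = non-water.
reference = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # manually vectorized surface water
predicted = np.array([1, 1, 0, 1, 1, 0, 1, 0])   # e.g., the RFCM classification result

print("OA    :", accuracy_score(reference, predicted))
print("kappa :", cohen_kappa_score(reference, predicted))
```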
In W city, the kappa coefficient of MFCM16 was 0.41% higher than that of MFCM8, and the overall classification accuracy of MFCM16 was 1.33% higher than that of MFCM8. In the G city area, the kappa coefficient of MFCM16 was 1.81% higher than that of MFCM8, and the overall classification accuracy of MFCM16 was 1.7% higher than that of MFCM8. The "salt-and-pepper" phenomenon in the experimental results was quantified by counting speckle noise, defined as water patches smaller than a minimum water area threshold. Table 1 shows that, compared with the other algorithms, the RFCM algorithm can clearly eliminate the "salt-and-pepper" phenomenon in surface water extraction. In the comparison experiment of the six algorithms, TH has the worst effect and produces a large number of noise spots: because of the "same object, different spectra; different objects, same spectrum" phenomenon in high-resolution data, a large number of isolated noise points are generated. The K-means algorithm is better than the TH algorithm, followed by the MFCM8 and MFCM16 algorithms; the MFCM algorithm significantly reduces the "noise" spots of surface water, which shows that the spatial information of the pixel neighborhood has a certain inhibitory effect on water "noise" spots. The OBIA algorithm is better than MFCM8 and MFCM16. The RFCM algorithm achieves the best experimental results and has a significant effect in reducing the "salt-and-pepper" phenomenon [20]. Conclusions In this paper, a regional FCM clustering method combined with a water index is proposed, in which the normalized water index (NDWI) is calculated from the fused multispectral wireless sensing images. Through the design of a wireless sensor network water environment monitoring system and combined with the normalized water index, fuzzy clustering results were obtained by the RFCM algorithm proposed in this paper. The optimal threshold was selected to defuzzify the fuzzy clustering results, and finally the extraction results of urban surface water were obtained. The accuracy of the proposed algorithm was compared with that of traditional surface water extraction algorithms. Compared with the other algorithms, the RFCM algorithm obtained the best experimental results and reduced the "salt-and-pepper" effect. Some details of surface water extraction were not taken into account by the algorithm and will continue to be studied in the future. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare no conflicts of interest.
4,221
2021-12-06T00:00:00.000
[ "Environmental Science", "Engineering", "Computer Science" ]
Machine Learning Methods to Estimate Productivity of Harvesters: Mechanized Timber Harvesting in Brazil : The correct capture of forest operations information carried out in forest plantations can help in the management of mechanized harvesting timber. Proper management must be able to dimension resources and tools necessary for the fulfillment of operations and helping in strategic, tactical, and operational planning. In order to facilitate the decision making of forest managers, this work aimed to analyze the performance of machine learning algorithms in estimating the productivity of timber harvesters. As predictors of productivity, we used the availability of hours of machine use, individual mean volumes of trees, and terrain slopes. The dataset was composed of 144,973 records, carried out over a period of 28 months. We tested the predictive performance of 24 machine learning algorithms in default mode. In addition, we tested the performance of blending and stacking joint learning methods. We evaluated the model’s fit using the root mean squared error, mean absolute error, mean absolute percentage error, and determination coefficient. After cleaning the initial database, we used only 1.12% to build the model. Learning by blending ensemble stood out with a determination coefficient of 0.71 and a mean absolute percentage error of 15%. From the use of data from machine learning algorithms, it became possible to predict the productivity of timber harvesters. Testing a variety of machine learning algorithms with different dynamics contributed to the machine learning technique that helped us reach our goal: maximizing the model’s performance by conducting experimentation. Introduction Management integrates the routine of forest managers responsible for guiding and implementing mechanized logging operations. The optimization of time and biological assets capitalized in planted forests, when exhausted by timber harvesters, affects the success of the operation. Thus, it is necessary to know the variables that influence mechanized timber harvesting, allowing for more effective planning. Quality indicators, evaluation criteria, and risk analysis techniques enhance the structures that support decision makers. In doing so, data collected in forest inventory, measurements at the stand level, operational forest management, and onboard computers in forest machinery help and allow the management procedures for timber harvesting operations [1][2][3][4][5][6]. The manipulation and reuse of this information promote its use in the management development itself and helps in the identification of opportunities that can be foreseen. However, a mechanized timber harvesting operation planning scope necessarily requires a quantitative, robust, and reliable quantitative database [7,8]. Dataset We used structured data from the production and operation of mechanized timber harvesting in Eucalyptus-and Pinus-planted forests carried out by cut-to-length systems with harvesters. The planted forests with Eucalyptus had a spacing of 3.3 m × 1.8 m and mean age of 14 ± 9.87 years. The Pinus forest had a spacing of 3.3 m × 1.8 m and mean age of 22 ± 9.09 years. The wood from these forests was used as raw material for the production of pulp and paper. The average meteorological conditions in the study region, according to the National Institute of Meteorology [56], were a relative humidity of 69.24%, wind speed of 4.29 ms −1 , and an air temperature of 289.3 K. 
The operations took place in Brazil, in a region with a slope gradient from 7.32% to 35.06%. The intervals were categorized by a gentle (3% to 10%), moderate (10% to 32%), and steep slope relief (32% to 56%), according to Speight [57]. This research was based on empirical data and silvicultural inputs; therefore, the data were part of the daily records collected in the field by the onboard computers of timber harvesters. Despite considering all records in the initial analysis, we employed a series of compensatory controls from data wrangling. In the two 10 h shifts daily, the machine availability, the individual mean volumes of trees, and the terrain slope added up to 144,973 instances incurred in the period of 28 months. These data were categorical, numerical, and ordinal, according to the box plot and distributions of predictor and target variables provided in the Supplementary Material ( Figure S1). The bases were labeled, joined, and manipulated using the R programming language [58]. The actual times spent in activities were recorded in the onboard computers of timber harvesters. This way, we estimated productivity from the ratio between the timber volume extracted by a harvester, in cubic meters, and the effective operation time, in seconds [59,60]. The same operator could operate different brands and models of timber harvesters; however, this variable was not added in the construction of the model, due to the difficulty in tracking these data in the database. Altogether, the operating records of 21 harvesters were used (Table 1). Through the programming language R, when implementing machine learning routines for management planning and detecting data quality, we considered, according to Konstantinou and Paton [61], procedures for transforming, cleaning, and merging different sources. We built a data wrangling routine, in which the outliers and potentially correlated variables were removed. The instances went through the data wrangling process, which was performed in order to properly transform and gather acquired data. Additionally, through the interquartile range, conceptually defined by the Tukey range [62], we removed outliers and, using Spearman correlation, we verified the correlations between attributes (p < 0.05). Furthermore, data balancing from SMOTE was performed. The SMOTE was adopted because it is a reference algorithm to solve the class disequilibrium learning problem [63]. The SMOTE algorithm has the dynamics of generating new synthetic examples in the neighborhood of small groups of nearby instances, using the k-nearest neighbor [64]. The function was implemented from the smotefamily package. Different Learning Methods and Algorithm Approaches Using a single dataset, we compared the predictive performance of 24 machine learning algorithms to estimate the productivity of timber harvesters. These algorithms were based on a decision tree, gradient boosting machine, linear regression, k-nearest neighbors, support vector machine, and artificial neural network. For determining the best model, we used the metrics: root mean error (RMSE), mean absolute error (MAE), and mean absolute percent (MAPE). We used the determination coefficient (R 2 ) as a final performance measure for each method. We adopted the gradient (5, 10, 15, 20, and 25) for cross-validation, in which the hyperparameters were automatically optimized. Finally, we implemented stacking ensemble and blending ensemble learning methods. We ordered stacking ensemble learning methods in a hierarchical data structure. 
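A minimal pandas sketch of the outlier and correlation checks described above is shown below; the file name and column names are hypothetical stand-ins, and the SMOTE balancing step is omitted because it requires a categorical label.

```python
import pandas as pd

# Hypothetical file and column names standing in for the recorded attributes.
df = pd.read_csv("harvester_records.csv")
cols = ["tree_volume_m3", "slope_percent", "machine_availability_h", "productivity_m3_h"]

# Tukey / interquartile-range rule: drop rows outside 1.5 * IQR on any attribute.
q1, q3 = df[cols].quantile(0.25), df[cols].quantile(0.75)
iqr = q3 - q1
keep = ~((df[cols] < q1 - 1.5 * iqr) | (df[cols] > q3 + 1.5 * iqr)).any(axis=1)
df = df[keep]

# Spearman rank correlation between attributes, used to flag strongly related predictors.
print(df[cols].corr(method="spearman"))
```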
On each fold set, we applied k-fold cross-validation. The predictive performance of the models on unseen data was maximized with respect to the determination coefficient (R2), minimizing the test RMSE, which was computed as the mean over the randomly generated samples (n = 80). We implemented supervised regression learning using the Python PyCaret library [65] to automate the machine learning workflow and model development. From the data instances, we assigned 90% (n = 1466) to a training set and 10% (n = 163) to a test set. We tested machine learning algorithms and selected them according to their performance in predicting the productivity of forest machines, using universal statistical metrics for evaluating model performance [66], namely root mean squared error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and the determination coefficient (R2). We then subjected the algorithms to different machine learning methods. First, we verified the performance of the algorithms individually, with hyperparameters in default mode. To improve performance, we adjusted the hyperparameters of the selected algorithms. We combined the validated data to form the meta-feature set, the test data, and the target set. We then combined the sets into new meta-resource sets, creating a new meta-training set, while the new target sets formed a new meta-test set. Final predictions were generated by the level-one meta-learner trained on the meta-training set. The combined learning method consisted of combining machine learning algorithms to minimize prediction error rates. For this, we divided the dataset into training and testing sets and implemented zero-layer algorithms, which generated validation and test sets. We combined the respective sets into new meta-training and meta-test sets [67] and generated final predictions with the level-one meta-learner trained on the meta-training set. Dataset Quality The manipulation of the dataset with daily records of the mechanized timber harvesting operation resulted in a sample of 144,973 instances. However, because it was consolidated from different sources, including manual notes, the quality of the dataset was partially compromised. Thus, we removed duplicate instances and instances with missing information. With a data wrangling routine, in addition to cleaning, filtering, and transforming the data, we examined data quality, excluding outliers and balancing the data. It is noteworthy that, despite the timber harvesters having onboard computers, the data recording process still required manual interactions. Consequently, only 1.12% of the dataset remained to build the models with the machine learning algorithms (Table 2). The attributes selected for model building were the individual mean volumes of trees, the terrain slope, and the availability of hours of machine use. The mean, standard deviation, and median of the dataset from the mechanized timber harvesting operation for the attributes under study are shown in Table 3 (Table 3. Mean, standard deviation, and median of the dataset from the mechanized timber harvesting operation, after removing outliers of the three initial attributes). Different Learning Methods and Algorithm Approaches First, we analyzed the predictive performance of the 24 algorithms individually, based on the model fit metrics.
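The paper reports using the PyCaret library for this workflow; the sketch below shows how such a comparison and the two ensembles could be scripted with PyCaret's regression module, using a hypothetical dataframe and target column name and without claiming to reproduce the exact settings of the study.

```python
import pandas as pd
from pycaret.regression import setup, compare_models, blend_models, stack_models, predict_model

df = pd.read_csv("harvester_records_clean.csv")   # hypothetical cleaned dataset

setup(data=df, target="productivity_m3_h", train_size=0.9, session_id=42)
top3 = compare_models(n_select=3)             # rank the candidate regressors with default settings
blender = blend_models(estimator_list=top3)   # blending ensemble of the three best models
stacker = stack_models(estimator_list=top3)   # stacking ensemble with a level-one meta-learner
print(predict_model(blender))                 # hold-out MAE, MSE, RMSE, R2, MAPE
```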
Of the three trained algorithms based on decision trees, the determination coefficient of extra trees stood out, being 0.01 higher than that of random forest and 0.22 higher than that of the decision tree (Table 4). When analyzing the four algorithms based on gradient-boosted machines, the best determination coefficient was obtained by the CatBoost Regressor, which was 0.04 higher than that of the Gradient Boosting Regressor and 0.20 higher than that of the AdaBoost Regressor (Table 5). The availability of algorithms based on linear regression allowed twelve such algorithms to be trained (Table 6). The Automatic Relevance Determination, Kernel Ridge, Linear Regression, Huber Regression, Ridge Regression, and Bayesian Ridge algorithms showed the same determination coefficient, which was 0.02 higher than that of the TheilSen Regressor, 0.06 higher than that of Least Angle Regression, 0.13 higher than that of Orthogonal Matching Pursuit, and 0.42 higher than those of Lasso Regression and Elastic Net. Despite their different dynamics, the best determination coefficient among the remaining algorithms was obtained by the k-neighbors regressor, which was 0.05 higher than that of the multi-layer perceptron regressor, 0.18 higher than that of Random Sample Consensus, and 0.45 higher than that of Support Vector Regression (Table 7). Table 7. Evaluation metrics of models based on the k-nearest neighbor, multi-layer perceptron regressor, random sample consensus, support vector regression, dummy regressor, and passive-aggressive regressor applied to the training set from the mechanized timber harvesting operation. Among the applied models, those presenting the best determination coefficients were the blending ensemble and the stacking ensemble, followed by the algorithms run in default mode, among which the Extra Trees Regressor stood out (Table 8). When analyzing the metrics on the test dataset, the blending ensemble model was confirmed as the best predictor of the productivity of timber harvesters (Table 9). In addition, as an assessment of overall model performance, we examined the 80 combinations of test set data with the response. It was evident that the blending ensemble, followed by the stacking ensemble, produced relatively higher average values of R2 (Figure 1) and a lower degree of dispersion. Figure 2 illustrates the performance of the main algorithms used in model construction to predict productivity, relative to the observed values. When we selected a black box algorithm, the increase in performance compromised the interpretation of the relationships between the predictor variables and the target variable; the complex mathematical functions made it difficult for technical experts to draw inferences. However, by visualizing the distributions of the predictor variables in each quartile of the response variable, it was possible to infer that higher productivity was associated with greater machine availability and lower slope levels. Model Although the algorithms used in the construction of the models do not allow direct interpretability, the density distributions of the predictor variables (individual mean volumes of trees, terrain slope, and machine availability) were determined for the productivity quartiles of the test set (Figure 3).
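A minimal sketch of this quartile-based inspection is given below: productivity is split into quartiles with pandas and the distribution of each predictor is plotted per quartile; the file and column names are hypothetical and the plot only illustrates the type of analysis behind Figure 3.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical cleaned dataset with the three predictors and the productivity target.
df = pd.read_csv("harvester_records_clean.csv")
df["quartile"] = pd.qcut(df["productivity_m3_h"], q=4, labels=["Q1", "Q2", "Q3", "Q4"])

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, col in zip(axes, ["tree_volume_m3", "slope_percent", "machine_availability_h"]):
    df.boxplot(column=col, by="quartile", ax=ax)   # distribution of the predictor per quartile
    ax.set_title(col)
fig.suptitle("")                                    # drop pandas' automatic group title
plt.tight_layout()
plt.show()
```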
Discussion Incorporating machine learning models into forest operations management routines allows managers to make tactical and operational adjustments with agility in decision making and accurate prognosis. In mechanized timber harvesting, the scenario dynamics and the external influences that impact activities demand this adaptability together with forecasting capacity. However, conducting and monitoring the performance of mechanized timber harvesting operations with analytical tools such as machine learning is restricted by the quantity and quality of the available data. Liski et al. [31], Maktoubian et al. [68], Demirci et al. [69], and Abbasi et al. [70] report that decentralization and a lack of data management in forest environments reduce the achievement of significant results. The harvesters that acted as data sources had embedded technology, with output records of the timber cutting and sectioning activities. The lack of interoperability among electronic devices made communication and data transfer fragile, compromising the possibility of instant corrections and of perceiving deviations in the records. Furthermore, Buccafurri et al. [71] and Shi et al. [72] point out that the quality of the generated instances is part of a cooperative process, which requires the participation, and therefore the leveling, of all those involved. Data management must be aligned with the organization of operations, which makes it the responsibility of forest managers to coordinate these efforts.
The data residual volume, after execution of data wrangling processes, was still sufficient to verify the performance of machine learning algorithms in the productivity modeling of mechanized timber harvesting. Of the algorithm groups applied in the modeling, in the default process, the ones that performed best were those based on the decision tree, gradient-boosted machine, and k-nearest neighbor. Therefore, the best individual performance algorithms were, respectively: extract trees, gradient-boosted, and k-nearest neighbor. There are many types of decision trees that have as their core the entropy of information. According to An and Zhou [73], in the specific analysis process, the gained information for each attribute is classified and ordered. Among the decision tree algorithms evaluated, the one that presented the best performance was the extremely randomized trees or extra trees algorithm. This algorithm was developed by Geurts et al., [74] and uses the same principle of random forests. However, as supported by Ahmad et al. [75], the extra trees may have differentiated themselves by using the entire training dataset to train each regression tree and not just a bootstrap replica. Of the algorithms based on gradient-boosted machines, the CatBoost Regressor showed the best fit model to the data. This algorithm developed by Prokhorenkova et al. [76] is an enhancement of gradient boosting, designed to avoid attribute dependency and improve prediction accuracy on small datasets. As it is a non-parametric algorithm that, according to Ortiz-Bejar et al. [77], stores all known observations and uses them in the prediction based on similarity functions, the third-best performance was from the model based on k-nearest neighbor. As a way of enhancing the prediction, tests were carried out with the blending ensemble and stacking ensemble learning methods, using combined learning. These learnings were combined from the three algorithms, in default mode, which presented the best performances. The predictions obtained by both methods were superior to those obtained by algorithms in default mode. Jong et al. [78] and Jordan and Mitchell [79] point out that, in general, combined learning methods increase the performance of models built with machine learning. Associating the blending ensemble use with the possibility of pre-determining productivity, based on attributes of individual mean volumes of trees, terrain slope, and availability of hours of machine use, promotes dynamism in managers' planning, especially in operational planning, which requires quick responses in adverse operating conditions. This corroborates the limitations of traditional estimating method productivity through the study of times. In addition, the comparison through values of employed models' scatter diagrams demonstrated the effects of predictor variables on productivity. In upper quartiles, in operating conditions with lower slopes and longer availability harvesters, their effects increase considerably the productivity. The building of models involving machine learning algorithms, in addition to providing prediction of harvester productivity in the mechanized timber harvesting operation, allowed us to look at the bases that guide strategic decisions of operations in planted forests. This opportunity has shown that, despite the quality, suitable data promote knowledge extraction, mainly from attributes not correlated with productivity. 
Conclusions From the use of adjusted data and machine learning algorithms, it is possible to predict the productivity of timber harvesters. Among the attributes that compose the datasets of mechanized timber harvesting activities, the individual mean volumes of trees, the terrain slope, and the machine availability are the main factors impacting the estimation of harvester productivity. Testing a variety of machine learning algorithms with different dynamics supported the goal of the study, namely experimentation and good model performance, and the choice of blending ensemble learning was guided by the comparison of the model fit statistical metrics. Among the blending ensemble, the stacking ensemble, and the algorithms in default mode, the blending ensemble performed best, with a determination coefficient of 0.71 and a mean absolute percentage error of 15%.
4,408.6
2022-07-07T00:00:00.000
[ "Environmental Science", "Computer Science", "Engineering" ]
Diffractive Vector Photoproduction using Holographic QCD We discuss diffractive photon-production of vector mesons in holographic QCD. At large $\sqrt{s}$, the QCD scattering amplitudes are reduced to the scattering of a pair of dipoles exchanging a closed string or a pomeron. We use the holographic construction in AdS$_5$ to describe both the intrinsic dipole distribution in each hadron and the pomeron exchange. Our results for heavy meson photon-production are made explicit and compared to some existing experiments. I. INTRODUCTION Diffractive scattering at high energy is dominated by pomeron exchange, an effective object corresponding to the highest Regge trajectory. The slowly rising cross sections are described by the soft Pomeron with a small intercept (0.08) and vacuum quantum numbers. Reggeon exchanges have even smaller intercepts and are therefore subleading. Reggeon theory for hadron-hadron scattering with large rapidity intervals provides an effective explanation for the transverse growth of the cross sections [1]. In QCD at weak coupling the pomeron is described through resummed BFKL ladders, resulting in a large intercept and zero slope [2,3]. The soft Pomeron kinematics suggests an altogether non-perturbative approach. Through duality arguments, Veneziano suggested long ago that the soft Pomeron is a closed string exchange [4]. In QCD the closed string world-sheet can be thought of as the surface spanned by planar gluon diagrams. The quantum theory of planar diagrams in supersymmetric gauge theories is tractable in the double limit of a large number of colors N_c and large 't Hooft coupling λ = g²N_c using the AdS/CFT holographic approach [5]. In the past decade there have been several attempts at describing the soft pomeron using holographic QCD [6][7][8][9][10][11]. In this paper we follow the work in [10] and describe diffractive γ+p → V+p production through the exchange of a soft pomeron in curved AdS5 geometry with a soft or hard wall. This is inherently a bottom-up approach [12], with the holographic or 5th direction playing the role of the scale dimension for the closed string, interpolating between two fixed-size dipoles. We follow the suggestion in [13,14] and describe the intrinsic dipole size distribution of hadrons on the light cone through holographic wave functions in curved AdS5. Diffractive production of vector mesons was investigated in the non-holographic context by many authors [15]. Recently a holographic description was explored in [16] in the context of the color glass condensate, and through reggeized gravitons in [17]. The organization of the paper is as follows: In section 2 we briefly review the set-up for diffractive scattering through a holographic pomeron as a closed surface exchange in curved AdS5 with a (hard) wall. In section 3, we detail the construction of the light cone wavefunctions including their intrinsic light cone dipole distributions. In sections 4 and 5 we make explicit the AdS5 model with a (soft) wall to describe the intrinsic dipole distributions of massive vector mesons. As a check on the intrinsic wavefunctions, we calculate the pertinent vector electromagnetic decay constants. Our numerical results for the partial cross sections and their comparison to vector photoproduction data are given in section 6. Our conclusions are summarized in section 7.
In this section we briefly review the set-up for dipole-dipole scattering using an effective string theory. For that we follow [11] and consider the elastic scattering of two dipoles as depicted in Fig. 1, where b is the impact parameter and the relative angle θ is the Euclidean analogue of the rapidity interval χ [18,19], with cosh χ determined by the invariant s = (p1 + p2)². A. Dipole-dipole correlator Following standard arguments as in [11], the scattering amplitude T in Euclidean space is given in terms of the connected correlator WW of two Wilson loops, each represented by a rectangular loop sustained by a dipole and slanted at a relative angle θ in Euclidean space, as shown in Fig. 1. The leading 1/N_c contribution comes from a closed string exchange, expressed through the string partition function on the cylinder topology with modulus T. The sum is over the string world-sheet with the appropriate gauge fixing or ghost contribution. Here g_s is the string coupling. B. Holographic Pomeron In flat 2 + D⊥ dimensions, the effective string description for long strings is the Polyakov-Luscher action with D⊥ = 2. However, the dipole sources for the incoming Wilson loops vary in size within a hadron. To account for this change and to enforce conformality at short distances, we follow [9] and identify the dipole size z with the holographic direction. The stringy exchange in (4) is then in curved AdS in 2 + D⊥ dimensions with D⊥ = 3. At large relative rapidity χ this exchange is dominated by the string tachyon mode, with the result given in (6), where ∆(χ, ξ) refers to the tachyon propagator in walled AdS. It solves a curved diffusion equation in the given metric within 0 ≤ z ≤ z0 with a zero current at the wall, with the chordal distances expressed through u = ln(z0/z) and u′ = ln(z0/z′). The holographic Pomeron intercept and diffusion constant follow accordingly. The string coupling in walled AdS is identified as g_s = κ_g λ/(4πN_c) and α′/z0² = 1/√λ. Here κ_g is an overall dimensionless parameter that takes into account the arbitrariness in the normalization of the integration measure in (4). This analysis of the holographic Pomeron differs from the (distorted) spin-2 graviton exchange in [8], as the graviton is massive in walled AdS5. Our approach is similar to the one followed in [11], with the difference that 2 + D⊥ = 5 and not 10 [9]. It is an effective approach along the bottom-up scenario of AdS5. Modulo different parameters, the holographic Pomeron yields a dipole-dipole total cross section that is similar to the one following from BFKL exchanges [20,21], and a wee-dipole density that is consistent with saturation at HERA [22]. III. PHOTON-HADRON SCATTERING In a valence quark picture an incoming meson is considered as a dipole made of a qq̄ pair, while a baryon is considered as a dipole made of a quark-diquark pair. The quantum scattering amplitude follows by assigning to the scattering pairs dipole sizes r1,2 and distributing them within the quantum mechanical amplitude of the pertinent hadron. At large √s the scattering particles propagate along the light cone and are conveniently described by light cone wave functions. Typically, the latter are given in terms of an intrinsic wavefunction Ψ(x, r) for a dipole of size r carrying a fraction x of the parton longitudinal momentum. With this in mind, the scattering amplitude for the diffractive vector meson photo-production process γ + p → V + p reads as given in (12); the 1/4π normalization conforms with the light cone rules.
Note that in flat D ⊥ -space (also for ξ 1), the propagator (9) simplifies after the substitution z 2 0 D → α 2 . For an estimate of (12) we may insert (14) into (12), ignore the wall and assume z ∼ z to carry out the integration in (12) exactly with the Pomeron trajectory A. Photon wave function The description of the light cone photon wave function in terms of a qq pair follows from light cone perturbation theory as described in [23]. Let Q 2 be the virtuality of the photon of polarization h. The amplitude for finding a qq pair in the virtual photon with light cone momentum fractions (x,x) is given by [15,23] with Ψ γ h,h the matrix entries in helicity of Ψ γ in (12). Here ee f is the charge of a quark of flavor f , 2 = xxQ 2 + m 2 f , and K 0,1 are modified Bessel functions. Also (r, θ r ) are the 2-dimensional dipole polar coordinates. While the photo-production analysis to be detailed below corresponds to Q 2 = 0, we will carry the analysis for general Q 2 for future reference. B. Hadron wave functions We start by defining the proton (squared) wave function for a pair of quark-diquark as by simply assuming equal sharing of the longitudinal momentum among the pair, and a fixed dipole size r p , with the normalization The vector meson wave function on the light cone will be sought by analogy with the photon wave function given above. Specifically we write where Ψ V h,h are the matrix entries in helicity of Ψ V in (12). The intrinsic f L,T (x, r) dipole distributions for the vector mesons will be sought below in the holographic construction by identifying the holographic direction in the description of massive vector mesons with the dipole size [13,14]. C. Partial cross sections The partial diffractive cross sections for the production of longitudinal and transverse vector mesons are given by with the virtual-photon-vector-meson transition amplitudes following from the contraction of the helicity matrix elements (17)(18)(19)(20). The results are The vector charge e V is computed as the average charge in a state with flavor content V = f a ff f . The elastic differential cross section follows as IV. fL,T FROM HOLOGRAPHY The intrinsic light cone distributions in the vector mesons is inherently non-perturbative. Our holographic set-up for the description of the γ+p → V +p process as a dipole-dipole scattering through a holographic pomeron in AdS 5 suggests that we identify the intrinsic light cone distributions f L,T with the holographic wave function of massive Spin-1 mesons in AdS 5 . The mass will be set through a tachyon field in bulk. A. AdS model for Spin-1 With this in mind, consider an AdS 5 geometry with a vector gauge field A and a dimensionless tachyon field X described by the non-anomalous action with DX = dX + AX and F = dA, M, N = 0, 1, 2, 3, z and signature (−, +, +, +, +) The coupling g 2 5 ≡ 12π 2 /N c is fixed by standard arguments [12]. The background tachyon field satisfies d dz which is solved by The constants in (27) are fixed by the holographic dictionary [5,12] near the UV boundary (z ≈ 0) In the heavy quark limit Q Q → 0, so X(z) ≈ M z. In the presence of X(z), the vector gauge field satisfies We now seek a plane-wave vector meson with 4dimensional spatial polarization µ in the form which yields We now use the solution for X(z) ≈ c 1 z + c 2 z 3 with c 2 = 0 (no heavy chiral condensate), and identify 4g 2 5 c 2 1 = (2m f ) 2 with m f the (constituent) quark mass. 
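Before turning to the bulk spin-1 problem, it may help to make Sec. III A concrete with a short numerical sketch of the light-cone photon wavefunction. The expressions coded below are the squared longitudinal and transverse qq̄ photon wavefunctions in the form commonly quoted in dipole-model analyses; the overall normalization and helicity bookkeeping follow that standard convention rather than Eqs. (17)-(19) of the text, which are not reproduced in this excerpt, and the quark masses and charges used in the example are illustrative only.

```python
# Hedged sketch of the squared light-cone photon wavefunctions in the q-qbar
# dipole picture, in the form commonly quoted in dipole-model analyses.
# Normalization and helicity bookkeeping follow that standard convention and
# are not necessarily identical to the paper's Eqs. (17)-(19); quark masses
# and charges below are illustrative.
import numpy as np
from scipy.special import k0, k1

ALPHA_EM = 1.0 / 137.036
NC = 3.0

def eps(x, Q2, mf):
    """epsilon, with epsilon^2 = x*(1-x)*Q^2 + m_f^2 as defined in the text."""
    return np.sqrt(x * (1.0 - x) * Q2 + mf**2)

def psi2_T(x, r, Q2, mf, ef):
    """|Psi_T|^2 summed over quark helicities (standard dipole-model form)."""
    e = eps(x, Q2, mf)
    return (NC * ALPHA_EM * ef**2 / (2.0 * np.pi**2)) * (
        (x**2 + (1.0 - x)**2) * e**2 * k1(e * r)**2 + mf**2 * k0(e * r)**2)

def psi2_L(x, r, Q2, mf, ef):
    """|Psi_L|^2; vanishes at the photoproduction point Q^2 = 0."""
    e = eps(x, Q2, mf)
    return (NC * ALPHA_EM * ef**2 / (2.0 * np.pi**2)) * \
        4.0 * Q2 * x**2 * (1.0 - x)**2 * k0(e * r)**2

if __name__ == "__main__":
    # Strange quark, phi-like content; r in GeV^-1, Q^2 in GeV^2.
    x, r, mf, ef = 0.5, 2.0, 0.45, -1.0 / 3.0
    print(psi2_T(x, r, 0.0, mf, ef), psi2_L(x, r, 0.0, mf, ef))   # Q^2 = 0
    print(psi2_T(x, r, 2.0, mf, ef), psi2_L(x, r, 2.0, mf, ef))   # Q^2 = 2
```

At Q² = 0 only the transverse component contributes, which is the photoproduction limit used in the comparisons of Sec. VI.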
Thus near the boundary We can now either solve (32) using a hard-wall by restricting (32) to the slab geometry 0 ≤ z ≤ z 0 , or introducing a soft wall [24]. The former is a Bessel function with a spectrum that does not Reggeizes, while the latter is usually the one favored by the light-cone with a spectrum that Reggeizes. The minimal soft wall amounts Defining E = M 2 − (2m f ) 2 , it follows that M 2 n = 4κ 2 (n + 1) + (2m f ) 2 ϕ n (z) ∼ (κz) The meson spectrum Reggeizes. The value for κ = √ σ T /2 ≈ 1 2 GeV is fixed by the string tension. B. Intrinsic wave functions We now suggest that the holographic wavefunction can be related to the intrinsic amplitudes f L,T for the dipole distribution in the light cone wavefunctions for the vector mesons in (20). For that we note that the main part of the transverse vector in (20) satisfies Ψ T ∼ ∇f T . With this in mind, we identify the holographic coordinate z with the relative dipole size r through z = √ xxr [13,14], and match the r-probability of the intrinsic state to the z-probability of the spin-1 state in bulk AdS 5 , The extra 1 z in the bracket is the warping factor. Solving for f T we obtain For a massive spin-1 meson with the helicity content and quark mass analogous to the γ * ∼qq content as ansatz in (37), we will assume the holographic dipole content derived in (37), with instead general overall constants More specifically we have (39) is in agreement with the intrinsic dipole wave function developed in [14] using the light cone holographic procedure for m f = 0. We note that (35) describes a massive spin-1 gauge field in AdS 5 . V. LEPTONIC DECAY CONSTANTS The size of the light cone wavefunction is empirically constrained by the electromagnetic decay width V → e + e − as captured by the measured vector decay constant f V for each of the vector mesons, This puts an empirical constraint on the longitudinal and transverse light cone wavefunctions (37) using the holographic intrinsic wavefunctions (39) as suggested earlier. A. Longitudinal More specifically, the longitudinal wavefunction gives for the right-hand-side in (43) The left-hand-side in (43) can be reduced using the light cone rules in the Appendix of [23] together with the longitudinal wavefunction (20) to have The first bracket refers to the reduction of the current, and the second bracket to the reduction of the longitudinal wavefunction. The result for the vector decay constant from the longitudinal current J + em is after the use of the normalization N L as given in (42). For example, for the rho meson f ρ /κ = 9 √ 5/(32 √ 2), while for the phi meson f φ /κ = 3 √ 5/32. B. Transverse For a consistency check, the same rules apply to the transverse component of the current J 1 em . The transverse wavefunction gives for the right-hand side of (43) The left-hand-side can be reduced using also the light cone rules The first contribution stems from the reduction of the current and the second contribution from the reduction of the transverse wavefunction. The ∓ signs in (48) follows the h = ± assignments. Using the explicit form of the wavefunction (37) and performing an integration by parts, we have the identity Inserting (49) in (48) gives for the left-hand-side which reduces to Substituting the value of κ from the Regge spectrum (34) yields the transverse to longitudinal ratio for the decay constants with ζ = 2m f /M V . In Fig. 2 we show the behavior of (52) in the range ζ = 0, 1 from the massless to the heavy quark limit where it reaches 1. VI. 
NUMERICAL ANALYSIS To carry out the numerical analysis, we can partially eliminate the model dependence in the transition amplitudes (22) by trading κ in the normalizations N_{L,T} in (42) with (46). Here κ is fixed by the ground state meson mass in (34). With the exception of g_s, κ, m_f, all holographic parameters D_⊥, λ, s_0, z_0, z_p are fixed by the DIS analysis in [9], as listed in Table I. For the light vector mesons, we have set m_{u,d,s} at their constituent values, and m_{c,b} at their PDG values. The value of κ is adjusted to reproduce the best value for the vector meson decay constants. The vector masses M_V are then fixed by (55), as listed in Table I. In our holographic set-up, the lower decay constants for the heavier mesons imply smaller values of κ (string tension) for J/Ψ, Υ in comparison to the ρ, for instance. Since f_V^2 is a measure of the compactness of the wavefunction at the origin, this is reasonable, although the spread in the transverse direction appears to be larger in the absence of the Coulombic interactions which are important for J/Ψ, Υ. Finally, the string coupling g_s is adjusted to reproduce the overall normalization of the cross section for each vector meson channel. A. Radiative widths In terms of (46), the radiative decay width Γ(V → e+e−) follows. We note that (46) is finite in the heavy quark limit, as expected from the Isgur-Wise symmetry. We use (34) with e_V fixed by (23). The holographic decay widths are in agreement with the empirical ones for the light vector mesons ρ, ω, φ, but substantially smaller for the heavy vector mesons J/Ψ, Υ. This may be an indication of the strong Coulomb corrections in the heavy quarkonia missing in our current holographic construction. One way to remedy this is through the use of improved holographic QCD [35]. In Fig. 3 we show the differential ρ-photoproduction cross section versus |t| for E_γ = 2.8 GeV. At this energy the photon size is of the order of the hadronic sizes and is sensitive to non-perturbative physics. In Fig. 4 we show the total cross section for ω-photoproduction in the range of low mass photons. The discrepancy close to threshold may be due to t-channel sigma exchange and the s-channel photo-excitation of the ∆(1232), N(1520), N(1720) in the intermediate nucleon state, not retained in our analysis. Note that both the ρ and ω have comparable transverse sizes, with 1/κ ≈ 1/3 fm, but very different decay constants. We expect their differential and total cross sections to be in the ratio of their decay constants, say f_ω^2/f_ρ^2 ≈ 1/10. In Figs. 5-8, we present the total and differential cross sections for the φ-photoproduction process γp → φp. In Fig. 6 we compare our results to the available CLAS and LEPS data. Our results agree well with the backward-angle data, but overshoot the forward-angle data. In Figs. 7-8, the differential cross sections are shown. The agreement at large √s probes mostly the Pomeron exchange. Note that our overall fit to the φ-decay constant implies a transverse size for the φ that is comparable to the ρ, ω sizes, which is reasonable. The differential and total cross sections are again expected to be in the ratio of the squared decay constants. (FIG. 5 caption: Total cross section for γp → φp from threshold to √s = 100 GeV; data are taken from [27][28][29].) In Fig. 9 we show the differential cross section for the γp → J/Ψp process, and in Fig. 10 we show the differential cross section for the γp → Υp process.
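Two small cross-checks of the statements above can be made explicit (a sketch, under stated assumptions). First, the soft-wall relation M_n² = 4κ²(n+1) + (2m_f)² of Sec. IV can be inverted to trade κ for a ground-state mass, as invoked around Eqs. (34) and (55). Second, the expectation f_ω²/f_ρ² ≈ 1/10 can be checked directly against measured leptonic widths if one assumes the usual vector-meson-dominance scaling Γ(V → e+e−) ∝ f_V²/M_V, with the quark-charge factor absorbed into f_V as in the definition of e_V used here; since Eq. (56) is not reproduced in this excerpt, that scaling and the approximate PDG inputs below are assumptions of this sketch, not the paper's numbers.

```python
# Sketch: (i) soft-wall spectrum M_n^2 = 4*kappa^2*(n+1) + (2*m_f)^2 and its
# inversion for kappa; (ii) empirical f_V^2 ratios from leptonic widths,
# assuming Gamma(V -> e+ e-) ~ f_V^2 / M_V (charge factor absorbed in f_V).
import numpy as np

def soft_wall_mass(n, kappa, mf):
    """Radial excitation mass (GeV) from the soft-wall relation."""
    return np.sqrt(4.0 * kappa**2 * (n + 1) + (2.0 * mf)**2)

def kappa_from_mass(MV, mf):
    """Invert the n = 0 relation: kappa = sqrt(M_V^2 - 4*m_f^2)/2."""
    return 0.5 * np.sqrt(MV**2 - 4.0 * mf**2)

def fV2_ratio(gamma1_keV, M1, gamma2_keV, M2):
    """f_{V1}^2 / f_{V2}^2 from Gamma(V -> ee) ~ f_V^2 / M_V."""
    return (gamma1_keV * M1) / (gamma2_keV * M2)

if __name__ == "__main__":
    # Illustrative values only (kappa in GeV, constituent-like quark mass).
    kappa, mf = 0.5, 0.33
    print([round(soft_wall_mass(n, kappa, mf), 3) for n in range(4)])
    print(kappa_from_mass(3.097, 1.27))          # J/psi with m_c = 1.27 GeV
    # Approximate PDG leptonic widths (keV) and masses (GeV):
    print(fV2_ratio(0.60, 0.783, 7.04, 0.775))   # omega/rho ~ 0.09  (~1/10)
    print(fV2_ratio(1.27, 1.019, 7.04, 0.775))   # phi/rho   ~ 0.24
```

The ω/ρ ratio obtained this way is about 0.09, consistent with the f_ω²/f_ρ² ≈ 1/10 estimate quoted above.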
We note that 2m_f = 2.58 and 8.83 GeV, respectively, so √s > 10 GeV is necessary to eikonalize the heavy quarks. These results are only exploratory, since the transverse sizes of the J/Ψ, Υ are large in our current construction, as we noted earlier. Remedying this shortcoming requires including the effects of the colored Coulomb interaction, which is important in these quarkonia states. In holography this can be achieved through the use of improved holographic QCD [35], which is beyond the scope of our current analysis. VII. CONCLUSIONS In QCD the diffractive photoproduction of vector mesons on protons at large √s is described as the scattering of two fixed-size dipoles running on the light cone and exchanging a soft pomeron. In a given hadron the distribution of fixed-size dipoles is given by the intrinsic dipole distribution in the light cone wavefunction. The soft pomeron exchange and the intrinsic dipole distribution are both non-perturbative in nature, and we use the holographic construction in AdS_5 to describe both. The soft Pomeron parameters used in this work were previously constrained by the DIS data [9], so the extension to the photoproduction mechanism is a further test of the holographic construction. The new parameter characterizing the transverse size of the vector mesons was adjusted to reproduce the meson radiative decays and found to be consistent with the expected string tension characteristic of the vector Regge trajectory. Comparison of our results to the data for photoproduction of vector mesons shows fair agreement for the ρ, ω, φ, although the inclusion of Reggeon exchanges may improve our description at low photon masses near threshold. At high photon masses, perturbative QCD scaling laws are expected. Our analysis of the photoproduction of J/Ψ, Υ is limited, since the present construction does not account for the substantial Coulomb effects in these quarkonia. We hope to address this issue and others in future work.
4,793.6
2018-04-25T00:00:00.000
[ "Physics" ]
Understanding theodicy and anthropodicy in the perspective of Job and its implications for human suffering of the signs and symbols in the text. The results of this study indicate that the concept of anthropodicy stands as a complement to the idea of theodicy, which can help humans - especially believers - to understand the meaning of suffering and their vocation in a world full of uncertainty while still having faith in God, who is sovereign over all. Contribution: This article contributes to providing an understanding of anthropodicy from Job’s perspective, so that humans see suffering as God’s sovereignty and as something that God allows in order to see God’s omnipotence. silence on God's part is not a symptom of his displeasure but his remoteness as Creator. The Creator's remoteness implies that he is the first to speak and can choose to remain silent even if his creatures are in great danger (Jong 2013;Maletta 2021). In his speech and his silence, God acts according to his divine plan, which is a mystery to humanity; adversity does not always point to God's wrath and human transgression, and holy silence does not always indicate divine wrath (Kanov 2021). The usual conclusion drawn from this reflection is that God always cares about humans because he feels and experiences human suffering. Suffering humans must ultimately accept suffering -the mystery -with hope in God (Hoskins 2020). But another thing that needs to be discussed is how humans should act in the face of suffering itself. Should a person be silent with his or her suffering, drown in surrender and cry out, 'regretfully, I sit in dust and ashes'? (Job 42:6) To answer this question, the author tries to dig deeper into the book of Job, gaining truth values from the story of Job's suffering life. This research uses qualitative research methods by collecting data, analysing and interpreting them (Gioia 2021) to find a deep understanding of a phenomenon, fact or reality (Gear, Eppel & Koziol-Mclain 2018). This article begins by presenting the outline of the book of Job, analysing it and then drawing conclusions about the value or meaning of suffering in human life from a different perspective. Outline of the book of Job The book of Job begins with an introduction that introduces the reader to the book's central character, Job. It is said that Job was a devout and wealthy man who lived in the land of Uz. Most interpreters state that the land of the Uz was east of Israel in the Arabian desert, possibly between Damascus and the Euphrates River, which is today the border area between Jordan, Iraq and Saudi Arabia (Wright & Măcelaru 2018). No man in the east was richer than Job. In the beginning, Job is shown as an ideal human figure, having wealth and being loyal to God. The first heavenly congregation After a brief description of Job's godly and prosperous life, the setting changes. Now the reader is brought to the atmosphere of the heavenly congregation, where God's children come before God. Also among them are demons. In the original language, Satan also means accuser or claimant. This designation does not refer to a personal name but rather a functional description; however, there is no doubt that the devil as a creature is the enemy of God (Hill & Walton 2000). Satan does not come before God on his throne at will, but by God's sovereign will, as he calls Satan to come to him. 
In the dialogue between God and the devil, God takes the initiative to open the conversation about Job as his faithful servant, saying there is no one on earth like him. But the devil claims that God's blessings to the righteous hinder the development of actual truth, because people live righteously for the benefits (gifts) they can receive. Satan challenges God to take all that belongs to Job to prove his argument. God bets with the claimant on Job's integrity. The second suffering This dialogue between God and the devil in the second heavenly congregation results in the suffering Job feels in his own body. The terms 'flesh and bones' refer to a person's entire being (Boss 2010), so it can be understood that Job endures excruciating pain throughout his body. The disease he may be suffering from is black leprosy, the most disgusting and dangerous type of leprosy, which causes scabies all over the skin, swelling of the legs and face, hair loss and loss of the sense of touch; the voice becomes nasal and hoarse, and bones and skin are covered with spots and tumours, initially red and then black (Sørensen & Kalleberg 2001). Job and his friends Hearing the news about Job, three of his friends (Eliphaz, Bildad and Zophar) visit him. When they see Job's suffering, they weep and mourn with him. So severe is the suffering that they sit for a week and mourn without uttering a word (2:13). After that, Job begins to speak. But Job's words seem different from what he has said before. He complains that his life is miserable. The patient Job seems to have turned into an impatient Job. This is, of course, very human, considering that Job is described as experiencing the apocalypse. The friends then take turns advising Job, and to each of his friends' advice, Job always answers. The arguments of Job's friends are the same, namely that suffering is the result of personal sin (retributive justice theory). Therefore, Job's great suffering proves his sinfulness and hypocrisy (Janzen 2012). Job's three friends advise him to repent and turn to God based on this view. In every answer Job gives to his friends, he always emphasises that he is innocent. After a long dialogue between Job and his three friends, Elihu's argument draws the reader's attention. Elihu is a mysterious figure. He is not mentioned as part of Job's three friends. The absence of Job's response and God's response to Elihu's argument shows that it seems that Elihu's argument is a part that was added later (Vicchio 2020). However, Elihu's appearance in Chapter 32 is unique. He was introduced complete with his father's name (Barakheel), his tribal name (the Bus) and the name of his people (the Ram). The name Elihu itself means 'he is my Lord', which is similar to the name Eliyahu, 'Yahweh is my Lord' (Batnitzky & Pardes 2014), while his father's name, Barakheel, means 'God has blessed him' (Whybray 2008). Based on this name, he too is a worshiper of the Lord, just like Abraham. Elihu expresses his anger at Job for thinking he is more righteous than God (32:2). On the other hand, Elihu also questions the arguments of Job's three friends, who corner Job instead of providing a solution to his problem. In his argument, Elihu puts forward three new views regarding the problem of Job, which contradict the theory of retributive justice that Eliphaz, Bildad and Zophar have previously put forward. The first view is the view of moral quality. 
This view would suggest that sometimes God uses evil and suffering to develop certain moral qualities, such as fortitude and patience (33:19) (O'Connor 2012). The second view is the testing perspective. This view shows that sometimes God uses evil and suffering to test the character of godly people (33:16; 34:3; 34:36) (Balentine 2021). The third view is the view of the divine plan. This view emphasises that God knows the meaning of Job's suffering better than humans (33:12; 34:31-32; 36:22 and 26-30; 37:23-24) (Van der Zwan 2019). God has a plan far beyond human knowledge. Elihu discusses suffering from a view of the past (retributive justice theory) towards a view of the future (divine plan view). Elihu's view is theodicy. But this view is better than Job's other friends' answers, which contradict God's character. True to his name, in the case of Job, Elihu appears as a defender of God's honour, teaching that God disciplines his servants with mercy and justice (Gray 2010). Elihu's argument is the opening part of God's self-revelation to Job from within the storm. God answered Job from the storm If we think that Elihu's argument is a part that was added later, then the silence of Job and his companions during the argument is reasonable. But furthermore, Job himself is trying to wait for God's answer instead of humanity's answer (31:35), so it cannot be said that Elihu silences Job. God, as the holder of power over Job's suffering, is finally included in the discussion. Chapter 38, verse 1 of the book of Job explains that God answers Job from within the storm. The storm itself is a natural phenomenon (read: natural disaster). Instead of revealing his presence in a soothing bright light, God prefers to reveal himself in the darkness of a dreadful storm. The presence of God from within this storm also presents a new question. Did God mean to threaten Job to repent? The Bible shows that God's theophanies (appearances) are often accompanied by powerful natural phenomena such as dark clouds, storms, earthquakes or fire (Ps 77:18-19; 18:10-13; 97:2; Jdg 5:4; Ps 18:8; Is 30:27; Ps 50:3). The presence of God in a tremendous natural phenomenon emphasises his awesomeness, greatness and omnipotence over the entire universe (creation). Therefore, God's presence in the storm to answer Job cannot be interpreted as a form of action that tries to pressure and threaten Job. Throughout Job's defence of himself, none of his words condemn God as predicted by the accuser (the devil). Job does not focus his argument on his truth. He only repeatedly asserts his innocence to show his point that God has committed an inexplicable (or difficult) act (Boss 2010). However, God's presence in the storm completely silences Job, and he witnesses a universe of complexity far beyond his comprehension. The splendour of God's presence shows the difference between Creator and creation. God's voice from within the storm asks a series of questions that Job cannot answer ('Who are you? Where are you? Are you capable?') (Scott 2020). Even though Job is entirely innocent, he still has to face his ignorance, limitations and nature as a created being, not a creator (Mason 2020). Job's sufferings are only a tiny part of the universe's complexity, too transcendent for limited and weak human wisdom to comprehend (Margulies 2020). If the world and the entire universe were created with such uncertainty, suffering or challenges, could Job have made a better world and universe than that? Certainly not! 
Job's response to God In awe, Job looks to God and listens to everything he says. There is nothing he could say in response to interrupt God. There is only a doxology and an acknowledgement of the majesty and greatness of God. Job's confusion and questions about his suffering go unanswered. But in his encounter with God, Job realises that he cannot know all the complexities of suffering in this world, but he can understand that he belongs to his Creator (Are 1999:297). Job's words in his last sentences seem to indicate his repentance and resignation to his bad luck (42:6). Has Job sinned against God? Morrow explains that the sentence in 42:6 is a complicated passage, and there are differences among scholars regarding the correct translation (Morrow 1986). There are at least three translations of the book of Job 42:6 worth noting (Morrow 1986:211-212): 1. Wherefore I retract (or I submit) and I repent on (or on account of) dust and ashes. 2. Wherefore I reject it (implied object in v. 5), and I am consoled for dust and ashes. 3. Wherefore I reject and forswear dust and ashes. The vague and ambiguous language of 42:6 indicates that the author intentionally created a situation that could be interpreted in several ways, according to the theological leanings of the reader (Morrow 1986:225). So the choice of a particular translation is driven by what is believed (reader subjectivity). If the reader is pro-theodicy, then Job's repentance sentence (as shown by the first translation) will be considered natural because he has been against God. But did Job not speak honestly in his complaint against God (questioning God's actions) that he did not sin by cursing or blaspheming God? In this case, Job's conversion is less understandable. The second and third translations are more in favour of Job. Job, who previously only heard stories about God from people's mouths, now no longer views these stories as necessary because he has experienced an encounter with God. Thus, he is comforted by dust and ashes. Meanwhile, the third translation shows Job's rejection of dust and ashes themselves. The terms dust and ashes have previously been http://www.hts.org.za Open Access used by Job to denote his suffering situation (30:19), so this term does not refer to a place but is a metaphor to describe his suffering and humiliation (Fokkelman 2012). The third version of Job's response presents Job as a hero figure who refuses to continue suffering. This response shows resistance -not to God but to the heart's tendency to surrender and give up. Job cannot know why suffering came the way it did, but he can understand that grief does not rob him of his calling in God's creation (Are 1999). Evidence of this truth can be seen in God's subsequent actions. He restores Job's condition, instead of blaming and punishing him. Instead, God condemns Job's three friends, declaring Job to be his servant who spoke the truth (42:7). Job's condition is restored Job's story has a happy ending. He proves to be a faithful servant of God and eventually receives restoration from God. Job's restoration takes place in his relationship with God, society and the natural order. Job intercedes for his friends, showing his role as a mediator between God and the community. Eating together with all his relatives and relatives marks his return to social life, and all the material goods and offspring he receives signify that he is back to living in harmony with the universe (Habel 1985). 
Job, who refused to endure constant suffering, chooses to make peace with the world with all its uncertainties. Restoration from God also, of course, involves the activity of Job himself. The wealth given by his brothers and relatives becomes the basis for Job to rebuild his estate. Job's wealth must have doubled from its previous amount because of God's blessing for what his brothers and relatives have given him and his efforts (Hartley 1988). Theodicy and anthropodicy in the perspective of Job's suffering In the theodicy concept, Job's suffering is God's will for Job's good, not God's wrath for sin and evil. What is interesting is that the presence of Job's wife and her hurtful words to Job represent human responses in general in the face of suffering. If you view theodicy as God's justice, what happens to Job was because of his sin. The book of Job reveals that Job's friends are there for sympathy and comfort. Job's friends cannot understand why someone like Job, wealthy and prosperous, could suffer. They feel it appropriate to reassure Job that his suffering must be his fault (Kou 2003;Nadar 2003:350;Stoeber 2005). His friends criticise him for being inconsistent; they even try to make him see that he is cursing. They realise what his protest of innocence implies in their view of things, and they are offended (Gutiérrez 1987). In the end, God defends Job in front of his friends. The Lord says to Eliphaz, 'to you and your two friends, because you do not tell the truth about me as my servant Job did' (42:7, 8) (Kraemer 1995:33). The book of Job rejects theodicy themes that tend to explain suffering in terms of punishment by God. Rejection of theories that explain suffering as a result of human actions (Pellach 2012;Stoeber 2005). Job does not accept that this suffering is because of his sin and evil. Job's response as a human being shows the concept of anthropodicy towards suffering. There are lessons to be learned from suffering. The study concerns the human reaction to suffering rather than the causes, which remain in the realm of divine knowledge and beyond human understanding (Pellach 2012). Job's anthropodicy concept of protesting against God states that what he does is good in his sight. God ultimately gives Job the right in his protest against his humble position. This sentence also implies that God gave humans the right to protest, like Job, against their suffering. People suffering have the right to scream, and their cries must be taken very seriously (Tönsing 1996). In the 24th chapter of the book of Job, Job laments that those who cause people suffering, those who oppress and exploit, are not punished. He sees all injustices from the perspective of the poor, not as a wealthy farmer. He can only do this because he has experienced being poor; he has experienced the pain and suffering of the poor (Nadar 2003:349). Job responds to the words of his wife and friends and to the grief God has permitted by viewing it as a process of getting closer to God. If one wants to go deeper into this mystery of redemptive suffering, God allows us to feel, not just to know -to feel what it means to be empty, abandoned and unnoticed (Rohr 1996:15). Anthropodicy views suffering as something that humans with an excellent human existence should experience. Job experiences both theodicy and anthropodicy from the perspective of God's presence. 
From theodicy to anthropodicy: An implication The third translated version of Job's response provides readers with a more practical understanding of dealing with the realities of a world filled with various contexts of suffering, injustice and oppression (Patrick 1976:369). This version of the translation was brought up by Patrick after he saw that there was a difference in meaning between the Revised Standard Version of the Bible (RSV) and the Hebrew version. Rather than theodicy, Job's answer leads us to the concept of anthropodicy. Anthropodicy itself is the idea that humans can independently handle evil or suffering (Hall et al. 2019). 'Anthropodicy is visibly clinging to the human ability to create goodness amid suffering' (Untea 2019). This thinking is becoming more substantial along with the rise of the social sciences. However, in this case, anthropodicy is not proposed as antitheodicy. More precisely, it is a complement to theodicy itself. The anthropodicy narrative implicit in Job's answer stems from his realisation of theodicy. His encounter with God makes him realise that God is far beyond his understanding and that he and his sufferings are only a tiny part of the complex universe created and governed by God. In this realisation, Job looks back at the reality he is facing and refuses or does not want to sit in dust and ashes anymore. Job, who has refused to continue to suffer, accepts his nature as a creature, makes peace with his world of uncertainty and works for his welfare. In his experiences -especially his suffering -Job has led the reader to understand the first theodicy, God's omnipotence over suffering. He transcends the wisdom of all creation and then anthropodicy -human beings of faith who remain empowered in the context of suffering, injustice and oppression. The concept of anthropodicy proposed here invites the reader of Job's story to underline several things. First, complaints, groans and cries when faced with the context of suffering, injustice and oppression are natural and human things. That is not bad, as long as there is no betrayal of God and denial of his omnipotence. Job personally is not always patient, but he is always faithful to God. In the end, the suffering cannot uproot Job's belief in God's faithfulness (Are 1999). Second, when faced with suffering, injustice and oppression, humans will often question (or grapple with) God's omnipotence and justice -just like Job did. It is also very human. Job's integrity emerges from within and is shaped by the process of his struggle with the laws of an almighty and just God -and which at the same time contradicts the situation of suffering he is in (Ticciati 2005). Questioning God's actions is entirely different from blaspheming or cursing God. When a suffering person questions God's omnipotence and justice with the same desire as Job -longing for the Almighty's answer (31:35) -indeed, the suffering person will experience an encounter with God and be comforted in his or her suffering. Third, in suffering, injustice and oppression, despair and surrender are not options. Job's life being restored by God, of course, involves the efforts and work of Job as a human being. Instead of getting bogged down in adversity and looking for justifications, acting and doing something are more practical. It is in works and work that God's blessing is revealed. It is unfortunate to hear a story, for example, about a girl who has been beaten badly by her lover, who merely says that it was his destiny. 
Another story is about a patient with breast cancer who endured pain for a long time and refuses to go to the hospital with the excuse of surrendering her life to God. Some people who hope for God's help during the current coronavirus disease 2019 (COVID-19) pandemic have never done anything for their safety and health. In a global context (about two years running), humans are struggling with the COVID-19 outbreak. This acute respiratory syndrome was first discovered in Wuhan, Hubei province, China and spread throughout the world very quickly, claiming many lives in a short time (Ciotti et al. 2020:365). In this case, believers are not only required to hope for God's help, but also to do something for the safety and health of their families and even those around them. In Exodus 3:7, the Lord says, 'I have seen the tribulation of my people in Egypt. I have heard them weep for their slaves, and I am concerned about their suffering'. God's purpose is to simplify our beliefs until our relationship with him is exactly like that of a child (Frisby 2007). In facing various contexts of suffering, injustice and oppression, the story of Job not only teaches the reader to acknowledge and rely on God's omnipotence in prayer but also invites the reader to remain empowered and take actual actions for the safety and well-being of themselves, their families, the community and even their social environment. Conclusion In the context of human suffering, theodicy provides a Godcentred answer. God is entirely righteous and holy and has absolute power over all creation. But responses like this are less in favour of humans who struggle with suffering. Through an in-depth reading of the book of Job, a new concept comes to the fore -an idea that favours suffering humans. The concept of anthropodicy appears as a complement to the concept of theodicy, which enables humans to be encouraged to face suffering as a natural occurrence in life and to realise their vocation in a world which is full of uncertainty, while still having faith in God, who is sovereign over all creation. The various sufferings experienced by humans do not show God's limitations in helping and managing life. Undoubtedly, the suffering experienced by every human being in the past, present and future is only a tiny part of the universe's complexity, which is often difficult for people to understand and explain by themselves. Almighty God orders everything perfectly, far beyond the understanding of creation. But that does not mean humans can remain silent in resignation and despair when facing suffering. Awareness of the omnipotence of God, who perfectly organises life, must also be a driving force for suffering humans to remain empowered in the face of suffering. It is in human empowerment that the restoration of Almighty God becomes manifest.
5,541.2
2022-08-17T00:00:00.000
[ "Philosophy" ]
Defect-free surface states in modulated photonic lattices We predict that interfaces of modulated photonic lattices can support a novel type of generic surface state. Such linear surface states appear in truncated but otherwise perfect (defect-free) lattices as a direct consequence of the periodic modulation of the lattice potential, without any embedded or nonlinearity-induced defects. This is in sharp contrast to all previous studies, where surface states in linear or nonlinear lattices, such as Tamm or Shockley type surface states, are always associated with the presence of a certain type of structural or induced surface defect. Interfaces separating different physical media can support a special class of transversally localized waves known as surface waves. Linear surface waves have been studied extensively in many branches of physics [1]. For example, electromagnetic waves localized at the boundaries of periodic photonic structures, such as waveguide arrays or photonic crystals, have been extensively analyzed theoretically and experimentally. The appearance of localized surface waves in photonic structures is commonly explained as the manifestation of Tamm or Shockley type localization mechanisms [2,3,4], being associated with the presence of a certain type of surface defect. Tamm states were first identified as localized electronic states at the edge of a truncated periodic potential [3], and they were later found in other systems, e.g. at an interface separating periodic and homogeneous dielectric optical media [5,6]. In discrete systems, such as arrays of weakly coupled optical waveguides [7], different types of linear and nonlinear states localized at and near the surface have also been analyzed extensively. It was found that Tamm surface waves can exist at the edge of an array of optical waveguides when the effective refractive index of the boundary waveguide is modified above a certain threshold [8,9,10,11,12,13,14], whereas surface localization was considered to be impossible when all waveguides are exactly identical, as sketched in Fig. 1(a). In the latter case, a beam launched into the array delocalizes due to diffraction [Fig. 1(b)], and it is also strongly reflected from the boundary, as illustrated in Fig. 1(c). In this Letter we predict, for the first time to our knowledge and contrary to the accepted notion, that a novel type of generic defect-free surface wave can exist at the boundary of a periodic array of identical optical waveguides whose axes are periodically curved along the propagation direction, as schematically shown in Fig. 1(d). The periodic bending of the waveguide axes was shown to result in a modification of diffraction [15,16,17,18], whose strength depends nontrivially on the waveguide bending and the optical wavelength. An interesting feature is that the diffraction can be completely suppressed for particular values of the bending amplitude, an effect known as dynamic localization or beam self-collimation [15,16,17,18]. Under these very special conditions, the beam experiences periodic self-imaging, propagating without spreading for hundreds of free-space diffraction lengths, as illustrated in Fig. 1(e). On the other hand, if the beam is launched at the edge of a semi-infinite modulated lattice tuned to self-collimation, one can intuitively expect that it cannot penetrate deep into the lattice, since away from the lattice edge the effect of the boundary is negligible and the coupling between lattice sites is canceled in the self-collimating lattice.
In Fig. 1(f) we indeed observe that the beam remains localized at the surface of the self-collimating modulated lattice. However, our most nontrivial finding detailed below is that surface localization is possible for an extended range of structural parameters even when diffraction is non-vanishing. We study propagation and localization of light in a semi-infinite one-dimensional array of coupled optical waveguides, where the waveguide axes are periodically curved in the propagation direction z with the period L, as shown schematically in Fig. 1(d). When the tilt of beams and waveguides at the input facet is less than the Bragg angle, the beam propagation is primarily characterized by coupling between the fundamental modes of the individual waveguides, and it can be described by the tight-binding equations taking into account the periodic waveguide bending [15,19], where a n (z) is the field amplitude in the n-th waveguide, n = 1, . . ., and a n≤0 ≡ 0 due to the structure termination. Transverse shift x 0 (z) ≡ x 0 (z + L) defines the periodic longitudinal lattice modulation. Coefficient C defines the coupling strength between the neighboring waveguides, it characterizes diffraction in a straight waveguide array with x 0 ≡ 0 [20] [see an example in Fig. 1 Expression (1) shows that the effect of periodic lattice modulation appears through the modifications of phases of the coupling coefficients along the propagation direction z. In order to specially distinguish the effects due to diffraction management, we consider the light propagation in the waveguide arrays with symmetric bending profiles, since asymmetry may introduce other effects due to the modification of refraction, such as beam dragging and steering [21,22,23]. Specifically, we require that In order to analyze light propagation near the surface of a semi-infinite modulated lattice, we first consider the case of small modulation periods L, such that the parameter κ = 2π/L is large, κ ≫ 1. Then we can employ the asymptotic expansion (see, e.g., Ref. [24]) a n (z) = u n (z) + m =0 v n,m (z) exp(imκz), where u n (z) have the meaning of the averaged field values over the modulation period, and we take into account first-and second-order terms for the oscillatory corrections which have zero average, v n,m = v n,m (1) Since the modulation is periodic, we can perform Fourier expansion of the coupling coefficients as C exp[−iẋ 0 (z)] = m C m exp(imκz). Then, in the regime close to selfcollimation when the average coupling is small, |C 0 | ∼ O(κ −1 ), we combine the terms of the same orders [24] and finally obtain the effective equations for the slowly varying functions u n (z), Here δ is the Kronecker delta, the bar stands for the complex conjugation, and u n≤0 ≡ 0. From these equations one can see that the effect of periodic modulation is to introduce the "virtual" defects ∆ 1 and ∆ 2 at the lattice boundary. We now seek solutions in the form of stationary modes, u n (z) = u n (0) exp(ikz/L), where k is the Bloch wave-number. The values of |k| ≤ 2|C 0 |L correspond to a transmission band, where the modes are infinitely extended. On the other hand, the modes can become localized at the surface of semi-infinite modulated lattices is |k| > 2|C 0 |L, and we find that such solutions exist if the modulation parameters are sufficiently close to the self-collimation condition where |C 0 | is small. 
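The Fourier expansion of the modulated coupling and the role of the self-collimation condition |C_0| → 0 invoked above can be illustrated numerically; the sketch below uses the sinusoidal bending profile that is analyzed in detail just below. Because the full form of Eq. (1) is not reproduced in this excerpt, the sign and phase conventions of the tight-binding model used in the sketch are assumptions, and all parameters are purely illustrative.

```python
# Hedged numerical sketch (normalized units): (i) Fourier coefficients of the
# modulated coupling C*exp[-i*xdot0(z)] for the sinusoidal bending profile
# x0(z) = A[cos(2*pi*z/L) - 1] considered below, checked against the
# Jacobi-Anger result C_m = C*J_m(xi*A/A0) with A0 = xi*L/(2*pi), xi ~ 2.405
# the first zero of J_0; (ii) propagation of a single-site edge excitation in
# the truncated lattice, using one common convention of the tight-binding
# model, i da_n/dz + C[e^{i xdot0} a_{n+1} + e^{-i xdot0} a_{n-1}] = 0.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import jv, jn_zeros

C, L, N = 1.0, 1.0, 60
xi = jn_zeros(0, 1)[0]              # ~2.4048
A0 = xi * L / (2.0 * np.pi)         # self-collimation amplitude

def xdot0(z, A):
    return -A * (2.0 * np.pi / L) * np.sin(2.0 * np.pi * z / L)

def fourier_coefficient(m, A, nz=4096):
    """C_m = (1/L) * int_0^L C exp(-i xdot0) exp(-i m 2*pi*z/L) dz."""
    z = np.linspace(0.0, L, nz, endpoint=False)
    return np.mean(C * np.exp(-1j * xdot0(z, A))
                   * np.exp(-1j * m * 2.0 * np.pi * z / L))

def rhs(z, a, A):
    """Semi-infinite array: a_0 = 0 at the surface, open edge at n = N."""
    ph = np.exp(1j * xdot0(z, A))
    up = np.zeros_like(a); up[:-1] = a[1:]
    dn = np.zeros_like(a); dn[1:] = a[:-1]
    return 1j * C * (ph * up + np.conj(ph) * dn)

def edge_power(A, zmax=20.0):
    a0 = np.zeros(N, dtype=complex); a0[0] = 1.0      # excite edge waveguide
    sol = solve_ivp(rhs, (0.0, zmax), a0, args=(A,), rtol=1e-8, atol=1e-10)
    aT = sol.y[:, -1]
    return np.abs(aT[0])**2 / np.sum(np.abs(aT)**2)

if __name__ == "__main__":
    for A in (0.5 * A0, A0):
        print([round(fourier_coefficient(m, A).real, 4) for m in (0, 1, 2)],
              [round(C * jv(m, xi * A / A0), 4) for m in (0, 1, 2)])
    print("edge power, straight lattice      :", edge_power(0.0))
    print("edge power, self-collimating (A0) :", edge_power(A0))
```

In this sketch, at A = A_0 the average coupling C_0 vanishes and nearly all of the power stays in the boundary waveguide, while the same edge excitation in the straight lattice spreads into the bulk, in qualitative agreement with the behavior described for Figs. 1(c) and 1(f).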
Specifically, there exists one surface state if α 2 − α 1 ≤ 2 ≤ α 1 + α 2 , and two surface states emerge if We note that if the modulation is symmetric, such thaṫ x 0 (z + L/2) = −ẋ 0 (z), then |C m | ≡ |C −m | and accordingly ∆ 1 = 0, meaning that the modes should always appear in pairs. Moreover, this conclusion is valid even beyond the applicability of the asymptotic expansion, since we identify the exact symmetry of the model Eq. (1) in case of symmetric modulations: for each solution a n (z), b n (z) = (−1) nā n (z + L/2) is also a solution. Therefore, in symmetric structures surface modes always appear in pairs with the Bloch wavenumbers of the opposite sign, As an example, we further consider a sinusoidal modulation function of the form x 0 (z) = A [cos (2πz/L) − 1], similar to the one which has recently been employed to demonstrate dynamical localization in modulated waveguide arrays [15,19]. In this case, the Fourier coefficients can be calculated analytically, C m = CJ m [ξA/A 0 ], and J m is the Bessel function of the first kind of order m. The modulation amplitude A 0 corresponds to the self-collimation condition [15,25], A 0 = ξL/(2π), where ξ ≃ 2.405 is the first root of the Bessel function J 0 . Since the sinusoidal modulation is symmetric, then for each modulation amplitude A such that A − crit < A < A + crit , where A − crit and A + crit are the left and right mode cutoffs, respectively, there exists (at least) a pair of surface modes. We use our asymptotic analysis to estimate the cut-off values in the case of small modulation periods, In order to confirm our analytical results, we calculate numerically the mode spectrum of the original Eq. (1). In Fig. 2(a) one can see that for sufficiently small modulation periods L there indeed exists a pair of symmetric surface modes outside the lattice transmission band, and the wave numbers of surface modes calculated using asymptotic expansion are in excellent agreement with those calculated numerically. At the cross-section z = 0, one mode has unstaggered input profile [ Fig. 2(b)], while the other one exhibits staggered structure [ Fig. 2(c)]. We note that there is very weak additional phase modulation, Im[a n ]/Re[a n ] ∼ 10 −3 , in agreement with the asymptotic analysis predicting real profiles up to second-order corrections. In all the figures, we put C = 1, since results can be mapped to the other coupling values using a simple transformation a n (z, C, L, A) = a n (Cz, C ≡ 1, CL, CA). We further demonstrate that defect-free surface modes in modulated lattices can be effectively generated using single-site excitation of the edge lattice waveguide, if the lattice modulation amplitude A is between the left and the right cut-offs A − crit and A + crit . An example of such sur- face wave excitation after some initial radiation is shown in Fig. 3(b), where even though the lattice modulation is very close to the left cut-off, A ≃ 1.0065A − crit , the surface wave is still very well localized. In contrast, when the beam is launched far away from the surface, it always diffracts if A = A 0 , as shown in Fig. 3(a). This illustrates the fundamental difference between the dynamical localization in infinite modulated lattices [15,25], and formation of the defect-free surface modes in truncated modulated lattices. While dynamical localization is a purely resonant effect which takes place just for one single value of the modulation amplitude A = A 0 [see Fig. 
1(e)], the families of defect-free surface modes always exist in a finite range of modulation amplitudes sufficiently close to the self-collimation value A_0. If the deviation of the modulation amplitude from the self-collimation value is greater than the one determined by the left and right cut-offs, the defect-free modes disappear, and the beam always diffracts irrespective of its input position in the semi-infinite modulated lattice, see Figs. 3(c) and 3(d). For large modulation periods the asymptotic analysis is not valid, and we use numerical simulations to find families of the defect-free surface modes. These results are summarized in Fig. 4(a), where the domain of existence of the defect-free surface modes on the (L, A) parameter plane is hatched. For small modulation periods, the asymptotic expansion provides an estimate for the surface-mode cut-offs (dashed lines). When the modulation period grows, the number of defect-free modes increases, as shown in Figs. 4(b) and (c). For large modulation periods the domain of existence of the defect-free surface modes is basically limited by the region where the lattice transmission band extends over the whole Brillouin zone from −π to π, and therefore localized states cannot exist. The region where localized modes cannot exist [shown with solid shading in Fig. 4(a)] is given by the relation L ≥ π/(2|C_0|) = π/(2C|J_0[ξA/A_0]|). We note that although the defect-free surface states were introduced here for modulated photonic lattices, such a novel type of surface mode may also appear in other fields where the wave dynamics is governed by coupled Schrödinger-type equations of the form (1), with z standing for time. In particular, by introducing a special periodic shift of the lattice potential it may be possible to observe this peculiar surface localization in Bose-Einstein condensates. On the other hand, our results indicate the possibility of a novel mechanism of surface localization of charged particles in complex time-varying driving electric fields, for which the possibility of dynamical localization has been suggested earlier [25]. In conclusion, we have demonstrated, for the first time to our knowledge, that interfaces of modulated photonic lattices can support a novel type of generic defect-free surface state. Such surface states appear in truncated but otherwise perfect (defect-free) lattices as a direct consequence of the periodic modulation of the lattice potential, without any embedded or nonlinearity-induced defects. This is in sharp contrast to all previous studies, where surface states in linear or nonlinear lattices, such as Tamm or Shockley type surface states, are always associated with the presence of a certain type of surface defect. Using both an asymptotic expansion technique and numerical simulations, we have presented a detailed analysis of the different families of defect-free surface states in modulated lattices. The work was supported by the Australian Research Council through Discovery and Centre of Excellence projects.
3,091.2
2007-12-17T00:00:00.000
[ "Physics" ]
Initialization effects via the nuclear radius on transverse in-plane flow and its disappearance We study the dependence of collective transverse flow and its disappearance on initialization effects via the nuclear radius within the framework of the Isospin-dependent Quantum Molecular Dynamics (IQMD) model. We calculate the balance energy using different parametrizations of the radius available in the literature for the reaction of 12C+12C in order to explain its measured balance energy. A mass-dependent analysis of the balance energy throughout the periodic table is also carried out by changing the default liquid-drop IQMD radius. Introduction The collective transverse flow in heavy-ion collisions is a measure of the pressure build-up during the compression phase and has been used extensively to gain insight into the properties of nuclear matter under different thermodynamical conditions [1][2][3][4][5]. The collective transverse flow is directly connected to the dynamic evolution of the reaction, and is sensitive to the momentum dependence of the mean field [2,3], the nucleon-nucleon cross-section [4] and different equations of state (EOS) [5], as well as to various reaction parameters such as the incident energy [6], colliding geometry [7] and mass of the colliding system [3,7,8]. The beam-energy dependence of the collective transverse flow leads to its disappearance at a particular energy termed the balance energy [9]. The balance energy is the result of the counterbalancing of the attractive mean field (which is dominant at low incident energies) and the repulsive nucleon-nucleon scattering, which decides the fate of the reaction at higher incident energies. The balance energy (representing the vanishing of flow) is of great significance because the experimentally determined balance energy can be easily compared with various theoretical calculations, as it is free from any experimental uncertainties. Detailed theoretical studies using various transport models have revealed its sensitivity to the EOS and the in-medium nucleon-nucleon cross-section, as well as to various entrance-channel parameters [6,[8][9][10][11][12]. At the same time, the collective transverse flow and its disappearance have also been found to depend on the isospin degree of freedom [13,14]. As inferred from the literature, structural and initialization effects play a significant role when studying lighter systems as compared to heavier systems [15,16]. Thus, initialization as well as structural effects in heavy-ion collisions at intermediate energies can be an important concern. The authors of Ref. [16] studied initialization effects on symmetry-energy-sensitive observables such as the free neutron-to-proton ratio (n/p), the π+/π− ratio and the neutron-to-proton differential flow F_x^(n−p), using different parametrizations of Skyrme forces within the framework of the Isospin-dependent Boltzmann-Uehling-Uhlenbeck (IBUU) model. The radius parameter plays a very crucial role in phenomena like fusion, fission, cluster radioactivity, the formation of superheavy nuclei, etc. [17,18]. Even in the framework of the proximity potential, a suitable choice of the radius parametrization is essential to reproduce the experimental data on the fusion barrier nicely [17,18]. Here we aim to study the role of initialization effects on the collective transverse flow and its disappearance via the nuclear radii. This could also, in part, explain the experimental balance energy for the reaction of 12C+12C.
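Since the analysis below hinges on rescaling the initialization radius, a small numerical aside may be useful: the default IQMD initialization radius quoted in the model description that follows is the liquid-drop value R = 1.12 A^(1/3) fm, and the parametrizations considered later amount to moderate rescalings of it (for 12C, for example, the measured Elton radius of 2.3 fm used below corresponds to roughly 90% of the liquid-drop value). The snippet is illustrative only and is not taken from the paper.

```python
# Sketch: default IQMD liquid-drop initialization radius R = 1.12 * A^(1/3) fm
# and a simple rescaling, as used in the analysis below. Illustrative only.
def liquid_drop_radius(A, scale=1.0):
    """Default IQMD initialization radius in fm, optionally rescaled."""
    return scale * 1.12 * A ** (1.0 / 3.0)

if __name__ == "__main__":
    for A in (12, 40):
        print(A, liquid_drop_radius(A), liquid_drop_radius(A, scale=0.9))
    print(2.3 / liquid_drop_radius(12))   # Elton radius of 12C ~ 0.90 of default
```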
The model The IQMD model has been used extensively for studying isospin effects on a large number of observables [19]. The IQMD model is an n-body theory which simulates a heavy ion reaction on an event by event basis, hence preserves the correlations and fluctuations of the reaction. The isospin degree of freedom enters into the calculations via the symmetry potential, cross-sections, and Coulomb interaction. In this model, baryons are represented by Gaussian-shaped density distributions given by: Nucleons are initialized in a sphere with radius R = 1.12 A 1/3 fm, in accordance with the liquiddrop model. Each nucleon occupies a volume of h 3 , so that phase space is uniformly filled. The initial momenta are randomly chosen between 0 and the Fermi momentum ( p F ). The nucleons of the target and projectile interact by two-and three-body Skyrme forces, a Yukawa potential, Coulomb interactions and momentum-dependent interactions. In addition to the use of explicit charge states of all baryons and mesons, a symmetry potential between protons and neutrons corresponding to the Bethe-Weizsacker mass formula has been included. The hadrons propagate using the Hamilton equations of motion: with The baryon potential V i j , in the above relation, reads as Here Z i and Z j denote the charges of the i th and j th baryon, and T 3i and T 3 j are their respective T 3 components (i.e., 1/2 for protons and −1/2 for neutrons). The parameters t 1 , . . . , t 6 are adjusted to the real part of the nucleon optical potential. For the density dependence of the nucleon optical potential, a standard Skyrme-type parametrization is employed. We use a soft momentum-dependent (SMD) equation of state with an isospin and energy-dependent cross-section reduced by 20 % i.e., σ = 0.8σ f ree and the value of 32 MeV for the strength of the symmetry potential in the present simulations. It is worth mentioning that this choice of EOS and in-medium nucleon-nucleon crosssection is also used to reproduce the balance energy for the reactions of 58 Ni + 58 Ni and 58 Fe + 58 Fe for the entire collision geometry [20]. The details about the elastic and inelastic cross-sections for proton-proton and proton-neutron collisions can be found in Ref. [21]. The cross-sections for neutronneutron collisions are assumed to be equal to the proton-proton collision cross-sections. Also the neutron-proton cross-section is three times the neutron-neutron collision cross-section. Two particles collide if their minimum distance d fulfills where 'type' denotes the ingoing collision partners (N-N....). Explicit Pauli blocking is also included; i.e., Pauli blocking of the neutrons and protons is treated separately. We assume that each nucleon occupies a sphere in coordinate and momentum space. This trick yields the same Pauli blocking ratio as an exact calculation of the overlap of the Gaussians would yield. We calculate the fractions P 1 and P 2 of final phase space for each of the two scattering partners that are already occupied by other nucleons with the same isospin as that of the scattered ones. The collision is blocked with the probability and, correspondingly is allowed with the probability 1 -P block . For a nucleus in its ground state, we obtain an averaged blocking probability of P block = 0.96. Whenever an attempted collision is blocked, the scattering partners maintain the original momenta prior to scattering. 
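A compact sketch of the two stochastic ingredients just described may be helpful: the geometric collision criterion d ≤ sqrt(σ_tot/π) and the Pauli-blocking decision built from the occupation fractions P_1 and P_2. The combination P_block = 1 − (1 − P_1)(1 − P_2) is the standard QMD/IQMD choice; since the explicit formula is not reproduced in this excerpt, it should be read as an assumption of the sketch, and the numerical inputs are illustrative.

```python
# Hedged sketch of the collision criterion and Pauli blocking described above.
# P_block = 1 - (1 - P1)(1 - P2) is the standard QMD/IQMD choice (assumed here).
# Units: distances in fm, cross sections in fm^2 (1 fm^2 = 10 mb).
import numpy as np

rng = np.random.default_rng(0)

def collision_allowed(d_min_fm, sigma_tot_fm2, P1, P2):
    """True if an attempted NN collision satisfies the geometric criterion
    d <= sqrt(sigma_tot/pi) and survives Pauli blocking."""
    if d_min_fm > np.sqrt(sigma_tot_fm2 / np.pi):
        return False                      # partners never come close enough
    P_block = 1.0 - (1.0 - P1) * (1.0 - P2)
    return rng.random() > P_block         # blocked with probability P_block

if __name__ == "__main__":
    # Illustrative free NN cross section of 40 mb = 4 fm^2, reduced by 20%
    # as in the text (sigma = 0.8 * sigma_free).
    sigma = 0.8 * 4.0
    attempts = [collision_allowed(0.8, sigma, 0.8, 0.8) for _ in range(10000)]
    print(np.mean(attempts))
```

With P_1 = P_2 = 0.8 the blocking probability is 0.96, matching the ground-state average quoted above, so only about 4% of the attempted collisions that pass the geometric criterion are allowed.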
For the present study, we simulate the reactions of 12 For the transverse flow we use the quantity "directed transverse momentum p dir x ", which is defined as: where y(i) is the rapidity and p x (i) is the transverse momentum of the i th particle. The rapidity is defined as: where E(i) and p z (i) are, respectively, the energy and longitudinal momentum of the i th particle. In this definition, all rapidity bins are taken into account. In Fig. 1, we display the time evolution of the directed transverse momentum for the reactions of 12 C + 12 C, 40 Ca + 40 Ca, 135 Ho + 135 Ho and 196 Cf + 196 Cf at an incident energy of 100 MeV/nucleon for central collisions. The solid and dashed lines represent the calculations performed using the default liquid drop formula for the radius and reducing this radius by 10% (keeping the Fermi momentum constant), respectively. We see that the p dir x remains negative for the reaction of 12 C + 12 C ( Fig. 1(a)), and it remains positive for the reactions of 135 Ho + 135 Ho (Fig. 1(c)) and 196 Cf + 196 Cf ( Fig. 1(d))for both choices of the radius. This is due to the dominance of the mean field (attractive in nature) in the reaction of 12 C + 12 C. On the other hand, nucleon-nucleon collisions dominate the reactions of 135 Ho + 135 Ho and 196 Cf + 196 Cf, and being repulsive in nature they results in positive flow. But for the reaction of 40 Ca + 40 Ca ( Fig. 1(b)), p dir x is negative during the initial phase of the reaction (due to the mean field), when calculated using the default radius. As the reaction proceeds, binary nucleonnucleon collisions start to take place, which in turn, increase the value of p dir x . Also we find that the value of p dir x calculated using the default IQMD radius (solid line) is always smaller than the value calculated using the reduced radius (dashed line) for all four of the reactions. The increase in the flow with the decrease in radius is due to the increase in the density gradient of the nuclear matter. Also the effect of the nuclear radii is more prominent in the lighter systems ( 12 C + 12 C & 40 Ca + 40 Ca) as it results in an approximate 70% increase in the flow, whereas in the heavier systems ( 135 Ho + 135 Ho & 196 Cf + 196 Cf) the increase in the flow is only about 20% with the decrease in radius. This is because of the fact that in the lighter systems the ratio of surface diffuseness to radius is larger compared to that of the heavier systems. So, we expect this behavior of a change in the density gradient to be quite significant. Therefore, in the lighter colliding nuclei, due to the increase in the density gradient, repulsive forces (which are ∝ ( ρ ρ 0 ) γ ) get strengthened and increase the momentum transfer in the transverse direction. Hence we can say that the radius parameter plays a very crucial role in the reaction dynamics of lighter systems. In the past, a lot of theoretical studies have been undertaken to explain the balance energy (122±12 MeV/nucleon) for the reaction of 12 C+ 12 C measured at National Superconducting Cyclotron Laboratory (NSCL) [8]. Westfall et al. [8], for example, expressed the need of a density-dependent parametrization of the cross-section to predict the balance energy for lighter systems ( 12 C+ 12 C) instead of an overall reduction of the cross-section by a constant factor. In another study, Klakow et al. 
[10] have shown that the proper choice of the surface thickness is necessary for calculating the balance energy for the reaction of 12C + 12C, as a shift of 50 MeV/nucleon is observed with a change in the surface thickness by 1 fm. The above-mentioned studies were undertaken within the framework of the Boltzmann-Uehling-Uhlenbeck (BUU) model. Puri and co-researchers have used the momentum dependence of the mean field within the Quantum Molecular Dynamics (QMD) approach to justify the power law over the entire mass range between 12C + 12C and 197Au + 197Au, and stressed the need for momentum-dependent interactions when dealing with lighter systems (12C + 12C), as these provide the necessary transverse momenta to lighter systems during the initial phase of the reaction [3]. Mota et al. [11], using a Landau-Vlasov formulation (which guarantees that good ground-state properties of finite nuclei, such as binding energies and mean-square radii, are achieved), calculated the balance energy of 12C + 12C (using free nucleon-nucleon cross-sections and a soft EOS with momentum dependence of the mean field) to be around 120 MeV/nucleon. In another study, Antisymmetrized Molecular Dynamics (AMD) calculations [12] predicted the balance energy for the reaction of 12C + 12C to be around 100 MeV/nucleon with the same set of parameters as used in the Landau-Vlasov approach. Thus we find that different authors have given different reasons for the deviation of the balance energy of 12C + 12C from the experimentally measured one and have suggested corresponding solutions accordingly. Since one of the initialization effects lies in the size of the nuclei (the radius of the nucleus), it can also play a crucial role when extracting the balance energy for the reaction of 12C + 12C. So, in the next part of the paper, we also calculate the balance energy for the reaction of 12C + 12C at an impact parameter of b̂ = 0.4 using different parametrizations of the radius available in the literature. The choice of collision geometry is motivated by the experimentally measured balance energy [8].

In Fig. 2, we display the balance energy calculated using different parametrizations of the radius for the reaction of 12C + 12C. The balance energies calculated using the radius parametrizations due to Bass, Broglia and Winther (BW), Christensen and Winther (CW), Blocki, IQMD, Ngô, and Aage Winther (AW) are represented by open circles and labelled accordingly. We also calculated the balance energy for 12C + 12C using the value of the radius (2.3 fm) for 12C evaluated by Elton (labelled as Elton) [22]. The experimental datum is represented by a solid horizontal thick bar. The solid squares represent the balance energy calculated using the IQMD model radius (R = r_0 A^{1/3}) with the radius varied from 80% to 110% in increments of 5%. The linear fit to the solid squares is shown by the solid line. We see that the balance energy increases with increasing radius. This is due to the decrease in the repulsive forces with increasing radius; hence a larger incident energy is required to counterbalance the attractive mean field. Also, the slope of the fitted line (giving the linear relationship between the balance energy and the radius of the colliding nuclei) is around 101 ± 7, which is large enough that the dependence of the balance energy on the radius cannot be ignored for the 12C + 12C system.
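The linear relationship quoted above can be extracted with an ordinary least-squares fit. Below is a minimal Python sketch of that step; the balance-energy values are placeholder numbers, not the paper's results, and only the radius scaling factors (80%–110% in 5% steps) follow the procedure described in the text.

```python
import numpy as np

# radius scaling factors applied to the default IQMD radius R = r0 * A**(1/3)
scale = np.array([0.80, 0.85, 0.90, 0.95, 1.00, 1.05, 1.10])

# balance energies (MeV/nucleon) obtained from the simulations at each scaling;
# the numbers below are placeholders for illustration only
e_bal = np.array([98.0, 104.0, 109.0, 115.0, 120.0, 126.0, 131.0])

# least-squares straight line: E_bal = slope * scale + intercept
slope, intercept = np.polyfit(scale, e_bal, deg=1)
print(f"slope = {slope:.1f} MeV/nucleon per unit radius scaling")
```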
We also see that the experimental datum is well reproduced when one uses the measured radius (Elton) of 12C to calculate the balance energy. Further, to see the effect of the radius on the mass dependence of the balance energy, in Fig. 3(a) we display the system-size dependence of the balance energy calculated using the default IQMD radius and using the default radius reduced by 10%, represented by squares and circles, respectively. The lines represent the power-law behavior of the mass dependence of the balance energy. We find that the balance energy decreases with a decrease in the radius due to the increase in the strength of the repulsive forces (∝ ρ/ρ_0) with decreasing radius. This effect is more pronounced in the lighter systems than in the heavier ones. Fig. 3(b) displays the percentage deviation of the balance energy, ΔE_bal(%) (given by Eq. (8)), calculated using a 10% reduced IQMD radius relative to the default IQMD calculations. We see that ΔE_bal(%) is higher for the lighter systems than for the heavier ones. This confirms that lighter systems are more sensitive to surface effects than heavier ones, and thus the heavier systems remain almost unaffected by the reduction of the radius.

Summary
In summary, within the framework of the IQMD model and using different radii for the colliding nuclei, we have demonstrated that the collective transverse flow shows a strong dependence on initialization effects for the lighter systems. Our study indicates that the radii of the colliding nuclei must be treated carefully when extracting the balance energy for lighter systems.

Acknowledgement
This work has been supported by a grant from the University Grants Commission (UGC), Government of India. The authors are thankful to Professor Rajeev K. Puri for enlightening discussions on the present work.
3,641.4
2014-04-01T00:00:00.000
[ "Physics" ]
Fully nonlinear stochastic and rough PDEs: Classical and viscosity solutions We study fully nonlinear second-order (forward) stochastic partial differential equations (SPDEs). They can also be viewed as forward path-dependent PDEs (PPDEs) and will be treated as rough PDEs (RPDEs) under a unified framework. We develop first a local theory of classical solutions and define then viscosity solutions through smooth test functions. Our notion of viscosity solutions is equivalent to the alternative one using semi-jets. Next, we prove basic properties such as consistency, stability, and a partial comparison principle in the general setting. When the diffusion coefficient is semi-linear (but the drift can be fully nonlinear), we establish a complete theory, including global existence and comparison principle. Our methodology relies heavily on the method of characteristics. Introduction We study the fully nonlinear second-order SPDE du (t, x, ω) = f (t, x, ω, u, ∂ x u, ∂ 2 x x u) dt + g (t, x, ω, u, ∂ x with initial condition u(0, x, ω) = u 0 (x), where (t, x) ∈ [0, ∞) × R, B is a standard Brownian motion defined on a probability space ( , F, P), f and g are F B -progressively measurable random fields, and • denotes the Stratonovic integration. Our investigation will build on several aspects of the theories of pathwise solutions to SPDEs studied in the past two decades. These include: the theory of stochastic viscosity solutions, initiated by Lions and Souganidis (1998a;1998b;2000a;2000b) and also studied by Buckdahn and Ma (2001a;2001b;2002); path-dependent PDEs (PPDEs) studied by Buckdahn et al. (2015), based on the notion of path derivatives in the spirit of Dupire (2019); and the aspect of rough PDEs studied by Keller and Zhang (2016), in terms of the rough path theory (initiated by Lyons (1998)) and using the connection between Gubinelli's derivatives for "controlled rough paths" (2004) and Dupire's path derivatives. The main purpose of this paper is to integrate all these notions into a unified framework, in which we shall investigate the most general well-posedness results for fully nonlinear SPDEs of the type (1.1). A brief history SPDE (1.1), especially when both f and g are linear or semilinear, has been studied extensively in the literature. We refer to the well-known reference Rozovskii (1990) for a fairly complete theory on linear SPDEs and to Krylov (1999) for an L p -theory of linear and some semilinear cases. When SPDE (1.1) is fully nonlinear, as often encountered in applications such as stochastic control theory and many other fields (cf. the lecture notes of Souganidis (2019), and Davis and Burstein (1992), Buckdahn and Ma (2007), and Diehl et al. (2017) for applications in pathwise stochastic control problems), the situation is quite different. In fact, in such a case one can hardly expect (global) "classical" solutions, even in the Sobolev sense. Some other forms of solutions will have to come into play. In a series of works, Lions-Souganidis (1998a;1998b;2000a;2000b) initiated the notion of "stochastic viscosity solutions" for fully nonlinear SPDEs, especially in the case when g = g(∂ x u), along the following two approaches. One is to use the method of stochastic characteristics (cf. Kunita (1997)) to remove the stochastic integrals of SPDE (1.1), and define the (stochastic) viscosity solution by considering test functions along the characteristics (whence randomized) for the transformed ω-wise (deterministic) PDEs. 
The other approach is to approximate the Brownian sample paths by smooth functions and define the (weak) solution as the limit, whenever it exists, of the solutions to the approximating equations, which are standard The Main contributions of this work The main purpose of this paper is to establish the viscosity theory for general fully nonlinear parabolic SPDEs and path-dependent PDEs through a unified framework based on the combined rough path and Dupire's pathwise analysis, as well as the idea of stochastic characteristics. We consider the most general case where the diffusion coefficient g is a nonlinear function of all variables (t, ω, x, u, ∂ x u). We shall first obtain the existence of local (in time) classical solutions when all the coefficients are sufficiently smooth. We remark that these results, although not surprising, seem to be new in the literature, to the best of our knowledge. More importantly, assuming that g is smooth enough, we shall establish most of the important issues in viscosity theory. These include: 1) consistency (i.e., smooth viscosity solutions must be classical solutions); 2) the equivalence of the notions of stochastic viscosity solutions using test functions and by semi-jets; 3) stability; and 4) a partial comparison principle (between a viscosity semi-solution and a classical semi-solution). Finally, in the case when g is linear in ∂ x u (but nonlinear in u, and f can be nonlinear in (u, ∂ x u, ∂ x x u)), we prove the full comparison principle for viscosity solutions and thus establish the complete theory. To be more precise, let us briefly describe alternative forms of SPDEs that are equivalent to the underlying one (1.1) in some specific pathwise senses. First, note that Buckdahn et al. (2015) established the connection between (1.1) and the following path-dependent PDE (PPDE): (t, x, ω, u, ∂ x u). (1.2) Here, ∂ ω t and ∂ ω are temporal and spatial path derivatives in the sense of Dupire (2019). On the other hand, Keller and Zhang (2016) showed that the PPDE (1.2) can also be viewed as a rough PDE (RPDE): where ω is a geometric rough path corresponding to Stratonovic integration. We should note that the connection between SPDE (1.1) and RPDE (1.3) has been known in the rough path literature, see, e.g., Friz and Hairer (2014). Bearing these relations in mind, we shall still define the (stochastic) viscosity solutions via the method of characteristics. More precisely, we utilize PPDE (1.2) by requiring that smooth test functions ϕ satisfy ∂ ω ϕ(t, x) = g(t, x, ϕ, ∂ x ϕ). (1.4) It should be noted that the involvement of g in the definition of test functions is not new (see, e.g., the notion of "g-jets" and the g-dependence of "path derivatives" in Buckdahn and Ma (2001b;2002) and Buckdahn et al. (2015)). The rough-path language then enables us to define viscosity solutions directly for RPDE (1.3) as well as PPDE (1.2) in a completely local manner in all variables (t, x, ω). We should note that, barring some technical conditions as well as differences in language, our definition is very similar or essentially equivalent to the ones in, say, Lions and Souganidis (1998a;2000a); and when f does not depend on ∂ 2 x x u (i.e., in the case of first-order RPDEs), our definition is essentially the same as the one in Gubinelli et al. (2014). Furthermore, we show that our definition is equivalent to an alternative definition through semi-jets (such an equivalence was left open in Gubinelli et al. (2014)). 
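For orientation, the equations referenced above can be summarized as follows. This is a hedged reconstruction based on the surrounding discussion rather than a verbatim copy of the paper's displays; the precise measurability and regularity assumptions on f and g are those stated later in the text, and the notation for the rough path follows the paper's ω̂ = (ω, 𝕨).

```latex
% SPDE (1.1): Stratonovich form, driven by a Brownian motion B
du(t,x,\omega) = f\big(t,x,\omega,u,\partial_x u,\partial^2_{xx}u\big)\,dt
               + g\big(t,x,\omega,u,\partial_x u\big)\circ dB_t,
\qquad u(0,x,\omega)=u_0(x).

% PPDE (1.2): the same equation in terms of Dupire's path derivatives
\partial^\omega_t u = f\big(t,x,\omega,u,\partial_x u,\partial^2_{xx}u\big),
\qquad
\partial_\omega u = g\big(t,x,\omega,u,\partial_x u\big).

% RPDE (1.3): the pathwise version driven by a geometric rough path \hat\omega=(\omega,\boldsymbol{\omega})
du(t,x) = f\big(t,x,u,\partial_x u,\partial^2_{xx}u\big)\,dt
        + g\big(t,x,u,\partial_x u\big)\,d\omega_t.

% (1.4): constraint imposed on smooth test functions \varphi in the viscosity definition
\partial_\omega \varphi(t,x) = g\big(t,x,\varphi,\partial_x\varphi\big).
```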
Moreover, by using pathwise characteristics, we show that RPDE (1.3) can be transformed into a standard PDE (with parameter ω) without the dω t term. When g is semilinear (i.e., linear in ∂ x u), our definition is also equivalent to the viscosity solution of the transformed PDE in the standard sense of Crandall et al. (1992), as expected. In the general case when g is nonlinear on all (x, u, ∂ x u), the issue becomes quite subtle due to the highly convoluted system of characteristics and some intrinsic singularity of the transformed PDE, and thus we are not able to obtain the desired equivalence for viscosity solutions. In fact, at this point it is not even clear to us how to define a notion of viscosity solution for the transformed PDE. Besides clarifying the aforementioned connections among different notions, the next main contribution of this paper is to establish some important properties of viscosity solutions, including consistency, stability, and a partial comparison principle. Our arguments follow some of our previous works on backward PPDEs (e.g., Ekren et al. (2014) and Ekren et al. (2016a;2016b)). However, unlike the backward case, the additional requirement (1.4) leads to some extra subtleties when small perturbations on the test function ϕ are needed, especially in the case of general g. Some arguments for higher-order pathwise Taylor expansions along the lines of Buckdahn et al. (2015) prove to be helpful. As in all studies involving viscosity solutions, the most challenging part is the comparison principle. The main difficulty, especially along the lines of stochastic characteristics, is the lack of Lipschitz property on the coefficients of the transformed ω-wise PDE in the variable u, except for some trivial linear cases. Our plan of attack is the following. We first establish a comparison principle on small time intervals. Then we extend our comparison principle to arbitrary duration by using a combination of uniform a priori estimates for PDEs and BMO estimates inspired by the backward SDEs with quadratic growth. Such a "cocktail" approach enables us to prove the comparison principle in the general fully nonlinear case under an extra condition, see (6.13). In the case when g is semilinear however, even when f is fully nonlinear (e.g., of Hamilton-Jacobi-Bellman type), we verify the extra condition (6.13) and establish a complete theory including existence and a comparison principle. Thereby, we extend the result of Diehl and Friz (2012), which follows the second approach proposed by Lions and Souganidis (1998a;1998b) and studies the case when both g and f are semilinear. However, the verification of (6.13) in general cases is a challenging issue and requires further investigation. Another contribution of this paper is the local (in time) well-posedness of classical solutions in the general fully nonlinear case. We first establish the equivalence between local classical solutions of RPDE (1.3) and those of the corresponding transformed PDE. Next, we provide sufficient conditions for the existence of local classical solutions to this PDE, similar to that of Da Prato and Tubaro (1996) when g is linear in u and ∂ x u. To the best of our knowledge, these results for the general fully nonlinear case are new. We emphasize again that our PDE involves some serious singularity issues so that the local existence interval depends on the regularity of the classical solution (which in turn depends on the regularity of u 0 ). 
Consequently, these results are only valid for classical solutions. Remarks As the first step towards a unified treatment of stochastic viscosity solutions for fully nonlinear SPDEs, in this paper we still need some extra conditions on the coefficients f and g. For example, even in the case when g is semilinear, we need to assume that f is uniformly non-degenerate and convex in ∂ x x u. It would be interesting to remove either one, or both constraints on f. Also, as we point out in Remark 7.5, in the general fully nonlinear case the equivalence between our rough PDE and the associated deterministic PDE in the viscosity sense is by no means clear. Consequently, a direct approach for the comparison principle for RPDE (3.6), which is currently lacking, would help greatly. It would also be interesting to investigate the alternative approach by using rough path approximations as in Caruana et al. (2011) and many other aforementioned papers, in the case when g is fully nonlinear. We hope to investigate some of these issues in our future publications. We would also like to mention that, although the SPDEs in Buckdahn and Ma 2007, Davis and Burstein 1992, Diehl et al. (2017 for pathwise stochastic control problems appear with terminal conditions, they fall into our realm of forward SPDEs with initial conditions by a simple time change (which is particularly convenient here since our rough path integrals correspond to Stratonovic integrals). However, many SPDEs arising in stochastic control theory with random coefficients and in mathematical finance, see, e.g., Peng (1992) and Musiela and Zariphopoulou (2010), have different nature and are not covered by this paper. The main difference lies in the time direction of the adaptedness of the solution with respect to the random noise(s), as illustrated by Pardoux and Peng (1994). Finally, for notational simplicity throughout the paper, we consider the SPDEs on a finite time horizon [0, T ] and in a one-dimensional setting. Our results can be easily extended to the infinite horizon in most of the cases. But the extension to multidimensional rough paths, albeit technical, is more or less standard. We shall provide further remarks when the extension to the multidimensional case requires extra care. For example, Proposition 4.1 relies on results for multidimensional RDEs. Finally, some of the results in this paper involve higher-order derivatives and related norms. For simplicity, we shall use the norms involving all partial derivatives up to the same order; and our estimates, although sufficient for our purpose, will often contain a generic constant, and are not necessarily sharp. This paper is organized as follows. In Section 2, we review the basic theory of rough paths and rough differential equations (RDEs). Furthermore, we introduce our function spaces and the crucial rough Taylor expansions. In Section 3, we set up the framework for SPDEs, RPDEs, and PPDEs. In Section 4, we introduce the crucial characteristic equations and transform our main object of study, the RPDE (3.6), into a PDE. We establish the equivalence of their local classical solutions and provide sufficient conditions for their existence. Sections 5 and 6 are devoted to viscosity solutions in the general case. In Section 7, we establish the complete viscosity theory in the case that g is semilinear. Finally, in the Appendix (Section 8), we provide the proofs of the results from Section 2 that go beyond the standard literature. 
Preliminary results from rough path theory We begin by briefly reviewing the framework for rough path theory that is used in this paper, mainly following Keller and Zhang (2016) (see Friz and Hairer (2014) and the references therein for the general theory). To this purpose, we introduce some general notation first. For normed spaces E and V, put When V = R, we omit V and just write L ∞ (E). For a constant α > 0, set Given functions u : [0, T ] → R and u : [0, T ] 2 → R, we write the time variable as subscript, i.e., u t = u(t) and u s,t = u(s, t), and we define Moreover, we shall use C to denote a generic constant in various estimates, which will typically depend on T and possibly on other parameters as well. Furthermore, we define the standard Hölder spaces and parabolic Hölder spaces (cf. Lunardi (1995, Chapter 5)): Given k ∈ N 0 and β ∈ (0, 1], set Rough path differentiation and integration Rough path theory makes it possible to integrate with respect to non-smooth functions ("rough paths") such as typical sample paths of Brownian motions and fractional Brownian motions. In this paper, we use Hölder continuous functions as integrators. To this end, we fix two parameters α ∈ (1/3, 1/2] and β ∈ (0, 1] satisfying The parameter α denotes the Hölder exponent of our integrators. The parameter β will take the role of the exponent in the usual Hölder spaces C k+β . Later, we introduce modified Hölder type spaces suitable for our theory. To be more precise, a rough path, in general, consists of several components, the first stands for the integrator whereas the additional ones stand for iterated integrals. Those additional components have to be given exogenously and a different choice leads to different integrals, e.g., those corresponding to the Itô and to the Stratonovic integral. In our setting, the situation is relatively simple. We consider a rough pathω := (ω, ω) with only two components ω and ω that are required to satisfy the following conditions: (2.3) Note that ω s,t should not be understood as ω t − ω s as in (2.1). (iii) In standard rough path theory, it is typically not required thatω is truly rough as defined in (2.3). But it is convenient for us because, under (2.3), the rough path derivatives we define next will be unique. Next, we introduce path derivatives with respect to our rough path. To this end, we introduce spaces of multi-indices Remark 2.3 (i) In the rough path literature, a first-order spatial derivative ∂ ω u is typically called a Gubinelli derivative and the corresponding function u is called a controlled rough path. In our case, the path derivatives defined above are unique due toω being truly rough (Friz and Hairer 2014, Proposition 6.4). (ii) The derivative ∂ ω u depends on ω, but not on ω. The derivative ∂ ω t u depends on ω as well and should be denoted by ∂ω t u. However, in our setting, ω is a function of ω and thus we write ∂ ω t u instead. (iii) When ∂ ω u = 0, it follows from (2.5) and (2.2) that u is differentiable in t and ∂ ω t u = ∂ t u, the standard derivative with respect to t. (iv) In the multidimensional case, ∂ ωω u ∈ R d×d could be symmetric if u is smooth enough (Buckdahn et al. 2015, Remark 3.3); i.e., ∂ ω i and ∂ ω j commute for 1 ≤ i, j ≤ d. However, typically ∂ ω t and ∂ ω do not commute, even when d = 1. Remark 2.4 Note that in (2.5) the term t − s is the difference of the identity function t → t, which is Lipschitz continuous. For all estimates below, it suffices to assume ∂ ω t u ∈ C α(2+β)−1 ([0, T ]). 
However, to make the estimates more homogeneous, we only use the Hölder-2α regularity of t and thus require ∂ ω t u ∈ C αβ ([0, T ]). For this same reason, all of our estimates will actually hold true if we replace t with a Hölder-2α continuous path ζ ∈ C 2α ([0, T ]). To be more precise, we define a path derivative of u with respect to ζ as a function ∂ ω then Lebesgue integration dt should be replaced with Young integration dζ t . (2.7) We emphasize that, besides k, the norms depend on T, ω, α, and β as well. To simplify the notation, we do not indicate these dependencies explicitly. In some places we restrict u to some subinterval [t 1 , t 2 ] ⊂ [0, T ]. Corresponding spaces C k α,β ([t 1 , t 2 ]) are defined in an obvious way. To not further complicate the notation, the corresponding norm is still denoted by · k . Note that, for u ∈ C 1 α,β ([0, T ]) and for a constant C depending on ω, Finally, we define the rough integral of u ∈ C 1 α,β ([0, T ]). Let π : 0 = t 0 < · · · < t n = T be a time partition and |π | := max 0≤i≤n−1 |t i+1 − t i |. By Gubinelli (2004), exists and defines the rough integral. The integration path U t := t 0 u s dω s belongs to C 1 α,β ([0, T ]) with ∂ ω U t = u t and we define t s u r dω r := U s,t . In this context, we define iterated integrals as follows. For ν ∈ V n , set (μ 1 ,···,μ n ) s,r d μ n+1 r for μ = (μ 1 , · · ·, μ n+1 ) ∈ V n+1 . In the multidimensional case, defining iterated integrals is not trivial. Nevertheless, by Lyons (1998, Theorem 2.2.1), this can be accomplished via uniquely determined (higher-order) extensions of the geometric rough pathω = (ω, ω). By (2.5) and (2.2), the following result is obvious and we omit the proof. Rough differential equations We start with controlled rough paths with parameter x ∈ R d . They serve as solutions to RPDEs and coefficients for RDEs and RPDEs. For this purpose, we have to allow d > 1 here. Consider a function u : [0, T ] × R d → R. If, for fixed x ∈ R d , the mapping t → u(t, x) is a controlled rough path, we use the notations ∂ ω u, ∂ ω t u, D ν u to denote the path derivatives as in the previous subsection. For fixed t, we use ∂ x u, ∂ 2 x x u, etc., to denote the derivatives of x → u(t, x) with respect to x. Now, we introduce the appropriate spaces, extending Definition 2.2. (iii) We say u ∈ C 2,loc α,β ([t 1 , t 2 ] × O) if the following holds: We first show that the differentiation and integration operators are commutative. (2.14) The next result is the crucial chain rule (Keller and Zhang 2016, Theorem 3.4). (2.16) (2.17) Our study relies heavily on the following rough Taylor expansion. The result holds true for multidimensional cases as well and we emphasize that the numbers δ below can be negative. Lemma 2.9 Let u ∈ C k,loc α,β ([0, T ] × R) and K ⊂ R be compact. Then, for every Proof See the Appendix. To study RDEs, uniform properties for the functions in C k,loc α,β ([t 1 , t 2 ] × O) are needed. In the next definition, we abuse the notation · k from (2.7). Definition 2.10 (i) We say that u (ii) For solutions to standard PDEs (recall Remark 2.3 (iii)), we use (2.20) We remark that in (i) we do not require sup t∈[t 1 ,t 2 ] [∂ k x u(t, ·)] β < ∞, but restrict ourselves to local Hölder continuity with respect to x (uniformly in t), which suffices for our rough Taylor expansion above. 
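The rough integral introduced above is, in the Gubinelli framework the paper follows, the limit of compensated Riemann sums. The display below is a sketch of that standard formula, with 𝕨 denoting the second, area-type component of the rough path ω̂ = (ω, 𝕨); it is included for orientation and should not be read as the paper's exact statement.

```latex
\int_0^T u_r\,d\omega_r
  \;=\; \lim_{|\pi|\to 0}\;\sum_{i=0}^{n-1}
        \Big( u_{t_i}\,\omega_{t_i,t_{i+1}}
              \;+\; \partial_\omega u_{t_i}\,\boldsymbol{\omega}_{t_i,t_{i+1}} \Big),
\qquad \pi:\; 0=t_0<\dots<t_n=T .
```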
Although functions in C k,0 α,β ([0, T ] × R) are, in general, only at most once differentiable in time, they behave in our rough path framework as if they were k times differentiable in time (Friz and Hairer 2014, section 13.1). Remark 2.11 , is continuous under · k (as defined in (2.7)) and, for ν = k, D ν u(t, ·) is Hölder-β continuous, uniformly in t. Hence, the continuity required in the definition of C k,loc (as defined in (2.19)). Now, we study rough differential equations of the form ( 2.22) Proof See the Appendix. In the following linear case, we have a representation formula for u: This is a direct consequence of Lemma 2.8, and thus the proof is omitted. Remark 2.14 This representation holds true only in the one-dimensional case. For multidimensional linear RDEs, Keller and Zhang (2016) derived a semi-explicit representation formula. Moreover, note that (2.23) actually does not satisfy the technical conditions in Lemma 2.12 (f and g are not bounded). But nevertheless, due to its special structure, RDE (2.23) is well-posed as shown in this lemma. Finally, we extend Lemma 2.12 to RDEs with parameters of the form Proof See the Appendix. Then, for u ∈ C( ), we have Here, the left-hand side is a Stratonovic integral while the right-hand side is a rough path integral. In this sense, we may write SPDE (3.1) as the RPDE (ii) In an earlier version of this paper (see arXiv:1501.06978v1), we studied pathwise viscosity solutions of SPDE (3.1) in the a.s. sense. In this version, we study instead the wellposedness of RPDE (3.5) for fixed ω. This is easier and more convenient. Moreover, the rough path framework allows us to prove crucial perturbation results such as Lemma 5.8. (iii) If we have obtained a solution (in the classical or the viscosity sense) u(·, ω) of RPDE (3.5) for each ω, to go back to SPDE (3.1), one needs to verify the measurability and integrability of the mapping ω → u(·, ω). To do so, one can, in principle, apply the strategy by Da Prato and Tubaro (1996, section 3), which relies on construction of solutions to SDEs via iteration so that adaptedness is preserved. This strategy can be applied in our setting and does not require f and g to be continuous in ω. Another possible approach is to follow the argument by Friz and Hairer (2014, section 9.1), which is in the direction of stability and norm estimates but requires at least g to be continuous in ω. Since the paper is already very lengthy, we do not pursue these approaches here in detail. From now on, we shall fix (α, β) and ω as in Section 2.1 and omit ω in f, g, and u. To be precise, the goal of this paper is to study the RPDE In particular, ∂ ω t u is different from ∂ t u in the standard PDE literature. Moreover, by Lemma 2.5, we may write (3.6) as the path-dependent PDE The arguments of f and g are implicitly denoted as f (t, x, y, z, γ ) and g (t, x, y, z). Throughout this paper, the following assumptions are employed. Note that, for any bounded set Assumption 3.4 Let u 0 be continuous and u 0 ∞ ≤ K 0 . We remark that for RPDE (3.6) there is no comparison principle in terms of g. Hence, a smooth approximation of g does not help for our purpose and thus we require g to be smooth. By more careful arguments, we may figure out the precise value of k 0 , but that would make the paper less readable. In the rest of the paper, we use k to denote a generic index for regularity, which may vary from line to line. 
We always assume that k is large enough so that we can freely apply all the results in Section 2, and we assume that the regularity index k 0 in Assumption 3.2 is large enough so that we have the desired k-regularity in the related results. We say that u is a classical solution (resp., subsolution, supersolution) of RPDE (3.6) if Again, note that there is no comparison principle in terms of g. So the first line in (3.8) is an equality even for sub/super-solutions. Classical solutions of rough PDEs We establish wellposedness of classical solutions for RPDE (3.6). To this end, we must require that the coefficients f, g and the initial value u 0 are sufficiently smooth. For general RPDEs, most results are valid only locally in time. However, this is sufficient for our study of viscosity solutions in the next sections. The characteristic equations Our main tool is the method of characteristics (see Kunita (1997) for the stochastic setting). It will be used to get rid of the diffusion term g and to transform the RPDE into a standard PDE. Given θ := (x, y, z) ∈ R 3 , consider the coupled system of RDEs Proposition 4.1 Let Assumption 3.2 hold and let K 0 ≥ 0 be a constant. Then there exist constants δ 0 > 0 and C 0 , depending only on K 0 and the k 0 -th norm of g (in the sense of Definition 2.10 (i)) on [0, T ] × Q, such that for all θ ∈ Q, the system (4.1) has a unique solution (θ) such that Proof Uniqueness follows directly from an appropriate multidimensional extension of Lemma 2.12 for each θ ∈ Q. To prove existence, we note that the main difficulty here is that some coefficients in (4.1) are not bounded. To deal with this difficulty, we introduce, for each N > 0, a smooth truncation function ι N : Next, for each θ ∈ R 3 , consider the system Applying Lemma 2.15, but extended to the multidimensional case (using the extended Lemma 2.13 as shown in Remark 2.14), the RDE above has a unique solution Next, we linearize system (4.1). To this end, put The next result is due to Peter Baxendale. It is a slight generalization of Kunita (1997, (14), p. 291) (which corresponds to (4.15) below). RPDEs and PDEs Our goal is to associate RPDE (3.6) with a function v satisfying which would imply that v solves a standard PDE. To illustrate this idea, let us first derive the PDE for v heuristically. Assume that u is a classical solution of RPDE (3.6) with sufficient regularity. Recall (4.1). We want to find v satisfying (4.7) and In fact, recall (4.4) and writê (4.9) Applying the operator ∂ ω t on both sides of the first equality of (4.8) together with Lemma 2.8 yields We emphasize that the variable θ t (x) above is fixed when Lemma 4.2 is applied, while the variable t in V t is viewed as the running time. In particular, in the last term above s (θ t (x)) involves both times s and t. Then, by (4.10), By (4.8), u(t,X t ) and ∂ x u(t,X t ) are functions of (t, θ t (x)). Moreover, by applying the operator ∂ x on both sides of the second equality of (4.8), Therefore, formally v should satisfy the PDE Now, we carry out the analysis above rigorously. We start from PDE (4.10) and derive the solution for RPDE (3.6). Recall (2.20) and that k is a generic, sufficiently large regularity index that may vary from line to line. Let δ 0 be determined by Proposition 4.1. Then there exists a constant δ ∈ (0, δ 0 ] such that the following holds: Recall that, by Definition 2.10 (i), the regularity here is uniform in x. Thus, together with the regularity of v, we have (4.14) has a unique solutionS ∈ C k,loc α,β ([0, δ] × R). 
Now, by (i), we see thatS actually satisfies RDE (4.12). Theorem 4.4 Let Assumption 3.2 hold and v and δ be as in Lemma 4.3. Assume further that v is a classical solution (resp., subsolution, supersolution) of PDE (4.10). Since . We prove only the subsolution case. The other statements can be proved similarly. and thus Since v is a classical subsolution of (4.10)-(4.11), the definition of F yields . Now, we proceed in the opposite direction, namely deriving v from u. Assume that u ∈ C k α,β ([0, T ] × R) for some large k and define K 0 := u ∞ ∨ ∂ x u ∞ . Let Q 2 and Q be as in (4.2) and δ 0 as in Proposition 4.1. For any fixed (t, x) ∈ [0, δ 0 ] × R, consider the mapping from Q 2 to R 2 . The Jacobi matrix of this mapping is given by Note that det(J (0, x, y, z)) = 1. Thus, noting also that ∂ x u and ∂ 2 x x u are bounded, one can see, similarly to (4.13), that there exists a δ ≤ δ 0 such that det(J (t, x, y, z)) ≥ 1/2 for all (t, x, y, z) ∈ [0, δ] × Q. This implies that the mapping (4.21) is one to one and the inverse mapping has sufficient regularity. Denote by R(t, x) the range of the mapping (4.21). Then Thus, by (4.13) and the boundedness of ∂ x u, ∂ 2 x x u again, and by choosing a smaller δ if necessary, we may assume that (0, 0) ∈ R(t, x) for all (t, x) ∈ [0, δ] × R. Therefore, for any (t, Differentiating the first equation in (4.22) with respect to x and applying the second, we obtain where the last equality holds true thanks to Lemma 4.2. Then w(t, x) = ∂ x v(t, x) and thus (4.8) holds. In particular, we may use the notation θ t (x) in (4.8) again to replaceθ t (x). We verify now that v indeed satisfies PDE (4.10). Theorem 4.5 Let Assumption 3.2 hold, let u ∈ C k α,β ([0, T ] × R) for some large k, and let δ and v be determined as above. Assume further that u is a classical solution (resp., subsolution, supersolution) of RPDE (3.6). Then, for a possibly smaller δ > 0, Proof The regularity of v is straightforward. We prove only the case that u is a classical subsolution. The other cases can be proved similarly. Recall the notations in (4.9). Differentiating the first equality of (4.8) with respect to ω and applying the second equality, we obtain By (3.8) and (4.8), ∂ ω u(t,X t ) = g(t,X t , u(t,X t ), ∂ x u(t,X t )) = g(t,ˆ t ). Then, by (4.1) and Lemma 4.2, Thus, ∂ ω v(t, x) = 0 and Lemma 4.3 can be applied. In particular, for a possibly Finally, following exactly the same arguments as for deriving (4.10), one can complete the proof that v is a classical subsolution of PDE (4.10). Remark 4.6 We shall investigate the case with semilinear g in detail in section 7 below. Here, we consider the special case which has received strong attention in the literature. Let σ and σ denote the first-and second-order derivatives of σ , respectively. In this case, the system of characteristic equations (4.1) becomes which has the explicit global solution Moreover, in this case, (4.11) becomes Local wellposedness of PDE (4.10) To study the wellposedness of PDE (4.10) and hence that of RPDE (3.6), we first establish a PDE result. Let K 0 > 0 and, similar to (4.2), consider . The further regularity of v when k ≥ 2 follows from standard bootstrap arguments (Gilbarg and Trudinger 1983, Lemma 17.16) together with Remark 2.11. Since the proof is very similar to that of Lunardi (1995, Theorem 8.5.4), which considers a similar boundary-value problem, we shall present only the main ideas for the more involved existence part of the lemma. 
The first step is to linearize our equation and set up an appropriate fixedpoint problem. To this end, let δ > 0 and define an operator (4.28) Now given v ∈ B 1 , consider the solution w of the linear PDE with w(0, ·) = u 0 . Following the arguments by Lunardi (1995, Theorem 8.5.4), when δ > 0 is small enough, PDE (4.29) has a unique solution w ∈ B 1 . This defines a mapping (v) := w for v ∈ B 1 . Moreover, when δ > 0 is small enough, is a contraction mapping, and hence there exists a unique fixed point v ∈ B 1 . Then v = w and, by (4.29), v solves (4.10) on [0, δ] × R. Proof Recall (4.11). By the uniform regularity of in Proposition 4.1, one can verify straightforwardly that, for δ > 0 small enough, F satisfies the conditions in Lemma 4.7 (ii). Then, by Lemma 4.7, PDE (4.10)-(4.11) has a classical solution v ∈ B 1 for a possibly smaller δ. Finally, it follows from Theorem 4.4 that RPDE (3.6) has a local classical solution. The first-order case We consider the case f being of first-order, i.e., (4.30) This case is completely degenerate in terms of γ . It is not covered by Theorem 4.8. However, in this case, PDE (4.10)-(4.11) is also of first-order, i.e., When f is smooth, so is F. Thus, we can modify the characteristic Eqs. 4.1 to solve PDE (4.10)-(4.31) explicitly. Put˜ = (X ,Ỹ ,Z ) and consider x)). Then one can see that (4.7) should be replaced with ∂ tṽ = 0, and thusṽ(t, x) = u 0 (x). By similar (actually easier) arguments as in previous subsections, one can prove the following statement. (ii) For each t ∈ [0, δ], the mapping x ∈ R →X t (x, u 0 (x), ∂ x u 0 (x)) ∈ R is invertible and thus possesses an inverse function, to be denoted byS t . Viscosity solutions of rough PDEs: definitions and basic properties We introduce a notion of viscosity solution for RPDE (3.6) and study its basic properties. For any (t 0 , ii) We say that u is a viscosity solution of RPDE (3.6) if it is both a viscosity supersolution and a viscosity subsolution of (3.6). We remark that it is possible to consider semi-continuous viscosity solutions as in the standard literature. However, for simplicity, in this paper we restrict ourselves to continuous solutions only. First, assume that u is a viscosity subsolution. By choosing u itself as a test function, we can immediately infer that u is a classical subsolution. Equivalent definition through semi-jets As in the standard PDE case (Crandall et al. 1992), viscosity solutions can also be defined via semi-jets. To see this, we first note that, for ϕ ∈ A 0 g u(t 0 , x 0 ; δ), our second-order Taylor expansion (Lemma 2.9) yields Motivated by this, we define semi-jets as follows. Given u ∈ C([0, T ] × R), We then define the g-superjet J g u(t 0 , x 0 ) and the g-subjet J g u(t 0 , x 0 ) by Nevertheless, we still have the following equivalence. Proposition 5.3 Let Assumptions 3.2 and 3.3 be in force and let u ∈ C([0, T ] × R). Then u is a viscosity supersolution (resp., subsolution) of (3.6) at Proof We prove only the supersolution case. The subsolution case can be proved similarly. Remark 5.4 By Proposition 5.3 and its proof, we can see that, depending on the regularity order k 0 of g as specified in Assumption 3.2, it is equivalent to use test functions of class C k α,β (D δ (t 0 , x 0 )) for any k between 2 and k 0 . This is crucial for Theorem 5.9 below. Change of variables formula Let λ ∈ C([0, T ]) and n ≥ 2 be an even integer. For any u : Clearly,f andg inherit the regularity of f and g. 
Whenever they are smooth, Then it is straightforward to verify thatf andg inherit most desired properties of f and g that we utilize later. Lemma 5.5 (i) If g is of the form of (7.1) or (7.26), then so isg; and if f is of the form of (7.29), then so isf . (ii) If f is convex in γ , then so isf . (iii) If f is uniformly parabolic, then so isf . (iv) If f is uniformly Lipschitz continuous in y, z, γ , then so isf . In particular, if f and g satisfy Assumptions 3.2 and 3.3, then so dof andg. However, we remark thatg does not inherit the same form when g is in the form of (4.23). Now consider the RPDE forũ: Proposition 5.6 Let Assumptions 3.2 and 3.3 be in force, λ ∈ C([0, T ]), n ≥ 2 even, and u ∈ C([0, T ] × R). Then u is a viscosity subsolution (resp., classical subsolution) of RPDE (3.6) if and only ifũ is a viscosity subsolution (resp., classical subsolution) of RPDE (5.11). Proof The equivalence of the classical solution properties is straightforward. Regarding the viscosity solution properties, we prove the if part; the only if part can be proved similarly. Assume thatũ is a viscosity subsolution of RPDE (5.11). For any (t 0 , 1+x n ϕ(t, x). It is straightforward to check thatφ ∈ Agũ(t 0 , x 0 ). Then, by the viscosity subsolution property ofũ at This implies that u is a viscosity subsolution of RPDE (3.6). (ii) If f is uniformly Lipschitz continuous in y, by choosing λ sufficiently large (resp., small), we havẽ f is strictly increasing (resp., decreasing) in y. Remark 5.7 Let ( f, g) satisfy Assumptions 3.2 and 3.3 and let u be a viscosity (5.13) In particular,f will be proper in the sense of Crandall et al. (1992). Theorem 5.9 (Stability) Let Assumption 3.2 hold and ( f n ) n≥1 be a sequence of functions satisfying Assumption 3.3. For each n ≥ 1, let u n be a viscosity subsolution of RPDE (3.6) with generator ( f n , g). Assume further that, for some functions f and u, locally uniformly in (t, x, y, z, γ ) ∈ [0, T ] × R 4 . Then u is a viscosity subsolution of (3.6). Proof By the locally uniform convergence, f and u are continuous. Let (t 0 , x 0 ) ∈ (0, T ] × R and ϕ ∈ A g u(t 0 , x 0 ). We apply Lemma 5.8 at (t 0 , x 0 ), but in the left neighborhood We emphasize that, while for notational simplicity we established Lemma 5.8 in the right neighborhood D + ε (t 0 , x 0 ), we may easily reformulate it to the left neighborhood by using the backward rough paths introduced in (2.12). By Remark 5.4, we may assume without loss of generality that ϕ ∈ C k α,β ([0, T ] × R) for some large k. Then, for any ε > 0 small, by Lemma 5.8, there exists ψ ε ∈ C 4 α,β (D − ε (t 0 , x 0 )) such that the following holds: This together with setting ϕ ε := ϕ + ψ ε yields Since u n converges to u locally uniformly, we have, for n = n(ε) large enough, Note that Then ϕ ε ∈ A g u n (t ε , x ε ). By the viscosity subsolution property of u n , x ε ) ≤ 0. Fix n and send ε → 0. Then, by the convergence of ψ ε and its derivatives, , u is a viscosity subsolution of (3.6). Partial comparison principle Here, we assume that at least one of the functions u 1 and u 2 is smooth. We need the following result (cf. Lemma 5.8). and ε > 0, recall (5.14), and consider the RPDE where C depends only on g and ϕ, but not on t 0 , ε, and δ. Moreover, ψ ε satisfies Proof The uniform regularity of ψ ε and the first line of (6.3) are clear. Note The second line of (6.3) follows from the Hölder continuity of the functions in terms of t. 
Moreover, since g ϕ (t, x, 0, 0) = 0, we may write it as g ϕ (t, x, ψ ε , ∂ x ψ ε ) = σ (t, x)ψ ε +b(t, x)∂ x ψ ε , where σ and b depend on ψ ε . Then we may view (6.2) as a linear RPDE with coefficients σ and b. Thus, by (7.31)-(7.32), we have a representation formula for ψ ε . The uniform regularity of ψ ε implies the uniform regularity of σ and b, which leads to the third line of (6.3). Theorem 6.2 Let Assumptions 3.2 and 3.3 and (6.1) be in force. If one of u 1 and u 2 is in C k α,β ([0, T ] × R) for some large k, then u 1 ≤ u 2 . Remark 6.3 When g is independent of y, we can prove Proposition 6.2 much easier without invoking Lemma 6.1. In fact, in this case, assuming to the contrary that By (5.12) and [u 1 − u 2 ](0, ·) ≤ 0, there exists (t * , x * ) ∈ (0, t 0 ] × R such that Define ϕ = u 2 + c. Since g is independent of y, we have Then one can easily verify that ϕ ∈ A g u 1 (t * , x * ). Moreover, by Remark 5.7 (ii), we can assume without loss of generality that f is strictly decreasing in y. Now it follows from the classical supersolution property of u 2 and the viscosity subsolution property of u 1 that, taking values at (t * , x * ), , which is the desired contradiction since f is strictly decreasing in y. Full comparison We shall follow the approach of Ekren et al. (2014). For this purpose, we strengthen Assumption 3.2 slightly by imposing some uniform property of g in terms of y. Assumption 6.5 The diffusion coefficient g belongs to C k 0 ,loc We remark that, under Assumption 3.2, all the results in this subsection hold true if we assume instead that T is small enough. Proof We prove U = ∅ in several steps. The proof for U is similar. We remark that it is possible to extend our definition of viscosity supersolutions to lower semi-continuous functions. However, here (i) shows that u is upper semicontinuous. So it seems that the continuity of u in (ii) is intrinsically required in this approach. Proof By the proof of Lemma 6.6, u is bounded from above. Similarly, u is bounded from below. Then it follows from (6.11) that u and u are bounded. We establish next the upper semicontinuity for u. The regularity for u can be proved similarly. Fix (t, x) ∈ [0, T ] × R. For any ε > 0, there exists ϕ ε ∈ U such that ϕ ε (t, x) < u(t, x)+ε. By the structure of U , it is clear that ϕ ε ≥ u on [0, T ] × R. Assume that ϕ ε ∈ U corresponds to the partition 0 = t 0 < · · · < t n = T as in (6.5). We distinguish between two cases. Case 1. Assume t ∈ (t i−1 , t i ) for some i = 1, . . ., n. Since ϕ ε is continuous This implies that u is upper semi-continuous at (t, x). We finally show that u is a viscosity subsolution provided it is continuous. The viscosity supersolution property of u follows similar arguments. Proof By Lemma 6.7 and (6.13), it is clear that u = u is continuous and is a viscosity solution of RPDE (3.6). By Theorem 6.2 (partial comparison), u 1 ≤ u and u ≤ u 2 . Thus (6.13) leads to the comparison principle immediately. Remark 6.9 The introduction of u and u is motivated from Perron's approach in PDE viscosity theory. However, there are several differences. (i) In Perron's approach, the functions in U are viscosity supersolutions, rather than classical supersolutions. So our u is in principle larger than the counterpart in PDE theory. Similarly, our u is smaller than the counterpart in PDE theory. Consequently, it is more challenging to verify the condition (6.13). 
(ii) The standard Perron's approach is mainly used for the existence of viscosity solution in the case the PDE satisfies the comparison principle. Here we use u and u to prove both the comparison principle and the existence. (iii) In the standard Perron's approach, one shows directly that u is a viscosity solution, while in Lemma 6.7 we are only able to show u is a viscosity supersolution. The condition (6.13) is in general quite challenging. In the next section, we establish the complete result when the diffusion coefficient g is semilinear. Clearly, Assumption 7.1 implies Assumption 6.5. Note that in this section, we obtain a global result. Thus, we require that g 0 and its derivatives are uniformly bounded in y as well. (7.5) (iii) For each t, the mapping x → X t (x) has an inverse function X −1 t (·); and for each (t, x), the mapping y → Y t (x, y) has an inverse function Y −1 t (x, ·). We remark that the proof below uses (7.5). One can also use the backward rough path in (2.12) to construct the inverse functions directly. This argument works in multidimensional settings as well (Keller and Zhang 2016). Proof (i) follows directly from Lemma 2.15, which also implies Then the representations in (7.5) follow from Lemma 2.13. Moreover, setX := x +X t (x)). Then, by the uniform regularity of σ , sup x∈R σ (·, x) k ≤ C. This implies that uniformly bounded, uniformly in (t, x). Therefore, we obtain the first estimate for ∂ x X in (7.5). The second estimate for ∂ y Y in (7.5) follows from the similar arguments. Finally, for each t, the fact ∂ x X t (x) ≥ c implies that x → X t (x) is one to one and the range is the whole real line R. Thus X −1 t : R → R exists. Similarly, one can show that Y −1 t (x, ·) exists. One can easily check, omitting (x, y, z) in X t (x), Y t (x, y), Z t (x, y, z), and then (4.11) becomes F(t, x, y, z, γ ) Under our conditions, F has typically quadratic growth in z and is not uniformly Lipschitz in y. Moreover, the first equality of (4.8) becomes By using similar arguments as in Section 4.2, we obtain the following result which is global in this semilinear case. The next result establishes equivalence in the viscosity sense. Remark 7.5 In the general case, there are two major differences: (i) The transformation determined by (4.8) involves both u and ∂ x u, i.e., to extend Theorem 7.4, one has to assume that the candidate viscosity solution u is differentiable in x. (ii) The transformation is local, in particular, the δ in Theorem 4.5 depends on ∂ 2 x x u ∞ , i.e., unless ∂ 2 x x u is bounded and the solution is essentially classical, we have difficulty to extend Theorem 7.4 to the general case, even in just a local sense. Some a priori estimates Here, we establish uniform a priori estimates for v that will be crucial for the comparison principle of viscosity solutions in the next subsection. First, we estimate the L ∞ -norm of v. Proof First, we write (4.10)-(7.6) as Since v is a classical solution, a and b are smooth functions. Reversing the time by ThenŶ t :=v(t,X t ) solves the BSDÊ , we have |F(t, x, y, 0, 0)| ≤ C[1 + |y|] (7.13) following from Lemma 7.2. Then, by standard BSDE estimates, which yields (7.11) for t = T . Along the same lines, one can prove (7.11) for all t > 0. Remark 7.7 (i) We are not able to establish similar a priori estimates for ∂ x v. Besides the possible insufficient regularity of u 0 , we emphasize that the main difficulty here is not that F has quadratic growth in z, but that F is not uniformly Lipschitz continuous in y. 
Nevertheless, we obtain some local estimate for ∂ x v in Proposition 7.9, which will be crucial for the comparison principle of viscosity solutions later. (ii) To overcome the difficulty above and apply standard techniques, Lions and Souganidis (2000a, (1.12)) imposed technical conditions on f in the case f = f (z, γ ): γ ∂ γ f + z∂ z f − f is either bounded from above or from below. (7.14) This is essentially satisfied when f is convex or concave in (z, γ ). Our f in (7.15) below does not satisfy (7.14), in particular, we do not require f to be convex or concave in z. See also Remark 7.13. The next result relies on a representation of v and BMO estimates for BSDEs with quadratic growth. For this purpose, we restrict f to Bellman-Isaacs type with the Hamiltonian where E := E 1 × E 2 ⊂ R 2 is the control set and e = (e 1 , e 2 ). Lipschitz continuous in (x, y, z) with Lipschitz constant L 0 , and f 0 (t, x, 0, 0, e) is bounded by K 0 . Remark 7.10 (i) We reverse the time in (7.19). Hence, in spirit of the backward rough path in (7.19), B and the rough path ω (or the original B in (3.1)) have opposite directions of time evolvement. Thus (7.19) is in the line of the backward doubly SDEs of Pardoux and Peng (1994). When E 2 is a singleton, Matoussi et al. (2018) provide a representation for the corresponding SPDE (3.1) in the context of secondorder backward doubly SDEs. We shall remark though, while the wellposedness of backward doubly SDEs holds true for random coefficients, its representation for solutions of SPDEs requires Markovian structure, i.e., the f and g in (3.1) depend only on B t (instead of the path B · ). The stochastic characteristic approach used in this paper does not have this constraint. Note again that our f and g in RPDE (3.6) and PPDE (3.7) are allowed to depend on the (fixed) rough path ω. (ii) For (7.22), from a game theoretical point of view, it is more natural to use the so-called weak formulation (Pham and Zhang 2014). However, as we are here mainly concerned about the regularity, the strong formulation used by Buckdahn and Li (2008) is more convenient. The global comparison principle and existence of viscosity solution We need the following PDE result from Safonov (1988) (Mikulevicius and Pragarauskas (1994) have a corresponding statement for bounded domains and Safonov (1989) has one for the elliptic case). Remark 7.13 The requirement that f is convex or concave is mainly to ensure the existence of classical solutions for PDE (7.23). Theorem 7.11 holds true for the multidimensional case as well. When the dimension of x is 1 or 2, Bellman-Isaacs equations may have classical solutions as well, see Lieberman (1996, Theorem 14.24) for d = 1 and Pham and Zhang (2014, Lemma 6.5) for d = 2 for bounded domains, and also Gilbarg and Trudinger (1983, Theorem 17.12) for elliptic equations in bounded domains when d = 2. We believe such results can be extended to the whole space and thus the theorem above as well as Theorem 7.14 will hold true when f is indeed of Bellman-Isaacs type. However, when the dimension is high, the Bellman-Isaacs equation, in general, does not have a classical solution (Nadirashvili and Vladut (2007) provide a counterexample). Proof By Lemma 6.7, u and u are bounded by some C 0 . When f is semilinear, i.e., linear in γ , clearly under natural conditions f satisfies the requirements in Theorem 7.14. We provide next a simple fully nonlinear example. satisfies the requirements in Theorem 7.14. 
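To fix ideas about the Bellman–Isaacs structure imposed in (7.15), a typical form consistent with the surrounding text, where σ_f is the nondegenerate diffusion coefficient, E = E_1 × E_2 is the control set and e = (e_1, e_2), would read as follows; the exact inf/sup order and the arguments of σ_f and f_0 are assumptions here, not a quotation of the paper.

```latex
f(t,x,y,z,\gamma)
  \;=\; \inf_{e_1\in E_1}\,\sup_{e_2\in E_2}
        \Big[ \tfrac12\,\sigma_f^2(t,x,e)\,\gamma
              \;+\; f_0(t,x,y,z,e) \Big],
\qquad e=(e_1,e_2),\qquad \sigma_f(t,x,e)\ \ge\ c_0>0 .
```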
Remark 7.16 (i) As pointed out in Remark 7.5, for general g = g(t, x, y, z), the transformation is local and the δ in Theorem 4.5 depends on ‖∂²_{xx} u‖_∞. Then the connection between RPDE (3.6) and PDE (4.10) exists only for local classical solutions, but is not clear for viscosity solutions. Since our current approach relies heavily on the PDE, we have difficulty in extending Theorem 7.4 to the general case, even in just the local sense. We will investigate this challenging problem by exploring other approaches in our future research. (ii) When f is of first order, i.e., σ_f = 0 in (7.15), then (7.17) reduces to a first-order Hamiltonian, linear in z, of the form F(t, x, y, z, γ) = sup_e [ (···)(t, x, e) z + F_0(t, x, y, e) ] (7.25). Under Assumption 7.8, F_0 is uniformly Lipschitz continuous in y, and thus the main difficulty mentioned in Remark 7.7 (i) does not exist here. Then, following arguments similar to those in this subsection, we can show that the results of Theorems 7.12 and 7.14 still hold true if we replace the uniform nondegeneracy condition σ_f ≥ c_0 > 0 with σ_f = 0.

The case that g is linear
In this subsection, we study the special case when g is linear in (y, z) (abusing the notation g_0), and apply (6.1) to obtain Eq. (2.26). Hence
∂_x R^{1,u}_{s,t}(x) = ∂_x [ u_{s,t}(x) − g(s, x, u_s(x)) ω_{s,t} ]
 = ∂_x u_{s,t}(x) − [ ∂_x g(s, x, u_s(x)) + ∂_y g(s, x, u_s(x)) ∂_x u_s(x) ] ω_{s,t}
 = ∫_s^t [ ∂_x f + ∂_y f ∂_x u_r(x) ](r, x, u_r(x)) dr + ∫_s^t [ ∂_x g + ∂_y g ∂_x u_r(x) ](r, x, u_r(x)) dω_r − [ ∂_x g(s, x, u_s(x)) + ∂_y g(s, x, u_s(x)) ∂_x u_s(x) ] ω_{s,t}.
13,151.6
2015-01-28T00:00:00.000
[ "Mathematics", "Physics" ]
CARAN: A Context-Aware Recency-Based Attention Network for Point-of-Interest Recommendation

Point-of-interest (POI) recommendation systems, which try to anticipate a user's next visited location, have attracted considerable research interest due to their ability to generate personalized suggestions. Since a user's historical check-ins are sequential in nature, recurrent neural network (RNN) based models with context embedding show promising results for modeling user mobility. However, such models cannot capture correlations between non-consecutive and non-adjacent visits when modeling user behavior. To mitigate the data sparsity problem, many models use hierarchical gridding of the map, which cannot represent spatial distance smoothly. Another important factor in POI recommendation is the impact of weather conditions, which has rarely been considered in the literature. To address the above shortcomings, we propose a Context-Aware Recency-based Attention Network (CARAN) that incorporates weather conditions with the spatiotemporal context and focuses on recently visited locations using the attention mechanism. It allows interaction between non-adjacent check-ins by using spatiotemporal matrices and uses linear interpolation for a smooth representation of spatial distance. Moreover, we use positional encoding of the check-in sequence in order to maintain the relative position of the visited locations. We evaluate our proposed model on three real-world datasets, and the results show that CARAN surpasses the existing state-of-the-art models by 7–14%.

I. INTRODUCTION
The advancement of modern smart devices with location-based services has made it easy for people to share the locations they visit and their check-in information on location-based social networks (LBSNs). Such check-in data points in LBSNs offer an excellent opportunity to understand the mobility of a user. Mobility prediction has a wide range of applications, from recommendation systems and location-based services to smart transportation and urban planning. Some well-known LBSNs are Foursquare, Yelp, Gowalla, and Facebook Places, where millions of check-in records are logged. The huge volume of accumulated online footprints has attracted researchers to the task of recommending POIs (points of interest) that are of high interest to users [1]-[3]. Such a recommendation system can help LBSN services improve their user experience by providing suggestions about convenient POIs [4]. It can also enable POI owners to anticipate the time of the next customer arrival, and it may be useful for location-aware online advertising services. With the rapid growth of such mobile applications, it has become crucial to recognize the mobility patterns of users from their past trajectories. The task of POI recommendation differs from other recommendation tasks (for example, movie, goods, or news recommendation) in the sense that it has a strong spatiotemporal dependence on the visited locations [5]-[7]. In Fig. 1, a user's check-in sequence is illustrated together with various contextual information such as time, distance, and weather condition. From the figure, we can see that the user prefers to go to a restaurant or to the park if the weather is clear.
The user also chooses to go to a bar late at night. The distance to travel likewise plays a role when deciding where to go next. All of this contextual information is crucial for providing a personalized prediction of the next location. In POI recommendation, the goal is to recommend the next POI that the user might be interested in, based on contextual information and the trajectory of the historical visit sequence. However, the task is particularly challenging due to the high sparsity of the data and the difficulty of incorporating various contextual signals into a unified predictive model. In the literature, various techniques have been proposed for personalized recommendation. In early years, matrix factorization and Markov chain models were applied together for analyzing sequential data [8]-[10]. He et al. proposed a latent factor model for capturing successive visit sequences to explore user preferences [11]. Later, Recurrent Neural Network (RNN) based techniques with variations in the gate mechanism were employed for modeling sequence data and capturing long-term dependencies of the visited POIs [12]-[14]. Zhu et al. [15] introduced a variation of the Long Short-Term Memory (LSTM) network that equips the model with time intervals for the recommendation task. However, that model did not consider the geographical distance between two neighboring check-ins. Later, Zhao et al. [16] proposed the Spatio-Temporal Gated Network (STGN), with two pairs of distance and time gates for controlling the short-term and long-term interests of the user, to enhance the memory network of the LSTM. One major drawback of RNN-based methods (and their variants) is that most of them rely on the output of the last hidden-layer activation; when the sequence gets very long, they fail to attend to early visits, and the recommendation accuracy fails to improve further. Recently, the attention mechanism has become very popular and has shown remarkable results in modeling sequential tasks [17]. Current state-of-the-art models in POI recommendation try to adopt the attention mechanism on top of RNN models [18]-[20]. However, these models fail to incorporate personalized item frequency (PIF) [21] when generating recommendations. To resolve this issue, Luo et al. [22] used a bi-attention architecture over all pairs of the historical visit sequence, including repeated check-ins. Although that model learned the PIF information using a matrix representation of historical check-ins, it did not make use of the order of the visited locations. When generating recommendations, it is vital to use as much contextual information as possible. There is scope to provide better personalized recommendations by incorporating weather information into the context of the visited POI. For example, one user might be interested in going to a theater during sunny weather but not during rain, whereas other users might have different tastes; users may also prefer to travel shorter distances in poor weather (e.g., storm, snow). Trattner et al. [23] showed that it is possible to incorporate weather data to enhance the quality of recommendations. However, weather information is rarely considered for POI recommendation. Recency (how recently locations were visited) is another important factor for generating recommendations [24], [25]. For example, consider a scenario where a user always visits a nearby restaurant after returning from the office.
Thus, the model should give more focus on recommending nearby restaurants if the last visited POI was office. So, before generating the recommendation the model should learn which historical POIs should be given more focus on as well as the time of the check-in and the condition of the weather that reflects the user preference. Another major challenge in POI recommendation is sparsity of the spatiotemporal information. It's difficult to learn every possible continuous geographical distance and time interval without partitioning them into discrete bins. To minimize the sparsity of the temporal domain, early works divided every day into hours of discrete time slots [26]. Hierarchical gridding of the map was performed [27], [28] for reducing the spatial sparsity. However, dividing the map into discrete grids cannot properly reflect spatial distance between two POIs in the neighboring grids, as they will give the same distance if the POIs were close together or further apart within the grids. Therefore, keeping these drawbacks in mind, the research question addressing in this paper is -''RQ: How to minimize above-mentioned limitations and generate context-aware recommendation that can focus on user's historical check-ins maintaining relative order of the visit sequence?''. To answer this research question, we present a contextaware recency based attention network (CARAN) for the recommendation of next point-of-interest. In CARAN, we incorporate weather information along with the spatiotemporal context for reflecting better user preference. We use attention network that can learn which locations to give more focus on depending on user's historical checkins. To maintain the relative order of the check-in sequence, we perform positional encoding. Instead of using hierarchical gridding mechanism, we use linear interpolation technique for spatial quantization which is more sensitive to geographical interval compared to gridding mechanism. In summary, the contribution of our work is listed below: • We propose CARAN, that can effectively learn to give focus on user's past visited locations by incorporating contextual information with recency based attention mechanism. • Along with spatiotemporal data, CARAN incorporates weather information of the visited POIs which results in a richer contextual information and can provide better personalized recommendation. To the extent of our knowledge, CARAN is the first model that integrates weather condition and spatiotemporal information into a unified model. • We perform positional encoding of the check-ins for preserving the relative order of the visit sequence within the historical trajectory of the visited POIs. • To reflect smooth spatial distance between two POIs, we use linear interpolation instead of gridding mechanism. • We perform extensive experiments on three real-world datasets for fine-tuning and evaluating CARAN. The results show that CARAN outperforms the state-of-theart models for POI recommendation by 7-14%. II. RELATED WORK In this section, we discuss about various methods that are used in the field of POI recommendation system. A. COLLABORATIVE FILTERING BASED METHODS Collaborative Filtering (CF) methods examine the interactions between users and items to construct patterns between them when generating recommendations. Early years CF based methods were very popular for general recommendation system [29] (e.g., movie recommendation, item recommendation, music recommendation). 
It has been also used extensively for POI recommendation task where the model tried to make use of check-ins of related users or POIs [9], [30]. These techniques tried to represent every user and POI into latent vector space which was learned from observed user-item matrix. Then the recommendation is provided based on the similarity between users and POIs [31], [32]. For calculating similarity, various methods like Euclidean Distance, Cosine similarity, and Pearson similarity were used. The choice of similarity measure greatly influenced the performance of the recommendation. Many models tried to incorporate the geographical information [33], and temporal information [34] with the CF based methods. Most of the models considered the spatial effect by considering the distance between POIs as penalty. Jiao et al. [26] fit a curve for reflecting the correlation between user's travel distance and travel probability. Ye et al. [35] modeled geographical information using power law distribution into user-based CF framework. Major drawbacks of the CF based methods is that, most of the models work with user-item interaction matrix and fail to show the effect of spatiotemporal or sequential influence which is very important in predicting dynamic mobility of the users. B. MARKOV CHAIN BASED METHODS When generating POI recommendation, it is essential to contemplate different time relations and spatial distances among historical check-in sequences. In order to realize the impact of sequential information, many models utilized the properties of Markov Chain (MC) [36]- [38]. Rendle et al. [8] proposed the first MC model for the recommendation task using sequential data. It is also possible to combine MC based methods and CF based methods into a unified model for sequential recommendations as show in [39]. Cheng et al. [9], made use of Factorized Personalized Markov Chains (FPMC) with addition of physical restrictions among neighboring POIs after dividing the map into discrete grids. Liu et al. [40], proposed a multi-order Markov model that incorporates geographical influence and temporal popularity. Zhang et al. [41] used ensemble of Hidden Markov Models (HMMs) for characterizing movement regularity. MC based methods were popularly used for their simplicity since they try to find the probability of visiting next POI based on the immediate previously visited POI. However, they suffer from strong Markov assumptions and also cannot model long-term dependency. C. NEURAL NETWORK BASED METHODS In recent years, neural network based methods are showing promising results and successfully applied to the POI recommendation system. When modeling various features of users or items, neural networks perform really well for simulating nonlinear patterns and complex interactions [42], [43]. Zhao et al. [44], applied word2vec framework for modeling sequential context using temporal POI embedding. Yang et al. [45] proposed a semi-supervised learning framework for mitigating data sparsity and used a deep neural network framework for learning the embeddings of POIs and users. Due to the success of modeling sequential data, RNN based methods have become prevalent in the field of POI recommendation [46]- [48]. Li et al. [49], proposed a model that utilizes the time intervals between successive check-ins and explicitly modeled the timestamps for recommendation. Wang et al. [50], used similarity tree for organizing the locations and applied word2vec for embedding which is followed by RNN to model successive movement behavior. 
Yao et al. [51] proposed Semantics Enriched Recurrent Model (SERM) that combines embedding of diverse factors VOLUME 10, 2022 (location, user, keyword, time) for capturing spatiotemporal transition regularities. In [52], Contextual Attention Recurrent Architecture (CARA) was proposed for leveraging both sequential and contextual information related to user's dynamic preference. In order to incorporate spatiotemporal effect on long sequences, many variations of the RNN with attention network were proposed [18]. Zhu et al. [15] adapted time gates with LSTM for capturing user's short term interests. In [53], a multi attention network (MANC) was proposed for learning contextual information of neighborhood POIs. ATST-LSTM [19] uses LSTM network followed by an attention module for giving focus on input check-ins but only considers successive check-ins. LSTPM [54] used geo-dilated RNN for learning short-term preferences of the users. Overall, most of the existing models are good at modeling short-term preferences but fails to capture long-term relationships between non-consecutive POIs. Also, the models overlooked the impact of weather condition on the recommendation results. In contrast, our proposed model uses a recency based attention model by incorporating weather information and preserving sequential information of the visited locations which can model non-consecutive visits and results in a more personalized POI recommendation system. III. PROPOSED CARAN MODEL In this section, we first formulate the problem and then explain different layers used in the CARAN architecture. CARAN mainly consists of four layers: 1) Input layer, consisting of contextual information of the check-ins and spatiotemporal matrices, 2) Embedding layer, that converts contextual information and spatiotemporal matrices into their latent vector representation through embedding, 3) Attention layer, applies recency attention and predicts probability of a POI being selected as the next recommended POI, and finally, 4) Output layer, consisting of two phases, during training the model performs negative sampling to compute loss and during testing the model recommends top-k POIs. The overall architecture of CARAN is shown in Fig. 2. A. PROBLEM FORMULATION Let us consider a location based social network, where U = {u 1 , u 2 , . . . , u |U | } is the set of users and L = {l 1 , l 2 , . . . , l |L| } is the set of POI locations. Each l i ∈ L is geocoded using a pair (lat i , lon i ) indicating the latitude and longitude of l i respectively. In addition to latitude and longitude, each POI contains categorical information (e.g., park, museum, restaurant, etc.) which is represented using the set V = i.e., m u i <n, then we perform zero padding on the right of the sequence which is later masked off in the model. Otherwise, we take the last n check-ins of user u i . Given the historical check-in sequence S u i of the user u i , the goal of POI recommendation is to suggest top-k relevant POIs that the user u i might be interested in. For the ease of understanding, all the notations used in our paper are described in Table 1. B. INPUT LAYER In the input layer, contextual information of the users are collected and two spatiotemporal matrices are formulated. 1) CONTEXTUAL INFORMATION Given the user id u i and checked-in location l i at time t i , we first retrieve the category v i of l i . Then we retrieve the weather information w i using the OpenWeatherMap API (https://openweathermap.org/) from (lat i , lon i ) of l i and time t i . 
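As an illustration of this lookup step, the snippet below queries the OpenWeatherMap current-weather endpoint for a coordinate and reads back the coarse condition label. This is only a sketch of how such a call could be made, not the authors' pipeline; the endpoint path and parameter names follow the service's documented current-weather API, the API_KEY value is a placeholder, and attaching weather to a historical check-in timestamp t_i would in practice require a historical-data endpoint rather than the current-weather call shown here.

```python
import requests

API_KEY = "YOUR_OPENWEATHERMAP_KEY"  # placeholder; an account-specific key is required

def weather_condition(lat: float, lon: float) -> str:
    """Return the coarse weather category (e.g. 'Clear', 'Rain', 'Clouds')
    reported by OpenWeatherMap's current-weather endpoint for a coordinate.

    Note: this returns the weather *now*; mapping a historical check-in
    timestamp t_i to weather would need an archive/historical endpoint.
    """
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"lat": lat, "lon": lon, "appid": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    payload = resp.json()
    # The response carries a list of weather blocks; the 'main' field holds
    # the coarse category used as the context feature w_i.
    return payload["weather"][0]["main"]

# Example: w_i for a check-in located in midtown Manhattan
# print(weather_condition(40.7549, -73.9840))
```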
The API responses with the current weather condition of that location which is one of the ten following categories: Clear, Rain, Clouds, Haze, Mist, Fog, Thunderstorm, Snow, Drizzle, and Smoke. The contextual information for the i'th check-in can be represented as, For all the users and their check-in information, we accumulate the contextual information for passing it into the next layer. 2) SPATIOTEMPORAL MATRICES For modeling spatial and temporal interval between two locations, we define two spatiotemporal matrices: 1) trajectory spatiotemporal matrices, and 2) candidate spatiotemporal matrices. Each entry of these matrices represent the spatial distance or temporal interval between two check-ins. Trajectory spatiotemporal matrices attempt to find the correlation between non-consecutive check-ins, while the candidate spatiotemporal matrices focus on the distances and temporal intervals between all check-ins and next possible POI. For trajectory spatiotemporal matrices, the temporal interval between i'th and j'th check-in is calculated using |t j − t i | FIGURE 2. Proposed CARAN architecture for POI recommendation system. and the distance between two locations l i and l j are calculated using H (l i , l j ) indicating the Haversine distance [55] for great-circle distance of Earth. Given the check-in sequence of length n, the trajectory spatial matrix M S ∈ R n×n and trajectory temporal matrix M T ∈ R n×n is calculated as, In order to assist in computing probability of final recommended POI and for incorporating PIF information, we form candidate spatiotemporal matrices. For spatial candidate matrix, we compute the spatial distance between all candidate POIs i ∈ [1, |L|] and all checked-in locations j ∈ [1, n] using H (l i , l j ). For temporal candidate matrix, we compute the time interval between i'th check-in and (n + 1)'th check-in using |t i − t (n+1) | which is later broadcast along the row |L| times for converting into two-dimensional matrix and incorporating with the spatial candidate matrix. So, the spatial candidate matrix M S ∈ R |L|×n and temporal candidate matrix M T ∈ R |L|×n is calculated as, C. EMBEDDING LAYER In this layer, we perform embedding of contextual information and spatiotemporal matrices for converting them into their latent vector representation. 1) CONTEXT EMBEDDING Given the contextual information c i = (u i , l i , t i , v i , w i ), here we perform multi-modal embedding for encoding contextual information. We use embedding technique instead of one-hot encoding because total category of each contextual information (i.e., users, locations) can be very large and will take huge computation and memory power. Besides, one-hot encoding will only increase the sparsity of the data. Hence, we choose to perform embedding of the contextual information considering embedding dimension d model = d. In order to reduce sparsity, we map the week of the day by dividing the continuous time into slots of 7 × 24 = 168 hours. This discretization across the time domain helps the model to learn user mobility throughout the week. Then, individual context is embedded and added together to form the embedded context E(c i ) ∈ R d as shown in (5). Context embedding is performed on the n historical check-in of S u i . In order to maintain the order of the check-in sequence, we perform positional encoding [17] to the embedded context. For check-in position i ∈ [1, n], and embedding dimension j ∈ [1, d 2 ], the positional encoding PE ∈ R n×d is computed as in (6). 
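As a rough illustration of the quantities just introduced, the NumPy sketch below builds the pairwise trajectory interval matrices of (1)-(2) using the Haversine great-circle distance for H(l_i, l_j), and a sinusoidal positional encoding in the spirit of (6). The equations themselves are not reproduced in this excerpt, so the encoding is assumed to follow the standard Transformer formulation of [17], and all function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance H(l_i, l_j) in kilometres."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def trajectory_matrices(lats, lons, times):
    """Pairwise spatial (km) and temporal (hours) interval matrices, both n x n.

    `times` are assumed to be Unix timestamps in seconds.
    """
    n = len(times)
    M_S = np.zeros((n, n))
    M_T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            M_S[i, j] = haversine(lats[i], lons[i], lats[j], lons[j])
            M_T[i, j] = abs(times[j] - times[i]) / 3600.0  # seconds -> hours
    return M_S, M_T

def positional_encoding(n, d):
    """Sinusoidal positional encoding PE in R^{n x d} (Transformer-style, d even)."""
    pos = np.arange(n)[:, None]            # check-in positions (0-indexed here)
    j = np.arange(0, d, 2)[None, :]        # even embedding indices
    angle = pos / np.power(10000.0, j / d)
    pe = np.zeros((n, d))
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe
```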
Then for each check-in position i ∈ [1, n], the context embedding E(c i ) ∈ R d , and positional encoding PE(i) ∈ R d , the final embedded context of user's historical check-ins E(C) ∈ R n×d is calculated as in (7). 2) MATRIX EMBEDDING For the spatiotemporal matrices, if we try to learn continuous geographical distances and time intervals, then it will easily lead to sparse representation. We partition spatial distances and temporal intervals into discrete bins of hundred meters and one hour as the basic unit respectively. To reduce sparsity, it is possible to perform discrete bin embedding of the matrices. However, the latest study suggests performing linear interpolation for improved performance [56]. Hence, a linear interpolation embedding is performed on each element of the spatiotemporal matrices for smooth representation of the intervals with dimension d. Finally, the spatial matrix and temporal matrix are added together by taking summation of the last embedded dimension to get the embedded matrix representation. In (8), we show how trajectory matrix embedding E(M ) ∈ R n×n is obtained from their spatiotemporal matrices. Similar calculation is performed on candidate spatiotemporal matrices to obtain candidate matrix embedding E(M ) ∈ R |L|×n . where, S and T indicate the upper bound of spatial and temporal intervals respectively, and γ S and γ T represent the lower bound of spatial and temporal intervals respectively. D. ATTENTION LAYER In this layer, we perform recency attention and compute the final candidate probability for recommendation. 1) RECENCY ATTENTION We consider latest n visited locations of the user and using self-attention mechanism find out which visits should be given more focus on when generating the recommendation. This module combines user's trajectory matrix embedding E(M ) with the sequential context embedding E(C) and gives updated representation of each visit which can apprehend both the long-term and short-term dependencies. Furthermore, we perform masking on user's check-in sequence if m u i is less than n. As we mentioned in the problem formulation, we perform zero padding on the right, the Boolean mask ∈ (0, 1) n×n which is constructed using (9) will only contain ones on the upper-left portion of the mask. Later, we multiply this mask with the attention output so that the padding values do not impact final prediction. Now, given the user's context embedding matrix E(C) ∈ R n×d , along with the embedded trajectory matrix E(M ) ∈ R n×n , recency attention R(u i ) ∈ R n×d of the user u i is computed using the self-attention mechanism as shown in (10), where, Q, K , and V are query, key and value of self-attention which is obtained by multiplying E(C) with learnable weights W Q ∈ R d×d , W K ∈ R d×d , and W V ∈ R d×d respectively. Note that, in (10) every multiplication is a matrix multiplication except for the multiplication between and the output of softmax which is an element wise multiplication. 2) CANDIDATE PROBABILITY In this module, we compute the probability of a POI being recommended from all |L| locations using the output of the recency attention R(u i ). From the embedding layer, we retrieve location embedding E(l i ) where i ∈ [1, |L|]. Now, given the output of recency attention R(u i ) ∈ R n×d , candidate matrix embedding E(M ) ∈ R |L|×n , and location embedding E(L) ∈ R |L|×d , the probability of all POIs P(L) ∈ R |L| is computed using the formula shown in (11). 
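To make these attention computations concrete, here is a minimal forward-pass sketch of the recency attention of (9)-(10) and one plausible reading of the candidate scoring in (11). The equations are referenced but not reproduced in this excerpt, so details such as the scaling constant, how the embedded trajectory matrix enters the attention scores, and the exact form of (11) are assumptions, and the randomly initialised arrays merely stand in for the learned parameters W_Q, W_K, W_V, and W_p.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def recency_attention(E_C, E_M, valid_len, rng):
    """Self-attention over the embedded check-in context E_C (n x d), biased by
    the embedded trajectory matrix E_M (n x n) and masked so that right-padded
    positions beyond valid_len contribute nothing."""
    n, d = E_C.shape
    W_Q, W_K, W_V = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = E_C @ W_Q, E_C @ W_K, E_C @ W_V
    scores = (Q @ K.T + E_M) / np.sqrt(d)      # add spatiotemporal bias, then scale
    mask = np.zeros((n, n))
    mask[:valid_len, :valid_len] = 1.0         # ones on the upper-left block, as in (9)
    attn = mask * softmax(scores, axis=-1)     # element-wise mask of the attention weights
    return attn @ V                            # R(u_i) in R^{n x d}

def candidate_scores(R_u, E_M_cand, E_L, W_p):
    """Unnormalised preference over all |L| candidate POIs, combining the
    attention output with location embeddings and the candidate matrices."""
    logits = E_L @ R_u.T + E_M_cand            # |L| x n
    return softmax(logits, axis=-1) @ W_p      # |L| x 1, weighted by learnable W_p

# Toy shapes: n = 10 check-ins, d = 16 dims, 100 candidate POIs
rng = np.random.default_rng(0)
n, d, num_pois = 10, 16, 100
R_u = recency_attention(rng.standard_normal((n, d)), rng.standard_normal((n, n)),
                        valid_len=7, rng=rng)
P_L = candidate_scores(R_u, rng.standard_normal((num_pois, n)),
                       rng.standard_normal((num_pois, d)), rng.standard_normal((n, 1)))
print(P_L.shape)  # (100, 1): one score per candidate POI
```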
where, W p ∈ R n×1 is a learnable weight matrix that is multiplied with the output of softmax. W p learns to give focus on locations which are more suitable of being selected as the next recommended POI. E. OUTPUT LAYER The output layer works in two separate phases, one for training the model and another for testing the accuracy of recommended POIs. 1) TRAINING PHASE Before training the model, we need to define a loss function that the model will try to optimize. In POI recommendation system, total number of locations are very large compared to the number of locations to be recommended by the model. So, due to the imbalance distribution of positive and negative class, we cannot compute the loss function for every predicted classes. Because in this way, the model will only focus on the negative classes for reducing total loss and recall rate will drop as well. In order to resolve this, we perform negative sampling of the predicted candidate probability P(L). We randomly sample Q = {q 1 , q 2 , . . . , q µ } POIs at each training step for computing the loss. Here, µ indicates the number of negative samples and can be considered as a hyperparameter of our model. After every iteration of the training, we also update the random seed of negative sampler. Given the candidate probability, P(L) ∈ R |L| , target location l t , and negative samples Q ∈ N µ , the loss is computed as shown in (12). [1,µ] The full procedure of computing loss from candidate probability is presented in Algorithm 1. In the testing phase, we select top-k probable POIs that is recommended by the model. Then we evaluate and compare our model with other POI recommendation frameworks. IV. EXPERIMENTS In this section, we carry out experiments on three real world datasets to assess the proposed CARAN model. First, we explain the datasets used in our experiment. Then we show the trainable parameters of our model which is followed by evaluation and comparison of our model with other baseline models. Then we present the impact of various contextual information in recommendation performance. Finally, we discuss the stability of our model followed by visualization of positional encoding and recency attention layer. A. DATASETS For evaluating the proposed model, we use three public LBSNs datasets: NYC, TKY, and Gowalla. Here, NYC and TKY datasets are collected from Foursquare locating in New York and Tokyo city respectively. Gowalla is another widely used global scale LBSN dataset which is popularly used for evaluating POI recommendation models. Similar to other models, we remove POIs that are visited by fewer than 10 users. The statistics of the dataset after preprocessing are presented in Table 2. For Gowalla dataset, there is no categorical information of POI, hence we consider the category embedding as zero for Gowalla dataset. B. BASELINE METHODS We compare our proposed CARAN model with the following baseline methods. • STGN [16]: An enhanced LSTM model for capturing user's preferences. It uses two pairs of time gate and distance gate for incorporating sequential information. • LSTPM [54]: RNN based method that uses non-local network for long-term preference and geo-dilated RNN for the short-term preference. • TiSASRec [49]: Self-attention based network that models time intervals and timestamp of interactions. However, it does not consider any spatial information. • GeoSAN [28]: A geography aware self-attention based model that performs hierarchical gridding of the map without explicitly considering spatiotemporal intervals. 
• STAN [22]: Uses a bi-layer attention model for capturing user's preference from historical trajectory. C. MODEL TRAINING In µ = 10. We tuned our model to reach on these optimal hyperparameters which is described later in model stability section. D. EVALUATION METRICS We use Recall@k as the evaluation metric which is popularly used in evaluating POI recommendation models. Our model recommends top-k POIs as the output and will contribute to the recall rate if the target POI is within the recommended top-k POIs. For a user u i with m u i check-ins, and CARAN's top-k prediction by considering first (m u i − 1) check-ins of u i , the Recall@k is computed as, E. MODEL PERFORMANCE We test CARAN on the datasets and compare recommendation performance with the baseline methods using Recall@5 and Recall@10. The comparison result is presented in Table 4. We can observe that CARAN considerably outperforms all the baseline methods with 7-14% improvement in the recall rate. It is clear from the literature that, collaborative filtering and Markov chain based methods cannot fully utilize long term sequential nature of the visit sequence which results in low recommendation performance. Hence, we do not show such methods in the comparison. Compared to STGN, DeepMove and LSTPM performs well among recurrent network based models due to their capability of capturing periodicity. TiSASRec only considers relative timestamps of the check-ins and ignores the impact of spatial distance. GeoSAN performs hierarchical gridding for partitioning the map which cannot reflect spatial distance smoothly. STAN follows a bi-attention layer mechanism but fails to consider the sequential property. Besides, none of them considered the impact of weather condition on user's preference. On the other hand, CARAN considers rich contextual information of user's check-in sequence by incorporating weather information. It applies a recency based attention mechanism that can give focus on historical trajectory based on user's preferences. Through positional encoding, the model is also able to capture relative order of the check-ins. As a result, the proposed CARAN model exceeded the current state-of-the-art methods with significant improvement. F. ABLATION STUDY To measure the impact of various contextual information on the recommendation result, we use the following variations of CARAN: • CARAN-C: Remove the impact of categorical information by setting the category embedding output to zero (i.e., E(v i ) = 0). • CARAN-PE: We remove the positional encoding part and pass the output of context embedding directly to the recency attention module. • CARAN-W: To ignore the impact of weather condition, we set the output of weather embedding as zero (i.e., E(w i ) = 0). • CARAN-W-PE: Here, we remove both the positional encoding and the weather embedding. • CARAN-MM : To observe the impact of trajectory and candidate spatiotemporal matrices, we do not perform addition of these matrices in the attention mechanism. The recall rate of these variations are shown in Fig. 3. We can see that, removing the impact of categorical information (CARAN-C) of each location, performance slightly decreases. Ignoring positional encoding (CARAN-PE) and weather information (CARAN-W) result in low recommendation performance. Compared to these variations, CARAN-W-PE gives lower performance by ignoring both weather condition and positional embedding. 
If we do not incorporate the spatiotemporal information (CARAN-MM ), the model fails to learn the distance and temporal preferences of the user and gives the lowest performance in this case. Hence, we can say that, all the contextual information are contributing together to provide relevant recommendation to the user. G. MODEL STABILITY Variations in dimension size d model and number of negative samples µ can impact the recommendation performance. We analyze the effect of these two parameters on recall rate, Recall@5. In Fig. 4, the impact of number of embedding dimension is shown. The figure demonstrates that the suitable value for d model is 60. For NYC and TKY dataset, the model seems to become stable earlier at 50. However, for Gowalla dataset the model is stable after 60 due to large number of users and locations. So, we can say the model will generalize for embedding dimension greater than 60. The impact of number of negative samples on recommendation result is shown in Fig. 5. From the figure we can see that, for number of negative samples less 15 the model gives good performance. After that, the recall rate decreases for NYC and TKY as the model gives more focus on negative samples compared to positive ones. However, due to the large number of POIs in Gowalla dataset, the rate of decrease in performance is not as high as the other two datasets which is an indication that for larger dataset we can consider larger value of µ. For our model, negative sampling is crucial as the recommendation performance may decrease drastically if the value of µ is larger than specific threshold. H. MODEL VISUALIZATION We use attention mechanism in our model which does not take order of the check-in sequence into account. In order to incorporate sequential information with the model, we use positional encoding of the input check-ins. In Fig. 6, embedding output of the positional encoding is shown. Cell (i, j) of the matrix indicates the i'th embedding output for the j'th check-in position. For example, if we take the values of positional encoding at column 0 from each row, we will get the encoding of check-in position 0. We see an inherent pattern in the positional encoding which gets added with VOLUME 10, 2022 the context embedding to reflect the relative position of that check-in within the user's historical trajectory. In CARAN, we use recency attention that learns to give focus on relevant past locations for predicting next POI. In order to understand the impact, we fed a user's check-in sequence from NYC dataset into the model and visualize the output of recency attention layer. For better understanding we take a slice of 10 × 10 grid from the output of the attention layer which is shown in Fig. 7. The (i, j)'th cell of the output image represents the amount of focus given on the i'th check-in at the j'th embedding output. We can see that the attention layer gives different amount of attention on different check-ins as indicated by each row. After finding out the category of these sliced check-ins we see that, the row numbered 0, 3, 4, 5, 8, 9 represents Cafe, Burger Shop, Taco place, Italian restaurant, Gastropub, and Italian restaurant respectively. Whereas the row numbered 1, 2, 6, 7 represents Hardware store, Clothing store, Park, and Hardware store respectively. The target prediction category for that user was Bakery, as a result the model is providing more focus on the food categories for generating the recommendation. 
This gives a clear indication that the proposed recency attention layer has a considerable impact on the output prediction. V. CONCLUSION In this paper, we presented an effective and novel POI recommendation system, named CARAN, that considers rich contextual information and performs recency based attention for modeling user's preference. We use matrix representation for finding the relevance between non-consecutive locations within the user trajectory. The recency attention mechanism helps CARAN to learn which visits to give more focus on and can capture long term dependencies. To maintain the relative order of the check-in sequence, we incorporate positional encoding with the embedded check-in context of the user. We also use linear interpolation that replaces the hierarchical gridding mechanism for smooth representation of the spatiotemporal intervals. The negative sampler used for computing cross-entropy loss outperforms the traditional binary cross-entropy loss computation technique. Experimental analysis shows that CARAN provide better recommendation by considering weather condition and incorporating positional encoding with the system. We also showed that, the model is stable and robust under hyperparameter variation. Specifically, in this paper we developed a context-aware POI recommendation system that improves the recall rate of state-of-the art models by 7-14% proving the superiority of CARAN. MD. BILLAL HOSSAIN (Member, IEEE) received the B.Sc. degree in computer science and engineering from the Chittagong University of Engineering and Technology (CUET), Bangladesh, where he is currently pursuing the M.Sc. degree in computer science and engineering. He was a Research Assistant at CUET, from December 2018 to July 2019, where he is currently working as a Lecturer with the Department of Computer Science and Engineering (CSE). He is very enthusiastic about competitive programming. During the academic year, he achieved noteworthy rank in many national and international level programming contests. His research interests include algorithms, machine learning (ML), and recommender systems. MOHAMMAD SHAMSUL AREFIN (Senior Member, IEEE) received the Doctor of Engineering degree in information engineering from Hiroshima University, Japan, with support of the scholarship of MEXT, Japan. As a part of his doctoral research, he was with the IBM Yamato Software Laboratory, Japan. He is affiliated with the Department of Computer Science and Engineering (CSE), Chittagong University of Engineering and Technology, Bangladesh. Earlier, he was the Head of the Department. He has more than 110 refereed publications in international journals, book series, and conference proceedings. His research interests include privacy preserving data publishing and mining, distributed and cloud computing, big data management, multilingual data management, semantic web, object-oriented systems development, and IT for agriculture and environment. He is a member of ACM and a fellow of IEB and BCS. He is the Organizing Chair of BIM 2021, the TPC Chair of ECCE 2017, the Organizing Co-Chair of ECCE 2019, and the Organizing Chair of BDML 2020. He visited Japan, Indonesia, Malaysia, Bhutan, Singapore, South Korea, Egypt, India, Saudi Arabia, and China for different professional and social activities. He is an author of two books, one book chapter, and one patent. 
His research interests include multimedia security, digital watermarking, steganography, multimedia data compression, sound synthesis, digital image processing, and digital signal processing. He is a member of the technical committees of several international conferences. He serves as a reviewer for various reputed journals including IEEE, IEICE, Elsevier, and Springer. TAKESHI KOSHIBA (Member, IEEE) received the B.E., M.E., and Ph.D. degrees from the Tokyo Institute of Technology, in 1990, 1992, and 2001, respectively. He is a Full Professor with the Department of Mathematics, Faculty of Education and Integrated Arts and Sciences, Waseda University, Japan. His research interests include theoretical and applied cryptography, the randomness in algorithms, and quantum computing and cryptography.
8,614.2
2022-01-01T00:00:00.000
[ "Computer Science" ]
Pattern Recognition on Vehicle Number Plates Using a Fast Match Algorithm - Computer vision is one of the fastest-developing application areas in the world, prompting the development of many new algorithms. Before an algorithm can be used in an application, it must be tested to establish how effectively and efficiently it handles the cases it is given. Many traffic systems have adopted computer vision; they need methods that are fast and that work in every condition, because every passing vehicle needs to be recognized. In this research, the Fast Match algorithm was chosen because it handled the tests given and showed that many of the images have a similarity with the template, so good accuracy can be achieved with this algorithm. For example, one of the samples had a SAD value of 0.5 and an overlap error of 0.5 and could be processed on a standard computer in just a couple of seconds, indicating a degree of similarity between the template and the original image. INTRODUCTION One of the requirements for a vehicle to be driven legally in Indonesia is a TNKB (motorized vehicle number sign), better known as a number plate. Each number plate has the same physical characteristics [1]: for public vehicles it is black, made of iron, and carries uniquely printed characters and numbers on a black background [2]. It serves as a sign that the vehicle has official documents and has been registered with the police. Vehicle number recognition [3]-[5] is an interesting topic and a core component of any system that requires vehicle number data, such as parking systems, toll systems, and even traffic systems handling traffic violations [1], [6]. Keeping up with the times, it is hoped that such tasks can be automated, so an application is built that can recognize a number plate from an input image and retrieve its data [7]-[9]. Automation is one of the important elements of today's informatics era; the emergence of many kinds of technology has created a very fast flow of information, with benefits such as the ease with which a person can take photos in everyday life. Out of this came image processing techniques that can automatically recognize the shape, color, and characters of an object [8]. Image recognition is a series of processes consisting of several methods, whose aim is to extract information from an image by exploiting its shapes, features, and colors, provided these are explicit enough to support a decision. Image recognition requires several steps before an image can be analyzed. One of them is creating templates, manually or automatically, from real-world input images taken with a cellphone camera. One of the algorithms used is the Fast Match algorithm, introduced by [10]. It extends the template matching algorithm by handling 2D transformations of the input image. Template matching uses the template as a reference when locating the parts of the image that need to be analyzed [11]. This method has a fairly short running time when recognizing license plate images [12]. However, besides its fairly short running time, the algorithm is sensitive to the noise level in the image, so filtering is needed [5].
So, to cover the weaknesses of the template matching algorithm, Fast Match: Affine Template Matching was proposed by [10]. The algorithm was created to suppress the SAD (sum of absolute differences) error in the measurement by sampling only a small subset of the pixels in the input image; its weakness lies with highly textured images. Drawing on this background in image recognition, the algorithm used here is the Fast Match algorithm, chosen for recognizing images in which the motorized vehicle shows at least part of its number plate. In writing this study, the authors used several previous works as references. According to [5], this algorithm still needs further exploration, which makes it very interesting; it is stated that improvements such as filtering to reduce the noise level can make the algorithm work much better, and it is hoped that future studies will handle not only 2D images but also video in real time. The next work [13] argues that the availability of training data shortens the feature extraction process when calculating correlation values with the training data; the main method used in that work is convolution, a step for extracting features from imagery. Over the 20 trained images, it reached an accuracy of 85%. Existing research shows that the position of the letters in the image can affect the weight value, and the weight value in turn determines whether the algorithm succeeds or fails in detecting images. Next is the research of [14], which improves the template matching algorithm by adding a gradient search, changing the shape of the template from a square to a circle, and using polar coordinates to speed up the detection process. From that research, it can be concluded that there is a time difference of about 10 seconds between the conventional template matching algorithm and the gradient-search variant. According to [15], their research converted a 2D image into a 1D signal by summing the intensity values of the image both horizontally and vertically; the second stage applies template matching on the 1D signals, and the process ends with a decision based on a similarity function. Adding Gaussian noise affects the decision making, and the method reached 92.1% accuracy, higher than the three 2D algorithms it was compared with (NCC, SAD, and CTF). The last work, and the one implemented in this study, is the original research [10] on the use of the Fast Match algorithm for recognizing images. The algorithm is a development of template matching, whose limitation is that images become unreadable under high noise, and it is intended to reduce both the SAD (sum of absolute differences) value and the overlap error value. Based on this literature review, we propose applying the Fast Match algorithm to the recognition of number plate patterns on motor vehicles, to test to what extent the algorithm can work; images captured under different conditions will greatly affect its performance. The results reported are the SAD value, the overlap error, and a display of the template together with the image and the detected image area. This research was conducted in Matlab.
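Before the data collection and testing procedure, it may help to see the baseline that Fast Match improves on. The study itself was carried out in Matlab; the NumPy sketch below only illustrates plain SAD-based template matching with a translation-only sliding window, it does not implement Fast Match's affine transformation sampling, and all names are illustrative.

```python
import numpy as np

def sad(window: np.ndarray, template: np.ndarray) -> float:
    """Sum of absolute differences between an image window and the template,
    averaged over the pixels so values are comparable across template sizes."""
    return float(np.abs(window.astype(np.float64) - template.astype(np.float64)).mean())

def best_match(image: np.ndarray, template: np.ndarray):
    """Exhaustive translation-only template search (no affine sampling):
    slide the template over the grayscale image and keep the lowest SAD."""
    ih, iw = image.shape
    th, tw = template.shape
    best_pos, best_score = None, np.inf
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            score = sad(image[y:y + th, x:x + tw], template)
            if score < best_score:
                best_pos, best_score = (y, x), score
    return best_pos, best_score  # (row, col) of the top-left corner, SAD value
```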
Data Collection In this study, 50 license plate images of unequal resolutions were used and assigned to the conditions normal, blurred, and low light, as shown in Figure 1. Fast Match Fast Match is an updated form of template matching in which the smoothness of the image is exploited to handle 2D transformations of the image. The algorithm suppresses both the SAD (sum of absolute differences) [16] value and the overlap error value. The idea introduced by [10] is to sample the space of affine transformations, evaluate each sampled transformation, and return the sample closest to the template, as illustrated in Figure 1. Research Stages The tool used to analyze this algorithm is Matlab; the inputs and outputs are then analyzed under the existing conditions. Testing For testing purposes, the algorithm is evaluated through the SAD value, the overlap error value, and the output in the form of a cropped image. SAD (sum of absolute differences) is a widely used measure of the similarity of two images: the pixel values of the two images are compared at one position, and the comparison is moved across the entire image [16], as in (1). Another test uses the overlap error. This is a mapping step that determines how exactly the detected template position on the image matches the ground-truth position, i.e., the area bounded by the green and magenta lines shown in Figure 4. RESULTS AND DISCUSSION The images used in this research are photos taken with a camera, showing a motorbike or car with a visible number plate, either from the front or from the side. There are 50 images: 12 are cars photographed from the front or rear, and the remaining 38 are motorbikes photographed from the front, rear, or at an angle. Each image has a different resolution, and several conditions such as low light and blur are included. The images were named manually in numerical order and stored in JPG format to make sorting easier. The following are the results of analyzing the Fast Match algorithm under the various conditions that can affect the measurements and results. Template Dimensions One of the sample images has a resolution of 1131 x 747, while the template made from it has a resolution of 374 x 374. The first step is to determine the dimensions of the template; a comparison between template dimensions is given in Figure 4. It can be seen from the table that the smaller the template dimensions, the faster the processing time, and the resulting SAD value is also smaller because the covered image range is narrower, as shown in Table 1. Light Intensity The analysis of images with low light intensity and images with high light intensity is shown in Table 2. From the measurements, it can be seen that the level of similarity, judged by the SAD value, is quite high for images with low light intensity, whereas images with high light intensity stay at a constant value of 0.3% relative to the predetermined template.
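For the overlap-error check described in the Testing subsection above, a minimal sketch is given below. It assumes the usual definition of overlap error as one minus the intersection-over-union of the detected and ground-truth regions, and it simplifies both regions to axis-aligned rectangles, whereas the regions returned by an affine match are in general quadrilaterals.

```python
def overlap_error(box_a, box_b) -> float:
    """Overlap error between two axis-aligned boxes (x0, y0, x1, y1),
    taken as 1 - intersection-over-union; 0 means a perfect overlap."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return 1.0 - inter / union if union > 0 else 1.0

# Example: detected region vs. ground truth, both given as corner coordinates
print(overlap_error((10, 10, 110, 60), (20, 15, 120, 65)))
```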
CONCLUSION Based on the series of processes carried out, several points can be made. The geometry and positioning of the template greatly affect the SAD (sum of absolute differences) and overlap error values: the wider the template is made, the greater the distance between the original image and the template, so the similarity to the template image becomes very low. The distance of the object being read also affects the SAD value: compared to a close object, an object that is slightly farther away shows a much better level of similarity to the template. Light intensity that is sufficient but not too high makes this algorithm work very well, as evidenced by an overlap error that is not too far from the template and a SAD value that is not that large.
2,606.8
2021-12-06T00:00:00.000
[ "Computer Science" ]
Carrier Injection and Transport in Blue Phosphorescent Organic Light-Emitting Device with Oxadiazole Host In this paper, we investigate the carrier injection and transport characteristics in iridium(III)bis[4,6-(di-fluorophenyl)-pyridinato-N,C2′]picolinate (FIrpic) doped phosphorescent organic light-emitting devices (OLEDs) with oxadiazole (OXD) as the bipolar host material of the emitting layer (EML). When doping Firpic inside the OXD, the driving voltage of OLEDs greatly decreases because FIrpic dopants facilitate electron injection and electron transport from the electron-transporting layer (ETL) into the EML. With increasing dopant concentration, the recombination zone shifts toward the anode side, analyzed with electroluminescence (EL) spectra. Besides, EL redshifts were also observed with increasing driving voltage, which means the electron mobility is more sensitive to the electric field than the hole mobility. To further investigate carrier injection and transport characteristics, FIrpic was intentionally undoped at different positions inside the EML. When FIrpic was undoped close to the ETL, driving voltage increased significantly which proves the dopant-assisted-electron-injection characteristic in this OLED. When the undoped layer is near the electron blocking layer, the driving voltage is only slightly increased, but the current efficiency is greatly reduced because the main recombination zone was undoped. However, non-negligible FIrpic emission is still observed which means the recombination zone penetrates inside the EML due to certain hole-transporting characteristics of the OXD. Introduction Organic light-emitting devices have attracted lots of attention in display and lighting applications due to the various advantages such as self-emission, flexible-substrate compatibility, and large-sized fabrication [1][2][3][4][5].For efficient use of the triplet exciton for electroluminescence, phosphorescent dopant is employed in the matrix as the emitting layer (EML) of the OLED [6][7][8][9][10][11][12][13][14][15].Contrary to conventional fluorescent dopant materials, dopant concentrations of phosphorescent ones are high due to the short range Dexter energy transfer process [16,17].This high dopant concentration in phosphorescent OLEDs in turn affects the carrier injection and transport characteristics [18][19][20].Due to the better energy level alignments, dopants may help carrier injection by transporting layers into the emitting layer.Some phosphorescent materials are found to exhibit very high carrier mobility comparable to conventional transporting materials [21,22].On the other hand, sometimes dopant materials can be viewed as trap sites in the EML [23][24][25][26][27]. Overall speaking, in phosphorescent OLEDs, carrier transport should be regarded as two-channel conduction.The carrier may hop through the matrix or dopant sites.Hopping between these two channels is also possible depending on the different dopant concentrations. 
Oxadiazoles typically exhibit electron transporting characteristics which can be used as the host for phosphorescent OLEDs [28][29][30][31].In our previous study, we demonstrated an efficient blue phosphorescent OLED consisting of iridium(III)bis [4,6- possessing good electron transporting characteristics and a wide bandgap [32].In this paper, we investigate the carrier injection and transport characteristics in the EML of such an OLED.With doping FIrpic inside the OXD, the driving voltage is decreased which means the dopants help carrier injection and transport.From EL spectra analysis, it can also be found that the recombination zone shifts toward the anode side.This means the dopant material improves the electron injection and transport capability.When increasing the driving voltage, the relative intensity at longer wavelength of the EL spectra increases and the recombination zone shifts from inside the EML toward the anode side [33,34].This means: (1) the hole penetrates inside the EML (at low voltage); and (2) electron mobility increases faster than the hole one with increasing voltage.To further understand the electrical properties inside the OLED, we fabricated three devices with part of the EML undoped.The total thickness of the EML is 30 nm, which consists of 10 nm undoped region and 20 nm doped region.The driving voltages of the three OLEDs are higher than in the uniform doped case, which means the dopants are beneficial for voltage reduction in this case.When the undoped layer is near the cathode side, the driving voltage increases significantly, which means the dopant assisted electron injection plays an important role in our device.Although the J-V characteristics are only slightly shifted for the case with the undoped region close to the anode, the current efficiency decreases a lot because there are no dopants inside the main recombination zone.However, there is still observable light emission, which means the hole is transported over the undoped region (pure OXD) and recombines with an electron. Device Performances of Blue Phosphorescent OLEDs with Different Dopant Concentrations Figure 2a shows the J-V characteristics of OLEDs with different dopant concentrations.Compared to the non-doped case (device 1), doping FIrpic (devices 2-6) in the EML helps to reduce the driving voltage, as shown in Table II.Driving voltage is lowest for device 5. 
Figure 2b,c shows the current efficiency (in terms of cd/A) and power efficiency (in terms of lm/W), respectively.Device 5 also exhibits the highest maximum efficiency.Dopant material inside the matrix plays some role for better conduction, which may be: (1) better hole injection; (2) better hole transport; (3) better electron injection; (4) better electron transport; and (5) higher recombination current.In the following discussion, we will see that better electron injection is the main reason for the voltage reduction, and better electron transport shows only a minor effect.Figure 3 shows the EL spectra for the six devices.For the case of undoped EML (device 1 in Figure 3a), a clear peak at 410 nm originates from mCP emission at lower driving voltage (7 and 8 V).On increasing the driving voltage to 10 and 12 V, this peak redshifts to 420 nm and disappears.On the other hand, the broad exciplex emission due to mCP/OXD interface around 530 nm increases monotonically with increasing driving voltage, which also implies that the recombination zone shifts from inside the OXD to the interface of mCP/OXD.At low driving voltage, the location of the recombination zone inside the OXD can be anticipated because this OXD exhibits certain hole-transporting characteristics.Besides, one can also see an increase around 430 nm with higher driving voltage, which comes from NPB emission, due to the recombination zone shift with increasing driving voltage.The multiple peak spectra in Figure 3a also implies that the recombination zone inside this device is quite broad, and covers NPB, mCP, and OXD.With increasing driving voltage, the recombination zone shifts toward the anode side.This means the electron mobility increases faster than the hole mobility with increasing driving voltage in the mCP and OXD layer.When the OXD is doped with 3% FIrpic (device 2 in Figure 3b), one can see the clear double peak emission at 474 and 502 nm from FIrpic emission, combined with some leakage at short wavelength (also shown in the inset of Figure 3b) and exciplex emission at long wavelength, which both increase with increasing driving voltage.For the double emission peak of Firpic, one can also see a relative decrease at the shorter wavelength (474 nm) with increasing driving voltage, which comes from the interference effect due to the recombination shift toward the anode side.This also implies that the recombination zone takes place inside the OXD layer at lower driving voltage (6 V) and shifts toward the anode side with increasing driving voltage.When the dopant concentration further increases (6%-15% in devices 3 to 6, as shown in Figure 3c-f), the main recombination takes place in the highly efficient FIrpic dopants. 
Relative increase of longer wavelength peaks of Firpic emission with increasing driving voltages are observed for all the devices, which means the recombination zone shifts from inside the OXD toward the anode side.Besides, when comparing the EL spectra at low driving voltages (6 V) with different dopant concentrations (3%-15%), the relative intensity at shorter wavelength increases from 3% to 6%, and then decreases from 6 to 15%.Not only increasing electron injection capability, FIrpic also plays some role in hole injection and transport.Under low concentration (3%-6%), holes inject through the FIrpic molecules which results in a blueshift.With further increasing FIrpic concentrations, the redshift comes from better electron injection and transport, together with the retardation of the holes.Comparing the J-V curves in Figure 2a, the driving voltage is lowest for device 5 (12%), due to the better electron injection and transport.However, when further increasing the dopant concentration to 15%, an increase in driving voltage is observed which comes from the hole-trapping effect of FIrpic on the OXD matrix.Figure 4a shows the NPB emission at short wavelength (~430 nm) at 12 V for devices 2-6.One can see that the leakage decreases with increasing FIrpic concentration due to the increase of the recombination center.Figure 4b shows the photoluminescence (PL) spectra of OXD thin films doped with different concentrations (0%-15%).With increasing dopant concentration, emission from the OXD decreases which is transferred to the FIrpic emission.Besides, the emission spectra from FIrpic are always kept the same because the whole film is lit up with very little optical interference effect under optical pumping. Device Performances of Blue Phosphorescent OLEDs with Different Doping Positions To further analyze the effects of FIrpic molecules on electrical and optical characteristics of OLEDs, we doped 9% FIrpic at different positions of the EML.As shown in Table 1, there is an intentionally undoped region next to ETL, at the center, and next to EBL for devices 7, 8, and 9, respectively.Figure 5a shows the J-V characteristics of devices 1, 4, 7, 8, and 9. 
The J-V curves of the selectively doped OLEDs (devices 7, 8, and 9) all lie between those of the undoped device (device 1) and the uniformly doped one (device 4), because the dopants assist the voltage reduction. For the OLED with the undoped region close to the ETL (device 7), the driving voltage is significantly higher, which means the FIrpic dopant plays an important role in facilitating electron injection. Comparing device 8 and device 4, a small voltage increase indicates better electron-transport characteristics in the FIrpic-doped OXD layer. As shown in Table 2, for device 9 with the undoped region close to the EBL, the driving voltage (9.88 V) is quite close to that of the uniformly doped one (9.67 V). When the undoped layer is close to the EBL (device 9), the hole trap (FIrpic) is removed there, which improves the hole mobility; on the other hand, the electron mobility decreases. These two effects compete, which results in the driving voltage of device 9 (9.88 V) being quite close to that of the uniformly doped device 4 (9.67 V). It also implies that the voltage-reduction phenomenon upon incorporation of FIrpic into OXD does not result from an increase of the recombination current. Figure 5b shows the current efficiency versus current density. When the undoped region is close to the EBL (device 9), the maximum efficiency decreases to 2.21 cd/A. This value is low because the main recombination is located near the EBL/EML interface. However, it is not negligible, which means the recombination zone is broad and extends at least 10 nm into the EML toward the ETL. This also explains why, when the undoped region is at the middle or close to the ETL (devices 8 and 7), the maximum efficiency is ~10 cd/A, still lower than that of device 4 (13 cd/A).
Table 2. Electrical and optical properties of the OLEDs.
Device   Voltage @ 5 mA/cm² (V)   Max. lm/W        Max. cd/A
1        11.6                     0.17 @ 5.5 V     0.77 @ 5 V
2        9.63                     3.33 @ 6 V       6.38 @ 6.5 V
3        9.87                     5.65 @ 6 V       11.4 @ 6.5 V
4        9.67                     6.29 @ 6 V       13 @ 6.5 V
5        8.78                     6.80 @ 6 V       14.2 @ 7 V
6        9.16                     6.29 @ 6 V       14.1 @ 7.5 V
7        10.6                     5.34 @ 6 V       10.6 @ 6.5 V
8        10                       5.47 @ 6 V       10.9 @ 6.5 V
9        9.88                     1.13 @ 6 V       2.21 @ 7 V
Figure 6a-c shows the EL spectra under different driving voltages for devices 7, 8 and 9, respectively. Figures 6a,b are nearly identical, except that the exciplex hump around 550 nm is larger for device 8, where the undoped region is at the middle of the EML. Because the FIrpic dopants act as recombination centers, when there is no dopant at the middle of the EML, more electrons may be transported to the EBL/EML interface for exciplex emission. Figure 6d shows the leakage emissions from devices 4, 7 and 8. The NPB leakage is nearly identical for devices 4 and 7, because the whole recombination zone (the 20 nm closest to the EBL) is doped with FIrpic. On the other hand, the NPB leakage is slightly higher for device 8, whose undoped region is at the middle of the EML: some electrons penetrate into the HTL and recombine there. For the EL spectra of device 9 with the undoped region close to the EBL interface, the spectral peak at shorter wavelength (474 nm) is higher than in devices 7 and 8, because the recombination zone is located farther from the EBL/EML interface (at least 10 nm away), which blueshifts the spectrum. Observing the light leakage at short wavelength, one can see that the NPB emission increases from 6 to 8 to 10 V and then decreases at 12 V. This implies that electrons penetrate into the NPB and may be transported further toward the anode without recombination; after all, NPB is also a good electron transporter. Hence, we may deduce that FIrpic inside the OXD serves as a recombination center that confines the carriers so that they do not penetrate into the EBL or the HTL.
Experimental Section
Our devices were fabricated on patterned indium-tin-oxide (ITO) substrates with a pixel size of 2 mm × 2 mm. After O2 plasma treatment, the devices were transferred to a multisource evaporator for organic-layer and cathode deposition under a vacuum of 5 × 10⁻⁶ torr, and then transferred to a glovebox for the encapsulation process. Electrical and optical characteristics were determined with a source meter (Keithley 2400) and a spectroradiometer (Minolta CS-1000), respectively. Photoluminescence (PL) of the organic thin films was measured with a Hitachi F-4500.
Conclusions
In summary, by analyzing the J-V characteristics, efficiency, and EL spectra of OLEDs with different FIrpic concentrations and doping profiles, we can conclude that: (1) FIrpic aids electron injection from the ETL into the EML; (2) it also helps electron transport; (3) on the other hand, at low dopant concentrations (3%-6%), it may also assist hole injection; and (4) additionally, it is a hole trap which retards hole transport.
Figure 2. Comparison of (a) current density versus voltage; (b) current efficiency (cd/A) versus current density; and (c) power efficiency (lm/W) versus current density for devices 1 to 6.
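As a rough consistency check of Table 2 and Figure 2, for a Lambertian emitter the power efficiency relates to the current efficiency as lm/W ≈ π·(cd/A)/V. The snippet below applies this to the tabulated values; the Lambertian assumption is ours, and since the maximum lm/W and maximum cd/A are recorded at different voltages the comparison is only approximate.

```python
import math

# (max cd/A, voltage at which it was recorded) from Table 2; Lambertian emission assumed.
devices = {2: (6.38, 6.5), 3: (11.4, 6.5), 4: (13.0, 6.5), 5: (14.2, 7.0), 6: (14.1, 7.5)}
reported_lmW = {2: 3.33, 3: 5.65, 4: 6.29, 5: 6.80, 6: 6.29}

for dev, (ce, volt) in devices.items():
    estimate = math.pi * ce / volt  # lm/W ~ pi * (cd/A) / V for a Lambertian source
    print(f"device {dev}: estimated {estimate:.2f} lm/W vs reported max {reported_lmW[dev]:.2f}")
```

The estimates land close to (and slightly below) the reported maxima, which is expected because the maximum power efficiency is reached at a somewhat lower voltage than the maximum current efficiency.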
3,299.8
2012-06-19T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
Surveillance of Human Astrovirus Infection in Brazil: The First Report of MLB1 Astrovirus
Human astrovirus (HAstV) represents the third most common virus associated with acute diarrhea (AD). This study aimed to estimate the prevalence of HAstV infection in Brazilian children under 5 years of age with AD and to investigate the presence of recently described HAstV strains, through extensive laboratory-based surveillance of enteric viral agents in three Brazilian coastal regions between 2005 and 2011. Using reverse transcription-polymerase chain reaction (RT-PCR), the overall HAstV detection rate reached 7.1% (207/2,913), with the percentage varying according to the geographic region: 3.9% (36/921) in the northeast, 7.9% (71/903) in the south and 9.2% (100/1,089) in the southeast (p < 0.001). HAstV was detected in all age groups. Detection rates were slightly higher during the spring. Nucleotide sequence analysis of a 320-bp ORF2 fragment revealed that HAstV-1 was the predominant genotype throughout the seven years of the study. The novel AstV-MLB1 was detected in two children with AD from a subset of 200 samples tested, demonstrating the circulation of this virus in both the northeastern and southeastern regions of Brazil. These results provide additional epidemiological and molecular data on HAstV circulation in three Brazilian coastal regions, highlighting its potential to cause infantile AD.
Introduction
Acute diarrhea (AD) remains a major cause of hospitalization and death in children worldwide, associated with almost 9.9% of the 6.9 million deaths among children under 5 years old in 2011 [1]. Among the AD etiologic agents, viruses play an important role: after rotavirus group A (RVA) and norovirus (NoV), human astrovirus (HAstV) represents the third most common virus found in children with AD and is thought to be involved in 0.5 to 15% of AD outbreaks [2,3]. HAstV infections are more frequent in children, the elderly and immunocompromised patients, causing blunting of the tips of the microvilli as well as disruption of the intestinal epithelium [2,4,5]. Human astrovirus belongs to the Astroviridae family and is a non-enveloped virus containing single-stranded, positive-sense, polyadenylated RNA 6.2-7.8 kilobases (kb) in length, encased within an icosahedral capsid. The genome contains three open reading frames (ORFs) designated ORF1a, ORF1b and ORF2. The first two ORFs encode non-structural proteins, including the viral proteinase and RNA polymerase, and ORF2 encodes the capsid protein precursor [2,3]. The Astroviridae family contains two distinct genera: Mamastrovirus and Avastrovirus. These viruses were originally classified into genera and species based only on the host of origin. Recently, the Astroviruses Study Group, in the 9th Report of the ICTV (International Committee on Taxonomy of Viruses), 2011 [6], proposed a classification based on the amino acid sequence of the capsid polyprotein, which is encoded by the most variable region of the genome [2,3]. In this classification, the Mamastrovirus genus associated with human disease is subdivided into four divergent species: MAstV 1 (the classical human astroviruses 1-8), MAstV 6 (AstV MLB1-3), MAstV 8 (AstV VA1 and VA3, also known as HMO-C and HMO-B, respectively) and MAstV 9 (AstV VA2, also known as HMO-A, and VA4) [2]. In general, HAstV-1 is the most common type found in children, but the predominant genotype can vary with time and location [3,7,8].
The recent identification of novel AstV in humans using highly sensitive methods [9][10][11][12][13], highlights the need to analyze the prevalence of these viruses to recognize their actual impact in public health, since they were found to be genetically related to animal viruses and some were isolated from patients with more severe diseases, such as encephalitis [14][15][16]. This study aimed to associate the HAstV infection to Brazilian children under 5 years old affected with AD in three Brazilian coastal regions in a seven years period (2005-2011), providing epidemiological and molecular characteristics of HAstV genotypes. Recently described strains of HAstV were also studied. Ethical statement This study, including consent procedures, was approved by the Ethics Committee of Oswaldo Cruz Foundation (CEP 311/06) and is part of an ongoing official Brazilian Ministry of Health surveillance of enteric pathogens to investigate the viral etiology of AD. In this context, stool samples from AD cases were obtained by request of in and out patients attending health centers, including hospitals and public central laboratories, following outbreaks or sporadic cases of AD. For this study, consent was obtained verbally from the parents or relatives guardians on behalf of the children who were enrolled in this study. Data are maintained anonymously. Clinical samples and studied areas For this study, stool samples of 2.913 patients under 5 years old with negative diagnosis for RVA and NoV were obtained between January 2005 and December 2011 from three different Brazilian coastal regions (northeast, southeast and south) with distinct demographic and environmental scenarios and were selected for HAstV investigation. The population of the southern and southeastern regions is of a higher income compared to that of the northeastern region, where much of the population do not have access to sanitation, and where AD-related mortality is higher [17]. Acute diarrhea was defined when children presented three or more liquid or semi-liquid evacuations in a 24-h period. Screening of novel HAstVs from stool samples To search for novel and recently described HAstVs, 200 stool samples obtained between January and December 2011 from patients under two years old were randomly selected from three different Brazilian coastal regions (northeast, southeast and south). All samples were tested previously and were negative for RVA, NoV and HAstV1-8. Statistical analysis Rates of HAstV positivity and association between the frequency of symptoms and the different age groups were compared using the chi-squared test. Statistical significance was established at p < 0.05. Nucleic acid extraction and detection Fecal suspension was prepared in a 10% (w/v) Tris/HCL/Ca 2 + (0,01M, pH 7.2). RNA Extraction was performed using QIAamp Viral RNA extraction kit (Qiagen) methodology according to the manufacturer's instructions. Detection of classic HAstV and complementary DNA (cDNA) was synthesized using the Applied Biosystems High Capacity cDNA Reverse Transcription kit (Foster City, Ca, USA) with a previous denaturation step with dimethylsulfoxide for 7 min at 97°C. For classic HAstV detection, a PCR protocol was performed using a set of specific primers that targeted the ORF2 region capsid (Mon 269/270 [0.4 μM of each primer]) in a final reaction mixture of 50 μl in volume, according to the PCR conditions described previously [18]. 
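As an illustration of the chi-squared comparison described under "Statistical analysis" above, the regional detection rates reported in the abstract (36/921 in the northeast, 71/903 in the south, 100/1,089 in the southeast; p < 0.001) can be recomputed with a standard test of independence. This is an illustrative recomputation, not the authors' original code.

```python
from scipy.stats import chi2_contingency

# HAstV-positive vs HAstV-negative counts per region, from the reported detection rates.
table = [
    [36, 921 - 36],      # northeast: 3.9% of 921
    [71, 903 - 71],      # south:     7.9% of 903
    [100, 1089 - 100],   # southeast: 9.2% of 1,089
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # p << 0.001, matching the reported result
```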
For novel AstV detection, screening was carried out by amplifying a 409 bp segment of the ORF1b (RNA polymerase) region of the AstV genome with consensus primers SF0073/SF0076 designed with a bias for detection of a highly divergent AstV (Classic HAstV 1-8, MLB1-3 and VA1-3) [13]. Positive samples were sequenced and submitted to BLAST (software) [19] searching and those that indicated MLB1 homology were then subjected to amplification and sequencing of the 402 bp segment of the MLB1 ORF2 specific genome region using primers SF0053 and SF0061 for characterization [13]. All amplifications were performed using Super-Script III One-Step RT-PCR System (Invitrogen), according to the manufacturer´s instructions. For all PCR reactions, final primers concentrations were of 0.2 μM for both forward and reverse primers, in a final reaction volume of 25 uL. The products were analyzed in a 2% agarose gels and visualized by ethidium bromide staining. Molecular sequencing For the molecular characterization of HAstV strains, PCR products were sequenced in both directions using the ABI Prism 3100 Genetic Analyzer and Big Dye Terminator Cycle Sequencing Kit v. 3.1 (Applied Biosystems, Foster City, CA) with the same primers used for the amplification reactions [13,18]. Centri-Sep columns (Princeton Separations, Foster City, CA) were used to purify the sequencing reaction products, according to the manufacturer's instructions. Phylogenetic analysis The chromatograms obtained from sequencing reactions were carried out using BioEdit software version 7.1.3.0 [20] for sequence editing. Sequence similarity searches were performed with sequences deposited in GenBank using the basic local alignment search tool (BLAST) software [19]. For phylogenetic analysis, reference sequences available for the same genomic region (ORF2) and size were selected taking into account the standards of each genetic type of human and/or environmental AstV and other similar sequences representing different geographical regions and time periods were also used. Sequences analyzes were performed using MEGA software version 5.1 [21] and multiple sequence alignments were carried out using the ClustalW program. Phylogenetic trees based on nucleotide (nt) sequences were obtained using the neighbor-joining method with Kimura two-parameter model with the bootstrap probabilities of each node calculated using 2,000 replicates. Values above 70 were considered significant and are represented in the trees. The nucleotide sequence data reported in this study is available in GenBank under the accession numbers: KM269039-KM269070, KM408170-KM408171 and KC294576-KC294577. Differences in the frequency of some symptoms were observed between HAstV-positive and HAstV-negative patients ( Table 2). The presence of mucus in feces was significantly more frequent in HAstV-positive children, particularly in the 1-11-month age group (p = 0.012). Molecular Characterization Phylogenetic analysis of a 320-bp nucleotide sequence (ORF2) revealed that almost all HAstV genotypes (except HAstV-5 and HAstV-7) circulated in these three Brazilian coastal regions during the studied period. Considering HAstV-positive sequenced strains, HAstV-1 was the most frequent genotype identified, characterized in 77.7% strains. Brazilian HAstV-1 strains grouped into two different genetic clusters (Fig 3A). 
Cluster 1 strains were closely related to each other (98.4-100% identity at the nt level) and were detected in all studied periods in different geographic regions of Brazil and displayed the highest identity Screening of novel HAstVs Concerning novel HAstV screening, two samples were positive and characterized as AstV-MLB1 (Fig 4). AstV-VA1 or other unusual HAstV rather than MLB1 were not detected. The two positive MLB1 samples were collected from a one year old child admitted in casualty due to AD in February and November 2011, in two different Brazilian regions: Maranhão State (northeast) and Rio de Janeiro State (southeast). Discussion This study reports an overview of the distribution of HAstV genotypes in three coast regions of Brazil. It was the first time that the recently novel HAstV was investigated, increasing 3-5% [22][23][24], lower than those reported in the present survey. Nevertheless, our data showed similar rates to those in other studies carried out in Brazil [8,[25][26][27] probably due to the evolution of molecular detection methods with increased sensitivity improving diagnosis. High HAstV positivity rates is normally described in AD outbreaks, such as in a native Brazilian population and in southeastern Mexico, where the prevalence of HAstV infection reached 56% and 28%, respectively [28,29]. As expected, HAstV were commonly detected in all age group analyzed. The prevalence rate observed tended to be higher among children aged 40-60 months, although there was no statistical significance between age group and HAstV infection. HAstVs affect predominantly the pediatric population and the age of children infected with classic HAstV varies, ranging from newborns to over 5 years old [2,8,30]. HAstV-related AD is generally considered mild and self-limiting, but it can be sufficiently severe to require medical intervention in immunocompromised or malnourished patients [2,4,14,15,31]. All children enrolled in this study showed signs and symptoms of AD requiring at least one outpatient visit, thus the HAstV detection in AD cases remains of clinical importance. Fever and vomiting were the most common signs and interestingly, mucus in the stool samples were observed and their presence was statistically significant in HAstV AD cases, mainly in infants aged < 1 year. Data were similar to another study conducted in Brazil [8]. Concerning seasonality, it has been proposed that the peak of HAstV detection in temperate climate countries occurs in the colder months and in tropical areas, the maximum incidence of HAstV infections tends to occur in the rainy season (2). The current study was performed in a seven years' time span, and a slight trend towards higher detection rates during the rainy season was observed. Similar to our data, the detection of HAstV in tropical climate countries was more frequent in the rainy season [29,32]. Previous studies carried out in Brazil showed no seasonality or a higher detection frequency in warmer months [8,24,25], but the seasonal HAstV pattern is controversial and depends on the climate, geographical region analyzed and the year of study. Our data demonstrate the variability of circulating HAstV genotypes and that most of the samples were characterized as HAstV-1, reinforcing data from other surveys from several regions of the world, including Brazil that reported the predominance of HAstV-1 [7,8,[24][25][26][33][34][35][36][37][38][39][40]. 
Phylogenetic analyses showed that Brazilian strains share high nucleotide identity with global strains described in different countries and continents, demonstrating that the introduction of strains occurs continuously and has a great impact on local and regional epidemiology. The recent discovery from humans of novel AstV strains that show genetic similarities to animal strains has aroused interest and the perception that these potentially zoonotic agents might have a greater impact on public health and on the etiological profile of AD [9][10][11][12]41]. In Brazil, the diagnosis of novel and classic HAstV are restricted to research center, not available in laboratories routine and their importance as etiological agents of AD in both sporadic cases and outbreaks remains poorly understood. Our investigation of novel HAstV species was conducted in 200 stool samples from children under two years old. The samples tested were selected at random from 2011 cohorts, with equal numbers tested previously negative for RVA, NoV and classic HAstV 1-8 taken from each Brazilian coastal region. This strategy allowed the identification of AstV-MLB1 in two patients. The unusual AstV-MLB1 strain has been identified in fecal samples of patients in Australia and in countries in North America, Asia, Africa and Europe [9,10,[42][43][44][45]. Temporal and geographical relationships were revealed following phylogenetic ORF2 nucleotide analyzes homology among AstV-MLB1 strains (Fig 4). In this study, no other unusual AstV could be detected. Similar results were observed in other surveillance studies [9]. AstV-MLB1 is not yet a well-established AD agent and its role in human health remains unknown. A recent control case study conducted in India could not determine the association between AstV-MLB1 and AD [43]. Another important point is the fact that a large number of samples remain without an etiologic agent, drawing attention to other pathogens that may be involved. Other enteric viruses such as Adenovirus or Sapovirus, as well as bacteria and parasites may also be responsible. This study reinforces the role of distinct HAstV genotypes in the etiological AD profile in different Brazilian coastal regions, describing the detection of AstV-MLB1 in Brazil for the first time and suggesting potential changes in the etiological AD profile considering new pathogens agents as well as novel AstV.
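For readers unfamiliar with the Kimura two-parameter model used for the neighbor-joining trees in the Methods, its pairwise distance is d = -(1/2)·ln[(1 - 2P - Q)·sqrt(1 - 2Q)], where P and Q are the observed proportions of transitions and transversions between two aligned sequences. A minimal sketch is given below; the example sequences are placeholders, and the trees reported here were actually built in MEGA.

```python
import math

PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def k2p_distance(seq1, seq2):
    """Kimura two-parameter distance between two aligned, equal-length DNA sequences.
    P = transition proportion, Q = transversion proportion (gaps/ambiguities skipped)."""
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    transitions = sum(1 for a, b in pairs if a != b and
                      ({a, b} <= PURINES or {a, b} <= PYRIMIDINES))
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    P, Q = transitions / n, transversions / n
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

# Toy example (not real HAstV data):
# print(k2p_distance("ACGTACGTACGT", "ACGTACATACGC"))
```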
3,350.2
2015-08-14T00:00:00.000
[ "Biology" ]
Closer to critical resting-state neural dynamics in individuals with higher fluid intelligence According to the critical brain hypothesis, the brain is considered to operate near criticality and realize efficient neural computations. Despite the prior theoretical and empirical evidence in favor of the hypothesis, no direct link has been provided between human cognitive performance and the neural criticality. Here we provide such a key link by analyzing resting-state dynamics of functional magnetic resonance imaging (fMRI) networks at a whole-brain level. We develop a data-driven analysis method, inspired from statistical physics theory of spin systems, to map out the whole-brain neural dynamics onto a phase diagram. Using this tool, we show evidence that neural dynamics of human participants with higher fluid intelligence quotient scores are closer to a critical state, i.e., the boundary between the paramagnetic phase and the spin-glass (SG) phase. The present results are consistent with the notion of “edge-of-chaos” neural computation. 1. My biggest concern is over the fitting of the Ising model. From my understanding of these methods, partly from an earlier paper by the authors (Ezaki et al. 2017, Ref. 37 here), data length is a big problem when the number of ROIs is high. With 264 ROIs there are 2^264 different states so the probability distribution over these states will necessarily be extremely sparsely sampled. In that earlier paper the guidance was that accuracy of the fit scales as a function of t_max/2^N, so that for typical fMRI values of t_max one can only look at N~5 accurately, or N~8 if pooling over 10 subjects (and perhaps N~12 with the 138 subjects here?). What is new here to overcome the earlier paper's advice that "Currently we cannot apply the method to relatively large brain systems (i.e. those with a larger number of ROIs)"? This would seem to be a big advance. 2. If I understand correctly, the single-subject estimation of position in the phase diagram proceeds by first calculating the phase diagram for the group concatenated data, then estimating each subject's sigma and mu via interpolation using empirically-calculated chi_SG and chi_uni. These latter quantities are presumably somewhat noisily estimated given the short data length (how exactly are they "calculated for each individual only from the covariance matrix of the data"?). In light of the potential inaccuracy of the fitting (point 1 above) and fact that the IQ correlation is found for a relatively narrow range of sigma where small shifts could change ordering, it seems plausible that the individual subjects may not necessarily behave the same as the group-level phase diagram. Is it possible to estimate the uncertainty in the group-level phase diagram (e.g. via a nonparametric bootstrap or leave-one-out method or similar), and propagate this forward to the values used in the IQ correlation? I worry that estimation errors could affect the robustness of the results. 3. There are some pieces of physics taken for granted here that should probably be explained for a biology journal, especially on p2-3 with lots of cites to a book Ref. 29. Should explain notions of paramagnetic and ferromagnetic phases, what exactly the SK model is, how large sigma corresponds to low temperature, what is meant by "transitory dynamics" and how this relates to criticality, what it means to not find a phase transition between the paramagnetic and ferromagnetic phases, and the discussion text on chaos in different regimes. 4. 
p4, 2nd para, bit incongruous to refer to r=0.21 as "mildly correlated" versus r=0.24 on the previous page being referred to as "significantly positively correlated". 5. p5, the discussion of how these results are consistent with balanced excitation and inhibition is unclear and unconvincing. 6. p5, re discrepancy with studies finding ferromagnetism in fMRI data, the explanation given in terms of the present research using fMRI data seems like faulty logic (same type of data no?) How do the results reconcile with those of e.g. Ref. 55? Reviewer #4 (Remarks to the Author): In the present article, Ezaki et al. used a maximum entropy model to map the whole-brain BOLD dynamics onto a phase diagram. They showed that dynamics were poised in the paramagnetic phase and close to a spin-glass phase transition. They then showed that subjects with larger performance IQ presented dynamics that were closer to that phase transition. I enjoyed reading the article, which is clearly written. The method presented therein will be very useful for the neuroscience community and, thus, I think that the code to learn the model should be made publicly available. However, I have some concerns that should be addressed in order to improve the article: 1) My first concern relates to the binarization of the fMRI data. The authors binarized the data by first using z-score and setting a value equal to 1 if fluctuations were positive and -1 otherwise. Thus, by construction of the binary data, the magnetization must be m = 0, forcing the system to be in the paramagnetic phase. I think that taking the standard deviation as the threshold (as it is the common practice) would imply m > 0. In figure S2, the authors varied the binarization threshold, but they only showed the correlation between the IQ and spin-glass susceptibility (which depends on the covariance of the data). The authors should show how the point representing the data in the phase diagram changes with different thresholds. 2) Also, it would be interesting to compare the results with those from surrogate data. Specifically, the authors could generate multivariate gaussian time-series with the same covariance as the empirical data, apply the threshold, and see where the surrogate data are placed on the phase diagram. My guess is that the surrogates would be placed close to the spin-glass transition. Moreover, the data from subjects with large and low IQ could be classified using a simple Gaussian decoder, based on the covariance of the data. If this simple decoder achieves better discrimination than using the model parameter sigma, the results would be weakened. In other words, if a simple model such as a multivariate Gaussian process leads to better discrimination and relatively same working point in the phase diagram, then the proposed model is somehow redundant (and conceptually more complex). 3) The maximum entropy model was built using the binary data of all subjects. Because the data could be highly variable between subjects, the model could be biased to replicate this diversity. To test this, the authors should learn two models using the halves of the data, test the model built from the firsthalf data on the second-half data and vice versa, and evaluate the likelihood ratios (train vs test). Likelihood ratios close to 1 would indicate that the model can generalize. 4) The mapping of the data in the phase diagram shows rather that the system is subcritical. 
Recent studies showed that subcritical dynamics are more likely during rest (Priesmann et al., 2014;Hahn et al., 2017), with potential functional advantages. This should be discussed. Note also that, in the finite size scaling analysis, the susceptibility grows with N' but there is no sign of divergence. 5) I am not familiar with IQ studies, but I think that sentences such as "more intelligent human individuals" or "more intelligent human brains" are not appropriate and should be avoid. I hope the authors find this comments useful. Responses to Reviewer #1 We appreciate the reviewer's useful suggestions. We have addressed all the comments by the reviewer. The page numbers in our responses refer to those in the revised manuscript unless otherwise mentioned. The deleted and added text is shown in red and blue, respectively, in the revised manuscript. Examination of the paramagnetic and ferromagnetic border in the Ising model was performed using simulations followed by a comparison with resting fMRI data from ~150 participants. The authors show that in the sigma/mu plane, individual participant's dynamical signatures cluster close to the paramagnetic border, which the authors state to be equivalent to an 'edge-of-chaos' border. Fluid intelligence scores based on the Wechsler Intelligence Scale correlated with closeness to the paramagnetic border. Results suggest that the ability to think logically and find solutions, but not crystallized intelligence (prior knowledge, experience and verbal expression) improves with the brain resting closer to criticality. Overall impression of the work Results seem robust and controls are provided that include threshold independence and robustness to removal of z-score normalization. Manuscript is reasonably well written (particularly Results). We are glad to hear the overall positive evaluation by the reviewer. I have a number of comments that might improve the manuscript in its present state: Introduction: Please reexamine the use of your references. When it comes to avalanches and criticality the main work is Beggs and Plenz, 2003. Indepth reviews with respect to brain dynamics and criticality besides Chialvo Nat Phys 2010 would be Plenz The European Physical Journal 2012. We had already cited Beggs and Plenz (2003) and Chialvo (2010) in the previous version. We additionally cited Plenz (2012) in the first sentence of Introduction. Second sentence in Introduction doesn't make sense " … including criticisms such as the lack of power laws in the relevant observables." This seems a bit off -it is more that identifying a power law is a necessary but not sufficient condition for critical dynamics. Please elaborate and reword. We revised this part to read as follows: "This hypothesis has been investigated for more than two decades including criticisms such as the presence of alternative mechanisms explaining power law scaling in the relevant observables [Touboul2010, Botcharova2012, Hesse2014, Markovic2014]. Experimental evidence such as the recovery of critical behavior after interventions, which is difficult to explain by alternative mechanisms, lends supports to the hypothesis [Hesse2014]." Langton 1994 could never prove that there is a second order phase transition in his simulations. The wording by the authors suggests otherwise. Please clarify. In the previous version, we cited Langton 1990cited Langton (not 1994 in the second paragraph of Introduction. Therefore, we assume that the reviewer points to this paragraph. 
The sentence that cited Langton 1990 read "These findings align with the idea of edge-of-chaos computation, with which computational ability of a system is maximized at criticality separating a chaotic phase and a non-chaotic phase [Langton1990, Bertschinger2004, Legenstein2007]". The reviewer may have considered that the word "criticality" connotes second-order transitions. Therefore, we replaced it by "phase transitions". "According to the critical brain hypothesis …" -better 'One prediction from the critical brain hypothesis …" We adopted the text suggested by the reviewer. Thanks. "However, neuronal avalanches do not imply transitory dynamics or their absence." This sentence is difficult to understand. The argument is not clear at all. Please elaborate. We rewrote this sentence as follows: "Neuronal avalanches are bursts of cascading activity of neurons, whose power-law properties have been related to criticality. However, studies of neuronal avalanches have focused on their scale-free dynamics in space and time, with which statistics of avalanches obey power laws. Scale-free dynamics of neuronal avalanches is a question orthogonal to patterns of transitions between discrete states." " …. chaotic dynamics … from … healthy controls more strongly than …" The construction of this argument and how it fits into the authors' logic is not clear. The whole paragraph needs profound reworking to make the authors' arguments more clear. We agree. The previous version was unclear on the hypothesis set out in the previous paragraph (i.e., whether or not the relevance of network interaction was part of the hypothesis). Therefore, we extensively revised this and the previous paragraphs. Specifically, we clearly stated that the network interaction was part of the hypothesis. Furthermore, we declined chaotic time series analysis methods due to its irrelevance to network interaction, without negating its potential to be able to extract state transitory dynamics (for non-interacting, single time series). The revised part (i.e., from middle in the previous paragraph to the end of the present paragraph) reads as follows: "Furthermore, these and other studies [Calhoun2014, Kopell2014, Rabinovich2015] support that state-transition dynamics in the brain involve large-scale brain networks. These arguments are consistent with the proposal that many cognitive functions seem to depend on network connectivity among various regions scattered over the whole brain [Barbey2018]. On these grounds, in the present study we hypothesize that complex and transitory neural dynamics of the brain network (i.e., dynamic transitions among discrete brain states) that are close to criticality are associated with high cognitive performance of humans. There are two major conventional methods for examining criticality and edge-ofchaos computation in empirical neural data. However, they do not correspond to the present hypothesis for their own reasons. First, many of the experimental studies testing the critical brain hypothesis have examined neuronal avalanches [Beggs2003, Beggs2008], including the case of humans [Yu2013, Tagliazucchi2012, Shriki2013]. Neuronal avalanches are bursts of cascading activity of neurons, whose power-law properties have been related to criticality. However, studies of neuronal avalanches have focused on their scale-free dynamics in space and time, with which statistics of avalanches obey power laws. 
Scale-free dynamics of neuronal avalanches is a question orthogonal to patterns of transitions between discrete states. Second, nonlinear time series analysis has found that electroencephalography (EEG) signals recorded from the brains of healthy controls are chaotic and that the degree of chaoticity is stronger for healthy controls than individuals with, for example, epilepsy, Alzheimer's disease, and schizophrenia [Stam2005]. However, this method is not usually for interacting time series. Therefore, it does not directly reveal how different brain regions interact or whether possible critical or chaotic dynamics are an outcome of the dynamics at a single region or interaction among different regions." " … different degrees of intelligence.' Consider rewording like ' correlate with IQ scores'. We replaced "different degrees of intelligence" by "different intelligence quotient (IQ) scores". Discussion: Needs rewording in many places. ' … more intelligent human brains'needs rewording. "The criticality view of the brain is not new.' Not sure what this sentence states beyond the obvious and not having any references doesn't help either. I recommend a native English speaker to comb through the discussion for rewording some of the most problematic statements regarding human intelligence. First, we replaced "more intelligent human brains" by "neural dynamics of humans with higher intellectual ability". Second, we just deleted the sentence "The criticality view of the brain itself is not new." because it is unnecessary and overlaps with the preceding paragraph. Third, we combed through the Discussion section regarding the usage of the terms related to human intelligence (and we do not believe that is the problem of English) and revised the text as follows:  Second line in the fourth paragraph: "intelligent performance" -> "intellectual ability"  Fourth line in the fourth paragraph: "the intelligence" -> "the intellectual score" Consider Meisel et al 2017 on critical slowing down changes in humans with wakefulness as this could pose some limits on measuring critical slowing down in humans as done by the authors in this study. Because approximate criticality was sustained for about 12 hours since the participants woke up in that study, we consider that their results generally support the brain critical hypothesis except when sleep is deprived. Furthermore, their main result is that sleep deprivation pulls the brain dynamics away from the criticality. This result is in fact consistent with our results because sleep deprivation is expected to make it difficult for participants to exercise cognitive functions at a normal level. We added the following short paragraph to the Discussion section to mention this point and cited the reference (p. 5, lines 215-216). "A previous study showed that sleep deprivation pulls the brain dynamics away from the criticality [Meisel2017]. This result is consistent with ours because sleep deprivation generally compromises one's cognitive and intellectual functions [Horne1988]." Responses to Reviewer #2 We appreciate the reviewer's useful suggestions. We have addressed all the comments by the reviewer. The page numbers in our responses refer to those in the revised manuscript unless otherwise mentioned. The deleted and added text is shown in red and blue, respectively, in the revised manuscript. 
This article fits maximum entropy models to binarized fMRI data, determines the proximity of each participant to a phase transition, and shows a correlation with the IQ across all subjects. The result is interesting. I would have expected IQ variability to be very subtle and difficult to find significant correlations with metrics from statistical physics where reports of changes across more drastically different brain states (e.g. anesthesia, sleep, coma) are yet to be published. We are glad to hear positive interests of the reviewer. My main criticism concerns the fact that these results are not informative in terms of the underlying neurobiology; the authors found correlations with observables extracted from whole brain imaging data, but I would expect that certain regions and circuit are more involved than others in fluid intelligence. Perhaps the authors could restrict their analysis to a subset of the regions of interest and try to find which regions are necessary for the reported correlation. Thanks for a valuable suggestion. We added the following text to the Discussion section to discuss this issue. "The literature also suggest that specific brain systems such as the fronto-parietal network [Finn2015] and the default-mode network [Song2009] predict intelligence of humans. Running the same analysis for these and other brain systems to seek specificity of the results warrants future work. Because the present method requires hundreds of ROIs, we may benefit from considering voxel-wise networks of a specific brain system that allow many ROIs for particular brain systems." My other criticism is that of presentation; I consider that the paper is difficult to read for non physicists. The methods section could be more didactic in this sense (perhaps extended in the supplementary information). Different ansatz could be more properly motivated, and worked examples provided as well. We added the following text after Eq. (6) to explain the ansatz underlying the pseudolikelihood estimation. "In Eq. (6), one determines the probability of each activity pattern under the assumption that Sj (j≠i) does not change when drawing the value of Si (i = 1,…,N)." We also expanded the first subsection of the Results section as follows to supply more pedagogical explanation for non-physicist readers such as biologists:  First paragraph in the Results section: We removed the mentioning to the Hamiltonian, as it is not necessary and would confuse biology readers who may not know what the Hamiltonian is.  First paragraph in the Results section: We replaced "To fit the model," by "Because the model assumes binary data,".  We added the following text in the sentence right after equation (2). "Although we refer to E as the energy, E does not represent the physical energy of a neural system but is a mathematical construct representing the frequency with which activity pattern S appears in the given data. Activity pattern S appears rarely in the data if E corresponding to S is large and vice versa. Parameter hi represents the tendency that Si = 1 is taken because a positive large value of hi implies that Si = 1 as opposed to Si = -1 lowers the energy and hence raises the probability that S with Si = 1 appears. Parameter Jij represents a functional connectivity between ROIs i and j because, if Jij is away from 0, Si and Sj would be correlated in general."  Third paragraph in Introduction: To explain what is meant by transitory dynamics, we added "(i.e., dynamic transitions among discrete brain states)". 
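The quoted explanation of the energy E, the biases h_i, and the couplings J_ij can be made concrete with a toy pairwise maximum entropy (Ising) model. The sketch below enumerates all activity patterns exactly, which is feasible only for very small N; the parameter values are arbitrary illustrations, whereas the study itself fits h and J to binarized fMRI signals (for N = 264 via the pseudolikelihood estimation discussed above).

```python
import itertools
import numpy as np

def energy(S, h, J):
    """E(S) = -sum_i h_i S_i - sum_{i<j} J_ij S_i S_j, with S_i in {-1, +1}."""
    return -h @ S - 0.5 * S @ J @ S   # J symmetric with zero diagonal

def pattern_probabilities(h, J):
    """Exact Boltzmann probabilities P(S) proportional to exp(-E(S)); small N only."""
    N = len(h)
    patterns = np.array(list(itertools.product([-1, 1], repeat=N)))
    weights = np.exp([-energy(S, h, J) for S in patterns])
    return patterns, weights / weights.sum()

# Toy example (N = 3, arbitrary parameters): a positive J_01 makes ROIs 0 and 1 tend to
# be active (or inactive) together, illustrating what a "functional connectivity" J_ij means.
h = np.array([0.1, -0.2, 0.0])
J = np.zeros((3, 3)); J[0, 1] = J[1, 0] = 0.8
patterns, probs = pattern_probabilities(h, J)
print(patterns[np.argmax(probs)], probs.max())
```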
Because we expanded this part for non-physicists, we did not further modify the Methods sections such as to create worked examples or additional SI text. Responses to Reviewer #3 We appreciate the reviewer's useful suggestions. We have addressed all the comments by the reviewer. The page numbers in our responses refer to those in the revised manuscript unless otherwise mentioned. The deleted and added text is shown in red and blue, respectively, in the revised manuscript. This paper adds to the large body of evidence that the brain operates near a critical point. The authors build on their recent work on fitting the Ising model to functional neuroimaging data. They find they can map subjects in a phase diagram and identify the type of phase transition nearby. They also find a moderate correlation between a measure of distance to criticality and a measure of IQ, suggesting some functional relevance to their model findings. Given the high level of interest in criticality in the brain this paper should find a good audience. The links to IQ are not particularly strong (and the authors are appropriately circumspect on this), but nevertheless this is an intriguing finding that fits into the broader narrative on functional benefits of the nearcritical regime. However, I do have some methodological concerns, and some of the writing is not really geared for a biology audience. The paper would be improved if the authors address the following: We are glad to hear the overall positive evaluation by the reviewer. We addressed the reviewer's concerns one by one as follows. 1. My biggest concern is over the fitting of the Ising model. From my understanding of these methods, partly from an earlier paper by the authors (Ezaki et al. 2017, Ref. 37 here), data length is a big problem when the number of ROIs is high. With 264 ROIs there are 2^264 different states so the probability distribution over these states will necessarily be extremely sparsely sampled. In that earlier paper the guidance was that accuracy of the fit scales as a function of t_max/2^N, so that for typical fMRI values of t_max one can only look at N~5 accurately, or N~8 if pooling over 10 subjects (and perhaps N~12 with the 138 subjects here?). What is new here to overcome the earlier paper's advice that "Currently we cannot apply the method to relatively large brain systems (i.e. those with a larger number of ROIs)"? This would seem to be a big advance. The reviewer is correct in pointing this out. We added the following paragraph to explain this point. "In our previous paper, we posed the limited accuracy of fitting the PMEM to fMRI data when N is large [Ezaki2017]. The argument was based on the probability that each of the 2 N possible activity patterns appears compared between the empirical data and the estimated PMEM. In the present manuscript, we have not used this accuracy measure, because it cannot be calculated when N is large. Instead, we validated the model by confirming that the difference between the empirical data and estimated PMEM in terms of the signal average, ⟨Si⟩, and the pairwise correlation, ⟨SiSj⟩, is small ( Supplementary Fig. 8). This approach is based on the assumption that the average and second order correlation of signals explain most of the information contained in the given data, which has been confirmed for smaller N in previous studies using fMRI data [Watanabe2013, Watanabe2014, Ezaki2017, Ezaki2018]. 
Although only comparing ⟨Si⟩ and ⟨SiSj⟩ between the data and model is a weaker notion of accuracy of fit than using the accuracy measure 2. If I understand correctly, the single-subject estimation of position in the phase diagram proceeds by first calculating the phase diagram for the group concatenated data, then estimating each subject's sigma and mu via interpolation using empiricallycalculated chi_SG and chi_uni. These latter quantities are presumably somewhat noisily estimated given the short data length (how exactly are they "calculated for each individual only from the covariance matrix of the data"?). In light of the potential inaccuracy of the fitting (point 1 above) and fact that the IQ correlation is found for a relatively narrow range of sigma where small shifts could change ordering, it seems plausible that the individual subjects may not necessarily behave the same as the group-level phase diagram. Is it possible to estimate the uncertainty in the group-level phase diagram (e.g. via a nonparametric bootstrap or leave-one-out method or similar), and propagate this forward to the values used in the IQ correlation? I worry that estimation errors could affect the robustness of the results. The reviewer's understanding is correct, and we agree with potential limitations of the present study due to these factors. We figure that the reviewer worries about the uncertainty of the result both at the individual participant's level and the group level. Both are due to the short data length. To address these issues, we did the following. First, we investigated how the estimation of the individual participant's χSG, χuni, and the correlation of each with the IQ scores depended on the length of each participant's fMRI data (Fig. S4). We found that the results were qualitatively the same as those obtained with the full length of the data when we used approximately more than two thirds of the data (i.e., data length larger than ~ 150). Therefore, we do not consider that the results based on the χSG and χuni values estimated with the present data are too sensitive to noise. Second, as we did in our previous studies (Refs. [21,37] in the revised manuscript), we divided the group data into the halves and measured the covariance, ⟨SiSj⟩ for each (i, j) pair for each half to be compared between each other. The covariance obtained from the two halves was strongly correlated with each other (Fig. S5). We further estimated the PMEM and drew the phase diagram for each half. The phase diagrams were similar to each other (Fig. S6). Third, in fact, the uncertainty in the group-level phase diagram does not propagate forward to the values used in the IQ correlation. This is because the values used in the IQ correlation are χSG and χuni, which are directly calculated from the correlation matrices. However, to wipe out the reviewer's worry, we estimated and for the individual participants, the calculation of which does need phase diagrams, for the phase diagrams calculated only from a half of the participants. Then, we compared estimated from the half of the participants and estimated from all the participants. We did the same for . The results shown in the new Figure S7 support the robustness of our method regarding the estimation of and when we used only half the participants. 
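For readers wondering how χSG and χuni can be "calculated for each individual only from the covariance matrix of the data": in spin-system terms the uniform susceptibility is proportional to the sum of the pairwise covariances, and the spin-glass susceptibility to the sum of their squares. The sketch below uses these standard definitions; the exact normalization used in the paper may differ, so treat it as illustrative only.

```python
import numpy as np

def susceptibilities(S):
    """S: (T, N) array of binarized signals in {-1, +1} (time points x ROIs).
    Returns (chi_uni, chi_SG) with the standard spin-system definitions
      chi_uni ~ (1/N) * sum_ij C_ij,   chi_SG ~ (1/N) * sum_ij C_ij**2,
    where C_ij = <S_i S_j> - <S_i><S_j>.  No model fit is needed: the covariance
    of the binarized data is enough."""
    C = np.cov(S, rowvar=False)          # N x N covariance of the binarized signals
    N = C.shape[0]
    return C.sum() / N, (C ** 2).sum() / N

# Usage sketch: S_subject = binarized (T x 264) matrix for one participant
# chi_uni, chi_sg = susceptibilities(S_subject)
```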
Therefore, we conclude that the group-level estimation of the phase diagram and the calculation of the individual participants' μ and σ, which is based on the estimated phase diagram, are robust enough against noise, given the current length of the data. We added a paragraph in the main text to describe these results, referring to the SI for more details (p. 5, lines 189-204). 3. There are some pieces of physics taken for granted here that should probably be explained for a biology journal, especially on p2-3 with lots of cites to a book Ref.  First paragraph in the Results section: We replaced "To fit the model," by "Because the model assumes binary data,".  We added the following text in the sentence right after equation (2). "Although we refer to E as the energy, E does not represent the physical energy of a neural system but is a mathematical construct representing the frequency with which activity pattern S appears in the given data. Activity pattern S appears rarely in the data if E corresponding to S is large and vice versa. Parameter hi represents the tendency that Si = 1 is taken because a positive large value of hi implies that Si = 1 as opposed to Si = -1 lowers the energy and hence raises the probability that S with Si = 1 appears. Parameter Jij represents a functional connectivity between ROIs i and j because, if Jij is away from 0, Si and Sj would be correlated in general."  Right after Eq. (3): We supplied the definition of the SK model.  Right after Eq. (3): We removed a sentence mentioning the relationship between σ and the temperature, because it is unnecessary to understand this connection and because it was slightly inaccurate in its original form due to the μ term in Eq. (3).  First half of the third paragraph in the Results section: We added text to enhance the explanation of m, q, and each of the three phases.  Third paragraph in Introduction: To explain what is meant by transitory dynamics, we added "(i.e., dynamic transitions among discrete brain states)". To better explain "what it means to not find a phase transition between the paramagnetic and ferromagnegic phases" to non-physicists, we added the following text in the Discussion section (p. 6, lines 264-265). "Roughly speaking, paramagnetic and ferromagnetic phases correspond to active and quiescent phases, respectively." This added text together with the surrounded text explains that the transitions we found are not the ones between quiescent and active phases. We decided not to add similar explanation in the previous paragraph in the Discussion section (i.e. paragraph starting with "There are various types…"), which the reviewer probably pointed to. This is because it is difficult to explain what paramagnetic and ferromagnetic phases intuitively mean without referring to neural avalanches (as we did in the next paragraph). We considered that adding such discussion in the mentioned paragraph would rather confuse non-physicist readers than to aid their understanding. We attempted to improve "the discussion text on chaos in different regimes" in the latter half of the paragraph starting with "There are various types…" in the Discussion section. However, we opt not to extend this part because this is a specialist discussion anyways, if we get into details. As written in the present text, there are different types of chaos, which is difficult to be explained in intuitive terms. 4. 
p4, 2nd para, bit incongruous to refer to r=0.21 as "mildly correlated" versus r=0.24 on the previous page being referred to as "significantly positively correlated". 5. p5, the discussion of how these results are consistent with balanced excitation and inhibition is unclear and unconvincing. We removed the entire paragraph. 6. p5, re discrepancy with studies finding ferromagnetism in fMRI data, the explanation given in terms of the present research using fMRI data seems like faulty logic (same Fig. 1a-d). However, that phase transition point under the condition σ = 0 is far from the location of the empirical data when σ is allowed to deviate from 0 (crosses in Fig. 1a-d). Therefore, allowing heterogeneity in Jij may be key to further clarifying the nature of critical neural." As a side note, we found that it was inappropriate to refer to Marinazzo et al. (2014) here for supporting the ferromagnetism in fMRI data because they used a structural network, not a functional network, on which to run an Ising model. Therefore, we changed "Computational studies also support the ferromagnetism of fMRI data [Fraiman2009, Kitzbichler2009, Marinazzo2014]" to "Computational studies also support the ferromagnetism [Fraiman2009, Kitzbichler2009, Marinazzo2014]." Responses to Reviewer #4 We appreciate the reviewer's useful suggestions. We have addressed all the comments by the reviewer. The page numbers in our responses refer to those in the revised manuscript unless otherwise mentioned. The deleted and added text is shown in red and blue, respectively, in the revised manuscript. In the present article, Ezaki et al. used a maximum entropy model to map the wholebrain BOLD dynamics onto a phase diagram. They showed that dynamics were poised in the paramagnetic phase and close to a spin-glass phase transition. They then showed that subjects with larger performance IQ presented dynamics that were closer to that phase transition. I enjoyed reading the article, which is clearly written. The method presented therein will be very useful for the neuroscience community and, thus, I think that the code to learn the model should be made publicly available. However, I have some concerns that should be addressed in order to improve the article: We are glad to hear the overall positive evaluation by the reviewer. 1) My first concern relates to the binarization of the fMRI data. The authors binarized the data by first using z-score and setting a value equal to 1 if fluctuations were positive and -1 otherwise. Thus, by construction of the binary data, the magnetization must be m = 0, forcing the system to be in the paramagnetic phase. I think that taking the standard deviation as the threshold (as it is the common practice) would imply m > 0. In figure S2, the authors varied the binarization threshold, but they only showed the correlation between the IQ and spin-glass susceptibility (which depends on the covariance of the data). The authors should show how the point representing the data in the phase diagram changes with different thresholds. We computed the phase diagrams with two different binarization threshold values, θ = 1 and θ = -1. Note that, with these threshold values, the fraction of Si = +1 is equal to ~0.148 and ~0.853, respectively (see the caption of Fig. S2 Cambridge,1991). 2) Also, it would be interesting to compare the results with those from surrogate data. 
Specifically, the authors could generate multivariate gaussian time-series with the same covariance as the empirical data, apply the threshold, and see where the surrogate data are placed on the phase diagram. My guess is that the surrogates would be placed close to the spin-glass transition. Moreover, the data from subjects with large and low IQ could be classified using a simple Gaussian decoder, based on the covariance of the data. If this simple decoder achieves better discrimination than using the model parameter sigma, the results would be weakened. In other words, if a simple model such as a multivariate Gaussian process leads to better discrimination and relatively same working point in the phase diagram, then the proposed model is somehow redundant (and conceptually more complex). Technically speaking, we agree with all the reviewer says here. However, the aim of this paper is not to develop an efficient decoder but to provide empirical support of the critical brain hypothesis. We added the following new paragraph to the Discussion section to discuss this issue and articulate the aim of the present study. "One could classify the data from participants with high and low IQ scores using a simple multivariate Gaussian decoder [Bishop2006]. Such a decoder would assume as input the mean and covariance of the fMRI data for each participant or its random samples having the same mean and covariance. Because our PMEM also assumed the same input but was not optimized for classifying the participants, an optimized Gaussian decoder will probably be more efficient than our PMEM in explaining the IQ scores of the participants. This approach is conceptually much simpler than the present one, which employ the PMEM and its phase diagrams. However, the aim of the present study was to find empirical support of the critical brain hypothesis by relating the fMRI data to the phase diagrams of a prototypical spin system rather than to efficiently classify participant." 3) The maximum entropy model was built using the binary data of all subjects. Because the data could be highly variable between subjects, the model could be biased to replicate this diversity. To test this, the authors should learn two models using the halves of the data, test the model built from the first-half data on the second-half data and vice versa, and evaluate the likelihood ratios (train vs test). Likelihood ratios close to 1 would indicate that the model can generalize. We addressed this issue in the revised manuscript but did not use the likelihood ratio for the following reasons. The pairwise maximum entropy model (PMEM) adjusts the ⟨Si⟩ and ⟨Si Sj⟩ values to those of the empirical data and leaves higher order correlations unassumed. Although the two halves of the data had similar pairwise correlation values ( Fig. S5), the pairwise correlation values were not exactly the same between the halves, reflecting the heterogeneity in participants. This extent of similarity/dissimilarity in the correlation structure is inherited to the estimated PMEMs. The likelihood ratio suggested by the reviewer inevitably deviates from 1.0 and practically does so to a large extent even if the data sets are generated from exactly the same model [Hoel1984]. In fact, the likelihood ratios computed as follows were not standardized values comparable to unity: where ℒ and ℒ ( = 1, 2) are the likelihood functions of the models estimated for the i-th half of the data, which are calculated for the test data and train data, respectively. 
However, this does not mean that the model cannot generalize [Hoel1984]. To directly assess the generalizability of the model, we carried out the following cross-validations. First, as the reviewer suggested, we split the participants into two subgroups and estimated PMEMs for each subgroup. We confirmed that the models predicted the correlation structure in the other subgroup with reasonable accuracy (Fig. S5). Second, the phase diagrams estimated separately for the two subgroups were similar to each other and to the phase diagram estimated for the set of all the participants (Fig. S6). Finally, for each participant we estimated the quantities shown in Fig. 2a using the phase diagrams estimated for the subgroup of half the participants to which the focal participant belonged. The results were highly consistent with those reported in the main text, which were produced using all the participants (Fig. S7). Collectively, we concluded that the estimation error caused by a finite number of participants did not considerably affect our main results. We added a subsection (with the heading "Effects of data length and individual variability") to the Results section to explain these results.

[Hoel1984] Hoel, P. G. Introduction to Mathematical Statistics (Wiley, New York, 1984).

4) The mapping of the data in the phase diagram shows rather that the system is subcritical. Recent studies showed that subcritical dynamics are more likely during rest (Priesemann et al., 2014; Hahn et al., 2017), with potential functional advantages. This should be discussed. Note also that, in the finite size scaling analysis, the susceptibility grows with N' but there is no sign of divergence.

Thanks for drawing our attention to these important references. We added the following text to discuss these references in the Discussion section.

"We showed that neural dynamics for each participant were close to but substantially off the criticality separating the paramagnetic and SG phases. Other studies using the PMEM [Hahn2017] and other models [Priesemann2014] also support off-critical, as opposed to critical, neural dynamics in the brain. The study applying the PMEM to local field potentials suggested that such off-critical dynamics may have functional advantages, because the off-critical situation would prevent the dynamics from getting past the phase boundary into the other phase in the presence of noise [Priesemann2014]. The other phase may correspond to pathological neural dynamics such as epilepsy. The off-critical neural dynamics that we found for our participants, regardless of their IQ scores,
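As a minimal illustration of the type of fit referred to in these responses (with placeholder data, network size, and learning rate, and with exact enumeration over the 2^N states, so it is only practical for small N), a pairwise maximum entropy model can be fitted by matching ⟨Si⟩ and ⟨SiSj⟩ as follows:

```python
import itertools
import numpy as np

def fit_pmem(samples, n_iter=2000, lr=0.1):
    """Fit h_i and J_ij of P(s) ~ exp(sum_i h_i s_i + 0.5*sum_{i!=j} J_ij s_i s_j)
    to +/-1 data (shape T x N) by moment matching, using exact enumeration
    over the 2**N states (small N only)."""
    T, N = samples.shape
    states = np.array(list(itertools.product([-1, 1], repeat=N)), dtype=float)
    m_data = samples.mean(axis=0)                    # empirical <s_i>
    C_data = samples.T @ samples / T                 # empirical <s_i s_j>
    h = np.zeros(N)
    J = np.zeros((N, N))
    for _ in range(n_iter):
        E = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
        p = np.exp(E - E.max())
        p /= p.sum()
        m_model = p @ states                         # model <s_i>
        C_model = states.T @ (states * p[:, None])   # model <s_i s_j>
        h += lr * (m_data - m_model)
        dJ = lr * (C_data - C_model)
        np.fill_diagonal(dJ, 0.0)
        J += dJ
    return h, J

# toy usage: random +/-1 "binarized fMRI" data for N = 6 regions, T = 500 volumes
rng = np.random.default_rng(0)
X = np.sign(rng.standard_normal((500, 6)))
h, J = fit_pmem(X)
print(np.round(h, 2))
print(np.round(J, 2))
```

A split-half cross-validation of the kind described above would fit the model on one half of the participants and compare the predicted ⟨SiSj⟩, or the likelihood, on the other half.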
9,025.8
2020-02-03T00:00:00.000
[ "Biology" ]
BOUNDARY FEEDBACK STABILIZATION OF A SEMILINEAR MODEL FOR THE FLOW IN STAR-SHAPED GAS NETWORKS∗

The flow of gas through a pipeline network can be modelled by a coupled system of 1-d quasilinear hyperbolic equations. In this system, the influence of certain source terms that model friction effects is essential. Often for the solution of control problems it is convenient to replace the quasilinear model by a simpler semilinear model. In this paper, we analyze the behavior of such a semilinear model on a star-shaped network. The model is derived from the diagonal form of the quasilinear model by replacing the eigenvalues by the sound speed multiplied by 1 or −1, respectively. Thus in the corresponding eigenvalues the influence of the gas velocity is neglected, which is justified in the applications since it is much smaller than the sound speed in the gas. For a star-shaped network of horizontal pipes and suitable coupling conditions, we present boundary feedback laws that stabilize the system state exponentially fast to a position of rest for sufficiently small initial data. We show the exponential decay of the L2-norm for arbitrarily long pipes. This is remarkable since in general, even for linear systems, for certain source terms the system can become exponentially unstable if the space interval is too long. Our proofs are based upon an observability inequality and suitably chosen Lyapunov functions. At the end of the paper, numerical examples are presented that include a comparison of the semilinear model and the quasilinear system.

Mathematics Subject Classification. 93D15, 35L60.

Received May 27, 2020. Accepted June 1, 2021.

Introduction

The flow of gas through pipelines is governed by a quasilinear system of balance laws (see for example [2]). We consider a model for a gas pipeline network where at the nodes the solutions for the adjacent pipes are coupled by algebraic node conditions that require the conservation of mass and the continuity of the pressure. The eigenvalues of the system have the form c + v and −c + v, where v denotes the velocity of the gas and c denotes the sound speed. In the operation of gas transportation systems the velocity of the gas flow is much smaller than the sound speed. Therefore, in order to obtain a semilinear model, the eigenvalues are replaced by this sound speed, that is by −c and c. It is important to stress that the source term plays an essential role in the model of gas network flow, since if the gas is not at rest, the friction effects lead to a decrease of the pressure along the pipe in the direction of the flow.

In this paper, we consider the system with absorbing Riemann boundary conditions of Dirichlet type. We show that on a given finite time horizon [0, T], the gas flow in a star-shaped network can be steered to a position of rest exponentially fast in the sense of the L2-norm if the L∞-norm of the initial state is sufficiently small. Moreover, we also show that on an infinite time interval the H1-norm decays exponentially fast if the H1-norm of the initial state is sufficiently small. From the point of view of applications it is desirable to know that it is possible to steer the flow close to a standstill, since sometimes flow reversal is required, for example in the network in Belgium that is used for gas transit in different directions. For a smooth transition this requires bringing the gas almost to a standstill before driving the flow in the reverse direction.
From the point of view of mathematical control theory this is important since while many stabilization results for systems of conservation laws have been studied before, for systems of balance laws with linear source terms exponential stabilization is in general not possible (see [9,14], Thm. 2). The exponential stabilization of the gas flow governed by the isothermal Euler equations in fan-shaped networks in the L 2 -sense has been studied in [12]. For a single pipe, a strict H 1 -Lyapunov function and feedback stabilization for the quasilinear isothermal Euler equations with friction have been studied in [8]. These results about the quasilinear system have required restrictive upper bounds on the lengths of the pipes. Similar results have been obtained for H 2 -Lyapunov functions, that allow an extension to an infinite time horizon (see [13]). The finite time stabilization of a network of strings is studied in [1,15]. Finite-time control for linear evolution equation in Hilbert space has been studied in [21]. The finite-time stabilization of hyperbolic systems with zero source term over a bounded interval and the exponential decay for small source terms has been analyzed in [20]. The stabilization of the wave equation on 1-d networks has been considered in [23]. The exponential stability of a linear model for the propagation of pressure waves in a network of pipes has been studied in [7]. The limits of stabilizability in networks of strings are studied in [9]. This paper has the following structure. In Section 2 we introduce the quasilinear isothermal Euler equations and node conditions that govern the flow through a gas pipeline network. In Section 3, we present the corresponding Riemann invariants and transform the system in diagonal form. Then we derive the semilinear model that provides an approximation for small gas velocities. In Section 4, a well-posedness result is presented. We are working in the framework of solutions that are defined through a fixed point iteration along the characteristics. The definition of the fixed point mapping is derived from the integral equations along the characteristic curves that are known a priori for the semilinear model. We show that with an absorbing Riemann Dirichlet boundary feedback, the flow through a star-shaped network of pipelines is driven to zero in H 1 exponentially fast. For the proof we first introduce a quadratic L 2 -Lyapunov function and show that it decays exponentially fast on finite time intervals without additional constraints on the lengths of the pipes. In the proof we use an observability inequality for the L 2 -norm. Then we define a quadratic Lyapunov function with exponential weights to show that the time derivatives also decay exponentially fast. This yields the exponential decay of the H 1 -norm of the state for initial states with sufficiently small H 1 -norm. In Section 7, numerical experiments are presented that illustrate the theoretical findings. We also show simulations for the original quasilinear model with the suggested boundary feedback that indicate that also in this case the system decays exponentially. The isothermal Euler equations The isothermal Euler equations as a model for the flow through gas pipelines have already been stated in [2]. We use the model for real gas as described in [17]. Let a directed star-shaped graph G = (V, E) of a pipeline network be given. Here V denotes the set of vertices and E denotes the set of edges. 
Each edge e ∈ E corresponds to an interval [0, L e ] that represents a pipe of length L e > 0. Let D e > 0 denote the diameter of the pipe and λ e f ric > 0 the friction coefficient in pipe e. Define θ e = λ e f ric D e . Let ρ e denote the gas density, p e the pressure and q e the mass flow rate. Let α e ∈ (−0.25, 0] be given and define the compressibility factor as z e (p e ) = 1 + α e p e . (2.1) We assume that α e is independent of e. Equation (2.1) is also stated in [22] as the model of the American Gas Association (AGA). In [5] it is stated that it is sufficiently accurate within the network operating range. Let R e s denote the gas constant and T e the temperature that are independent of e. We assume that We study the isothermal Euler equations that govern the flow through a single pipe. The Node conditions for the network flow In this section we introduce the coupling conditions that model the flow through the nodes of the network. The node conditions that determine the flow dynamics are given in [2,10]. At the vertices v ∈ V , the flow is governed by the node conditions that require the conservation of mass and the continuity of the pressure. Let E 0 (v) denote the set of edges in the graph that are incident to v ∈ V and x e (v) ∈ {0, L e } denote the end of the interval [0, L e ] that corresponds to the edge e that is adjacent to v. We assume that at the central node v ∈ V of the star-shaped graph we have x e (v) = 0 for all e ∈ E 0 (v). The continuity of the pressure at v means that for all e, f ∈ E 0 (v) we require the equation The conservation of mass is guaranteed by the Kirchhoff condition The system in terms of Riemann invariants As pointed out in [17], for e ∈ E the Riemann invariants R e − , R e + of the system are given by R e − (p e , q e ) = ln(p e ) − R e s T e q e p e (1 + α e p e ) , R e + (p e , q e ) = ln(p e ) + R e s T e q e p e (1 + α e p e ) . For the central node v of the star-shaped graph (that corresponds to x = 0 for all e ∈ E) the node conditions (2.4), (2.5), can be written in the form of the linear equation This can be seen as follows. Equation (3.1) implies that for all e ∈ E, the value of R e + (t, 0) + R e − (t, 0) is the same, which implies that the value of ln(p e ) is independent of e, hence (2.4) holds. Moreover, (3.1) implies Due to (2.4) this implies that equation (2.5) holds. In this case also for the difference of the squares of the Riemann invariants we have For a boundary node v of our star-shaped graph and e ∈ E 0 (v) we have x e (v) = L e . We state the Dirichlet boundary conditions in terms of Riemann invariants in the form Define the number For e ∈ E, let ν e = 1 8 R e s T e θ e be given and define Define∆ e as the diagonal 2 × 2 matrix that contains the eigenvalues (3.7) In terms of the Riemann invariants, the quasilinear system (2.3) has the following diagonal form: In order to simplify the model, we replace the eigenvalues by This definition implies that λ e − = −λ e + . Moreover, for all e, f ∈ E we have λ e + = λ f + Define ∆ e as the diagonal 2 × 2 matrix that contains the eigenvalues λ e + and λ e − . The approximation ofλ e + by λ e + andλ e − by λ e − is justified by the fact that in the practical applications, the velocity of the gas flow is much smaller than the sound speed. Indeed, in typical applications, the fluid velocity is several meters per second while the speed of sound is several hundred meters per second. 
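As a rough quantitative illustration of this last point (with assumed representative magnitudes, not values taken from the paper), replacing the eigenvalue c + v by c changes it only by a relative error of order v/c:

```python
# Illustrative magnitudes only (assumed, not taken from the paper).
c = 340.0    # sound speed in m/s
v = 10.0     # gas velocity in m/s
rel_err = abs((c + v) - c) / (c + v)
print(f"relative error of the eigenvalue c + v -> c: {rel_err:.1%}")   # ~2.9 %
```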
Whereas v can be neglected relative to c, the friction term cannot be neglected as this would cause a large relative error. In this way, we obtain a semilinear model. We do not claim that solutions to the isothermal Euler equations and the semilinear system are close to each other for all times, but we do expect that solutions to both systems share important qualitative features such as decay to steady states. This expectation is in agreement with several numerical simulations that we have carried out, some of which are presented in Section 7. Let us note that the difference between the models becomes smaller the closer solutions get to the equilibrium. With the diagonal matrix ∆ e , the semilinear model that we will consider in the remainder of this work has the following form: Note that for given (R e + , R e − ) we have In particular this implies p e > 0. On account of the physical interpretation of the pressure it is very desirable that for the solutions we have p e > 0. This is an advantage of the model that is given by system (S). A similar semilinear model for gas transport has been studied in [18] in the context of identification problems. The model in [18] has the disadvantage that the matrix of the linearization of the source term is indefinite. However, the results from [18] can be adapted to the model that we consider in this paper. A well-posedness result In the semilinear model that we consider, the constant eigenvalues in the diagonal system matrix define two families of characteristics with constant slopes c and −c. For e ∈ E, define the sets For t ≥ 0, e ∈ E and the space variable x ∈ [0, L e ] we define the R 2 -valued function ξ e ± (s, x, t) as the solution of the initial value problem ξ e ± (t, x, t) = (t, x), ∂ s ξ e ± (t, x, t) = (1, ±c). Define the points For the t-component of P e± 0 (t, x) we use the notation t e± 0 (t, x) ≥ 0. The solution of (S) can be defined by rewriting the partial differential equation in the system in the form of integral equations along these characteristic curves, that is Note that almost everywhere the values of R e ± (P e± 0 (t, x)) are given on Γ e ± either by the initial data, that is is zero, this case only occurs for R e − ) or else by the node conditions (3.1) at x = 0. For a finite time interval [0, T ], the characteristic curves that start at t = 0 with the information from the initial data reach a point at the terminal time after a finite number of reflections at the boundaries x = L e (e ∈ E) or the central node x = 0. The definition of the solutions of semilinear hyperbolic boundary value problems based upon (4.1) is described for example in [3]. For L ∞ -solutions, we have the following theorem. there exists a unique solution of (S) that satisfies the integral equations (4.1) for all e ∈ E along the characteristic curves with R e + , R e − ∈ L ∞ ((0, L e ) × (0, T )) (e ∈ E) and the boundary conditions at x = L e and the node conditions at x = 0 almost everywhere in [0, T ] such that for all e ∈ E we have This solution depends in a stable way on the initial and boundary data in the sense that for initial data for the corresponding solution S e ± we have the inequality The proof is based upon Banach's fixed point theorem with the canonical fixed point iteration. It has to be shown that this map is a contraction in the Banach space 2 on a set of the form In order to show this, it suffices to derive an upper bound for the source term in (4.1) that is given by the continuously differentiable function σ e (R e + , R e − ). 
Moreover, it has to be shown that the iteration map maps from B(M ) into B(M ). This is true if M and ε are chosen sufficiently small. In this analysis, it has to be taken into account that the characteristic curves can cross the central node at x = 0 a finite number of times. Due to the linear node condition (3.1) in each such crossing the absolute value of the outgoing Riemann invariants can be at most three times as large as the largest absolute values of the ingoing Riemann invariants. For t ∈ [0, T ] almost everywhere we have the inequality The last assertion follows from the integral equations satisfied by R e ± − S e ± and an application of Gronwall's Lemma. Remark 4.2. An analogous existence result holds for solutions in , (e ∈ E) for initial and boundary data that are compatible with each other and with the node conditions such that y e ± − J and u e − J have sufficiently small H 1 -norms. An L 2 -observability inequality for a star-shaped network In this section we derive an observability inequality for a star-shaped network. The aim is to get an upper bound for the L 2 -norm of the system state at the time t in terms of the norms of complete observations at the boundary nodes on the time interval [t − T, t + T ] with T > 0 sufficiently large. For such an inequality, the observation have to be integrated over a sufficiently long time-interval. In [6], an observability inequality for a star-shaped network of strings (without source term) is derived. Such an observability is of interest since in the gas networks, usually very few sensors are available. The observability inequality implies that if the state is measured at the boundary nodes for a sufficiently long time, from these observations the system state can be determined completely. and that there exists a constantM such that for all e ∈ E and x almost everywhere in [0, L e ] for the solution of (S) we have the inequality Then there exists a constant C 0 (M ) such that for all t > T , we have the inequality Remark 5.2. The proof of Theorem 5.1 is based upon the particular structure of the source term σ e . In fact, we use the properties that σ e is an increasing function of R e + − R e − with value zero if R e + − R e − = 0 that is Lipschitz continuous on bounded intervals. Since (4.2) with M = 1 2M implies (5.1) , Theorem 4.1 yields sufficient conditions for (5.1) ifM is sufficiently small. An a priori upper bound (5.1) can also be obtained in the framework of semi-global classical solutions (even in the sense of a maximum), see [19]. Also if there is a solutions with C([0, T ], H 1 (0, L e )) regularity for all e ∈ E on [t − T, t + T ] × [0, L e ], then this solution automatically satisfies (5.1). Then we have H e (0) = 0. For the derivative of H e with respect to x we have almost everywhere Due to the partial differential This implies the inequality for all x ∈ [0, L e ]. Since for real numbers r 1 , r 2 we have ( Thus we have Hence using the definition (5.3) of H e (x) and inequality (5.4) we obtain Now we prove the analogous inequality for R e − . We have By integration on [0, L e ] this yields the inequality Similarly as above by transformation of the 2-dimensional integral over a triangle this yields An H 1 -semi-norm-observability inequality Now we prove an observability inequality for the time derivative. Together with the L 2 -observability inequality this will finally yield an observability inequality for the H 1 -norm. Theorem 5.3. Let a real number J be given. Assume that T > max e∈E L e c and t > T . 
Assume that system (S) with u e (t) = J has a solution on [0, t + T ] such that for all e ∈ E and x almost everywhere in [0, L e ] we have R e + (·, x), R e − (·, x), ∂ t R e + (·, x), ∂ t R e − (·, x) ∈ L 2 (0, T + t) and that there exists a constantM such that for all e ∈ E and x almost everywhere in [0, L e ] for the solution of (S) we have the inequality (5.1) for s almost everywhere in [0, t + T ]. Then there exists a constant C 1 (M ) such that for all t > T , we have the inequality Then we have K e (0) = 0. For the derivative of K e with respect to x we have almost everywhere Due to the partial differential equation in system (S) this yields d dx K e (x) This implies the inequality for x ∈ [0, L e ] almost everywhere. With (5.1), due to Gronwall's Lemma this implies the inequality for all x ∈ [0, L e ]. Since for real numbers r 1 , r 2 we have (r 1 + r 2 ) 2 ≤ 2 r 2 1 + 2 r 2 2 , for x ∈ [0, L e ] almost everywhere we obtain Thus we have Hence using the definition (5.7) of K e (x) and inequality (5.8) we obtain Now we prove the analogous inequality for R e − . We have By integration on [0, L e ] this yields the inequality Similarly as above by transformation of the 2-dimensional integral over a triangle this yields Stability of the state on the network In this section we present a Dirichlet boundary feedback that can be used to stabilize a constant state of system (S) exponentially fast. Note that in physical terms the constant state of system (S) are the states where the gas is at rest. In terms of Riemann invariants, this means that both Riemann invariants are equal. Hence in terms of Riemann invariants, the constant states are given by real numbers J that are equal to the logarithm of the (constant) pressure. Theorem 6.1 has two parts. In the first part a sufficient condition for the exponential decay of the L 2norm (6.2) on a finite time interval [0, T ] is provided under the assumption (5.1). This first part of Theorem 6.1 can be applied to L ∞ -solutions as discussed in Theorem 4.1. For the proof of the first part, the observability inequality from Theorem 5.1 is used. In the second part of Theorem 6.1 more regular H 1 solutions are considered. Together with (6.2), (6.3) implies the exponential decay of the H 1 -norm and allows a globalization in time. is exponentially stable in the sense that there exist constants C 1 > 0 and µ 0 > 0 such that for all t ∈ [0, T ] Hence the L 2 -norm of the difference between the constant state J and the actual state decays exponentially fast. Assume in addition that y e ± − J has a sufficiently small H 1 -norm and is compatible with the node condition and the boundary conditions such that a solution of system (S) exists on [0, T ] in × e∈E C([0, T ], H 1 (0, L e )) and satisfies (5.1). Then in addition to (6.2) also the L 2 -norm of the time-derivatives decay exponentially fast in the sense that there exist constants C 2 > 0 and µ 2 > 0 Remark 6.2. In terms of the physical variables the assertion in Theorem 6.1 implies that the state decays exponentially fast to a state with constant pressure p = exp(J). For all states with constant pressure we have the flow rates q e = 0. This means that the gas is at rest, that is the velocity is zero. It is important to note that for the exponential decay of the L 2 -norm we do not need any restrictions on the lengths of L e . In the frictionless case (that is for ν e = 0) System (S) can be seen as a system of coupled vibrating strings with absorbing boundary conditions. 
Similarly as in [13], it can be shown that in this case the system is steered to a position of rest in finite time. If T is chosen sufficiently large andM is sufficiently small, the exponential decay of the L 2 -norm of the state and of the derivatives in (6.3) implies that the H 1 -norm of the terminal state at time T is smaller than the H 1 -norm of the initial state. This implies that the solution can be extended to the time interval [0, 2T ] and so forth. In this way, for a sufficiently small norm of the initial state, we obtain a solution of the closed loop system that is global in time. First we state a Lemma in order to clarify the relation between the regularity and the exponential decay of ∂ t R e and ∂ x R e : Lemma 6.3. Let (R e + , R e − ) denote a solution of (S) such that for all e ∈ E we have R e ± ∈ C([0, T ], H 1 (0, L e )) and (5.1) holds. Then for all e ∈ E and t ∈ [0, T ] we have ∂ t R e ± ∈ L 2 (0, L e ). Moreover, we have the inequality Hence if ∂ t R e ± (t, ·) L 2 (0, L e ) and R e ± (t, ·) − J L 2 (0, L e ) decay exponentially fast, also ∂ x R e ± decays exponentially fast. Proof. We have ∂ t R e ± = −(±c)∂ x R e ± − (±σ e (R e + , R e − )). Since σ e (R e + , R e − )(t, ·) ∈ L ∞ (0, L e ), the first assertion follows. Moreover, we have Hence since R e ± ∈ L 2 (0, L e ), the triangle inequality, (5.1) and the definition of σ e imply inequality (6.4). Proof of Theorem 6.1. Lett ∈ (0, t) be given. For e ∈ E the partial differential equation implies that We multiply this equation by (R e + − J) and integrate over the interval (t −t, t +t) to obtain This yields Similarly, we obtain For e ∈ E and t ∈ [0, T ], define the function Then we have Hence due to the definition of σ e we obtain the inequality Due to the node conditions at x = 0 and (3.2) we have Hence due to the boundary condition at x = L e we have In particular, we have L 0 (t +t) ≤ L 0 (t −t). Since the above inequality can be derived for allt ∈ (0, t), this implies in particular that L 0 is decreasing. Now we chooset = T 0 > 0. For all e ∈ E the observability inequality (5.2) implies with C 0 (M ) as defined in (5.5). This yields the inequality Since L 0 is decreasing this yields Hence we have Similar as in Lemma 2 from [16], this implies that L 0 decays exponentially fast. Now we consider the evolution of the time-derivatives. For e ∈ E and t ∈ [0, T ], we consider Due to the partial differential equation in system (S), for solutions with H 2 -regularity we have Lett ∈ (0, t) be given. For e ∈ E the partial differential equation implies that We multiply this equation by ∂ t R e + and integrate over the interval (t −t, t +t) to obtain This yields Similarly, we obtain Hence we have Thus we obtain the inequality Due to the node conditions at x = 0 we have e∈E (D e ) 2 ∂ t R e + (τ, 0)) 2 − (∂ t R e − (τ, 0)) 2 = 0. Hence due to the boundary condition at x = L e we have In particular, we have L 1 (t +t) ≤ L 1 (t −t). Since the above inequality can be derived for allt ∈ (0, t), this implies in particular that L 1 is decreasing. Now we chooset = T 0 > 0. For all e ∈ E the observability inequality (5.6) implies with C 1 (M ) as defined in (5.9). This yields the inequality Since L 1 is decreasing this yields Hence we have Similar as in Lemma 2 from [16], this implies that L 1 decays exponentially fast. In this inequality, the second derivatives do not appear and the constants are also independent of the second derivatives. 
Therefore, by a density argument we conclude that it also holds for solutions with H 1 -regularity. Hence (6.3) follows. Numerical experiments In this section we will conduct numerical experiments in order to illustrate the theoretical findings. For the semi-linear model, we have used a finite difference upwind discretisation for the convective terms. The temporal discretisation of the convection is explicit Euler while the friction terms were discretized using implicit Euler. This allows us to use the maximal time step allowed for by the CFL condition so that discontinuities in the solution are not smoothed out by artificial diffusion. For the quasi-linear model (2.3), we have used a Lax-Friedrichs flux in space and the same discretisation in time as for the semi-linear model. In each of the experiments we have plotted L 0 (see Eq. (6.6)) over time. In order to test how sharp the decay estimates (6.2) are, we have also plotted In all computations the initial velocity field is constant zero. Our results confirm the solutions decay to equilibrium at least exponentially. Indeed, the experimentally observed decay rates are (as expected) larger than the lower bound from our analysis. Discontinuous initial data with friction In this experiment the initial pressure on pipes 1 and 3 is constant 60bar whereas in pipe 2 we have 60bar on the half of the pipe adjacent to the node and 80bar on the half of the pipe adjacent to the boundary. The results are displayed in Figure 1. The picture shows that, as predicted, L 0 decays exponentially. However, in our analysis we only obtain a lower bound for the decay rate. The picture shows that the observed rate is larger than our lower bound that holds for all initial data. We also plot snapshots of the numerical solutions at t = 0s, t = 28s and t = 56s in Figure 2. Note that we have R e + (0) = R e − (0) on all pipes. It can be seen that the discontinuities in the solution remain, since there is no diffusion, but the plateau values change due to the friction terms. Discontinuous initial data without friction We use the same parameters as in the previous section but set friction to zero. The results are displayed in Figure 3. We can see that, as expected, L 0 goes to zero in finite time, see Remark 6.2. We also plot snapshots of the numerical solutions at t = 0s, t = 28s and t = 56s in Figure 4. It can be seen that the discontinuities in the solution are not smoothed out and plateau values only change via interaction with nodes. Continuous initial data with friction In this experiment, the initial pressure on pipes 1 and 3 is p(x) = 60bar + 20bar sin(πx/km) while the initial pressure on pipe 2 is p(x) = 60bar + 20bar sin(πx/(2km)). All other parameters are as above. The results concerning decay of L 0 and L 1 are displayed in Figure 5. Note that in this case we have computed L 0 and L 1 for both the semilinear model (S) and the quasilinear isothermal Euler equations (2.3). Snapshots of solutions to both systems are shown in Figures 6 and 7, respectively. Figure 5 shows that L 0 and L 1 decay to zero exponentially. In both cases, the actually observed decay rates are larger than the lower bounds that we have proven. Our experiments, in particular, the snapshots at t = 28, show the difference between the solutions to the semi-and quasilinear systems due to the different wave speeds. . Continuous initial data with friction: Snapshots of the semilinear solution at times t = 0s, t = 28s and t = 56s. 
Pipes 1 and 3 go along the positive and negative x-axis, pipe 2 goes along the positive y-axis. On all pipes R + is a yellow continuous line and R − is a blue dashed line. Despite the quantitative differences, the solutions share important qualitative features such as exponential decay to the steady state. Indeed, Figure 5 shows that the isothermal Euler solution converges to the steady state nearly as quickly as the semilinear solution. Continuous initial data without friction We study the case without friction. In this case, it can be shown that both L 0 (t) and L 1 (t) vanish (for the semilinear solution) for t > 2 · T 0 ≈ 59, see Remark 6.2. Numerical results showing the decay of L 0 and L 1 for both the semilinear and the quasilinear (isothermal Euler) system are displayed in Figure 8. Snapshots of the solutions to both systems are shown in Figures 9 and 10, respectively. We observe, as expected, for the semilinear solution, both L 0 and L 1 have decayed to zero at t = 2T 0 ≈ 59. Our experiments, in particular, the snapshots at t = 28s and t = 56s, show the difference between both solutions due to the different wave speeds. It should be noted that, without friction, the semilinear model (S) is, in fact, linear and the evolution equations for both Riemann invariants are only coupled via the boundary conditions while in the isothermal Euler system there is coupling via the flux term. However, although there are quantitative differences, the solutions share important qualitative features such as exponential decay to the steady state. Indeed, Figure 5 shows that the solution to the isothermal Euler equations (2.3) converges to the steady state even quicker than the solution (S). It should be noted, however, that for the isothermal Euler solution L 1 is not monotone which makes sense since for quasilinear models (such as isothermal Euler) certain slopes grow steeper which, given sufficient time, might lead to the appearance of shocks. Figure 9. Continuous initial data without friction: Snapshots of the semilinear solution at times t = 0s, t = 28s and t = 56s. Pipes 1 and 3 go along the positive and negative x-axis, pipe 2 goes along the positive y-axis. On all pipes R + is a yellow continuous line and R − is a blue dashed line. Figure 10. Continuous initial data without friction: Snapshots of the quasilinear solution at times t = 0s, t = 28s and t = 56s. Pipes 1 and 3 go along the positive and negative x-axis, pipe 2 goes along the positive y-axis. On all pipes R + is a yellow continuous line and R − is a blue dashed line. Conclusions We have considered the flow of gas in a star-shaped network of pipelines that is governed by a hyperbolic semilinear model of partial differential equations that can be understood as a simplification of the isothermal Euler equations. The model under consideration is useful for simulations concerning the operation of gas networks. We have shown that locally around a state where the gas is at rest the system state can be steered towards this position of rest exponentially fast with a suitably chosen boundary feedback control. In fact the boundary control consists of absorbing boundary conditions. This result is remarkable, since we show the exponential decay of the H 1 -norm of the state for a system with a source term on a network. In the earlier work [12] only the decay of the L 2 -norm on a star-shaped network has been considered for the quasilinear model. 
In the present paper, we have shown the exponential decay of the L 2 -norm on the network without additional assumptions on the lengths of the pipes. Also the H 1 -norm decays exponentially on the network for arbitrarily long pipes if the H 1 -norm of the initial state is sufficiently small. Our theoretical findings are in agreement with numerical experiments that also indicate that the solutions to the semilinear model share certain qualitative features with solutions of the isothermal Euler equations. This analysis is motivated by the problem of flow reversal that occurs in certain scenarios in gas pipeline network operations. In order to avoid possible turbulence, in this situation it is a good strategy to control the flow to rest first and then start it again in the opposite direction. We expect that the result can be extended to more general tree-shaped graphs. This will be the subject of future research. We will also investigate to which extent similar estimates can be proven for (entropy) solutions of the quasilinear isothermal Euler equations. The treatment of discontinuous weak solutions will require new tools that go beyond what we have developed here.
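To make the time discretisation used in the numerical experiments of Section 7 concrete, the following minimal sketch integrates the semilinear system in Riemann invariants on a star of three identical pipes: explicit upwind transport at the maximal CFL step Δt = Δx/c (so the transport step is an exact shift), implicit Euler for the friction source, the incoming invariant at the outer pipe ends set to the constant J (our reading of the absorbing Dirichlet feedback), and the node coupling written out for identical pipes. The friction source ν(R+ − R−)|R+ − R−| is an illustrative stand-in with the qualitative properties stated in Remark 5.2 rather than the exact expression of the paper, and the pipe data are placeholders.

```python
import numpy as np

# Semilinear model in Riemann invariants on a star of three identical pipes:
#   d_t R+ + c d_x R+ = -sigma(R+, R-),   d_t R- - c d_x R- = +sigma(R+, R-),
# with sigma = nu*(R+ - R-)*|R+ - R-| as an illustrative stand-in (Remark 5.2).
c, L, N, nu = 340.0, 10_000.0, 400, 1e-3          # placeholder pipe data
J = np.log(60e5)                                  # rest state: ln(pressure)
dx = L / N
dt = dx / c                                       # maximal CFL step
Rp = np.full((3, N + 1), J + 0.05)                # perturbed initial data
Rm = np.full((3, N + 1), J - 0.05)                # x = 0 is the central node

def friction_step(Rp, Rm, dt, nu):
    """Implicit Euler for d/dt(R+ - R-) = -2*nu*D*|D|; R+ + R- is unchanged."""
    S, D = Rp + Rm, Rp - Rm
    a = 2.0 * nu * dt
    D_new = np.sign(D) * (np.sqrt(1.0 + 4.0 * a * np.abs(D)) - 1.0) / (2.0 * a)
    return 0.5 * (S + D_new), 0.5 * (S - D_new)

def L0(Rp, Rm):
    """Quadratic L2-type Lyapunov function around the rest state J."""
    return dx * np.sum((Rp - J) ** 2 + (Rm - J) ** 2)

for n in range(2001):
    if n % 400 == 0:
        print(f"t = {n * dt:7.1f} s   L0 = {L0(Rp, Rm):.3e}")
    # transport: with dt = dx/c the upwind update is an exact shift by one cell
    Rm[:, :-1] = Rm[:, 1:].copy()
    Rm[:, -1] = J                                 # absorbing feedback at x = L
    Rp[:, 1:] = Rp[:, :-1].copy()
    # node conditions at x = 0 for identical pipes: equal pressure + Kirchhoff
    Rp[:, 0] = (2.0 / 3.0) * Rm[:, 0].sum() - Rm[:, 0]
    Rp, Rm = friction_step(Rp, Rm, dt, nu)
```

With this splitting, the implicit friction step reduces to a scalar quadratic equation per cell for the difference R+ − R−, since the sum R+ + R− is left unchanged by the source.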
8,745.8
2021-06-07T00:00:00.000
[ "Engineering", "Physics", "Mathematics" ]
Quantization condition from exact WKB for difference equations A well-motivated conjecture states that the open topological string partition function on toric geometries in the Nekrasov-Shatashvili limit is annihilated by a difference operator called the quantum mirror curve. Recently, the complex structure variables parameterizing the curve, which play the role of eigenvalues for related operators, were conjectured to satisfy a quantization condition non-perturbative in the NS parameter ħ. Here, we argue that this quantization condition arises from requiring single-valuedness of the partition function, combined with the requirement of smoothness in the parameter ħ. To determine the monodromy of the partition function, we study the underlying difference equation in the framework of exact WKB. Introduction A long standing goal of topological string theory is to obtain the topological string partition function Z top as an analytic function on the parameter space of the theory. The latter is the product of the coupling constant space C -or C 2 in the case of refinement -in which the genus counting parameter g s -or g s and the coupling constant of the refinement s -take values, and the appropriate moduli space M X of the underlying Calabi-Yau manifold X. This program has been most successful in the case of toric (hence non-compact) Calabi-Yau manifolds. The topological vertex [1] and its refined variants [2,3] permit the computation of Z top in the large radius regime of M X in a power series expansion in exponentiated flat coordinates of M X , with coefficients that are rational functions in e i 1 and e i 2 , with g 2 s = 1 2 , s = ( 1 + 2 ) 2 . The holomorphic anomaly equations [4] and their refinement [5][6][7][8] can be used to compute the coefficients of F top = log Z top in an asymptotic (g s , s) expansion as analytic functions on M X . In the compact case, some impressive all genus results for JHEP06(2016)180 certain directions in the Kähler cone have been obtained for Calabi-Yau manifolds that are elliptically fibered, see e.g. [9]. An open question is how to define Z top without recourse to any expansion. In [10,11], the open topological string partition function Z top,open on a toric Calabi-Yau manifold X was studied for a particular class of torically invariant branes, and the mirror curve C of X identified as the open string moduli space for this problem. This insight led to the computation of F top,open to leading order in g s . Ref. [12] proposed that to extend the computation beyond leading order in g s , the mirror curve C had to be elevated to an operator O C . In fact, it is the Nekrasov-Shatashvili (NS) limit [13] The idea to recover the closed topological string partition function from the monodromy of the open partition function was put forward in [12], and made more precise in [13][14][15]. In a remarkable series of papers [17][18][19][20][21][22][23][24], the quantization of the mirror curve has been taken as a framework within which to define Z top non-perturbatively. In the genus one case, the equation (1.1) can straightforwardly be rewritten as a spectral problem for the complex structure parameter z of the mirror geometry, Here, O C is put in the formÕ − z via appropriate variable redefinitions [22]. 2 Upon specifying the function space F in which Z NS top,open is to lie, this eigenvalue problem can be solved numerically. 
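As an illustration of such a numerical treatment (a generic sketch, not the specific scheme used in the works cited above), one can truncate the operator in a harmonic oscillator basis of L2(R) and diagonalize the resulting matrix. The snippet below does this for an operator of the local P1 × P1 type, e^x + e^(−x) + e^p + e^(−p) with the mass parameter set to one; the basis size and the value of ħ are arbitrary, and the dictionary between the eigenvalues and the modulus z is not fixed here.

```python
import numpy as np
from scipy.linalg import expm, eigh

def quantum_curve_spectrum(hbar, n_levels=200, n_keep=5):
    """Diagonalize O = e^x + e^-x + e^p + e^-p (local P1 x P1 type, mass = 1)
    in a truncated harmonic oscillator basis of L^2(R).  Exponentiating the
    truncated x and p is itself an approximation, so only low-lying
    eigenvalues that are stable under increasing n_levels are meaningful."""
    n = np.arange(1, n_levels)
    a = np.diag(np.sqrt(n), 1)                  # annihilation operator
    x = np.sqrt(hbar / 2.0) * (a + a.T)         # [x, p] = i*hbar (truncated)
    p = 1j * np.sqrt(hbar / 2.0) * (a.T - a)
    O = expm(x) + expm(-x) + expm(p) + expm(-p)
    return eigh(O, eigvals_only=True)[:n_keep]

print(quantum_curve_spectrum(hbar=2.0 * np.pi))
```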
For the higher genus case, [24] identify the mirror curve C of the toric Calabi-Yau manifold X with the spectral curve of a quantum integrable system determined by the toric data of X. The underlying class of quantum integrable systems was introduced by Goncharov and Kenyon [28]. The complex structure parameters z i of the mirror curve map to the spectrum of the integrable system. Ref. [17] and follow-up works propose a quantization condition on the parameters z i based on a non-perturbative modification of F NS top,closed , which roughly takes the form (see equation (3.12) below for the precise statement) Here, F NS,pert top,closed and F NS,BPS top,closed are the conventional perturbative and enumerative contribution to the closed topological string partition function in the NS limit. F NS,BPS,np top,closed is a contribution included in the quantization condition to cancel poles of F NS,BPS top,closed in q = exp (i ). Condition (1.3), at real values of , has been shown to reproduce the numerical results obtained by diagonalizing the Hamiltonians of the associated Goncharov-Kenyon system numerically in a harmonic oscillator eigenbasis of L 2 (R) to high precision. 1 A different path towards such a quantization via the study of defects in five dimensional gauge theory is taken in [16]. 2 The significance of the choice of variables upon quantization of the mirror curve has been addressed in various works [12,[25][26][27], but a complete understanding is still lacking. JHEP06(2016)180 In this paper, we aim to establish that the quantization condition (1.3) arises upon imposing single-valuedness of the elements of the kernel of O C . To this end, we need to determine the monodromy of solutions to (1.1) as functions of the complex structure parameters z i on which O C depends. We propose to do this in the framework of exact WKB analysis applied to difference equations. The WKB analysis of difference equations has received some treatment in the literature (see e.g. [29,30]), however, to our knowledge, not in the form we require for our study. We thus attempt to generalize to difference equations the approach presented e.g. in [31] to the transition behavior of WKB solutions of differential equations. We find that the transition behavior in the case of linear potentials can be studied in detail. Difference equations lack the rich transformation theory required to lift the analysis rigorously to general potentials [31]. Our analysis hence relies on the conjecture that the transition behavior for potentials with simple turning points is governed only by these, and well approximated in their vicinity by the linear analysis. To explain the non-perturbative contribution to (1.3), we will argue that by the choice of harmonic oscillator states for the numerical diagonalization, the elements of the function space F are constrained to be L 2 functions in x with smooth dependence on . The latter condition requires adding a non-perturbative piece in to Z NS top,open as hitherto defined. Equation (1.3) arises from the constraint that the function thus obtained be single-valued. We will study the monodromy problem of difference equations in section 2. In section 3, we discuss the open topological string partition function from various perspectives and explain how we expect the quantization condition (1.3) to arise. This discussion is applied to the example O(K) → P 1 × P 1 in section 4, in which we also present numerical evidence for the quantization condition (1.3) in the case of complex . 
We end with conclusions. Monodromy from exact WKB In this section, we will provide evidence linking the monodromy problem of solutions to difference equations of the form (1.1) arising upon quantization of mirror curves to the so-called quantum B-periods. We first flesh out an argument provided by Dunham [32] in the case of differential equations using exact WKB methods. We then set out to generalize these methods to difference equations. Differential equation We will briefly review the basics of WKB analysis in this subsection, following [31]. A somewhat more detailed review in the same spirit can be found in [33]. The starting point of the analysis is a second order differential equation depending on a small parameter . Q(x) is a meromorphic function of x with possible dependence, which we for simplicity will take to be of the form Q(x) = N n=0 Q 2n (x) 2n . To solve this equation, we can make a WKB ansatz JHEP06(2016)180 with S considered as a formal power series in , Plugging this ansatz into the differential equation (2.1) yields the expansion coefficients S n recursively, The equation (2.4) has two solutions S −1 = ± Q 0 (x). The choice of sign propagates down to all expansion coefficients S 2n+1 . We thus obtain two formal WKB solutions ψ ± WKB to (2.1), reflecting the fact that the differential equation is of second order. Denoting S odd = n odd S n n , S even = n even S n n , (2.6) it is not hard to show that The two formal WKB solutions can thus be expressed as The two formal series ψ ± WKB will generically merely provide asymptotic expansions of two solutions to (2.1). Exact WKB analysis is concerned with recovering the functions underlying such expansions. Borel resummation is a technique to construct a function having a given asymptotic expansion as a power series It proceeds in two steps. The first is to improve the convergence behavior of (2.9) by considering the Borel transform The second step is to take the Laplace transform of (2.10), Here, θ is a half-line in the y-plane, emanating from the origin at angle θ to the abscissa. If the sum in (2.10) and the integral in (2.11) exist, then S θ [ψ]( ) defines a function with JHEP06(2016)180 asymptotic expansion given by (2.9), called a Borel resummation of the formal power series (2.9). The Laplace transform (2.11) can fail to exist if ψ B (y) exhibits a singularity on the integration path θ . Integrating along θ ± on either side of the singularity will then generically give rise to two functions S θ ± [ψ]( ), both with asymptotic expansion (2.9), but differing by exponentially suppressed pieces in 1 . The position of the singularities of the Borel transform ψ B thus leads to a subdivision of the y-plane into sectors. Choosing θ to lie in different sectors will give rise to different Borel resummations of (2.9). When the coefficients of the formal power series (2.9) depend on a variable x, the position of the poles of the Borel transform (2.10), and hence the subdivision of the y-plane into sectors, will depend on x. Keeping the integration path θ fixed, crossing certain lines in the x-plane will result in poles of ψ B crossing θ . These lines are called Stokes lines. They divide the x-plane into Stokes regions. The Borel resummation of (2.12) performed on either side of a Stokes line will yield functions whose analytic continuation to a mutual domain will differ by exponentially suppressed terms. 
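As a concrete, standard illustration of these two steps (independent of the WKB series of this paper), take the formal series ψ(ħ) = Σ_{n≥0} (−1)^n n! ħ^(n+1). Its Borel transform is Σ_{n≥0} (−y)^n = 1/(1 + y), and the Laplace transform along the positive real axis gives the Borel sum ∫_0^∞ e^(−y/ħ)/(1 + y) dy. The snippet below compares this Borel sum with truncations of the divergent series:

```python
import math
from scipy.integrate import quad

hbar = 0.1

# Borel sum: Laplace transform of the Borel transform 1/(1 + y) along R+
borel_sum, _ = quad(lambda y: math.exp(-y / hbar) / (1.0 + y), 0.0, math.inf)

# truncations of the divergent series  sum_{n>=0} (-1)^n n! hbar^(n+1)
partial = [sum((-1) ** n * math.factorial(n) * hbar ** (n + 1) for n in range(N))
           for N in (5, 10, 15, 20)]

print("Borel sum   :", borel_sum)
print("truncations :", partial)
# The truncations first approach the Borel sum and then blow up (optimal
# truncation near N ~ 1/hbar); the Borel sum itself is unambiguous here
# because the only singularity of 1/(1 + y), at y = -1, is off the ray R+.
```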
Returning to the WKB analysis of a second order differential equation, the Stokes phenomenon implies that a Borel resummation of the formal WKB solutions (2.8) will yield a different basis of the solution space depending on the Stokes region in which the Borel resummation is performed. This behavior can be studied by first considering the case of a linear potential Q(x) = x, then using the transformation theory of differential equations to reduce the analysis of more general potentials to this case. For potentials with only simple zeros, the results are as follows: the Stokes lines emanate from zeros of the potential, called turning points. Simple turning points have three Stokes lines and a branch cut emanating from them. The trajectory of Stokes lines depends on the choice of integration path θ for the Laplace transform and is determined by the equation with x 0 the position of the turning point. As the Borel resummation of the formal WKB solutions ψ ± WKB in any Stokes region yields a basis of solutions to the differential equation (2.1), each such pair can be expressed as a linear combination of any other such pair. Neighboring Stokes regions are assigned transition matrices which enact the linear transformation relating the associated two pairs of solutions. The form of the transition matrices depends on the normalization of the WKB solutions, determined by the lower bound on the integration in the exponential of the WKB ansatz (2.2). Choosing this lower bound to be the turning point from which the Stokes line separating the two Stokes regions emanates yields independent transition matrices. The exact form of the transformation matrices can be determined, as mentioned above, by solving the differential equation with linear potential explicitly, and then mapping the general situation to this case. The space of solutions to the linear problem is spanned by Airy functions. For our purposes, we will only need the product T of the three transition matrices which arise when we circumnavigate a turning point in counter clockwise order, crossing three Stokes lines consecutively, but without crossing the branch cut. To compute T , it suffices to know that the Airy functions are single-valued in the vicinity of the turning point. It follows that B · T = id , (2.14) where the matrix B relates the Borel resummation of ψ ± WKB in the same Stokes region, but on either side of the branch cut. Crossing the branch cut interchanges ψ + WKB and ψ − WKB , and leads to a factor of i due to the square root in the denominator of (2.8). This reasoning yields The sign depends on conventions that we will not bother to fix, as it will cancel in our considerations. Let us now consider a potential with two simple turning points, giving rise to a Stokes pattern as depicted in figure 1. We want to consider the monodromy of a pair of WKB solution as we encircle the two turning points once. If we begin with a pair of WKB solutions normalized at the turning point x 1 , we must change their normalization by multiplying by the matrix (2. 16) in order for their transition behavior upon circumnavigating the turning point x 2 to be governed by (2.15). The superscript SR is to denote the Stokes region in which the integration path from x 1 to x 2 lies. The exponential entries in the normalization matrix are called Voros multipliers. They are to be understood as the Borel resummation of the indicated formal power series. 
Such Borel resummations exhibit interesting jumping behavior with regard to the choice of expansion parameter , as e.g. recently discussed in [33]. JHEP06(2016)180 In total, we obtain the monodromy matrix with the superscripts SRa and SR b indicating that the integration path connecting x 1 and x 2 is to be taken above/below the branch cut. The integration cycle is accordingly a path encircling the branch cut. Requiring a pair of WKB solutions in the situation depicted in figure 1 to be singlevalued is hence equivalent to demanding S odd dx = 2πi n + 1 2 . (2.18) Difference equation Unlike differential equations, difference equations have no obvious transformation theory. Under variable transformation other than linear, their form changes drastically. We will perform an exact WKB analysis in the case of linear potential in the following, but not be able to offer an intrinsic criterion determining for which potentials the linear approximation is justified. Generalities Consider a difference equation in the form for a potential Q(x) which we take to be independent for simplicity. With the WKB ansatz 20) N ( ) being a normalization factor which we shall fix below, we obtain we obtain and furthermore JHEP06(2016)180 Hence, where we have chosen a branch in (2.23). The analytic structure of the inverse cosh function is best understood by expressing it as The arccosh function has two branch points at ±1 respectively due to the square root functions, and one at −∞ due to the logarithm. Choosing the branch cuts for the square roots and the logarithm in the negative real direction, the branch cuts of the two square roots cancel beyond z = −1, at which point the branch cut of the logarithm begins. 3 The sheet structure of arccosh is hence such that the branch cut along [−1, 1] connects two sheets related via a sign flip, as whereas the branch cut beyond z = −1 connects sheets related via a shift of the imaginary part by 2π. Of the three branch points ±1, −∞, the point 1 is distinguished in that it is at finite distance and does not lie on a branch cut. The expansion of arccoshz around this point has √ z as leading term. We thus identify the points {x 0 : Q(x 0 ) = 1} as the turning points of the difference equation. As long as Q (x 0 ) = 0, we can approximate the behavior of the difference equation in the vicinity of such a turning point by a linear potential. WKB for a linear potential The difference equation with a linear potential is Setting Q(x) = x in (2.23) and (2.24), the WKB coefficients S n can be integrated, yielding By (2.25) and using (2.26), the leading order behavior of the WKB solution is thus In fact, we can check explicitly that this WKB solution provides an asymptotic expansion for a solution of the difference equation (2.29), as we can construct a solution to this JHEP06(2016)180 equation based on the Bessel function [34]. 4 Recall that the Bessel function satisfies the recursion relation The function J x ( 1 ) thus solves the difference equation (2.29). The Bessel function is known to have asymptotic behavior, for ν → ∞ along the real axis and constant positive z, given by (see e.g. [36]) this coincides with the WKB result ψ + WKB (2.31), with the normalization N ( ) fixed at We have hence matched the leading behavior of the WKB solution (2.31) to the asymptotic expansion of the Bessel function J x ( 1 ) for positive real x and small and positive. 
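The relation between the Bessel function and the linear-potential difference equation can also be checked numerically. The snippet below (an independent check, with ħ and x chosen arbitrarily, and with ψ(x) = J_{x/ħ}(1/ħ) as we read the statement above) verifies the three-term relation ψ(x + ħ) + ψ(x − ħ) = 2xψ(x) and compares the ratio ψ(x + ħ)/ψ(x) with exp(−arccosh x), the leading behaviour of the decaying WKB branch for x > 1:

```python
import numpy as np
from scipy.special import jv

hbar, x = 0.01, 2.0
psi = lambda y: jv(y / hbar, 1.0 / hbar)        # psi(x) = J_{x/hbar}(1/hbar)

# three-term relation (our reading of the linear-potential difference equation)
lhs = psi(x + hbar) + psi(x - hbar)
rhs = 2.0 * x * psi(x)
print("relative residual :", abs(lhs - rhs) / abs(rhs))

# leading behaviour of the decaying WKB branch for x > 1
print("psi(x+hbar)/psi(x):", psi(x + hbar) / psi(x))
print("exp(-arccosh x)   :", np.exp(-np.arccosh(x)))
```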
To study the Stokes phenomenon, we will now take advantage of the fact that the Bessel function is also the solution to a differential equation, to which we can apply the exact WKB methods reviewed in the previous section. Indeed, J ν (z) solves the differential equation We can eliminate the linear term and cast the equation in the form (2.1) by considering w ν (z) = √ z y(νz), which satisfies The conventional theory of exact WKB analysis of differential equations allows us to determine the Stokes behavior of the WKB expansion of the solutions to this differential equation. By relating this expansion to the WKB solution of the difference equation, we can derive the Stokes behavior of the latter. Making a WKB ansatz 5 w ∼ exp R ∂ , we obtain the leading terms Bessel functions appear in [35] in the analysis of Stokes curves of loop type. It would be interesting to explore connections to the analysis presented here. 5 The superscript ∂ is to distinguish quantities pertaining to the differential equation (2.37) from those pertaining to the difference equation (2.29). JHEP06(2016)180 By comparing to the asymptotic expansion (2.33), we can fix the normalization of the WKB expansion to the uniqueness of asymptotic expansions in power series implies (2.42) We have here assumed that the WKB expansions (2.41) and (2.40) yield asymptotic expansions to the indicated solutions of the difference and differential equation respectively. In the case of the differential equation, this is guaranteed by general theory. We have verified (2.42) to high order in . We next address the question of how the Stokes lines of the two asymptotic expansions are related. The Borel transforms of the two expansions are given by The Borel sum of the WKB series of the difference equation hence indeed equals, for real x and small positive , From the theory of exact WKB for differential equations, we know that the Borel transform ψ ∂ B (z, y) has a branch point in the y-plane at R ∂ −1 (z). The Laplace transform performed along the real axis, i.e. with θ = R + in the notation of (2.11), will hence be ill-defined for R ∂ −1 (z) ∈ R + , identifying this condition as determining the location of the Stokes line. By (2.45), ψ B (x, y) hence exhibits a branch point at y/x = R ∂ −1 1 x , i.e. y = R −1 (x). The condition determining the location of the Stokes line is therefore R −1 (z) ∈ R + . We conclude that the location of the Stokes lines of the difference equation is determined by the phase of R −1 (z), just as a naive generalization of the conventional WKB results would have suggested. By The behavior of the Borel resummed WKB solution Ψ ∂ WKB upon crossing Stokes lines emanating from the turning point z = 1 is governed by the general theory. In particular, the transition behavior of Ψ ∂ WKB upon circumnavigating a turning point is given by the matrix (2.15), and Ψ WKB inherits this behavior. If we assume that the monodromy of the WKB solutions of difference equations is governed by Stokes lines emanating from turning points {x 0 : Q(x 0 ) = 1}, and that the behavior upon crossing such lines is captured by the analysis for linear potential just presented, then the analysis of section 2.1 applies, leading to the single-valuedness condition (2.18) in the case of potentials with two turning points. JHEP06(2016)180 3 The open topological string and the mirror curve The conjectured quantization condition We begin this section by reviewing the quantization condition discussed in the introduction as presented in [24]. 
In this form, it applies to the topological string on an arbitrary toric Calabi-Yau manifold X. The mirror to such a space is given by a pair (C, λ), consisting of a complex curve C together with a meromorphic 1-form λ, the 5d analogue of the Seiberg-Witten differential [37]. C is given as the zero locus of a polynomial which can be constructed, up to linear redefinitions of the variables x and p, from the toric data of X [38]. The latter can be presented as a grid diagram, given by the intersection of the three dimensional fan of X with the x 3 = 1 plane. The number of interior points of the grid diagram corresponds to the genus g of C. Each such point gives rise to a modulus z i which enters as a parameter in P C . Each of the N boundary points of the grid diagram beyond the first three gives rise to an additional parameter m i or z g+i in P C , referred to as a mass parameter in [39]. 6 The moduli z i , i = 1, . . . , g, coincide in the large radius limit with Q i = exp(−T i ), the exponentials of the flat coordinates T i on the complexified Kähler moduli space of X. These are chosen among the g + N − 3 simply logarithmic solutions of the underlying Picard-Fuchs equations governing the periods of the meromorphic 1-form λ on the curve C. They are paired with doubly logarithmic solutions corresponding to Bperiods. In contrast, the mass parameters m i or z g+i correspond to residues of the 1-form and do not have dual partners. They are given as algebraic functions of the g + N − 3 exponentiated logarithmic solutions Q i to the Picard-Fuchs system. The polynomial P C can be promoted to an operator O C by setting This operator is conjectured to have the open topological string wave function on X in the NS limit, Z NS top,open , in its kernel, with x identified as the open string modulus. Ref. [24] identifies the equation as the quantum Baxter equation for the Goncharov-Kenyon integrable system determined by the toric data of X. The eigenvalues of the Hamiltonians of this system map to the complex structure parameters z i , i = 1, . . . , g, of C. Solving the quantum Baxter equation (3.4) with appropriate boundary conditions on Ψ is equivalent to solving the spectral problem. Numerical evidence for this beyond the genus one case is reported in [23]. JHEP06(2016)180 The conjectured quantization condition [24] is a set of equations, indexed by g integers n i , whose solution set of g-tuples is to coincide with the Goncharov-Kenyon spectrum. The ingredients that enter into the quantization condition are the Nekrasov-Shatashvili limit of the refined topological string free energy [2,13], F NS top,closed , and the quantum mirror map [15]. F NS top,closed encodes integer invariants associated to the Calabi-Yau X [40]. These appear most naturally when it is expressed in terms of the flat coordinates T i on the complexified Kähler moduli space of X. We can distinguish between two contributions to F NS top,closed . First, there is a perturbative contribution which depends on the triple intersection numbers a ijk of the compact toric divisors of X (suitably generalized to the non-compact setting) and integers b N S i , which have not been given a geometric interpretation yet, The second contribution depends on integer invariants N d j L ,j R of the geometry, with d a g N -tuple mapping to a class in H 2 (X) via the choice of coordinates T i , and the half-integers (j L , j R ) indicating a representation of SU(2) × SU(2), and has the form with Q i = exp(−T i ) as introduced above, and Q = (Q 1 , . . . 
, Q g ). Following [24], we have indexed this contribution with BPS due to its enumerative interpretation [2,6,40,41]. In the spirit of [13], one would then like to impose a quantization condition on the parameters C ij is the intersection matrix between a basis of curve classes in X, corresponding to a basis of the Mori cone of the toric geometry and the coordinates T i , and the torically invariant divisors of X. It arises in [42] to relate the derivatives of the prepotential to these divisors. The crucial ingredient in the quantization condition of [24], inspired by the so-called pole cancellation mechanism in [43], is to consider a third contribution to the quantization condition based on F NS,BPS top,closed , but evaluated at (up to a detail to which we return presently) The inspiration behind including this term stems from the observation that the contribution (3.6) to the free energy has poles, due to the sum over w, at = 2π r s for all integer values of r and s. As a function of q = exp(i ), it hence necessarily exhibits at best a natural boundary of analyticity on the unit circle (in fact, we will see in the example of local P 1 × P 1 in section 4.5 that even away from the unit circle, the expansion (3.6) is not convergent). The quantization condition (3.7) as it stands is hence ill-defined, at least for such values of . The evaluation point (3.8) is chosen to precisely cancel the contribution JHEP06(2016)180 from each of these poles: for = 2π r s , the pole which arises in (3.7) at w = ls, l ∈ N is canceled by the contribution of the corresponding derivative of the NS free energy evaluated at (3.8) at w = lr, l ∈ N. This almost works as is: the residues evaluate to The sign factors in (3.9) and (3.10) can be adjusted such that the two terms cancel if the Kähler parameters can be shifted by a B-field that satisfies for all pairs (j L , j R ) for which N d j L ,j R = 0 [44]. The existence of such a B-field has been shown for many classes of examples, but a proof of its existence for all toric geometries is still lacking. Combining these elements yields the conjectured quantization condition The equations (3.12) can be solved to express the Kähler parameters T in terms of the integers n i . The so-called quantum mirror map, discussed further in section 4, then maps these solutions to the eigenvalues z i of the Goncharov-Kenyon spectral problem. 7 The open topological string partition function The open topological string partition function, as defined in [45], serves as a generating function for open Gromov-Witten invariants, counting maps, in an appropriate sense, from Riemann surfaces with boundary to a Calabi-Yau manifold X with branes on which these boundaries are constrained to lie. We will call this partition function Z GW top,open = exp F GW top,open . When X is a toric Calabi-Yau manifold, this notion can be refined [46,47], and leads to a formal series in two expansion parameters 1 and 2 . JHEP06(2016)180 Beginning with [12], it has been gradually understood [14,15] that the monodromy of the open topological string partition function is intimately related to the corresponding closed topological string partition function. The mirror curve C to the toric Calabi-Yau X is identified as the open string moduli space [10,11], such that F GW top,open becomes a function on C. The leading order contribution to F GW top,open in an 1,2 expansion is then given by where λ is the meromorphic 1-form introduced in section 3.1. 
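The pole structure that the cancellation mechanism is designed to cure is easy to exhibit numerically. The sketch below keeps only the 1/sin(w*hbar/2) factor of the summand responsible for the poles (a deliberately schematic stand-in for the full (j_L, j_R) kernel of (3.6), with a single effective class and an illustrative Kähler parameter), and shows that as hbar approaches 2*pi*r/s the w = s term of the direct series and the w = r term of its partner, evaluated at the dual point (T, hbar) -> (2*pi*T/hbar, 4*pi^2/hbar), blow up together; these are the two contributions that the quantization condition plays off against each other.

```python
import numpy as np

def ns_term(w, T, hbar, N=1.0):
    """Schematic w-th instanton term of F^{NS,BPS} for a single effective curve
    class with invariant N and Kaehler parameter T.  Only the 1/sin(w*hbar/2)
    factor responsible for the poles at hbar = 2*pi*r/s is kept; the full kernel
    also carries the (j_L, j_R) sine ratios."""
    return N * np.exp(-w * T) / (w * np.sin(w * hbar / 2.0))

r, s = 1, 3            # probe hbar near 2*pi*r/s
T = 2.5                # illustrative Kaehler parameter
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    hbar = 2 * np.pi * r / s + eps
    direct = ns_term(s, T, hbar)                                     # pole at w = s
    partner = ns_term(r, 2 * np.pi * T / hbar, 4 * np.pi**2 / hbar)  # pole at w = r
    print(f"eps = {eps:.0e}   w=s term: {direct: .3e}   dual w=r term: {partner: .3e}")
```

As eps shrinks, both terms grow like 1/eps with opposite signs, which is the pairing that the B-field-adjusted sign factors are meant to turn into an exact cancellation.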
Thus, the monodromy of this leading contribution around the A-and B-cycles of the mirror curve coincide with the periods of λ. These determine the prepotential F 0 of X via the special geometry relations (3.14) In refined topological string theory, F 0 is the leading contribution in the formal expansion of F top in 1 and 2 . It was argued in [15], based on insights from [12][13][14], that the higher order corrections to F top in the NS limit 8 2 → 0, should arise as the monodromy of F GW,NS top,open , given by with λ q identified with the exponent S of the WKB ansatz discussed in section 2. The special geometry relation (3.14) now takes the form This proposal was checked explicitly in [14] for pure 4d SU(2) gauge theory. In the framework of the AGT correspondence [48], the necessity to take the NS limit to relate F top,open to λ q becomes particularly transparent, see [49]. Eq. (3.16) was further checked in both the 4d and 5d setting in [8,39,50]. It was shown to follow from the AGT correspondence in [51] for N = 2 * gauge theory. The refined open topological string partition function on toric geometries can also be defined as an index. In this incarnation, it takes the form [46,47] The need for such so-called flat open coordinates was first exposed in [10,11] (see also [52], where an alternate algorithm was proposed to compute these coordinates). In the Nekrasov-Shatashvili limit t → 1, we set with the A-monodromy due exclusively to the first term on the r.h.s. As the B-monodromy is due to the branch cut structure in X, it is not visible upon expanding in X. This can be seen at leading order in by studying x λ. Hence, the B-monodromy should be determined after combining the two terms in (3.19) by first summing the infinite series in exp(x) of their x derivative. The dependence on The convergence of the sums in (3.17) over d and m depends on the growth properties of the constants D s 1 ,s 2 m,d . But already the sum over multi-wrappings n is problematic: for 2π ∈ Q, the summand diverges for infinitely many n. F BPS top,open as presented in (3.17) hence exhibits poles at a dense set of points on the unit circle in the q-plane. This is the open string analogue of the behavior of the closed topological string amplitude discussed in section 3.1. To study this phenomenon, we will begin by considering the quantum dilogarithm [53]. For |X| < 1, this function can be defined via the exponential of an infinite sum, which takes the product form if |q| > 1 . (3.21) JHEP06(2016)180 (X; q) converges uniformly inside and outside the unit circle on the q-plane, but is ill-defined on a dense subset of the unit circle itself. To address this problem, Faddeev introduced what he called the modular quantum dilogarithm in [54], by considering the quotient (in our notation) The denominator is chosen to cancel the poles of the numerator. To see the mechanism at work, consider such a pole at 2π = r s . The sum entering in the dilogarithm in the numerator has a summand at k = ms that exhibits a pole with residue . (3.23) A corresponding term in the denominator which cancels this contribution stems from the summand at k = mr, with residue . (3.24) By re-ordering the two formal infinite sums that occur in the exponentials of (3.22), we obtain a function defined everywhere on the q-plane which coincides with the product (3.22) for q off the unit circle. Note that the pole cancellation mechanism works for any sum of the form for f k a rational function of its arguments, by subtracting a contribution , (3.26) i.e. 
as long as all parameters aside from q in the correction term are evaluated to the power of 2π . Returning to the exp(x) expansion of the open topological string partition function (3.17), we note that Z BPS top,open is almost of the form (3.25), up to the fact that q and t are evaluated to half-integer powers. Running through the pole cancellation argument for this case, we see that half-integer powers of q lead to a sign factor at = 2π r s , k = ms, 27) and likewise in the correction term, q − 4π 2 2 ks 1 = (−1) 2s 1 ms (3.28) at k = mr. For the cancellation mechanism to work, we can shift the Kähler parameters by a B-field that satisfies (−1) 2s 1 +B·d = 1 which can then be solved recursively and yields an expansion of Ξ(x) in the closed moduli parameters z i . We can extract Ψ(x) from Ξ(x) up to the ambiguity discussed. Expressing the moduli z i in Ξ(x) in terms of flat coordinates T i , this should coincide with Z BPS,NS top,open . We therefore refer to this formal series in z i (upon a choice of the ambiguity) as ψ BPS . We will apply both methods to the example of local P 1 × P 1 in section 4. The ambiguity of multiplying Ψ(x) via a periodic function in x can be reduced by specifying the function space on which the operator O C acts. Aside from the behavior in x, the dependence on the parameters q and z needs to be specified. The third method of computing Ψ(x) explicitly depends on this choice of function space. It proceeds by specifying a basis for this space, expressing O C as a matrix O N C in a truncation of this basis to N elements. The values of z for which the kernel of O N C is non-empty can then be determined by solving det O N C = 0, upon which the kernel in the approximation of this truncation easily follows. The choice of function space made in [18,23,24] is the L 2 (R) space spanned by the eigenstates of the harmonic oscillator, JHEP06(2016)180 The H n (x) are the Hermite polynomials, and ω and m are physical parameters which will play no role for our purposes and will be set to convenient values in the following. Upon computing the matrix elements of the monomials generated by e x and e p = e −i ∂ ∂x , k|e ax e bp |l = 2 the matrix elements of operators O C in this basis can easily be determined. By making the choice of basis (3.34), we are committing to a certain type of dependence. The states (3.34) depend continuously on (up to branch cuts) and are defined for any value ∈ C * . They are elements of L 2 (R x ) for Re ( ) > 0. The kernel of O N C is determined by solving a system of N linear equations with coefficients the matrix elements (3.35). The solution will be a linear combinations of the harmonic oscillator eigenstates (3.34) with coefficients that are rational functions of these matrix elements. We will call F x, the space of functions of the variables ( , x) of this form. The quantization condition presented in section 3.1 is to yield the tuples z for which the kernel of the operator O C has non-zero intersection with this function space. Consequences of imposing Ψ ∈ F x, Let us assume that ψ WKB , the formal power series in = i defined in (3.32), can be Borel resummed to a function Ψ WKB away from q = 1. For Ψ WKB to be an element of F x, , it must be single-valued as a function of x. We have argued in section 2 that the monodromy along a path C is given by exp Π C , with Π C the Borel resummation of the integral π C = C S, and S defined in (3.32). 
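As a concrete illustration of this truncation method, the sketch below assembles the local P1 x P1 operator of section 4 on the slice z_1 = z_2 = z in the harmonic oscillator basis, with m = omega = 1. For brevity the exponentials of x and p are formed by exponentiating the truncated matrices rather than through the exact matrix elements (3.35), and the smallest singular value of the N x N truncation is scanned as a proxy for det O^N_C = 0; the matrix size, the value of hbar and the scan window are placeholders, and serious computations of this type need the exact matrix elements together with extended-precision arithmetic.

```python
import numpy as np
from scipy.linalg import expm, svdvals

def ladder(N):
    """Truncated annihilation operator in the harmonic oscillator basis."""
    return np.diag(np.sqrt(np.arange(1, N)), k=1)

def xp_matrices(N, hbar):
    """x and p as N x N matrices obeying [x, p] = i*hbar up to truncation effects
    (oscillator parameters m = omega = 1)."""
    a = ladder(N)
    x = np.sqrt(hbar / 2.0) * (a + a.T)
    p = 1j * np.sqrt(hbar / 2.0) * (a.T - a)
    return x, p

def O_trunc(z, N=60, hbar=2 * np.pi):
    """Truncation of e^p + z e^{-p} + e^x + z e^{-x} + 1 (the local P1 x P1 curve
    on the slice z1 = z2 = z).  Exponentiating the truncated x, p is cruder than
    using the exact matrix elements; enlarge N and the working precision to
    control the error."""
    x, p = xp_matrices(N, hbar)
    return expm(p) + z * expm(-p) + expm(x) + z * expm(-x) + np.eye(N)

def sigma_min(z, **kw):
    """Smallest singular value of the truncated operator, a proxy for det = 0."""
    return svdvals(O_trunc(z, **kw))[-1]

# Illustrative scan over a window of the modulus; dips signal candidate
# eigenvalues of the Goncharov-Kenyon spectral problem in this approximation.
for z in np.linspace(-0.10, -0.01, 10):
    print(f"z = {z:+.3f}   sigma_min = {sigma_min(z):.4e}")
```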
When C coincides with the B i -cycle of the geometry, the NS conjecture (3.16) identifies π C with ∂ t i F NS . The monodromy is thus of the form exp[φ pert (z, q) + φ BPS (z, q)]. The condition for single-valuedness of Ψ WKB around the cycle B i is hence φ pert (z, q) + φ BPS (z, q) = 2πi n , n ∈ Z . (3.37) In a z i expansion, ψ WKB reproduces the expansion of ψ BPS , which was defined below (3.33). While for real 2π / ∈ Q, it has been noted [55,56] that the Borel resummation of ψ WKB is locally smooth in , there exists no argument that Borel resummation at complex will eliminate the poles in q plaguing ψ BPS . If this indeed does not occur, Ψ WKB / ∈ F x, . To proceed, we will assume that ψ BPS is also Borel summable, in its expansion parameters z, to the function Ψ BPS . To enforce smooth behavior upon approaching the unit q-circle, we take our cue from the discussion in 3.2.1 and consider the quotient (3.38) Due to the periodicity of the denominator under x → x + i , this is still a solution to the difference equation (3.31). 9 The condition for the single-valuedness of Ψ upon circumnavigating the cycle B i is given by for m, n ∈ Z, and arbitrary κ ∈ C, or equivalently, We are here assuming that the perturbative contribution φ pert (z(Q), q) to the quantization condition arises upon combining Ψ BPS with an additional contribution, as in (3.19), and is not modified by the denominator of (3.38). We will argue in section 4.4 that the half-integer shift on the r.h.s. of (3.12) is due to φ pert containing a contribution φ pert = πi + . . .. This explanation of the quantization condition predicts a relation between the closed invariants 4 Example: local P 1 × P 1 In this section, we will apply our analysis to the geometry X = O(−K) → P 1 × P 1 . This geometry has been studied extensively in the literature with regard to its closed string invariants [40,58,59], and in the context of the quantization condition (3.12) for real [19]. Here, we will be interested in the WKB analysis of the difference equation (3.31) for this geometry. As our analysis in section 3.2.3 relies on complex , we will extend the study of (3.12) to this case. In passing, we will also compute some open string invariants of this geometry and verify their integrality upon appropriate choice of flat open variables and invoking the quantum mirror map. The mirror curve and classical periods via Picard-Fuchs The toric grid diagram describing the local P 1 × P 1 geometry is depicted in figure 3, with vertices corresponding to one dimensional cones of the fan enumerated from 0 to 4. The diagram exhibits one interior point and one boundary point beyond three. The underlying geometry is therefore described by one modulus and one mass parameter, in the terminology introduced in section 3.1. Following the standard algorithm [38], each independent relation among the one dimensional cones is assigned a parameter z i , and the equation for the mirror curve C is obtained as P C (e x , e p , e −x , e −p ) = e p + z 1 e −p + e x + z 2 e −x + 1 = 0 . Figure 3. The toric grid diagram for local P 1 × P 1 . Flat coordinates t 1 and t 2 on the complexified Kähler moduli space of X, encoding the size of the two P 1 curves respectively, are identified as the logarithmic solutions of the corresponding Picard-Fuchs system. 
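The counting of moduli and mass parameters from a grid diagram can be reproduced in a few lines. The sketch below enumerates the lattice points of a convex grid diagram, reads off the genus as the number of interior points and the mass parameters as the number of boundary points beyond the first three, and applies this to the local P1 x P1 diagram with its vertices placed, as an assumption of the sketch, at (1,0), (0,1), (-1,0), (0,-1).

```python
from itertools import product

def classify_lattice_points(vertices):
    """Interior and boundary lattice points of a convex polygon whose integer
    vertices are listed counter-clockwise."""
    xs, ys = zip(*vertices)
    interior, boundary = [], []
    n = len(vertices)
    for px, py in product(range(min(xs), max(xs) + 1), range(min(ys), max(ys) + 1)):
        crosses, on_edge = [], False
        for i in range(n):
            (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
            c = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
            if c == 0 and min(x1, x2) <= px <= max(x1, x2) and min(y1, y2) <= py <= max(y1, y2):
                on_edge = True
            crosses.append(c)
        if on_edge:
            boundary.append((px, py))
        elif all(c > 0 for c in crosses):   # strictly inside the counter-clockwise polygon
            interior.append((px, py))
    return interior, boundary

# Grid diagram of local P1 x P1 (vertex placement assumed for illustration).
vertices = [(1, 0), (0, 1), (-1, 0), (0, -1)]
interior, boundary = classify_lattice_points(vertices)
print(f"genus g = {len(interior)}   (interior points {interior})")
print(f"mass parameters = {len(boundary) - 3}   (boundary points {boundary})")
```

It returns one interior point and four boundary points, i.e. g = 1 and a single mass parameter, in agreement with the description of figure 3 above.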
These can be determined at small z 1 , z 2 (the large radius regime on X) via the Frobenius method [38] to be As above, we will also introduce exponentiated coordinates Q i = exp(−t i ), such that small z i corresponds to large t i and small Q i . The quotient z m = z 2 z 1 = Q 2 Q 1 is an algebraic function in the exponentials of the flat coordinates, identifying it as a mass parameter. By inverting (4.4), we obtain the so-called mirror map The doubly logarithmic solutions of the Picard-Fuchs system can also be determined via the Frobenius method at large radius, and allow the computation of the prepotential F 0 in this regime. Introducing coordinates T = t 1 , T m = t 2 − t 1 to distinguish between modulus and mass parameter and expressing the doubly logarithmic solutions in terms of these, an appropriate linear combination of them yields ∂ T F 0 . The correct linear combination can be determined e.g. by matching some low lying Gromov-Witten invariants (obtained e.g. by geometric means, or via the topological vertex). The quantum mirror curve and F NS top,open via recursion This equation can be solved recursively, yielding a formal series in z i which we call ξ(x). The quantum mirror curve and quantum periods The analytic structure of the mirror curve (4.3) becomes clearer if we redefine variables by settingx This gives rise to the curve The kernels of the quantization of the curves (4.3) and (4.13) are related via As the leading contribution S −1 to the WKB ansatz (3.32) coincides with the solution of (4.15) forp, the analytic structure of this curve is captured by The dependence of S −1 onx is via coshx, a fundamental domain of the function hence lies between Imx = −π and Imx = π. Within this interval, S −1 requires two branch cuts, in accord with the discussion of the sheet structure of the arccosh function in section 2.2.1. We have sketched this sheet structure in figure 4. Following the discussion of section 2.2.1, both branch cuts are divided into two segments: the initial segment is the preimage of the interval [−1, 1] under the argument of the arccosh. Crossing this branch cut changes the sign of the function; it is associated to a branch point of order 2. Crossing the branch cut beyond this point takes one to a sheet with imaginary part shifted by 2π; it is associated to a branch point of order infinity. We can define two conjugate cycles on this geometry, labeled by A and B in figure 4. The A-cycle reflects the periodicity of coshx. The B-cycle passes through the order 2 segment of the branch cuts. JHEP06(2016)180 The quantum mirror map [15] is obtained by defining the A-cycle integral of the WKB exponent S(x) as a flat coordinate. By our discussion in section 3.2, this coincides with the conventional mirror map to leading order in = i . It is possible to compute this integral to all orders in in a z i expansion by noting The integrand on the r.h.s. is understood in an expansion in z i . The integral of log ξ(x) along the A-cycle is easy to perform. Only ζ const in (4.7) contributes, and yields [15,39] Π A = − 1 2πi x 0 +πi Comparing to the result (4.4) obtained via the Picard-Fuchs equation at leading order in allows us to fix the normalization for the quantum corrected period to be Inverting this relation yields the quantum mirror map, the first terms of which are This expression is used to obtain the expansion of ζ BPS,+ in Q i in (4.9). The integral along the B-cycle is more difficult to perform directly, as the branch cuts degenerate in the limit of vanishing z 1 and z 2 . 
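The leading WKB data entering these periods can be extracted directly from the curve: in the conventions of (4.3), multiplying P_C by e^p turns it into a quadratic in Y = e^p, so the two sheets p_(+/-)(x) are elementary. The sketch below solves for them and checks that both branches satisfy the curve; the moduli and the test point are arbitrary illustrative values.

```python
import numpy as np

def p_branches(x, z1, z2):
    """Two branches p_(+/-)(x) of the local P1 x P1 mirror curve
       e^p + z1 e^{-p} + e^x + z2 e^{-x} + 1 = 0.
    Multiplying by e^p gives the quadratic Y^2 + b(x) Y + z1 = 0 in Y = e^p,
    with b(x) = e^x + z2 e^{-x} + 1."""
    b = np.exp(x) + z2 * np.exp(-x) + 1.0
    disc = np.sqrt(b * b - 4.0 * z1 + 0j)   # branch of the square root made explicit
    return np.log((-b + disc) / 2.0), np.log((-b - disc) / 2.0)

def curve(x, p, z1, z2):
    return np.exp(p) + z1 * np.exp(-p) + np.exp(x) + z2 * np.exp(-x) + 1.0

z1, z2 = 0.01, 0.02        # small moduli: the large-radius regime
x = 0.7 + 0.3j             # arbitrary test point
p_plus, p_minus = p_branches(x, z1, z2)
print(abs(curve(x, p_plus, z1, z2)), abs(curve(x, p_minus, z1, z2)))   # both ~ 1e-16
```

Integrating either branch in x gives the leading WKB exponent whose A- and B-cycle monodromies were discussed above.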
The non-logarithmic contributions to this period can be obtained from log ξ by performing the indefinite integral over x order by order in a z i expansion, and extracting the finite contribution at x → −∞ [15,39]. A more elegant computation of the period is clearly desirable. Note that an expansion in exp(x), as has been performed to obtain the form (3.17), does not commute with this integration. Exact WKB analysis To perform a WKB analysis along the lines of section 2.2.1, we consider the curve in the form (4.15), allowing us to identify This yields the WKB expansion coefficients Figure 6. The Stokes graphs for local P 1 × P 1 . in the analysis of the ABJM matrix model in [60] and related works. In this section, we will extend this study to complex . Note that the operator O C obtained from quantization of P C in (4.3) is invariant under the conjugation →¯ , as is the quantization condition (3.12). Every study at complex hence tests the quantization condition simultaneously inside and outside the q unit circle. To determine the eigenvalues numerically, we use the formula (3.35) to compute the matrix elements of the operatorÕ C given in (4.17) in the basis (3.34) of harmonic oscillator eigenstates up to a fixed level n, and then diagonalize the matrix numerically. The dependence on the choice of ω and m in (3.34) decreases with increasing matrix size. To evaluate the quantization condition, we first compute the refined topological string partition function on local P 1 × P 1 using the refined topological vertex [2,61]. This computation is detailed in [23] for the general case of A n singularities fibered over P 1 , and will not be reviewed here. The vertex formalism computes the series coefficients a n in Z top ∼ Q 1 n a n (q, t, Q 2 )Q n 1 (4.29) as rational function in the variables Q 2 , q = e i 1 , and t = e −i 2 . The 1 → − 2 limit reproduces the conventional topological string partition function, as computed in [62,63], with modulus Q = Q 1 and mass parameter Q m = Q 2 /Q 1 . The limit yields the NS limit of the topological string amplitude that enters into the quantization condition. The first few terms are given by Here, the first term is the leading contribution in a series in Q 2 of Q 1 independent terms, and the second is the order 1 term in a Q 1 expansion. The choice of for which we can test the quantization condition must satisfy several constraints. As the quantization condition is implemented as a truncated series in Q i and Q 2π i , we need to ensure that the solution to the quantization condition lies at sufficiently small values of Q such that both expansion parameters are small. Also, values of for which either |e i | or |e 4π 2 i | are large (order 100 or more) lead to unstable numerics. We first consider the eigenvalue problem at we find the coefficients b n (q) to be rational functions of the form b n (q) = c n,k cos(d n,k ) sin( n 2 ) , c n,k ≥ 1 , (4.33) where max k d n,k grows faster than linearly in n, see figure 7. It follows that for complex , |b n (q)| is unbounded and the series (4.32) that enters into the quantization condition does not converge. We see this behavior reflected in table 2, where we have evaluated the quantization condition at successive orders in Q. Had F NS top,closed been convergent, we would have expected an increasing number of digits of z to stabilize with increasing order. Instead, we see that the result appears to stabilize to a certain number of digits, but then oscillates around this value. 
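This stabilize-then-oscillate behaviour is characteristic of an asymptotic series, and the standard way to extract a number from it is optimal truncation: sum up to and including the smallest term. A minimal sketch of the prescription, run on a toy factorially divergent series chosen only for illustration (not on the actual topological string data), is:

```python
import math
import numpy as np
from scipy.integrate import quad

def optimal_truncation(terms):
    """Partial sum of a (possibly divergent) series truncated at its smallest term."""
    mags = [abs(t) for t in terms]
    n_star = int(np.argmin(mags))
    return sum(terms[: n_star + 1]), n_star

x = 0.1
terms = [(-1) ** n * math.factorial(n) * x ** n for n in range(40)]   # toy Euler series
approx, n_star = optimal_truncation(terms)
exact, _ = quad(lambda t: np.exp(-t) / (1 + x * t), 0, np.inf)        # its Borel sum
print(f"optimal truncation at n* = {n_star}: {approx:.8f}   exact: {exact:.8f}")
```

The error left over at the truncation point is of the order of the smallest retained term, which matches the pattern of a fixed number of stabilized digits followed by oscillation.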
Never the less, the prediction of the quantization condition, evaluated at optimal truncation in the expansion in Q, reproduces the result obtained for z via numerical diagonalization to numerous significant digits, see table 3. For the examples that we consider, it turns out that the solutions of (3.12) for Q at larger n, i.e. for higher lying eigenvalues, have smaller absolute value. This explains the improved accuracy of the results at larger n in table 3. In fact, beyond n = 0, the results via the quantization condition stabilize to more significant digits than those from numerical diagonalization up to matrix size 500 × 500. Table 3. Numerical diagonalization with matrix size 500×500, best approximation via quantization condition is given, with the order at which the approximation is attained indicated in parentheses When more digits stabilize up to the maximal order (13) considered via the quantization condition than via diagonalization, these are indicated, even though the stabilization will be lost at higher order. We can also check the quantization condition away from the Q 1 = Q 2 locus. To this end, we diagonalize the operator (4.15) at a fixed value of z m , and evaluate the quantization condition at Q 2 = z m Q 1 . Note that F NS,BPS top,closed as determined by the refined vertex is exact in Q 2 . The quantum mirror map however is only known in an expansion in this parameter. For consistency, we hence also expand F NS,BPS top,closed in Q 2 before evaluation. The results for two choices of z m are recorded in table 4. Conclusions We have argued that the rules of exact WKB analysis carry over to difference equations, and used these to determine the monodromy behavior of WKB solutions. The quantization condition (3.12) then reduces to a question regarding the monodromy of the elements of the kernel of the quantized mirror curve O C . We have argued that the contribution non-perturbative in to the quantization condition (3.12) proposed in [24] arises when requiring that the kernel of the quantum mirror curve O C have non-trivial intersection with a particular function space F x, defined in section 3.2.2. The analysis performed in this paper should be enhanced in several directions: To accumulate evidence for the exact WKB rules as applied to difference equations, or to discover their limitations, they should be tested in the case of difference equations with known exact solutions. The relation between the quantum B-period and F NS top,closed which enters centrally in the quantization condition (3.12) relies on the Nekrasov-Shatashvili conjecture (3.16). It would be important to have a proof of this conjecture, perhaps along the lines of the proof in [51] in the case of N = 2 * 4d gauge theory. This might help clarify the required B-field dependent shift in the quantum mirror map alluded to in footnote 7. The numerical manifestation of the quantization of the complex structure parameters z i in the higher genus case should be clarified. We have emphasized the need of specifying the function space F x, on which the equation O C Ψ = 0 is to be solved. The possibility has been raised in the literature that the Borel resummation of the naive WKB solution automatically imposes Ψ WKB ∈ F x, [64]. In the case of real , evidence was presented in [55] that the Borel-Padé resummation of the expansion of F NS top,closed on local P 1 ×P 1 at real is smooth. See also [56] for an analysis of the conifold geometry for real . This issue merits further study for general . 
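The Borel-Padé resummation referred to here can be prototyped generically. The sketch below divides the coefficients by n!, Padé-approximates the Borel transform with scipy, and carries out the Laplace integral numerically; it is exercised on a toy factorially divergent series whose Borel sum is known in closed form, not on the actual F_NS expansion studied in [55].

```python
import math
import numpy as np
from scipy.interpolate import pade
from scipy.integrate import quad

def borel_pade(coeffs, x, m=None):
    """Borel-Pade resummation of sum_n coeffs[n] x^n: divide out n!, Pade-approximate
    the Borel transform, and Laplace-integrate it back along the positive axis."""
    borel = [c / math.factorial(n) for n, c in enumerate(coeffs)]
    m = m if m is not None else len(coeffs) // 2
    p, q = pade(borel, m)                       # numerator / denominator (poly1d pair)
    val, _ = quad(lambda t: np.exp(-t) * p(x * t) / q(x * t), 0, np.inf, limit=200)
    return val

# Toy factorially divergent series (illustration only): c_n = (-1)^n (2n)!/n!,
# whose Borel transform is 1/sqrt(1+4t) and whose Borel sum is the integral below.
coeffs = [(-1) ** n * math.factorial(2 * n) // math.factorial(n) for n in range(16)]
x = 0.1
exact, _ = quad(lambda t: np.exp(-t) / np.sqrt(1 + 4 * x * t), 0, np.inf)
print(f"Borel-Pade: {borel_pade(coeffs, x):.8f}   exact Borel sum: {exact:.8f}")
```

For series with singularities on the positive Borel axis the integral needs a contour prescription, which is where the subtleties for general complex values of hbar are expected to reside.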
The relationship between flat open coordinates and distinguished forms of the operator O C should be further explored. Also, the correlation between the B-field required for the pole cancellation mechanism in the open and the closed case deserves further study. For a recent study linking open to closed string invariants, see [65]. Very recently, an article [66] appeared on the arXiv studying the monodromy of difference equations in very different language from that employed in this paper. It would be interesting to see how the two analyses are related.
12,336.8
2016-06-01T00:00:00.000
[ "Mathematics" ]
Shear-Response of the Spectrin Dimer-Tetramer Equilibrium in the Red Blood Cell Membrane* The red cell membrane derives its elasticity and resistance to mechanical stresses from the membrane skeleton, a network composed of spectrin tetramers. These are formed by the head-to-head association of pairs of heterodimers attached at their ends to junctional complexes of several proteins. Here we examine the dynamics of the spectrin dimer-dimer association in the intact membrane. We show that univalent fragments of spectrin, containing the dimer self-association site, will bind to spectrin on the membrane and thereby disrupt the continuity of the protein network. This results in impairment of the mechanical stability of the membrane. When, moreover, the cells are subjected to a continuous low level of shear, even at room temperature, the incorporation of the fragments and the consequent destabilization of the membrane are greatly accentuated. It follows that a modest shearing force, well below that experienced by the red cell in the circulation, is sufficient to sever dimer-dimer links in the network. Our results imply 1) that the membrane accommodates the enormous distortions imposed on it during the passage of the cell through the microvasculature by means of local dissociation of spectrin tetramers to dimers, 2) that the network in situ is in a dynamic state and undergoes a “breathing” action of tetramer dissociation and re-formation. Cells that are required to withstand high mechanical stresses rely for their capacity to accommodate to distortion without structural damage on a membrane-associated complex of proteins. The archetypal example is the red cell membrane, which is subject to large shearing forces throughout its lifetime in the circulation, and responds to these by elastic deformation. The lipid bilayer itself is essentially devoid of elasticity and a protein-free bilayer membrane rapidly vesiculates under even mild shear stress. The red cell membrane skeleton, which gives the membrane its characteristic mechanical properties, is a roughly hexagonal lattice, composed of spectrin tetramers attached at their ends to junctional complexes consisting of several globular proteins (1). The spectrin tertramers are formed by the head-to-head association of pairs of ␣␤ heterodimers. The self-association of such dimers in free solution is weak (K a ϳ3 ϫ 10 5 M Ϫ1 at physiological temperature and ionic strength) (2,3), but the apposition of the association sites on the immobilized proteins in situ ensures that the spectrin remains overwhelmingly in the tetrameric state (some higher association states, especially the hexamer, appear also to exist in the network (4). We know little, however, about the dynamics of the spectrin dimer-dimer interaction in the intact red cell membrane; more especially, the possible effects of membrane deformation on this interaction have not been considered. The origins of the elastic properties of the membrane remain a matter of debate. The end-to-end distance of the spectrin tetramers is constrained by the separation of the junction points to about half the equilibrium root-mean-square end-toend distance of the protein in solution (5,6), and this circumstance gave rise to the conjecture that the spectrin behaves as an entropy spring. More recent evidence, however, implied that this was not the predominant source of the elasticity (7), and there is indeed some structural evidence to suggest that spectrin may act as a helical Hookean spring (8). 
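The weakness of the self-association quoted above, K_a of roughly 3 x 10^5 M^-1, can be made concrete with a short mass-action calculation. The sketch below solves 2D <=> T for the fraction of dimer-equivalents held in tetramers at a few assumed total spectrin concentrations (the concentrations are illustrative, not values measured in this study), and shows that in dilute solution only a modest fraction would be tetrameric, in contrast to the overwhelmingly tetrameric state maintained on the membrane.

```python
import numpy as np

def tetramer_fraction(C_total, Ka):
    """Fraction of dimer-equivalents held in tetramers for 2D <=> T with
    Ka = [T]/[D]^2 and C_total = [D] + 2[T] (all concentrations in M)."""
    D = (-1.0 + np.sqrt(1.0 + 8.0 * Ka * C_total)) / (4.0 * Ka)   # positive root
    return (C_total - D) / C_total

Ka = 3e5                                   # M^-1, the in-solution value quoted above
for C in [1e-7, 1e-6, 1e-5, 1e-4]:         # assumed total concentrations (illustrative)
    print(f"C_total = {C:.0e} M   fraction in tetramer = {tetramer_fraction(C, Ka):.2f}")
```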
The maximum permitted local extension of the membrane, in the absence of unfolding of spectrin secondary structure (9), should be equivalent to the difference between the average separation of the junctions and the contour length of the tetramers, which is known from electron microscopy (10). This amounts to an extension of about 3-fold, sufficient to explain the large distortions that the cell undergoes in vivo (11,12) and which can be simulated in vitro (13). There are currently, however, no experimental data to discriminate between these or other kinds of local changes accompanying distortions of the membrane. Here we show that under physiological conditions spectrin tetramers in the unstressed intact membrane exist in rapid equilibrium with dimers. Importantly, shear-induced membrane deformation markedly displaces the equilibrium in favor of the dimer. Based on these findings we suggest that such dissociation of spectrin tetramers is a primary part of the mechanism by which the membrane can accommodate the large reversible distortions that it suffers in the circulation. Such perturbation of protein-protein interactions under the action of external forces may be a general phenomenon. Materials Human venous blood was drawn, with informed consent, from healthy volunteers. Glutathione-Sepharose 4B was purchased from Amersham Biosciences, Dextran T40 from Amersham Biosciences AB (Uppsala, Sweden), electrophoresis reagents from Bio-Rad, and GelCode Blue Reagent from Pierce. All other chemicals were reagent grade and obtained from commercial sources. 1-154, were prepared by cloning and expression in Escherichia coli, as described by Nicolas et al. (14). The two ␣-spectrin constructs were cloned into the pGEX-2T vector, using BamHI and EcoRI restriction sites, upstream and downstream, respectively. The cDNAs were introduced into the BL21 (DE3) expression strain of the bacterium. The purification of the GST 1 fusion proteins and cleavage of the GST followed the established procedure (14). Peptide concentrations were determined spectrophotometrically. Materials were screened for purity by gel electrophoresis in the presence of SDS. Methods Introduction of Spectrin Fragments into Erythrocyte Ghosts-Red cells were isolated from freshly drawn blood by centrifugation and washed three with Tris-buffered isotonic saline (0.12 M potassium chloride, 10 mM Tris, pH 7.4). The cells were lysed with 35 volumes of ice-cold hypotonic buffer A (5 mM Tris, 5 mM potassium chloride, pH 7.4). The resulting ghosts were collected by centrifugation and washed once in cold lysis buffer. The ghosts (5 ϫ 10 9 cells/ml) were incubated for 40 min at 37°C with the required concentrations of the spectrin fragments, and 0.1 volume of 1.5 M potassium chloride, 50 mM Tris, pH 7.4, was added to restore isotonicity. Measurement of Membrane Stability-To evaluate the effect of peptide incorporation on the resistance of the cells to shear, the resealed ghosts were suspended in 40% dextran, and membrane mechanical stability was quantitated using an ektacytometer, as described previously. The measure of membrane stability was taken as the rate of decrease of deformability index (DI) at a constant applied shear stress of 750 dynes cm Ϫ2 . 
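Where membrane stability is reported as a decay rate of the deformability index, the extraction of a rate or half-time from an ektacytometer trace can be sketched as a simple curve fit. The snippet below assumes a single-exponential DI decay purely for illustration, with synthetic data and placeholder parameter values; the functional form actually used for the measurements reported here may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def di_decay(t, di0, k):
    """Assumed single-exponential model DI(t) = DI0 * exp(-k t); half-time = ln2 / k."""
    return di0 * np.exp(-k * t)

# Synthetic ektacytometer trace: time under shear (s) vs. deformability index.
rng = np.random.default_rng(0)
t = np.linspace(0, 600, 25)
di = di_decay(t, 0.55, 1 / 180.0) + rng.normal(0.0, 0.01, t.size)

(di0, k), _ = curve_fit(di_decay, t, di, p0=(0.5, 0.01))
print(f"DI0 = {di0:.3f}   k = {k:.4f} s^-1   half-time = {np.log(2) / k:.0f} s")
```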
To examine the effect of shear stress on the incorporation of peptides into the membrane skeletal network, the resealed ghosts, containing the peptide, were suspended in isotonic buffer, supplemented with 40% dextran, and sheared at a low stress of 250 dynes cm Ϫ2 at room temperature in the couette cell of the ektacytometer (15). Extraction and Analysis of Spectrin from Resealed Ghosts-The resealed ghosts were washed with isotonic buffer. The binding of the peptide to the spectrin on the membrane, with and without a shearing step, was analyzed by re-lysing the ghosts with 30 volumes of ice-cold Buffer A, washing three times with the same buffer, and extracting the spectrin. Extraction was accomplished by suspension of the ghosts in 0.25 mM sodium phosphate, pH 7.4, and dialysis at 4°C overnight. The spectrin was collected by centrifugation (21,000 ϫ g) and examined by gel electrophoresis in the native state in a Tris-Bicine buffer system, run in the cold (16). The spectrin tetramer, dimer, and the complex of the dimer with the peptide fragment were well resolved, and the absence of a zone corresponding to the free fragment or of a trail of stained material showed that no dissociation had occurred during migration. The gels, stained with GelCode Blue, were evaluated by densitometry. Interaction of Spectrin Peptides with Inside-out Vesicles-Inside-out vesicles (IOVs) were prepared as described previously (17). The binding of spectrin peptide fragments to IOVs was examined by pelleting the IOVs from the reaction mixture at 47,800 ϫ g. The pellet was analyzed by electrophoresis in SDS gels of 9% acrylamide, followed by staining with GelCode Blue. RESULTS Binding of ␣-Chain Peptide Fragment to Spectrin Self-association Sites in Situ-The dimer-tetramer equilibrium of spectrin is characterized by an unusually high activation energy (2). Thus at room temperature many hours are required to approach equilibrium, and in the cold the half-time is measured in weeks or months. The explanation of this phenomenon is that formation of the tetramer from its constituent ␣␤ dimers through a pair of intermolecular ␣-␤ bonds requires the prior rupture of two intramolecular ␣-␤ bonds, one in each of the antiparallel dimers (3,18). Because the intra-dimer bond has to open to allow the N-terminal ␣-chain fragment to bind to its C-terminal site on the ␤-chain of a dimer, the high activation energy persists in the fragment-dimer interaction. Therefore to approach binding equilibrium within a reasonable time the experiment must be carried out at elevated temperatures (30 -37°C) (16,19,20). Fig. 1A shows that at these temperatures (but not at the lower temperature of 24°C) incorporation of the ␣1-154 peptide into the spectrin in the membrane network does indeed occur. (A trace of spectrin dimer is always seen in the gel; its amount varies, and it is probably a consequence, at least in part, of proteolytic damage before or during extraction (21)). The peptides with and without the GST fusion domain were tested and no differences were found (data not shown). As a control, we also examined the short N-terminal ␣1-50 peptide, which does not enter the native fold and does not therefore bind to the ␤-chain in solution (14,22); this peptide was not incorporated into the membrane (Fig. 1B). We have further established that the long peptide does not bind to spectrin-and actin-free inside-out membrane vesicles (data not shown), which retain all other intrinsic and extrinsic membrane proteins. 
Thus any possibility that the peptide exerts its effect by binding to other proteins can be excluded. From the kinetics of incorporation of the ␣1-154 peptide (Fig. 1C), it is clear that the binding is an equilibrium process, 2. Reversibility of peptide incorporation. Resealed ghosts containing the peptide were re-lysed, washed thoroughly before, and resealed without peptide. Spectrin extracts were analyzed by electrophoresis in 5% non-denaturing gels. Lane 1, spectrin extract from ghosts incubated and resealed in the presence of the ␣1-154 peptide; lanes 2-4, spectrin extracts from ghosts re-lysed and again resealed at 24, 30, and 37°C, respectively. The dimer-peptide complex largely disappeared at both 30 and 37°C, demonstrating reversibility of peptide incorporation. effectively reaching completion after about 50 min at 37°C. The absence of a tetramer-fragment complex shows that the binding of a single peptide molecule causes dissociation of the tetramer into dimers, one or probably both (since the amount of free, uncomplexed dimer generated does not significantly increase) associated with the peptide. Fig. 2 reveals that the binding of the peptide to the spectrin in situ is reversible, for when the cells with incorporated peptide were washed free of unbound peptide and warmed to 37°C, the peptide was released from the membrane. Slower release ensued at 30°C, but none could be detected at 24°C over a period of 40 min. Effect of Peptide Incorporation on Membrane Stability-The effect of increasing concentrations of the long peptide on the mechanical stability of the membrane was assessed by shearing in the ektacytometer at room temperature (15). As Fig. 3A shows, membrane stability is markedly reduced in ghosts con-taining the peptide, as reflected by a faster rate of decay of the DI. Increasing concentrations of the peptide in the resealing buffer resulted in a progressive decrease in membrane mechanical stability. The peptides with and without GST fusion domain were again tested and no differences were found. The decreased membrane mechanical stability was paralleled by a progressive increase in the incorporation of the peptide into the membrane skeleton (Fig. 3B). Fig. 3C shows that increasing concentrations of peptide in the resealing buffer led to a progressive accretion of the spectrin dimer-peptide complex, with a corresponding decrease in the concentration of spectrin tetramers. In Fig. 3D we show the impairment of membrane stability, measured by the half-time of breakdown under shear as a function of the extent of tetramer dissociation. A similar relationship between decreased membrane mechanical stability and elevated dimer content has previously been observed in red cells of subjects with hemolytic anemias, caused by spectrin mutations that result in defective dimer self-association (23). The short N-terminal ␣1-50 peptide, which did not incorporate into the membrane, had no effect on membrane mechanical stability at concentrations up to 100 M in the resealing buffer (data not shown). Incorporation of the ␣-Chain Peptide into the Membrane under Shear-Resealed ghosts containing the ␣1-154 peptide were subjected to varying periods of low shear in the ektacytometer at room temperature. Because of the high activation energy of the binding and tetramer dissociation reactions, there is, on the time scale of these experiments, no detectable incorporation of the peptide into the membrane network in static cells (or of course binding to spectrin in free solution). 
Ghosts resealed with ␣1-154 peptide at 30°C were subjected to a constant low shear stress of 250 dynes cm Ϫ2 for the indicated periods of time. Spectrin extracts were analyzed as described above. The amount of spectrin dimer-peptide complex, reflecting the proportion of spectrin tetramers dissociated, is seen to increase with time of exposure to shear. Nevertheless, a time-dependent incorporation of the peptide was observed (Fig. 4). This striking effect demonstrates that mild shearing stress is sufficient to induce dissociation (presumably local) of spectrin tetramers, thus overcoming an activation energy of some 100 kcal mol Ϫ1 (2). DISCUSSION While the self-association of spectrin dimers in free solution is weak, especially at physiological temperature, the cohesion of the membrane skeletal network in situ is ensured by the close apposition of the binding sites. The strong linkage of the distal dimer ends to the network junctions, and especially the tight attachment of one dimer in each tetramer to membrane-bound ankyrin at a position close to the selfassociation site (24), can be assumed to restrict severely the excluded volume available to the dimer association sites. The dimer-tertramer equilibrium in situ is thus expected (and observed) to be grossly shifted in favor of the tetramer on entropic grounds. Binding of the univalent fragments at the dimer-dimer association sites was thus expected, if it occurred at all, to require a very large molar excess of the fragment, as was indeed found. The association constant for binding of a univalent ␣-chain fragment to spectrin dimer in dilute solution is only a little lower than that for dimer self-association (16,19,20). This may be because only one intra-dimer interaction has to be broken to allow the fragment to bind. In any case, the fragment would in principle be expected to bind to any available dimers on the membrane. For dimers to become available, the spectrin in the network must undergo a continuous association-dissociation, or "breathing" process of a frequency compatible with entry of the fragment. The apparent association constant for the binding of the fragment to its sites in the network in situ should be defined by a simple Langmuir adsorption isotherm, formally equivalent to the Scatchard equation (25). However, the concentration of available dimers depends on the in situ dimer-tetramer equilibrium. This is concentration-independent, since no diffusion of the reactants on the membrane is permitted. It can thus be treated as a conformational equilibrium between an open and a sequestered state of the dimers. The system is therefore defined by two equilibria: S ϩ F ϭ SF and S ϭ S c , where S and S c represent the available and sequestered states of the spectrin dimer, respectively, and F the univalent fragment. Then if K and K s are the equilibrium constants for these two reactions, and writing ␣ for the fractional saturation of spectrin on the membrane with the fragment (expressing spectrin concentration in molar units of dimers), the binding of the fragment to the membrane is described by the relation: ␣ ϭ KK s f/(KK s f ϩ 1), where f is the concentration of the fragment. This equation was used to fit the data points of Fig. 3C and gave a value for KЈ ϭ KK s of 1.5 ϫ 10 4 M Ϫ1 . To extract K s we need to know K, but its value for the interaction in free solution cannot be equated with that for binding on the membrane, for it may well be grossly influenced by steric and electrostatic factors. 
A value in the range of those obtained from solution studies (16,19,20), say 10 6 m Ϫ1 , would lead to K s in the region of 0.015; that is about 1% of the tetramer population would be dissociated in the unperturbed cell. This proportion would almost certainly be further reduced by molecular crowding caused by hemoglobin (26). A more soundly based estimate must await a direct experimental determination of K. The most striking outcome of this study is the observation that the dissociation of tetramers into dimers can be induced by shear, even at room temperature at which the equilibrium in solution is essentially frozen over the period of the experiment. This implies that a very modest mechanical force is sufficient to break the dimer-dimer interaction. It also implies that this association-dissociation, or breathing process operates continuously in the circulation, in which the cells are nearly always under shear. Our data do not as yet permit a rigorous quantitative description of this previously unsuspected effect. The influence of the membrane environment on the equilibrium and rate constants for the self-association of spectrin dimers and the interaction of spectrin dimers with a univalent fragment, as measured in dilute solution, is still uncertain. We can also not exclude that some additional dissociation of spectrin tetramers through entry of more peptide into the spectrin network could have occurred during the brief period of the highshear assay at room temperature. Thus the fractional dissociations of tetramers engendering the observed reductions in membrane stability should be regarded as minimum values. The likelihood of dissociation of bound peptide during the time of experimental manipulations after the cells were restored to the static state is remote. Various explanations have been advanced for the elasticity and stability of the membrane. One type of model is based on the stretching of spectrin from its relatively crumpled (27) or compressed (8) state at rest up to its fully extended length (but see also Ref. 7), allowing for an extension factor of about 3. Another suggestion is that the secondary structure of the protein, which is composed primarily of three-stranded ␣-helical elements (28), can be unfolded under the action of a tensile force (9). The question of whether protein-protein interactions in the network can be disrupted by mechanical forces has not previously been addressed. It is unlikely that dissociation would occur at the lattice junctions, for the ternary complex of spectrin, actin, and protein 4.1 (irrespective of contributions of other proteins present at the junctions) is very tightly associated (29). The results presented here indicate that rupture of spectrin tetramers is a likely mechanism for the capacity of the membrane to adapt to very large distortions. These observations offer a rationale for the evolutionary advantage of the tetrameric structure of spectrin. If it functioned only as a simple elastic element of the network a fragile dimer-dimer link at the center would afford no advantage. It may be recalled that neuronal spectrin, fodrin, which is probably not exposed to high shearing forces during its lifetime in the cell, has the form of a stable tetramer, which cannot be dissociated into dimers by known physical means, short of denaturation (30).
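The estimate just outlined can be reproduced numerically. The sketch below evaluates the fractional saturation alpha = K'f/(K'f + 1) with the fitted apparent constant K' = 1.5 x 10^4 M^-1 and derives K_s from the representative solution value K of about 10^6 M^-1 used above; both the choice of K and the fragment concentrations in the loop are illustrative.

```python
def fractional_saturation(f, K_apparent):
    """Langmuir-type saturation of membrane spectrin by the univalent fragment:
    alpha = K'*f / (K'*f + 1), with K' = K * K_s the apparent association constant."""
    return K_apparent * f / (K_apparent * f + 1.0)

K_apparent = 1.5e4          # M^-1, from the fit to the data of Fig. 3C
K_solution = 1e6            # M^-1, representative solution-range value (an assumption)
K_s = K_apparent / K_solution
print(f"K_s ~ {K_s:.3f}  (on the order of 1% of tetramers transiently dissociated)")

for f in [1e-6, 1e-5, 1e-4, 1e-3]:   # fragment concentrations (M), illustrative
    print(f"f = {f:.0e} M   alpha = {fractional_saturation(f, K_apparent):.3f}")
```

On this fit, site occupancy remains low until the fragment is present well above 10^-5 M, consistent with the very large molar excess required experimentally.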
4,522
2002-08-30T00:00:00.000
[ "Biology", "Chemistry" ]
Channel Acquisition for Massive MIMO-OFDM with Adjustable Phase Shift Pilots We propose adjustable phase shift pilots (APSPs) for channel acquisition in wideband massive multiple-input multiple-output (MIMO) systems employing orthogonal frequency division multiplexing (OFDM) to reduce the pilot overhead. Based on a physically motivated channel model, we first establish a relationship between channel space-frequency correlations and the channel power angle-delay spectrum in the massive antenna array regime, which reveals the channel sparsity in massive MIMO-OFDM. With this channel model, we then investigate channel acquisition, including channel estimation and channel prediction, for massive MIMO-OFDM with APSPs. We show that channel acquisition performance in terms of sum mean square error can be minimized if the user terminals' channel power distributions in the angle-delay domain can be made non-overlapping with proper phase shift scheduling. A simplified pilot phase shift scheduling algorithm is developed based on this optimal channel acquisition condition. The performance of APSPs is investigated for both one symbol and multiple symbol data models. Simulations demonstrate that the proposed APSP approach can provide substantial performance gains in terms of achievable spectral efficiency over the conventional phase shift orthogonal pilot approach in typical mobility scenarios. I. INTRODUCTION F ORTHCOMING 5G cellular wireless systems are expected to support 1000 times faster data rates than the currently deployed 4G long-term evolution (LTE) system. To achieve the high data rates required by 5G, many technologies have been proposed [1]- [3]. Among them, massive multipleinput multiple-output (MIMO) systems, which deploy unprecedented numbers of antennas at the base stations (BSs) to simultaneously serve a relatively large number of user terminals (UTs), are believed to be one of the key candidate technologies for 5G [4]- [6]. Orthogonal frequency division multiplexing (OFDM) is a multi-carrier modulation technology suited for high data rate wideband wireless transmission [7], [8]. Due to its robustness to channel frequency selectivity and relatively efficient implementation, OFDM combined with massive MIMO is a promising technique for wideband massive MIMO transmission [4]. As in conventional MIMO-OFDM, the performance of massive MIMO-OFDM is highly dependant on the quality of the channel acquisition. Pilot design and channel acquisition for massive MIMO-OFDM is of great practical importance. Optimal pilot design and channel acquisition for conventional MIMO-OFDM has been extensively investigated in the literature. The most common approach is to estimate the channel response in the delay domain, and optimal pilots sent from different transmit antennas are typically assumed to satisfy the phase shift orthogonality condition in both the single-user case [9]- [11] and the multi-user case [12]. Note that such phase shift orthogonal pilots (PSOPs) have been adopted in LTE [13]. When channel spatial correlations are taken into account, optimal pilot design has been investigated for both the single-user case [14] and multi-user case [15]. Although these orthogonal pilot approaches can eliminate pilot interference in the same cell, they do not take into account the pilot overhead issue, which is thought to be one of the limiting factors for throughput in massive MIMO-OFDM [4]. 
When such approaches are directly adopted in time-division duplex (TDD) massive MIMO-OFDM, the corresponding pilot overhead is proportional to the sum of the number of UT antennas, and would be prohibitively large as the number of UTs becomes large. This becomes the system bottleneck, especially in high mobility scenarios where pilots must be transmitted more frequently. Therefore, a pilot approach that takes the pilot overhead issue into account is of importance for massive MIMO-OFDM systems. In this paper, we propose adjustable phase shift pilots (AP-SPs) for massive MIMO-OFDM to reduce the pilot overhead. For APSPs, one sequence along with different adjustable phase shifted versions of itself in the frequency domain are adopted as pilots for different UTs. The proposed APSPs are different from conventional PSOPs [9], [10], [12], in which phase shifts for different pilots are fixed, and phase shift differences between different pilots are no less than the maximum channel delay (divided by the system sampling duration) of all the UTs. Since in our approach the phase shifts for different pilots are adjustable, more pilots are available compared with conventional PSOPs, which leads to significantly reduced pilot overhead. The proposed APSPs exploit the following two channel properties: First, wireless channels are sparse in many typical propagation scenarios; most channel power is concentrated in a finite region of delays and/or angles due to limited scattering [16]- [19]. Such channel sparsity can be resolved in the angle domain in massive MIMO due to the relatively large antenna array apertures, which has been observed in recent massive MIMO channel measurement results [20], [21]. Second, channel sparsity patterns, i.e., channel power distributions in the angle-delay domain, for different UTs are usually different. 1 For APSPs, when the phase shifts for pilots employed by different UTs are properly scheduled according to the above channel properties, channel acquisition can be achieved simultaneously in an almost interference-free manner as with conventional PSOPs. There has recently been increased research interest on utilizing channel sparsity for channel acquisition in massive MIMO. For instance, a timefrequency training scheme [25] and a distributed Bayesian channel estimation scheme [24] were proposed for massive MIMO-OFDM by exploiting the channel sparsity. As the approaches in [24] and [25] focus on channel acquisition for a single UT, the corresponding pilot overhead would still grow linearly with the number of UTs. Channel sparsity has also been exploited to mitigate pilot contamination in multi-cell massive MIMO [26], [27]. Note that compressive sensing has been applied to sparse channel acquisition in some recent works (see, e.g., [19], [22], [23], [28] and references therein), in which the corresponding pilot signals are usually assumed to be randomly generated. However, it is usually quite difficult to implement random pilot signals in practical systems [29]. For example, adopting large dimensional random pilot signals in the massive MIMO-OFDM systems considered here requires huge storage space and high complexity channel acquisition algorithms. In addition, a low peak-to-average power ratio (PAPR) for randomly generated pilot signals usually cannot be guaranteed. These drawbacks can be mitigated via proper design of the deterministic sensing matrices (see, e.g., [30], [31] and references therein). 
The main contributions of this paper are summarized as follows: • Based on a physically motivated channel model, we establish a relationship between the space-frequency domain channel covariance matrix (SFCCM) and the channel power angle-delay spectrum for massive MIMO-OFDM. We show that when the number of BS antennas is sufficiently large, the eigenvectors of the SFCCMs for different UTs tend to be equal, while the eigenvalues depend on the respective channel power angle-delay spectra, which reveals the channel sparsity in the angle-delay domain. Then we propose the angle-delay domain channel response matrix (ADCRM) and the corresponding angledelay domain channel power matrix (ADCPM), which can model the massive MIMO-OFDM channel sparsity 1 There has been recent work that considers channels with a sparse common support [22], [23]. However, for massive MIMO channels, the common support assumption might not hold due to the increased angle resolution [22], [24]. Thus, in this work we assume that the channel sparsity patterns of different UTs are different (but not necessarily totally different), although the proposed APSP approach can also be applied to the common support cases. in the angle-delay domain, and are convenient for further analyses. • With the presented channel model, we propose APSPbased channel acquisition (APSP-CA) for massive MIMO-OFDM in TDD mode. For APSPs, equivalent channels for different UTs will experience corresponding cyclic shifts in the delay domain. Using this property, we show that the sum mean square error (MSE) of channel estimation (MSE-CE) can be minimized if the UTs' channel power distributions in the angle-delay domain can be made non-overlapping with proper pilot phase shift scheduling. Taking the time-varying nature of the channel into account, we further investigate channel prediction during the data segment using the received pilot signals. We show that the sum MSE of channel prediction (MSE-CP) can also be minimized if the UTs' channel power distributions in the angle-delay domain can be made non-overlapping with proper pilot phase shift scheduling, which coincides with the optimal channel estimation condition. A simplified pilot phase shift scheduling algorithm is developed based on this optimal channel acquisition condition. The proposed APSP-CA approach is investigated for cases involving both one symbol and multiple consecutive symbols. • The proposed APSP-CA is evaluated in several typical propagation scenarios, and significant performance gains in terms of achievable spectral efficiency over the conventional PSOP-based channel acquisition (PSOP-CA) are demonstrated, especially in high mobility scenarios. Portions of this work previously appeared in the conference paper [32]. A. Notations We adopt the following notation throughout the paper. We use = √ −1 to denote the imaginary unit. ⌊x⌋ (⌈x⌉) denotes the largest (smallest) integer not greater (smaller) than x. · N denotes the modulo-N operation. δ(·) denotes the delta function. Upper (lower) case boldface letters denote matrices (column vectors). The notation is used for definitions. Notations ∼ and ∝ represent "distributed as" and "proportional to", respectively. We adopt I N to denote the N × N dimensional identity matrix, and I N ×G to denote the matrix composed of the first G (≤ N ) columns of I N . We adopt 0 to denote the all-zero vector or matrix. The superscripts (·) H , (·) T , and (·) * denote the conjugate-transpose, transpose, and conjugate operations, respectively. B. 
Outline The rest of the paper is organized as follows. In Section II, we investigate the sparse nature of the massive MIMO-OFDM channel model. In Section III, we propose APSP-CA over one OFDM symbol in massive MIMO-OFDM, including channel estimation and prediction. We investigate the multiple consecutive pilot symbol case in Section IV. Simulation results are presented in Section V, and conclusions are given in Section VI. II. MASSIVE MIMO-OFDM CHANNEL MODEL In this section, we propose a physically motivated massive MIMO-OFDM channel model, and investigate the inherent channel sparsity property. We consider a single-cell TDD wideband massive MIMO wireless system which consists of one BS equipped with M antennas and K single-antenna UTs. We denote the UT set as K = {0, 1, . . . , K − 1} where k ∈ K represents the UT index. We assume that the channels of different UTs are statistically independent. We assume that the BS is equipped with a one-dimensional uniform linear array (ULA), 2 with antennas separated by one-half wavelength. Then the BS array response vector corresponding to the incidence angle θ with respect to the perpendicular to the array is given by [17] v M,θ = 1 exp (−π sin (θ)) . . . We assume that the signals seen at the BS are constrained to lie in the angle interval A = [−π/2, π/2], which can be achieved through the use of directional antennas at the BS, and thus no signal is received at the BS for incidence angles θ / ∈ A [33]. We consider OFDM modulation with N c subcarriers, performed via the N c -point inverse DFT operation, appended with a guard interval (a.k.a. cyclic prefix) of length N g (≤ N c ) samples. We employ T sym = (N c + N g ) T s and T c = N c T s to denote the OFDM symbol duration with and without the guard interval, respectively, where T s is the system sampling duration [13]. We assume that the guard interval length T g = N g T s is longer than the maximum channel delay of all the UTs [34], [35]. We assume that the channels remain constant during one OFDM symbol, and evolve from symbol to symbol. We denote the uplink (UL) channel gain between the antenna of the kth UT and the mth antenna of the BS over OFDM symbol ℓ and subcarrier n as [g k,ℓ,n ] m . Using a physical channel modeling approach (see, e.g., [17], [36]- [39]), the channel response vector g k,ℓ,n ∈ C M×1 can be described as where v M,θ is given in (1), g k (θ, τ, ν) is the complex-valued joint angle-delay-Doppler channel gain function of UT k corresponding to the incidence angle θ, delay τ , and Doppler frequency ν. Note that the number of significant channel taps in the delay domain is usually limited, and smaller than N g ; i.e., |g k (θ, qT s , ν)| is approximately 0 for most q. Since the locations of the significant channel taps in the delay domain are usually different for different UTs, we adopt (2) in this paper to obtain a general channel representation applicable for all the UTs. We write the kth UT's channel at OFDM symbol ℓ over all subcarriers as which will be referred to as the space-frequency domain channel response matrix (SFCRM). From (2), it is not hard to show that We assume that channels with different incidence angles, delays, and/or Doppler frequencies are uncorrelated [17], [38], [39]. 
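A discretized instance of this physical model is easy to generate. The sketch below implements the half-wavelength ULA response of (1) and builds a single-symbol space-frequency channel matrix as a finite sum over a handful of discrete paths (incidence angle, delay, gain); the path parameters, array size and OFDM numerology are illustrative placeholders, the Doppler dependence is suppressed over one symbol, and the finite path sum stands in for the integral of (2).

```python
import numpy as np

def ula_response(M, theta):
    """Half-wavelength ULA response v_{M,theta} = [1, e^{-j pi sin(theta)}, ...]^T, eq. (1)."""
    return np.exp(-1j * np.pi * np.arange(M) * np.sin(theta))

def space_frequency_channel(M, Nc, Ts, paths):
    """Space-frequency channel G (M x Nc) over one OFDM symbol as a finite sum of
    discrete paths (theta, tau, gain), a sampled stand-in for the integral in (2)."""
    n = np.arange(Nc)
    G = np.zeros((M, Nc), dtype=complex)
    for theta, tau, gain in paths:
        G += gain * np.outer(ula_response(M, theta),
                             np.exp(-2j * np.pi * n * tau / (Nc * Ts)))
    return G

M, Nc, Ts = 128, 512, 1.0          # antennas, subcarriers, sampling duration (units of Ts)
paths = [(np.deg2rad(-25), 3 * Ts, 1.0),   # (incidence angle, delay, gain), illustrative
         (np.deg2rad(10), 11 * Ts, 0.5),
         (np.deg2rad(42), 30 * Ts, 0.3)]
G = space_frequency_channel(M, Nc, Ts, paths)
print("G shape:", G.shape, "   average per-element power:", np.linalg.norm(G) ** 2 / (M * Nc))
```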
We also assume that the temporal correlations and the joint space-frequency domain correlations of the channels can be separated [35], [38], i.e., as in (5), where S^ADD_k(θ, τ, ν), S^AD_k(θ, τ), and S^Dop_k(ν) represent the power angle-delay-Doppler spectrum, the power angle-delay spectrum, and the power Doppler spectrum of UT k, respectively [17], [40]. From (4) and (5), we can obtain the channel statistical property (6) (see Appendix A for the derivation), where ̺_k(∆ℓ) is the channel temporal correlation function (TCF) given in (7) and R_k is the space-frequency domain channel covariance matrix (SFCCM) given in (8). In this work, we consider the widely accepted Clarke-Jakes channel power Doppler spectrum (see Footnote 3), with the corresponding channel TCF given in (9), where J_0(·) is the zeroth-order Bessel function of the first kind, and ν_k is the Doppler frequency of UT k. Note that the Clarke-Jakes power Doppler spectrum leads to an even TCF, i.e., ̺_k(∆ℓ) = ̺_k(−∆ℓ), which satisfies ̺_k(0) = 1. Also, invoking the law of large numbers, we assume that the channel elements exhibit a joint Gaussian distribution.

Footnote 3: Although the waves impinging on the BS are assumed to be sparsely distributed in the angle domain due to limited scattering around the BS (typically mounted at an elevated position), the waves departing the mobile UTs are usually uniformly distributed in angle of departure. Thus the Clarke-Jakes spectrum is suitable to model the time variation of the channel [40], [41].

Before proceeding, we investigate in the following proposition a property of the large dimensional SFCCM, and present a relationship between the SFCCM and the power angle-delay spectrum for massive MIMO-OFDM channels.

Proposition 1: Define θ_m ≜ arcsin(2m/M − 1) and τ_n ≜ nT_s. Then, when the number of antennas M → ∞, the SFCCM R_k tends to its limiting eigen-structured form, in the sense that, for fixed non-negative integers i and j, the corresponding (i, j)th elements converge.

Proof: See Appendix B.

The relationship between the space-frequency domain channel joint correlation property and the channel power distribution in the angle-delay domain for massive MIMO-OFDM is established in Proposition 1. Specifically, for massive MIMO-OFDM channels in the asymptotically large array regime, the eigenvectors of the SFCCMs for different UTs tend to be the same, which shows that massive MIMO-OFDM channels can be asymptotically decorrelated by fixed space-frequency domain statistical eigendirections, while the eigenvalues depend on the corresponding channel power angle-delay spectra. Proposition 1 indicates that, for massive MIMO-OFDM channels, when the number of BS antennas M is sufficiently large, the SFCCM can be well approximated by (12). It is worth noting that the approximation in (12) is consistent with existing results in the literature. For frequency-selective single-input single-output channels, (12) agrees with the results in [35], [42]. For frequency-flat massive MIMO channels, the approximation given in (12) has been shown to be accurate enough for a practical number of antennas, which usually ranges from 64 to 512 [27], [33], [43], [44], and a detailed numerical example can be found in [27]. Since the SFCCM model given in (12) is a good approximation to the more complex physical channel model in (8) when the number of BS antennas is sufficiently large, we will exclusively use the simplified SFCCM model in (12) in the rest of the paper. Realistic wireless channels are usually not wide-sense stationary [17], i.e., R_k varies as time evolves, although on a relatively large time scale.
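To make the decorrelation claim of Proposition 1 concrete, the toy sketch below builds a space-frequency channel from a few discrete paths and transforms it with fixed DFT-like angle and delay bases, after which most entries are negligible. The matrices V and F are our assumed forms of the fixed eigendirections (steering vectors sampled at θ_m, and truncated unitary DFT columns); exact scalings in the paper may differ.

```python
import numpy as np

M, Nc, Ng = 64, 256, 32
m = np.arange(M)
theta_grid = np.arcsin(2 * m / M - 1)                 # theta_m from Proposition 1
V = np.exp(-1j * np.pi * np.outer(m, np.sin(theta_grid))) / np.sqrt(M)
F = np.fft.fft(np.eye(Nc), norm="ortho")[:, :Ng]      # first Ng unitary DFT columns

# Toy space-frequency channel with three discrete paths (random angle/delay/gain).
rng = np.random.default_rng(0)
G = np.zeros((M, Nc), dtype=complex)
for _ in range(3):
    ang = rng.uniform(-np.pi / 2, np.pi / 2)
    dly = rng.integers(Ng)
    v = np.exp(-1j * np.pi * m * np.sin(ang))
    f = np.exp(-2j * np.pi * np.arange(Nc) * dly / Nc)
    G += rng.standard_normal() * np.outer(v, f)

H = V.conj().T @ G @ F.conj()                          # angle-delay representation
frac = np.mean(np.abs(H) > 0.1 * np.abs(H).max())
print(f"fraction of significant angle-delay entries: {frac:.3f}")
```

This is essentially the sparsifying transform that the remainder of this section formalizes through the ADCRM.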
In practice, acquisition of the large dimensional R_k is rather difficult and resource-intensive for massive MIMO-OFDM. However, when we shift our focus from the space-frequency domain to the angle-delay domain, the problem can be significantly simplified. Motivated by the eigenvalue decomposition of the SFCCM given in (12), we decompose the SFCRM as in (13), where the matrix H_{k,ℓ} defined in (14) is referred to as the angle-delay domain channel response matrix (ADCRM) of UT k at OFDM symbol ℓ. In the following proposition, we derive a statistical property of the ADCRM.

Proposition 2: For massive MIMO-OFDM channels, when the number of antennas M → ∞, the elements of the ADCRM H_{k,ℓ} satisfy (15), where Ω_k is given in (10).

Proof: See Appendix C.

Proposition 2 shows that, for massive MIMO-OFDM channels, different elements of the ADCRM H_{k,ℓ} are approximately mutually statistically uncorrelated, which lends the ADCRM in (14) its physical interpretation. Specifically, different elements of the ADCRM correspond to the channel gains for different incidence angles and delays, which can be resolved in massive MIMO-OFDM with a sufficiently large antenna array aperture. Note that [Ω_k]_{i,j} corresponds to the average power of [H_{k,ℓ}]_{i,j}, and can describe the sparsity of the wireless channels in the angle-delay domain. Hereafter we will refer to Ω_k as the angle-delay domain channel power matrix (ADCPM) of UT k. The dimension of the ADCPM Ω_k is much smaller than that of the SFCCM R_k, and most elements in Ω_k are approximately zero due to the channel sparsity. In addition, Ω_k is composed of the variances of independent angle-delay domain channel elements, and thus can be estimated in an element-wise manner. Therefore, in practice there will be enough resources to obtain an estimate of Ω_k with guaranteed accuracy. In the rest of the paper, we assume that the ADCPMs of all the UTs are known by the BS. Before we conclude this section, we define the extended ADCRM H̃_{k,ℓ,(N_c)} as in (16). Similarly, the extended ADCPM Ω̃_{k,(N_c)}, which corresponds to the power distribution of the extended ADCRM H̃_{k,ℓ,(N_c)}, is defined in (17). Such definitions will be employed to simplify the analyses in the following sections.

III. CHANNEL ACQUISITION WITH APSPS OVER ONE SYMBOL

Based on the sparse massive MIMO-OFDM channel model presented in the previous section, we propose APSP-CA for massive MIMO-OFDM, including channel estimation and prediction. In this section, we first investigate the case where the APSPs are sent over one OFDM symbol; the multiple symbol case will be investigated in the next section.

A. APSPs over One Symbol

We assume that all the UTs are synchronized. During the UL pilot segment, namely, the ℓth OFDM symbol of each frame, all the UTs transmit the scheduled pilots simultaneously, and the space-frequency domain signal received at the BS can be represented as in (18), where [Y_ℓ]_{i,j} denotes the received pilot signal at the ith antenna over the jth subcarrier, G_{k,ℓ} is the SFCRM defined in (3), X_k = diag{x_k} ∈ C^{N_c×N_c} denotes the frequency domain pilot signal sent from the kth UT, Z_ℓ is the additive white Gaussian noise (AWGN) matrix during the UL pilot segment with elements independently and identically distributed (i.i.d.) as CN(0, σ_ztr), and σ_ztr is the noise power. The proposed APSP over one OFDM symbol for a given UT k is given in (19), where X = diag{x} ∈ C^{N_c×N_c}, satisfying XX^H = I_{N_c}, is the basic pilot matrix shared by all UTs in the same cell, and σ_xtr is the pilot signal transmit power.
The APSP signal given in (19) can be seen as a phase shifted version of √σ_xtr X with phase shift φ_k in the frequency domain. Note that the proposed APSP has the same PAPR as that of X in the time domain, thus existing low PAPR sequence designs can be easily incorporated into our approach. In addition, as the basic pilot matrix X can be predetermined, only X and the pilot phase shift indices, rather than the entire pilot matrices, need to be stored, and the required storage space can be significantly reduced. From (19), it can be readily shown that, for all k, k′ ∈ K, the correlation relation (20) holds, which indicates that the cross correlations of the proposed APSPs for different UTs depend only on the associated phase shift difference. It is worth noting that, for conventional PSOPs, the phase shift differences for different pilots are set to satisfy an orthogonality condition. However, for our APSPs, the phase shifts for different pilots are adjustable, and pilots for different UTs can even share the same phase shift, which leads to more available pilots, and thus pilot overhead can be significantly reduced.

B. Channel Estimation with APSPs

In this section we investigate channel estimation during the pilot segment under the minimum MSE (MMSE) criterion using the proposed APSPs. Direct MMSE estimation of the SFCRM G_{k,ℓ} requires knowledge of the large dimensional SFCCM R_k and a large dimensional matrix inversion, which is difficult to implement in practice. However, with the sparse massive MIMO-OFDM channel model presented above, when we shift our focus from the space-frequency domain to the angle-delay domain, channel estimation can be greatly simplified. The BS can first estimate the ADCRM to obtain Ĥ_{k,ℓ}; the SFCRM estimate can then be readily obtained as Ĝ_{k,ℓ} = V_M Ĥ_{k,ℓ} F^T_{N_c×N_g} by exploiting the unitary equivalence between the angle-delay domain channels and the space-frequency domain channels given in (13), while the same MSE-CE performance is maintained. In the following, we focus on estimation of the ADCRM H_{k,ℓ} under the MMSE criterion. Recalling (13), the received pilot signal at the BS in (18) can be rewritten in terms of the ADCRMs. After decorrelation and power normalization of Y_ℓ, the BS can obtain an observation of the UL channel H_{k,ℓ}, given by (22), where (a) follows from (20). Using the unitary transformation property, it can be readily shown that the pilot noise term in (22) exhibits a Gaussian distribution with i.i.d. elements distributed as CN(0, σ_ztr/σ_xtr), and (22) can be simplified as (23), where ρ_tr ≜ σ_xtr/σ_ztr is the signal-to-noise ratio (SNR) during the pilot segment, and Z_iid ∈ C^{M×N_g} is the normalized AWGN matrix with i.i.d. elements distributed as CN(0, 1). Note that the pilot interference term in (23) can be rewritten as in (24), where (a) follows from (16), and (b) follows from the permutation matrix definition given in Section I-A. Thus, the pilot interference term in (23) is a column-truncated version of the extended ADCRM H̃_{k′,ℓ,(N_c)} with a cyclic column shift, where the shift factor depends on the corresponding pilot phase shift difference. Recalling Proposition 2, the elements of the ADCRM H_{k′,ℓ} are statistically uncorrelated. Consequently, the elements of the pilot interference term, a column-truncated copy of H_{k′,ℓ} with a cyclic column shift, are also statistically uncorrelated.
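The delay-domain effect of a frequency-domain pilot phase shift, which underlies (24), can be checked numerically. The sketch below uses an assumed phase-ramp convention exp(−ȷ2πφn/N_c) for the shift (the exact convention is fixed by (19), which is not reproduced in this excerpt); with this convention the equivalent channel impulse response is cyclically shifted by φ samples.

```python
import numpy as np

Nc, Ng, phi = 64, 16, 5
rng = np.random.default_rng(1)

h = np.zeros(Nc, dtype=complex)
h[:Ng] = rng.standard_normal(Ng) + 1j * rng.standard_normal(Ng)  # delay-domain taps
g = np.fft.fft(h)                                                # frequency response

ramp = np.exp(-2j * np.pi * phi * np.arange(Nc) / Nc)            # assumed pilot phase shift
h_equiv = np.fft.ifft(g * ramp)                                  # equivalent channel

print(np.allclose(h_equiv, np.roll(h, phi)))                     # True: cyclic delay shift
```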
Using the same methodology as above, the corresponding power matrix of the pilot interference term can be obtained; it is a column-truncated version of the extended ADCPM Ω̃_{k′,(N_c)} defined in (17) with cyclic column shift φ_{k′} − φ_k. With the channel observation Y_{k,ℓ} in (23), and the fact that the angle-delay domain channel elements are uncorrelated as shown in Proposition 2, the MMSE estimate Ĥ_{k,ℓ} can be obtained in an element-wise manner as in (27) [47]. Let H̃_{k,ℓ} = H_{k,ℓ} − Ĥ_{k,ℓ} be the angle-delay domain channel estimation error of the kth UT; then the corresponding MSE-CE can be obtained accordingly, where (a) follows from the orthogonality principle of MMSE estimation [47]. Before we proceed, we define the sum MSE-CE of all the UTs as in (29). Due to the incurred pilot interference, the performance of APSP-based channel estimation might deteriorate. However, we show in the following proposition that such effects can be eliminated with proper phase shift scheduling for different pilots.

Proposition 3: The sum MSE-CE ǫ_CE is lower bounded by (30), and the lower bound can be achieved under the condition that, for all k, k′ ∈ K with k ≠ k′, the non-overlapping condition (31) holds.

Proof: See Appendix D.

Proposition 3 shows that with the proposed APSPs, the sum MSE-CE can be minimized when the phase shifts for different pilots are properly scheduled according to the condition given in (31). The interpretation is very intuitive. With frequency domain phase shifted pilots, the equivalent channels exhibit corresponding cyclic shifts in the delay domain, as seen from (24). If the equivalent channel power distributions in the angle-delay domain for different UTs can be made non-overlapping after pilot phase shift scheduling, the pilot interference effect can be eliminated, and the sum MSE-CE is minimized. Wireless channels are approximately sparse in the angle-delay domain in many practical propagation scenarios, and typically only a few elements of the ADCPM Ω_k are dominant in massive MIMO-OFDM. When such channel sparsity is properly taken into account, the equivalent angle-delay domain channels for different UTs are almost non-overlapping with high probability, given proper pilot phase shifts. This suggests the feasibility of the proposed APSPs for massive MIMO-OFDM. Note that the performance of the proposed APSP approach is related to the channel sparsity level. For the case where the channels of different UTs have a sparse common support, with s (≤ N_g) denoting the number of columns containing non-zero elements in the ADCPM [22], [23], the maximum number of UTs that can be served without pilot interference is ⌊N_c/s⌋. However, for practical wireless channels, most of the channel elements in the angle-delay domain are close to zero, and the condition in (31) usually cannot be satisfied exactly, which leads to some degradation of the channel acquisition performance. In such cases, it is clear that the sparser the channels are, the better the performance achieved by the proposed APSP approach. Before we conclude this subsection, we remark that several existing pilot approaches satisfy the optimal condition given in Proposition 3.
For the case where the channel sparsity property is not known, it is reasonable to assume that all the angle-delay domain channel elements are identically distributed, i.e., all the ADCPM elements are equal, in which case the optimal condition in (31) can be achieved when |φ_k − φ_{k′}| ≥ N_g for all k ≠ k′, i.e., the extended channels in the delay domain for different UTs are totally separated, which coincides with the conventional PSOPs [12]. For frequency-flat massive MIMO channels, i.e., N_c = 1, the condition in (31) can be achieved when Ω_k ⊙ Ω_{k′} = 0 for all k ≠ k′, i.e., different UTs can share the same pilot when their channels have non-overlapping support in the angle domain, which coincides with previous works such as [33], [43]. In our work, the proposed APSPs exploit the joint angle-delay domain channel sparsity in massive MIMO-OFDM, and are more efficient and general from the pilot overhead point of view.

C. Channel Prediction with APSPs

In the previous subsection, we investigated channel estimation during the pilot segment. Directly employing the pilot segment channel estimates in the data segment might not always be appropriate [48], especially in high mobility scenarios, which are the main focus of the APSPs. In this subsection, we investigate channel prediction during the data segment based on the received pilot signals, using the proposed APSPs. For frame-based massive MIMO-OFDM transmission, the BS utilizes the received signals during the pilot segment to acquire the channels in the current frame. If the pilot segment channel estimate Ĥ_{k,ℓ} is directly employed as the estimate of the channel H_{k,ℓ+∆ℓ} during the data segment, the corresponding sum MSE-CE for a given delay ∆ℓ between the pilot symbol and the data symbol can be written as in (32). In high mobility scenarios, the channel TCF satisfies ̺_k(∆ℓ) → 0 for relatively large delays |∆ℓ|. When ̺_k(∆ℓ) < 1/2, i.e., 1 − 2̺_k(∆ℓ) > 0, it can be observed from (32) that the sum MSE-CE ǫ_CE(∆ℓ) even exceeds the total channel power (the sum of the [Ω_k]_{i,j} over all UTs and elements), so channel estimation performance cannot be guaranteed; this motivates the need for channel prediction. For channel prediction, the BS utilizes the received pilot signals as well as the channel TCF to obtain estimates of the channels during the data segment. Under the MMSE criterion, with the angle-delay domain channel property of massive MIMO-OFDM given in Proposition 2, it is not hard to show that an estimate of the ADCRM H_{k,ℓ+∆ℓ} based on Y_{k,ℓ} can be obtained in an element-wise manner as in (33). Recalling the pilot segment channel estimate in (27), it can be seen that the data segment estimate can be expressed in terms of Ĥ_{k,ℓ}, which indicates that optimal channel estimates during the data segment can be easily obtained via prediction from the initial channel estimates obtained during the pilot segment, and the complexity of channel prediction in massive MIMO-OFDM can be further reduced. Similar to (29), the sum MSE-CP for a given delay ∆ℓ between the data symbol and the pilot symbol can be defined as in (35). From (35), it can be seen that pilot interference affects channel prediction performance similarly to the channel estimation case. However, we show in the following proposition that such effects can still be eliminated with proper pilot phase shift scheduling.

Proposition 4: For every ∆ℓ, the sum MSE-CP ǫ_CP(∆ℓ) is lower bounded, and the lower bound can be achieved under the condition that, for all k, k′ ∈ K with k ≠ k′, the UTs' channel power distributions in the angle-delay domain (after the pilot-induced cyclic shifts) are non-overlapping.

Proof: The proof is similar to that of Proposition 3, and is omitted for brevity.
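The element-wise estimation/prediction structure described above can be sketched numerically. The exact expressions (27) and (33) are not reproduced in this excerpt, so the snippet below assumes a standard per-element model: the pilot observation of each angle-delay element is the element plus shifted interference plus scaled noise, the MMSE estimate is the Wiener-scaled observation, and the data-segment prediction scales that estimate by the Clarke-Jakes TCF. All parameter values are illustrative only.

```python
import numpy as np
from scipy.special import j0

def estimate_element(y, omega, omega_int, rho_tr):
    """Assumed element-wise MMSE filter for y = h + h_int + z/sqrt(rho_tr)."""
    return omega / (omega + omega_int + 1.0 / rho_tr) * y

def predict_element(y, omega, omega_int, rho_tr, nu, t_sym, delta_ell):
    """Prediction at lag delta_ell: scale the pilot-segment estimate by the TCF."""
    rho = j0(2.0 * np.pi * nu * delta_ell * t_sym)
    return rho * estimate_element(y, omega, omega_int, rho_tr)

# Illustration: 250 km/h at a 2 GHz carrier (~463 Hz Doppler), LTE-like symbol time.
nu, t_sym = 463.0, 71.4e-6
for dl in (0, 2, 4, 6):
    print(dl, abs(predict_element(1.0, 1.0, 0.05, 10.0, nu, t_sym, dl)))
```

The shrinking magnitude with growing lag mirrors the motivation for the frame structure choice discussed next.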
D. Frame Structure

There exist two typical frame structures for TDD massive MIMO transmission [49]. One type of frame structure (referred to as type-A) begins with the UL pilot segment, followed by the UL and downlink (DL) data segments, as shown in Fig. 1(a). In the second type (referred to as type-B), the UL pilot segment is placed between the UL data segment and the DL data segment, as shown in Fig. 1(b). For the proposed APSP approach, the delay between the tail-end symbols of the data segment and the pilot segment is longer than with the PSOP approach due to the reduced pilot segment length. In addition, the APSP approach focuses on high mobility scenarios where channels vary relatively quickly. Thus the type-B frame structure is well suited to the proposed APSP approach.

E. Pilot Phase Shift Scheduling

In the previous subsections, we investigated channel estimation and prediction for massive MIMO-OFDM with APSPs, and obtained the optimal pilot phase shift scheduling condition applicable to both channel estimation and prediction. Such an optimal condition cannot always be met in practice, but pilot phase shift scheduling can still be beneficial. Several scheduling criteria can be adopted. For example, if we schedule the pilot phase shifts based on the MMSE-CE criterion, the problem can be formulated as the combinatorial program (38), where ǫ_CE is defined in (29). Such a scheduling problem is combinatorial, and optimal solutions must in general be found through an exhaustive search. Note that the optimal phase shift scheduling conditions for channel estimation and prediction are the same; thus a solution of problem (38) can also be expected to perform well under the MMSE-CP criterion.

Motivated by the optimal condition for channel estimation and prediction obtained in the previous subsections, a simplified pilot phase shift scheduling algorithm can be developed. We first define a function, given in (39), that measures the degree of overlap between two real matrices A, B ∈ R^{M×N}. From the Cauchy-Schwarz inequality, the overlap function in (39) is no larger than 1. In our algorithm, we preset a threshold to balance the tradeoff between algorithm complexity and channel acquisition performance. Specifically, we schedule the pilot phase shifts for different UTs so as to make the overlap function between the (shifted) ADCPMs of different UTs smaller than the preset threshold γ. Intuitively, the smaller the threshold γ, the better the channel acquisition performance, although at a higher algorithm complexity. The proposed algorithm is summarized in Algorithm 1.

Algorithm 1 Pilot Phase Shift Scheduling Algorithm
Input: The UT set K and the corresponding ADCPMs {Ω_k : k ∈ K}; the preset threshold γ
Output: Pilot phase shift pattern {φ_k : k ∈ K}
1: Initialization: φ_0 = 0, scheduled UT set K_sch = {0}, unscheduled UT set K_un = K \ K_sch
2: for k ∈ K_un do
3:   Search for a phase shift φ such that the overlap (39) between the correspondingly shifted ADCPM of UT k and that of every UT in K_sch is smaller than γ
4:   If no such φ can be found in step 3, then set φ to the phase shift that minimizes this overlap
5:   Update φ_k = φ, K_sch ← K_sch ∪ {k}, K_un ← K_un \ {k}
6: end for
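A compact sketch of this greedy scheduling procedure is given below. The overlap measure is our assumed normalized-inner-product form of (39), and the step-4 tie-break (pick the phase shift with the smallest worst-case overlap) is an assumption where the excerpt leaves the choice open.

```python
import numpy as np

def overlap(A, B):
    """Assumed form of the overlap measure (39) for nonnegative matrices."""
    return float(np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B) + 1e-12))

def schedule_phase_shifts(adcpms, Nc, gamma):
    """Greedy scheduling in the spirit of Algorithm 1.

    adcpms : list of extended ADCPMs, shape (M, Nc), so that a pilot phase shift
             corresponds to a cyclic column shift of the power matrix.
    """
    phis = [0]                                           # UT 0 takes phase shift 0
    for k in range(1, len(adcpms)):
        best_phi, best_val = 0, np.inf
        for phi in range(Nc):                            # candidate phase shifts
            shifted = np.roll(adcpms[k], phi, axis=1)
            worst = max(overlap(shifted, np.roll(adcpms[j], phis[j], axis=1))
                        for j in range(k))
            if worst < best_val:
                best_phi, best_val = phi, worst
            if worst < gamma:                            # step 3: threshold met
                break
        phis.append(best_phi)                            # steps 4-5
    return phis

# Toy usage: two UTs whose delay-domain supports can be made disjoint.
M, Nc = 4, 16
A = np.zeros((M, Nc)); A[:, :3] = 1.0
B = np.zeros((M, Nc)); B[:, :3] = 1.0
print(schedule_phase_shifts([A, B], Nc, gamma=1e-4))     # e.g. [0, 3]
```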
IV. CHANNEL ACQUISITION WITH APSPS OVER MULTIPLE SYMBOLS

In the previous section, we investigated channel acquisition for massive MIMO-OFDM with the proposed APSPs over one OFDM symbol. Sometimes pilots over one symbol might not be sufficient to accommodate a large number of UTs. In this section, we extend the use of APSPs to the case of multiple consecutive OFDM symbols. We assume that the pilots are sent over Q consecutive OFDM symbols starting with the ℓth symbol in each frame. In practice, the pilot segment length Q is usually short, and we adopt the widely accepted assumption that the channels remain constant during the pilot segment [10]-[12]. Then the received signal at the BS during the pilot segment can be written as in (40), where Y_{ℓ,(Q)} represents the received pilot signal at the BS during the pilot segment, X_{k,(Q)} ≜ [X_{k,0} X_{k,1} . . . X_{k,Q−1}] represents the pilot signals with X_{k,q} = diag{x_{k,q}} ∈ C^{N_c×N_c} the signal sent from the kth UT during the qth symbol of the pilot segment, Z_{ℓ,(Q)} is AWGN with i.i.d. elements distributed as CN(0, σ_ztr), and σ_ztr is the noise power. Recalling (19), the maximum adjustable phase shift for different pilots over one OFDM symbol is N_c − 1. For the Q pilot symbol case, the maximum adjustable pilot phase shift can be extended to QN_c − 1. By exploiting the modulo operation, we construct the APSPs over multiple OFDM symbols as in (41), where U is an arbitrary Q × Q unitary matrix, and X_{⌊φ_k/Q⌋} is the APSP signal over one symbol defined in (19). It can then be shown that, for all k, k′ ∈ K, the correlation relation (42) holds, where (a) follows from (20). This shows that the available phase shifts for the Q symbol case are divided into Q groups for the proposed APSPs in (41), and the group index depends on the residue of the pilot phase shift φ with respect to the pilot segment length Q. Pilot interference can only affect the UTs using APSPs with phase shifts in the same group. For example, if ⟨φ_{k′}⟩_Q = ⟨φ_k⟩_Q, then the phase shifts φ_{k′} and φ_k are within the same group, and the corresponding channel acquisition of UTs k′ and k might be mutually affected. Given the APSP correlation property over multiple symbols in (42), the channel estimation and prediction operations can be performed similarly to the single-symbol case investigated in the previous section, and we briefly discuss these issues below. After decorrelation and power normalization with Y_{ℓ,(Q)} given in (40), the BS can obtain an observation Y_{k,ℓ,(Q)} of the pilot segment ADCRM H_{k,ℓ}, as given in (43), where (a) follows from (42), ρ_tr ≜ σ_xtr/σ_ztr is the pilot segment SNR, Z_iid is the normalized AWGN matrix with i.i.d. elements distributed as CN(0, 1), and (b) follows from (24). With the channel observation Y_{k,ℓ,(Q)} in (43), the MMSE estimate of the ADCRM H_{k,ℓ} can be readily obtained in an element-wise manner as in (44), and the corresponding sum MSE-CE is given by (45). In addition, prediction of the ADCRM H_{k,ℓ+∆ℓ} based on Y_{k,ℓ,(Q)} can be performed as in (46), and the corresponding sum MSE-CP for a given delay ∆ℓ is given by (47). Based on the above sum MSE-CE and MSE-CP expressions for the multiple symbol APSP case, we can readily obtain the following proposition.

Proposition 5: The sum MSE-CE ǫ_CE(Q) and the sum MSE-CP are lower bounded by (48) and (49), respectively. Both lower bounds can be achieved under the condition that, for all k, k′ ∈ K with k ≠ k′, the multi-symbol counterpart of the non-overlapping condition (31) holds.

Proof: The proof is similar to that of Proposition 3, and is omitted for brevity.

Proposition 5 extends the single-symbol APSP case in the previous section to the multiple symbol case. In fact, when Q = 1, Proposition 5 reduces to the results in Proposition 3 and Proposition 4. The interpretation of Proposition 5 is straightforward.
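A toy snippet illustrating the grouping by residue modulo Q (the phase shift values below are arbitrary and only for illustration):

```python
# With a pilot segment of Q symbols, pilot phase shifts fall into Q groups by
# their residue modulo Q, and pilot interference can only arise between UTs
# whose phase shifts share a group (illustrative values only).
Q = 2
phase_shifts = {"UT0": 0, "UT1": 16, "UT2": 33, "UT3": 49}

groups = {}
for ut, phi in phase_shifts.items():
    groups.setdefault(phi % Q, []).append(ut)

print(groups)   # {0: ['UT0', 'UT1'], 1: ['UT2', 'UT3']}
```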
For multiple symbol APSPs, different pilot phase shifts are divided into several groups, and pilot interference only affects the UTs using phase shifts within the same group. If pilot interference can be eliminated through proper phase shift scheduling in all the groups, then optimal channel estimation and prediction performance can be achieved. When the optimal pilot phase shift scheduling condition in Proposition 5 cannot be met, a straightforward extension of the pilot phase shift scheduling algorithm in the previous section can be applied. Specifically, the UT set can be divided into Q groups, and pilot phase shift scheduling can be performed within each UT group using Algorithm 1. The tradeoff between channel acquisition performance and algorithm complexity can still be balanced through the preset threshold, which determines the degree of allowable channel overlap.

V. NUMERICAL RESULTS

In this section, we present numerical simulations to evaluate the performance of the proposed APSP-CA in massive MIMO-OFDM. The major OFDM parameters, which are based on 3GPP LTE [46], are summarized in Table I. The massive MIMO-OFDM system considered is assumed to be equipped with a 128-antenna ULA at the BS with half-wavelength antenna spacing. The number of UTs is set to K = 42 as in [4]. We consider channels with 20 taps in the delay domain, which exhibit an exponential power delay profile [18], [51] parameterized by the channel delay spread ς_k of UT k. We assume that transmissions from all the UTs are synchronized [13], [18]. The qth channel tap of UT k is assumed to exhibit a Laplacian power angle spectrum [18], [33], [51], S^ang_{k,q}(θ) ∝ exp(−√2 |θ − θ_{k,q}| / ϕ_{k,q}), where θ_{k,q} and ϕ_{k,q} represent the corresponding mean angle of arrival (AoA) and angle spread for the given channel tap, respectively. We assume that the UTs are uniformly distributed in a 120° sector, and the mean AoA θ_{k,q} is uniformly distributed accordingly; the remaining channel parameters follow [18], [38] and are summarized in Table II. We assume that all UTs exhibit the same Doppler, delay, and angle spread in the simulations.
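The per-tap statistics used in the simulations can be sketched as follows; the normalizations and the example parameter values are our assumptions, not taken from Tables I-II.

```python
import numpy as np

def exp_pdp(num_taps, delay_spread_samples):
    """Exponential power delay profile, normalized to unit total power."""
    p = np.exp(-np.arange(num_taps) / delay_spread_samples)
    return p / p.sum()

def laplacian_pas(theta, mean_aoa, angle_spread):
    """Laplacian power angle spectrum: S(theta) proportional to exp(-sqrt(2)|theta - mean|/spread)."""
    s = np.exp(-np.sqrt(2.0) * np.abs(theta - mean_aoa) / angle_spread)
    return s / (s.sum() * (theta[1] - theta[0]))   # approximate unit integral

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
pdp = exp_pdp(num_taps=20, delay_spread_samples=4.0)
pas = laplacian_pas(theta, mean_aoa=np.deg2rad(20.0), angle_spread=np.deg2rad(10.0))
print(pdp[:3].round(3), round(float(pas.max()), 3))
```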
With the above settings, we compare the performance of the proposed APSP-CA approach with that of the conventional PSOP-CA approach, which serves as the benchmark for channel acquisition performance. For the conventional PSOP-CA, the required pilot segment length is Q = ⌈K/(N_c/N_g)⌉ = 3 OFDM symbols [4]. For the proposed APSP-CA, the pilot segment length can be set to Q = 1 or 2. We adopt Algorithm 1 to schedule the pilot phase shifts in the simulations, and the overlap threshold in the algorithm is set to γ = 10^{−4}; although this algorithm is suboptimal in general compared with an exhaustive search, it performs well in the evaluated scenarios. In Fig. 2, the pilot segment MSE-CE performance obtained with the proposed APSPs (with Q = 1 and 2) is compared with that of conventional PSOPs (Q = 3) under several typical propagation scenarios. (All the simulated MSE results are normalized by the number of subcarriers N_c and the number of UTs K.) It can be observed that, in all the considered scenarios, the MSE-CE performance with APSPs approaches the performance obtained with PSOPs, while the pilot overhead is reduced by 66.7% (Q = 1) and 33.3% (Q = 2), respectively. In Fig. 3, we compare the channel acquisition performance during the data segment in terms of MSE versus the delay ∆ℓ between the data symbol and the pilot segment. Both the APSP-CA (Q = 1) and the PSOP-CA (Q = 3) are evaluated. Also, for APSPs, both the channel estimation and the channel prediction MSE performance are calculated. It can be observed that the MSE-CP performance obtained with APSPs approaches that of PSOPs, with the pilot overhead reduced by 66.7%. In addition, with APSPs, channel prediction outperforms channel estimation in all the evaluated scenarios. Note that the channel acquisition error in terms of both MSE-CE and MSE-CP grows almost linearly with the delay, and thus the channel acquisition performance can be improved when combined with the type-B frame structure, as shown in the following simulation results. At the end of this section, we compare the achievable spectral efficiency of the proposed APSP and the conventional PSOP approaches. We assume that the frame length equals 500 µs as in [4], which is equal to the length of 7 OFDM symbols [46], and that UL and DL data transmission each occupies half of the data segment length. For the conventional PSOP-CA approach, channel estimation and the type-A frame structure in Fig. 1(a) are adopted. For the proposed APSP-CA approach, both APSPs (Q = 1) and channel prediction are adopted, and both type-A and type-B frame structures are considered. An MMSE receiver and precoder are employed for both UL and DL data transmissions, and the data SNR is assumed to be equal to the pilot SNR. In Fig. 4, the achievable spectral efficiencies of the APSP-CA and PSOP-CA approaches are depicted. It can be observed that the proposed APSP-CA approach shows a substantial performance gain in terms of achievable spectral efficiency over the conventional PSOP-CA approach, especially in the high mobility regime, where pilot overhead dominates, and in the high SNR regime, where pilot interference dominates. Specifically, in the high mobility SU scenario (250 km/h) with an SNR of 10 dB, the proposed APSPs provide an average spectral efficiency gain of about 69% over the conventional PSOPs. In addition, the type-B frame structure can provide a gain of about 64% over the type-A frame structure when APSPs are adopted.

VI. CONCLUSION

In this paper, we proposed a channel acquisition approach with adjustable phase shift pilots (APSPs) for massive MIMO-OFDM to reduce the pilot overhead. We first investigated the channel sparsity in massive MIMO-OFDM based on a physically motivated channel model. With this channel model, we investigated channel estimation and prediction for massive MIMO-OFDM with APSPs, and provided an optimal pilot phase shift scheduling condition applicable to both channel estimation and prediction. We further developed a simplified pilot phase shift scheduling algorithm based on this optimal channel acquisition condition. The proposed APSP-CA was investigated over both one and multiple pilot symbols. Significant performance gains in terms of achievable spectral efficiency were observed for the proposed APSP-CA approach over the conventional PSOP-CA approach in several typical mobility scenarios.

APPENDIX A
DERIVATION OF (6)

The derivation of (6) is detailed in (53), where (a) follows from (5), and (b) follows from the definition of the delta function.

APPENDIX B
PROOF OF PROPOSITION 1

We start by defining some auxiliary variables to simplify the derivations. We define n_d ≜ ⌊d/M⌋ and m_d ≜ ⟨d⟩_M for an arbitrary non-negative integer d. Note that element indices start from 0 in this paper.
Then we can readily obtain the corresponding index identities. We can also obtain that, for matrices F ∈ C^{N_c×N_g} and V ∈ C^{M×M}, [F ⊗ V]_{i,j} = [F]_{n_i,n_j} [V]_{m_i,m_j}, from the definition of the Kronecker product. With the above definitions and related properties, the proof proceeds through the chain of equalities in (54), whose integrand involves terms of the form [f_{N_c,q} ⊗ v_{M,θ}] · exp(ȷ2πν(ℓ + ∆ℓ)T_sym) · g_k(θ, qT_s, ν), integrated over θ and ν. Before concluding the proof, we also have to show that both of the limits in the first equation of (54) exist and are finite. For this purpose, as can be seen from step (e) of (54), we only need to show that the integral over θ of exp(−ȷ2π(n_i − n_j)q/N_c) · exp(−ȷπ(m_i − m_j) sin(θ)) · S^AD_k(θ, τ_q) is finite.

APPENDIX C
PROOF OF PROPOSITION 2

To show (15), it suffices to evaluate the element-wise correlations of H_{k,ℓ} starting from its definition in (14), where (a) follows from the fact that F_{N_c×N_g} and V_M are both deterministic matrices, (b) follows from (6), and (c) follows from Proposition 1. This concludes the proof.

APPENDIX D
PROOF OF PROPOSITION 3

Since the elements of Ω^{⟨φ_{k′}−φ_k⟩}_{k′} are nonnegative, each term of the sum MSE-CE in (29) can be lower bounded, with equality exactly when the overlapping terms vanish, i.e., when condition (31) holds.
10,704.4
2015-11-12T00:00:00.000
[ "Engineering", "Computer Science" ]
Control of femtosecond multi-filamentation in glass by designable patterned optical fields

Ping-Ping Li,1 Meng-Qiang Cai,1 Jia-Qi Lü,1 Dan Wang,1 Gui-Geng Liu,1 Sheng-Xia Qian,1 Yongnan Li,1 Chenghou Tu,1,a and Hui-Tian Wang1,2,3,b
1MOE Key Laboratory of Weak Light Nonlinear Photonics and School of Physics, Nankai University, Tianjin 300071, China
2National Laboratory of Solid State Microstructures and School of Physics, Nanjing University, Nanjing 210093, China
3Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093, China

I. INTRODUCTION

Many efforts have focused on the control of multiple filaments, such as by using the deformable mirror,6 the axicon,7-10 the phase plate,11-14 the pinhole,15 diffractive elements,16,17 the mesh,18,19 astigmatic focusing,20,21 the beam size,22-24 the ellipticity,25-28 the cylindrical lens,29 the microlens array,30 and hybridly polarized vector fields.31 In this article, we present a scheme for controlling fs multi-filamentation with an engineerable quantity and locations in a K9 glass (K9 is almost the same as BK-7 glass, so for all parameters used for K9 here we adopt those for BK-7 32,33). We used a computer-controlled spatial light modulator (SLM) to manipulate patterned optical fields (POFs) composed of multiple individual optical fields (IOFs) and thereby to control the multiple focal spots in the focal plane. We demonstrate experimentally the realization of designable, engineerable, and stable multi-filamentation in the K9 glass.

II. EXPERIMENT

The experimental configuration is shown in Fig. 1. The light source was a fs Ti-sapphire regenerative amplifier (Coherent Inc.) operating at a central wavelength of 800 nm, with a pulse duration of 125 fs and a repetition rate of 1 kHz, which delivers a fundamental Gaussian mode. An achromatic half-wave plate (HWP) and a Glan-Taylor prism (GTP) are used to control the energy and the polarization direction of the laser incident into the K9 glass. After the fs laser beam is expanded by a beam expander (BE) composed of a pair of achromatic lenses, it is incident on the reflection-type SLM (with a dimension of 1920 × 1080 pixels and a size of p × p = 8 × 8 µm² per pixel).
The computer-generated holograms (CGHs) displayed on the SLM are used to produce the POFs composed of multiple IOFs. Behind the SLM, another pair of achromatic lenses is used for spatial filtering. A focusing achromatic lens with a focal length of 200 mm is used to increase the energy density inside the K9 glass (with a length of ∼15 mm), and the focal plane is inside the K9 glass (at a distance of ∼2 mm from the incident plane of the glass). The produced multi-filaments are imaged by an achromatic lens with a focal length of 50 mm. The CCD camera is used to capture the images; each recorded frame is accumulated over 10 shots. The images captured by the CCD camera correspond to the end plane of the filaments rather than the exit plane of the glass sample. Beyond the end plane of the filaments, the light diverges rapidly, so the local intensity, and hence the nonlinear effect, becomes very low. Therefore, the images captured on the CCD camera suffer only little distortion, because the propagation of the divergent light is nearly linear behind the end plane of the filaments. The imaging system is an achromatic lens with a focal length of 50 mm. The very low nonlinearity can only give rise to a small change in the size of a filament and cannot influence its location. The computer-controlled SLM is used to generate the demanded POFs. The reflection function of the CGH loaded on the SLM is written as in Eq. (1). The demanded information, which is included in δ(x, y), is carried by the carrier-frequency grating cos(2πf_x x) along the x direction, with a grating period of Λ = 10p (and a corresponding spatial frequency f_x = 1/Λ). δ(x, y) has the form given in Eq. (2), where G_j = (2π/Λ_j) k̂_j, k̂_j = cos φ_j x̂ + sin φ_j ŷ, r = x x̂ + y ŷ, x̂ and ŷ are the unit vectors in the x and y directions, and φ_j is the angle formed with the x direction.

A uniformly x-polarized optical field incident on the SLM, with the reflection function given in Eq. (1), is diffracted into many orders. Only the +1st order is chosen as the input field, as given in Eq. (4). After this input POF, composed of n IOFs, is focused, it is incident into the K9 glass to produce the multi-filamentation. Each IOF is a circular top-hat field with the same radius R_0 and the same amplitude E_0. The location of the jth IOF is defined by the coordinate of its center, r_{0j} = (r_{0j}, φ_{0j}). In particular, it should be pointed out that each IOF carries a blazed long-period phase grating G_j. The jth IOF has a grating period Λ_j and an orientation (its grating vector k̂_j forms an orientation angle φ_j with respect to the +x direction). The focused input POF will include n focal spots from the n IOFs. Λ_j and φ_j of the jth IOF are used to engineer the arrangement of the n focal spots, and thus to achieve the engineering of the fs multi-filamentation in the nonlinear medium (K9 glass). The period Λ_j can be written as Λ_j = L_j p, where L_j is the number of pixels within one period.
After focusing, the jth IOF in the input plane (x, y) or (r, φ) is focused into a focal spot in the focal plane (ρ, ϕ), located at (ρ_j, ϕ_j). We can calculate the distance ρ_j of the focal spot P_j from the field center (ρ = 0) by the formula tan θ_j = ρ_j/f, where f is the focal length of the focusing lens. Under the paraxial condition, the focal spot of the jth IOF is located at (ρ_j, ϕ_j) = (fλ/Λ_j, φ_j) in the focal plane (ρ, ϕ). The diffraction angle θ_j, which is the angle formed with the propagation direction z, is determined by sin θ_j = λ/Λ_j. So the distance of the focal spot from the center (ρ = 0) can be controlled by changing the period Λ_j of the phase grating in the jth IOF for a given f, while the orientation of the focal spot can be changed through the orientation angle φ_j.

We first generate a POF composed of three closely arranged IOFs with the same radius R_0 = 1872 µm. From its focal field recorded by the CCD camera, as shown in Fig. 2(a), we measured the distance between focal spots 1 and 2 to be d ∼ 328.2 µm, from which ρ_1 is estimated as ρ_1 = d/(2 cos 30°) ∼ 189.5 µm. We also calculated the theoretical value of ρ_1 to be ρ_1 = fλ/Λ_1 = 200 µm (where f = 200 mm, λ = 800 nm and Λ_1 = 100p = 800 µm). When such a focused field is incident into the K9 glass, three filaments are produced and their pattern is also recorded, as shown in Fig. 2(b). We measured the distance between the corresponding filaments 1 and 2 to be ∼317.8 µm (correspondingly, the distance of filament 1 or 2 from the center is ∼183.5 µm). Since this value is close to ρ_1, the filaments are produced in the vicinity of the focal spots. Filaments 1 and 2 had sizes of ∼53.0 and ∼61.0 µm in the short dimension.

We now explore the engineering of the quantity of multi-filaments. In the following, n and m are used to denote the number of grating units (or focal spots) in the CGH (in the focal plane) and the number of filaments produced in the K9 glass, respectively. Figures 3(a)-(c) show the POFs composed of three, four, and five closely arranged IOFs, respectively; in Fig. 3(b) the four IOFs are located at (r_{0j}, φ_{0j}) = (310p, 2jπ/4) (j = 0, 1, 2, 3), and in Fig. 3(c) the five closely arranged IOFs are located at (r_{0j}, φ_{0j}) = (335p, 2jπ/5) (j = 0, 1, 2, 3, 4). It should be pointed out that the orientation angle of the phase grating in each IOF is φ_j = φ_{0j} in Figs. 3(a)-(c), implying that every phase grating is oriented toward the origin. Figures 3(d)-(f) illustrate the spatial distribution of the filaments produced by the focused POFs shown in Figs. 3(a)-(c), respectively. Clearly, the quantity of the produced filaments is in good agreement with that of the IOFs (m = n). The total energy per single pulse incident into the K9 glass is ε = 10.75, 10.75 and 14.40 µJ for the three cases of n = m = 3 in Figs. 3(a) and (d), n = m = 4 in Figs. 3(b) and (e), and n = m = 5 in Figs. 3(c) and (f), respectively. Correspondingly, the pulse peak power P for producing a single filament, estimated by P = (ε/n)/τ, is 28.7, 21.5 and 23.0 MW for the three cases, i.e., 15.6 P_C, 11.7 P_C and 12.5 P_C (we estimate the critical power of self-focusing for the K9 glass to be P_C = αλ²/(8πn_0 n_2) ∼ 1.84 MW, using α = 3.77, λ = 800 nm, n_0 = 1.51 and n_2 = 3.45 × 10⁻²⁰ m²/W 32,33). It should be pointed out that the input power of an IOF is not the power contained in a single filament; a filament contains a fixed amount of power, roughly equal to P_C, the critical power of self-focusing of the nonlinear medium.1,34 The filaments had a size of ∼52.0 µm in the short dimension.
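The two numbers quoted above (ρ_1 = fλ/Λ_1 and P_C) can be checked with a few lines of arithmetic; the snippet below simply re-evaluates those formulas with the stated values.

```python
import numpy as np

f_lens = 200e-3        # focal length of the focusing lens (m)
wavelength = 800e-9    # central wavelength (m)
pixel = 8e-6           # SLM pixel size p (m)

rho1 = f_lens * wavelength / (100 * pixel)        # Lambda_1 = 100 p
print(f"rho_1 = {rho1 * 1e6:.0f} um")             # ~200 um

alpha, n0, n2 = 3.77, 1.51, 3.45e-20
Pc = alpha * wavelength**2 / (8 * np.pi * n0 * n2)
print(f"P_C = {Pc / 1e6:.2f} MW")                 # ~1.84 MW
```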
We now explore the whole-pattern engineering of the multi-filamentation, including the rotation of the whole pattern, as shown in Fig. 4, and the interval between the filaments, as shown in Fig. 5. We choose POFs composed of three closely arranged IOFs as examples, under a pulse peak power of P = 15.6 P_C = 28.7 MW. Figure 4(a) shows the intensity pattern of filamentation produced by the focused POF composed of three closely arranged IOFs with the same radius R_0 = 234p = 1872 µm and the same grating period Λ_{1,2,3} = Lp = 50p (L = 50) = 400 µm. The centers of the three IOFs are located at (r_{0j}, φ_{0j}) = (270p, 2jπ/3) (j = 0, 1, 2) and the orientation angles of the three gratings are φ_j = φ_{0j} (j = 0, 1, 2), implying that the three gratings are oriented toward the origin. Figures 4(b)-(h) show a series of intensity patterns of filamentation produced by a series of focused POFs that are counterclockwise rotated, in turn, by a step of π/12 with respect to the POF used in Fig. 4(a), as schematically shown in the inset on the right. The filaments had a size of ∼56.0 µm in the short dimension. Clearly, the intensity patterns of the produced filaments are rotated by the same angle synchronously. So the locations of the filaments can be changed neatly by rotating the grating units of the CGH loaded on the SLM. For the control of the interval between the filaments, the arrangement of the used POF is similar to that used in Fig. 4(a): the phase gratings of all three IOFs are still oriented toward the origin, while all the grating periods are changed synchronously. As shown in Figs. 5(a)-(l), the intervals between two neighboring filaments decrease as the grating periods Λ_{1,2,3} = Lp increase, because a larger grating period in the IOF gives the focal spot a smaller deflection angle. The filaments had a size of ∼66.0 µm in the short dimension. In Fig. 6, when the grating periods Λ_{1,2,3} = Lp are changed from L = 40 to L = 250 pixels, the interval between two neighboring filaments decreases from d = 824 to 132 µm (experimental values from Fig. 5) and from d = 867 to 138 µm (theoretical results). Clearly, the experimental results are in good agreement with the theoretical ones.

FIG. 6. Dependence of the interval between two neighboring filaments among the three filaments on the grating period of the IOF. The open squares are the measured intervals (from Fig. 5) and the solid line shows the calculated intervals between two neighboring focal spots in the focal plane.

We now explore the control of a single filament, as shown in Fig. 7, under a pulse peak power of P = 15.6 P_C = 28.7 MW. In this case, any POF is composed of three closely arranged IOFs. The centers of the three IOFs, with the same radius R_0 = 234p, are always located at (r_{0j}, φ_{0j}) = (270p, 2jπ/3), where j = 0, 1, 2. The phase gratings in the three IOFs are always oriented toward the origin. In particular, the phase gratings of the 1st and 3rd IOFs, located at (r_{01}, φ_{01}) = (270p, 0) and (r_{03}, φ_{03}) = (270p, 4π/3), always keep a grating period of Λ_1 = Λ_3 = 50p. In contrast, the grating period Λ_2 of the 2nd IOF is changed from L_2 = 50 to L_2 = 500, and then further increased. As seen from Fig. 7, in each photo two filaments are always stationary, while the third filament moves from the top left corner to the bottom right corner along the bisector of the two stationary filaments, as shown in Figs. 7(a)-(i). In addition, we also produced two patterned multi-filament arrays, as shown in Fig. 8; the multi-filaments exhibit the shapes of the letters "Z" and "L", as shown in Figs. 8(a) and (b), respectively.
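The theoretical curve in Fig. 6 can be reproduced from the geometry stated above: each focal spot sits at radius ρ = fλ/(Lp) and, for three spots separated by 120°, the neighbor distance is √3 ρ. The snippet below re-derives the quoted endpoints; the √3 factor is our geometric reading of the three-fold arrangement.

```python
import numpy as np

f_lens, wavelength, pixel = 200e-3, 800e-9, 8e-6

for L in (40, 100, 250):
    rho = f_lens * wavelength / (L * pixel)   # radial focal-spot position
    d = np.sqrt(3.0) * rho                    # neighbor distance for a 120-degree triad
    print(f"L = {L:3d} pixels  ->  d = {d * 1e6:6.1f} um")
# L = 40 gives ~866 um and L = 250 gives ~139 um, close to the ~867 and ~138 um
# theoretical values quoted for Fig. 6.
```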
III. DISCUSSION

We show two examples of top views of the light propagation process in the glass in Fig. 9. Clearly, the light indeed propagates in self-trapping channels in the glass; in Figs. 9(a) and (b), there are two and three filaments with a length of ∼12 mm, respectively. If there were no nonlinear effect, the light in the glass would diverge or diffuse. The higher-order nonlinear effect plays an important role in filamentation, because it can balance the self-diffraction to form the filaments.

As further evidence, filamentation is always accompanied by supercontinuum generation. Figure 10 shows the colored conical supercontinuum patterns captured on a white screen placed behind the glass. Without filamentation it is very difficult to generate the supercontinuum, so its presence is an indication of filamentation. We also measured the supercontinuum spectra generated by the POF composed of three IOFs (n = m = 3) with different grating periods Λ = Lp, as shown in Fig. 11. It can be found that as the grating period Λ (L) increases, the supercontinuum intensity becomes slightly stronger, and the supercontinuum spectra have two peaks, located at the shorter wavelength of ∼600 nm and the longer wavelength of ∼734 nm, respectively. As the period Λ (L) increases, the shorter-wavelength peak exhibits a blue shift from 618 nm for L = 250 pixels to 590 nm for L = 850 pixels and its intensity increases, while the longer-wavelength peak shows almost no shift and its intensity increases quickly. As the period increases, as shown in Fig. 5, the filaments approach each other until they partially overlap, resulting in stronger interaction between the filaments and enhanced supercontinuum generation.

Finally, the aiming precision in terms of angle and distance depends dominantly on the pixel size of the spatial light modulator; a smaller pixel yields a higher precision. Since each pixel has a size of p × p = 8 × 8 µm², the maximum angular uncertainty and the maximum distance uncertainty are lower than 0.5 arcsec and 0.6 µm under our experimental conditions, respectively.

IV. CONCLUSION

We have demonstrated multi-filamentation produced by focused POFs composed of multiple IOFs in solid glass. In particular, each IOF includes a blazed phase grating whose period and orientation, as degrees of freedom, can flexibly engineer the location of the focal spot of that IOF. The number of IOFs composing the POF determines the number of focal spots. The computer-controlled SLM can be used to achieve this aim. Owing to the engineerable patterns of the multiple focal spots, the multi-filamentation produced by the fs POF composed of multiple IOFs can be flexibly engineered. Although our idea has been proved in solid glass, our scheme should also be of reference significance for producing engineerable multi-filamentation in air, inasmuch as it can easily control the intervals between the filaments by setting the locations of the individual optical fields forming the patterned optical field, which allows us to tune the strength of the interaction between the filaments: when the filaments are close to each other, the interaction becomes stronger. In this article, however, we did not investigate the interaction between the filaments in depth. Owing to its flexible controllability, our scheme should be a promising route toward potential applications of filaments. For example, laser-controlled triggering of lightning may become more convenient and flexible given the controllability of the quantity and locations of the filaments.

FIG. 2.
The patterns of the focal spots and the filaments produced by the POF composed of three (n = 3) closely arranged IOFs. (a) The focused field pattern, with a size of 1835 × 1835 µm². (b) The corresponding intensity pattern of filamentation, with a size of 611.7 × 611.7 µm².

FIG. 4. The intensity patterns of the filaments produced by a series of POFs composed of three closely arranged IOFs, under a pulse peak power of P = 15.3 P_C = 28.2 MW. In (a)-(h), the POFs are rotated in turn by a step of π/12. Each photo has a size of 1138 × 1138 µm².

FIG. 8. The patterned filaments produced by the two special POFs. Each photo has a size of 1844 × 1844 µm².

FIG. 9. Top views of the light filaments propagating inside the glass. (a) The case of n = m = 2 and L = 40 pixels and (b) the case of n = m = 3 and L = 40 pixels.
4,305
2016-12-05T00:00:00.000
[ "Physics" ]