Roles of TRPA1 in Pain Pathophysiology and Implications for the Development of a New Class of Analgesic Drugs
The Transient Receptor Potential A1 (TRPA1) ion channel has evolved in animals to respond to signals from a variety of sensory stimuli. Many structural determinants of its multimodal activation have been identified to date. TRPA1 activities include responses to exogenous chemical irritants and endogenous inflammatory mediators, to zinc, voltage, temperature and stretch, together with subtle yet critical modulation by calcium ions. TRPA1 has emerged as an important target for several types of pain and inflammatory conditions because of its limited expression profile and its demonstrated roles in mediating different types of pain and in sensitizing peripheral sensory afferents. Despite observed species differences in channel pharmacology, recent genetic evidence in humans brings some hope that preclinical efficacy in disease models will translate to patients. During the past decade, various groups have worked to develop a new class of analgesic or anti-tussive drugs aimed at blocking TRPA1 activity in primary sensory afferents, and several companies are advancing toward clinical proof-of-concept studies. This review aims to summarize key advances in the understanding of TRPA1 with regard to its roles in pain pathophysiology and their implications for patients.
*Address correspondence to this author at Galderma R&D, 2400 route des Colles - Les Templiers, 06410 Biot, France; Tel: +33(0)492954762; Fax: +33(0)493957071; E-mail: patrick.raboisson.1@gmail.com

The discovery of the Transient Receptor Potential Ankyrin 1 (TRPA1) protein and its characterization parallel those of the Transient Receptor Potential Vanilloid 1 (TRPV1) protein. Reports of the pro-inflammatory and noxious properties of their ligands preceded the discovery of the protein targets and of their mechanistic roles in pain signaling [1]. In 1986, almost 20 years before it was identified as a primary TRPA1 activator, mustard oil application was shown to cause cutaneous vasodilation and inflammatory responses comparable to those triggered by antidromic nerve stimulation [2,3]. Mustard oil also induces nociceptor sensitization to mechanical and thermal stimuli [4]. TRPA1 cDNA, originally named ANKTM1 for Ankyrin-like with Transmembrane domains protein 1, was identified and cloned in 1999 from cultured human fibroblasts and included in the Transient Receptor Potential (TRP) channel family based on sequence identity [5]. Its primary structure exhibited some typical channel-like features, including six transmembrane segments (TM) and an unusually high number (>14) of ankyrin repeats in its N-terminal part. TRP channels typically have low selectivity for calcium and show multimodal activation [6]. The TRP channel family was originally divided into five subfamilies by Montell [7] and was later extended to seven subfamilies [8][9][10]. These seven groups are as follows: TRPC (TRP Canonical), TRPV (TRP Vanilloid), TRPM (TRP Melastatin), TRPA (TRP Ankyrin), TRPN (TRP Non-mechanoreceptor C; which has not been found in mammals), the less closely related TRPP (TRP Polycystin; associated with polycystic kidney disease) and TRPML (TRP MucoLipin; associated with mucolipidosis type IV). TRPA1 is the only member of the TRPA group. In 2003, TRPA1 expression was detected among a
variety of tissues and shown to be present in a subpopulation of mouse nociceptive sensory neurons that also expressed TRPV1 (see below). A potential role in mechanotransduction was inferred in 2004 from the TRPA1 expression profile, and TRPA1 was implicated in hair-cell mechanotransduction in mice [11]. In non-mammals such as the zebrafish, Drosophila or nematode (C. elegans), a similar role was proposed for TRPN1 orthologs, which also contain an unusually high number (29) of ankyrin repeats. TRPN-deficient animals exhibited phenotypes of developmental deafness or imbalance [12][13][14][15]. In mammals, where TRPN genes are absent, it was initially thought that TRPA1 would take over TRPN function and mediate signal transduction for audition and balance [11,16]. Despite localized TRPA1 expression and an activity profile in cultured hair cells, it later appeared that auditory function in TRPA1-knockout mice was fully normal and no different from that of the wild-type [17,18]. Instead, TRPA1 knockouts exhibited reduced peripheral sensitization following mechanical stimulation in nociceptors (C-fibers, A-delta) and A-beta sensory neurons [19]. The early description of mustard oil causing pain sensitization upon crude application to the skin could then begin to be reconciled with a mechanistic role of TRPA1 in pain sensitization. This review is an attempt to summarize the current evidence supporting a role for TRPA1 in the pathophysiology of pain, from the bench to the bedside.
TRPA1: Gene, Primary Structure and Protein Domains
The TRPA1 gene in mammals is made up of 27 exons totaling about 50 kb and is located on chromosome 8q13 [5]. Human TRPA1 consists of 1119 amino acids (a.a.), whereas the mouse and rat orthologs comprise 1125 a.a. The typical tetrameric assembly of TRPs was confirmed in the recent electron microscopy 16 Å resolution three-dimensional TRPA1 structure from mice [20]. Each monomer is made up of six membrane-spanning segments (TM1-TM6) forming a functional channel by the assembly of a typical pore loop between TM5 and TM6 that forms the channel cation pore Fig. (1). Two potential sites for N-linked glycosylation are found in the extracellular loops (TM1-TM2), and both the C- and N-terminals are located on the intracellular side of the membrane. A special feature of TRPA1 is the long N-terminal ankyrin repeat domain (ARD) with at least 14 ankyrin repeats (AR). An AR is a 33-amino-acid-long motif containing two alpha-helices that appears in tandem arrangements in bacterial and eukaryotic proteins. The ARD forms a super-helical spiral structure that may play a role in protein-protein interactions and also in mechanotransduction [21,22]. A potential role of the ARD in TRPA1 translocation and functional surface expression was inferred from impaired truncated mutants [23].
Distinct stretches of the ARD were shown to be involved in the regulation of heat sensitivity and activation by covalent ligands and intracellular Ca++ [24]. A distinct regulatory site has recently been identified and characterized in the distal cytosolic C-terminal end of human TRPA1, where a cluster of acidic residues (E1077 and D1080-D1082) affected the voltage dependency of TRPA1 and its potentiation by Ca++ or chemical irritants (see below). Truncation of the last 20 C-terminal a.a. decreased Ca++-induced inactivation threefold [25]. A different stretch of residues in the C-terminal region (L969, K975, L988, L989, L1092 and L1099) proved critical for voltage activation when mutated [26]. The cytosolic C-terminal end of TRPA1 thus seems to play an essential role in the voltage activation of the channel.
Little is known about the cellular trafficking of TRPA1; however, it has been demonstrated that direct in vitro application of protein kinase A (PKA) activators, phospholipase C (PLC) or TRPA1 agonists such as allyl isothiocyanate (AITC), the active component of mustard oil, led to increased immunodetection at the surface of recombinant cells. This process was partly dependent on SNARE-mediated vesicle fusion [27]. The same stimuli, applied in vivo, led to sensitization of the nocifensive response.
TRPA1 Can Form Heterotetramers with TRPV1 In Vitro
In vitro, TRPA1 subunits can form functional heteromeric complexes with TRPV1. TRPA1/TRPV1 heteromers exhibit altered pharmacology and channel kinetics compared to either homomeric TRPA1 or homomeric TRPV1 [28][29][30][31]. In recombinant cells such as CHO, TRPA1-TRPV1 complexes form on the cell surface; such complexes may therefore serve as models to compare the pharmacology of the various TRP oligomers in a recombinant system [32]. It has been suggested that interaction with TRPV1 regulates AITC-induced desensitization of TRPA1 (see below), both in recombinant cells and in sensory neurons, and that AITC-induced internalization of TRPA1 could be prevented by TRPV1 co-expression [33]. Heteromer pharmacology and sensitivity to multimodal activators are of potential physiological relevance for the peripheral sensory system, as TRPA1 and TRPV1 mRNAs are co-expressed in subpopulations of rat dorsal root ganglia and trigeminal ganglia neurons, although it is unknown whether TRPA1-TRPV1 heteromers form in native tissues [34,35].
Cross-species Differences and Relevance to TRPA1 Drug Discovery
The sequence identity between the human and rat, mouse or dog proteins is only about 80%, and discrete cross-species variations may explain some key pharmacological and functional differences among TRPA1 orthologs. It is well documented that some compounds identified as antagonists at the human isoform show very different pharmacology at the rat receptor; this highlights the critical importance of surrogate model species in assessing the in vivo pharmacology [36,37]. Several electrophilic compounds blocking the human TRPA1 receptor have been shown to either lack activity at, or even activate, the rat receptor (AMG7160, AMG2504, AMG9090, AMG5445, CMP1, CMP2 and CMP3) [36,38]. Analogous discrepancies have been seen for non-reactive ligands such as caffeine and menthol [39,40]. The in vitro pharmacology (potency and type of effect) of caffeine, menthol and CMP1 on TRPA1 was similar between the rhesus monkey and human channels (sharing 96.9% a.a. identity) but clearly distinct between the human and the rat or mouse channels (the rodent channels nonetheless sharing 96.6% a.a. identity with each other), indicating that rhesus monkey, rather than rat or mouse, may be a good surrogate species for human in preclinical studies [41]. Residues in the distal N-terminus between amino acids 231 and 287 are critical in explaining the shift of some ligands from antagonist to agonist. The mouse TRPA1 mutant M268P resembles the human ortholog, and this single-point mutation was sufficient to shift caffeine activity from activator to inhibitor on the rodent channel. Yet the corresponding reverse mutation in human TRPA1 (P267M) was not sufficient to trigger a shift in the opposite direction, suggesting that complex interactions between distinct parts of the channel, e.g. TM5 and TM6, explain the specific pharmacology of orthologs and the activity of some ligands such as menthol [38,42,43]. Within the same species, the pharmacological profile may also follow a bimodal concentration dependence, as in the case of menthol, which was shown to activate mouse TRPA1 at low concentrations and block it at higher ones [44].
The ARD is another region of importance for pharmacological species specificity, particularly for the thermal and chemical opening modalities. Rattlesnake and Drosophila TRPA1 orthologs are activated by heat and seem less sensitive to chemicals such as AITC than their human counterpart. Swapping the first 10 ankyrin repeats of the human isoform for those of the snake rendered the chimera heat-sensitive while it retained its original human AITC sensitivity. Closer mutational analysis revealed that rattlesnake TRPA1 contains two functional AR regions (AR3-8 and AR10-15); each of these was sufficient to confer heat sensitivity to the human ortholog, yet with different temperature thresholds of activation. The N-terminal ARD of TRPA1, by integrating thermosensation, ligand modulation and also calcium sensitivity (see below), thus seems to act as a molecular integrator of physiological signals in TRPA1-expressing neurons [24].
All these findings stress the need for caution when ligands developed against the human channel are evaluated in other species during preclinical development; careful assessment of the cross-species reactivity is warranted.
BIOPHYSICAL PROPERTIES OF TRPA1
Electrophysiological Properties of TRPA1 Channels

TRPA1 channels are characterized by a relatively high single-channel conductance. Homomeric channels have a conductance of about 110-173 pS [16,[45][46][47]. Extracellular physiological Ca++ and Mg++ concentrations reduce the inward single-channel conductance at negative potentials to about 65 pS. A gradual rundown of activity has been reported when the channel is recorded in cell-free membrane patches. This was seemingly caused by the absence of intracellular inorganic polyphosphates in excised patches, which changes the functional state of the channel and makes it insensitive to pungent chemicals and Ca++ activation [46,48]. The biophysical properties of the pore have been explored by permeability studies with inorganic and organic cations. The monovalent cation permeability follows the sequence Rb+ > K+ > Cs+ > Na+ > Li+ when constitutive TRPA1 activity is recorded. When activated by AITC, the relative permeability of inorganic cations in excised patches has been determined as Ca++ > Ba++ > Mg++ > NH4+ > Li+ > Na+ ≥ K+ ≥ Rb+ > Cs+ [48].
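As a back-of-the-envelope illustration (not from the cited studies), the conductances above translate directly into expected unitary current amplitudes via Ohm's law; the 0 mV reversal potential assumed here simply reflects the channel's non-selective cation permeability.

```python
def unitary_current_pA(conductance_pS: float, voltage_mV: float,
                       reversal_mV: float = 0.0) -> float:
    """Unitary current i = g * (V - E_rev); pS * mV gives fA, so scale to pA."""
    return conductance_pS * (voltage_mV - reversal_mV) * 1e-3

# At -60 mV, a 110 pS channel carries about -6.6 pA of inward current;
# with divalent block reducing conductance to ~65 pS, this drops to ~-3.9 pA.
i_unblocked = unitary_current_pA(110, -60)
i_blocked = unitary_current_pA(65, -60)
```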
As previously observed for TRPV1, activated TRPA1 undergoes agonist-dependent pore dilation, from an original 11 Å diameter to 14 Å in its narrowest portion, upon persistent stimulation by AITC [49]. TRPA1 pore dilation eventually allowed permeation of large cationic molecules such as Yo-Pro (376 Da; fluorescent probe), N-methyl-D-glucamine (195.2 Da) or the lidocaine derivative QX-314 (363 Da) [50,51]. Probe permeation was reversed by TRPA1 antagonists such as ruthenium red (RuR) and HC-030031 and, regardless of their reactive nature, pore dilation was observed both with electrophiles (AITC, cinnamaldehyde, 4-hydroxynonenal) and non-electrophilic compounds (farnesyl thiosalicylic acid and URB597). Both calcium and lasting agonist application dynamically and reversibly regulated pore dilation in TRPA1, similar to what was shown previously for TRPV1 [52,53]. As a correlate of pore dilation, a 30% increase in divalent cation selectivity (PCa/PNa) and an increase in fractional Ca++ current were also observed [49]. Even modest changes in Ca++ permeability resulting from dynamic pore dilation could play a role in the regulation of TRPA1 (and TRPV1) in nociceptive neurons and alter their excitability. Following stimulation with voltage ramps, TRPA1 showed marked outward rectification at positive membrane potentials, indicative of the voltage dependency of the channel [26,44]. In the absence of intracellular Ca++ and in recombinant systems, the outward current is measured above +60 mV, while the voltage for half-maximal activation (Vhalf) corresponds to +155 mV. Interestingly, this voltage dependence is strongly shifted leftward by increasing internal Ca++. For instance, at 6 µM intracellular Ca++, TRPA1 Vhalf = +14 mV, and it is reduced to -1 mV at 100 µM [54]. Physiological intracellular Ca++ levels ([Ca++]i) usually reach 100 nM in most DRG cells, but in medium-sized neurons (30 µm diameter), following NGF stimulation or in inflammatory conditions, the intracellular calcium concentration can reach 100-fold higher values, potentially making voltage a physiologically relevant regulator of TRPA1 even in the absence of macromolecular ligands [55,56].
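The functional impact of this Ca++-dependent Vhalf shift can be made concrete with a simple two-state Boltzmann gating model. This is a sketch under stated assumptions: only the Vhalf figures come from the text, and the slope factor k is an arbitrary placeholder, not a measured TRPA1 value.

```python
import math

def open_probability(v_mV: float, v_half_mV: float, k_mV: float = 25.0) -> float:
    """Two-state Boltzmann gating: P_o = 1 / (1 + exp((V_half - V) / k))."""
    return 1.0 / (1.0 + math.exp((v_half_mV - v_mV) / k_mV))

# With Vhalf = +155 mV (Ca++-free), the channel is essentially closed at 0 mV;
# shifting Vhalf to -1 mV (100 uM [Ca++]i) makes it roughly half-open there.
po_ca_free = open_probability(0.0, 155.0)
po_high_ca = open_probability(0.0, -1.0)
```

Under this toy model, raising [Ca++]i converts voltage from an irrelevant parameter at physiological potentials into an effective gate, which is the sense in which voltage becomes "physiologically relevant".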
Calcium Regulation of TRPA1 Activity
Calcium ions seem to play a crucial and dual role in controlling the gating behavior of TRPA1, inducing activation or potentiation as well as inactivation. The molecular basis of Ca++ gating is complex and involves several distinct structural domains of the protein. Several findings indicate that Ca++-mediated TRPA1 activation and potentiation are induced by elevation of the intracellular calcium level [Ca++]i. At micromolar concentrations, [Ca++]i activates TRPA1 in a voltage-dependent way, slowing channel inactivation and causing persistent activation [47,54]. Similarly, elevation of [Ca++]i by thapsigargin (a specific inhibitor of the endoplasmic Ca++-ATPase) or by histamine (which signals to induce Ca++ mobilization from intracellular stores) was also shown to activate TRPA1 [57]. Direct Ca++ binding to an EF-hand motif in the intracellular N-terminus seemed responsible for Ca++-induced activation [47,54]. However, even in mutants deficient for this binding site, some calcium sensitivity remained [23,57].
TRPA1 activation and potentiation also appear to require access of Ca++ to the open channel. In Ca++-free solution, reactive agonists induced a slow-activating, persistent TRPA1 current. The TRPA1 pore mutant D918A, which has strongly reduced Ca++ permeability, is also deficient for calcium-induced potentiation upon increasing the extracellular calcium level ([Ca++]o). In the same mutant, Ca++ sensitivity (activation and inactivation) was rescued by lowering the intracellular calcium chelator EGTA, suggesting that Ca++ entry through the pore leads to channel activation via accumulation of [Ca++]i [57]. In contrast, high millimolar extracellular Ca++ concentrations [Ca++]o led to a fast-inactivating, transient TRPA1 current. Adding back millimolar [Ca++]o to a Ca++-free solution led to a quick change towards fast potentiation and fast inactivation of agonist-induced currents [16,47,57]. As a result, signaling events increasing [Ca++]i have a strong potential to modulate TRPA1 responses. Inflammatory mediators signaling through phospholipase C (PLC) and an increase in cytosolic [Ca++]i, or direct Ca++ influx mediated by TRPV1 and other plasma membrane calcium channels, are all additional means of modulating TRPA1 activity [18,58,59]. In this regard, TRPV1 was proposed to play a direct role in the regulation of TRPA1 activity, altering the open probability, magnitude and voltage dependency of mustard oil-induced TRPA1 currents [31,60]. TRPA1 itself might regulate its own Ca++ gating upon activation by endogenous agonists or environmental irritants, and therefore control nociceptive signaling via a Ca++-dependent biphasic feedback loop [47].
TRPA1 IS A POLYMODAL INTEGRATOR OF NOCICEPTIVE STIMULI

TRPA1 Expression Profile under Basal Conditions
Unlike many other potential targets for analgesia, TRPA1 was originally shown to have a distinct and restricted distribution pattern at peripheral sites of importance for pain processing. Consistent with a role in nociception, TRPA1 is expressed in dorsal root ganglia (DRG) and trigeminal ganglia (TG) neurons of sensory primary afferents [16,31,35,61]. In situ hybridization studies in mice demonstrated TRPA1 mRNA expression in a large number of smaller nociceptive cells (approximately 37% of all TG and 57% of all DRG neurons, respectively) [16]. Similar results have been obtained in intact rat lumbar DRG, where approximately 40% of all L5-L6 DRG cells were positive for TRPA1 mRNA. Furthermore, the results indicated that these neurons did not express neurofilament 200 but did express peptidergic markers of nociceptive neurons such as calcitonin gene-related peptide (CGRP), TRPV1 and the NGF receptor TrkA. Immunohistochemistry experiments confirmed TRPA1 protein expression in small peripherin-positive nociceptors [16,35].
From the cell bodies, TRPA1 is trafficked towards the peripheral nerves and central terminals of the primary afferents. In rodents, TRPA1 is expressed by about 50% of small peptidergic C-fiber nociceptors [31,61], although it has also been reported in non-peptidergic nociceptors that bind isolectin B4 (IB4) [62]. In humans, TRPA1 has been detected in the intact peripheral nervous system [63] as well as in lingual nerve neuromas [64].
TRPA1 is also present in non-somatosensory neurons, including vagal fibers [65] and sympathetic neurons [66], and in non-neuronal tissues such as urothelial cells [67,68], basal keratinocytes [63], hair cells in the inner ear [69] and native endothelial cells [70]. The expression and activity profile of TRPA1 in a number of neuronal and non-neuronal locations suggest functional roles for TRPA1 in many physiological contexts beyond pain (see [71] for a recent review).
Activation of TRPA1 by Exogenous Stimuli
TRPA1 is gated by a wide range of environmental irritants, pungent natural compounds and riot control agents, such as mustard oil (AITC), garlic (allicin), ginger, wintergreen, cinnamaldehyde (CA), nicotine and various tear gases, all of which can induce pain, nocifensive behaviors and/or sensory neuron sensitization in animals and man Fig. (2), Table 1 [4,11,58,61,72]. Many structurally unrelated TRPA1 activators are electrophilic in nature and typically activate the channel following covalent binding to specific residues [73]: CA, acrolein, AITC, iodoacetamide and most plant terpenes (e.g. thymol, carvacrol) belong to this category Fig. (1). Using a combination of cysteine-scanning mutagenesis, Ca++ fluorescence assays and electrophysiology, their mechanism of action was shown to occur through covalent binding at specific cysteine residues on the intracellular N-terminal part of the channel. In mice, three of the 31 cysteine residues, C415, C422 and C622 (corresponding to C414, C421 and C621 of the human isoform), turned out to be essential for mediating this covalent channel activation Fig. (1) [73,74]. In agreement, chemically inert analogs of CA or acrolein were inactive at TRPA1, indicating that thiol reactivity is essential within these specific structural scaffolds [74]. Icilin, however, a chemically non-reactive activator, seemed to exert its activity via a thiol-independent mechanism. Depending on the TRPA1 conformation, C622 and C666 might take part in a dynamic network of disulfide bonds with spatially close cysteine residues, in addition to interacting with reactive small-molecule ligands [75]. Upon modeling of the TRPA1 sequence into its low-resolution electron density map, C622 and C666 seemed to sit in a cluster of reactive residues within a linker region between the ARD and the transmembrane portion of the channel. These observations suggest a model in which structural crosstalk between these regions plays a role in channel activation and desensitization [20].
A preclinical animal model of particular interest to pain researchers is the formalin test [76], in which a diluted formalin solution (usually 2%, corresponding to a 0.74% = 246 mM formaldehyde solution) is injected into the hind paw of rodents to produce a standard biphasic pain response pattern. Formaldehyde is also chemically reactive and has been suggested to trigger nocifensive behaviors through its capacity to bind and activate TRPA1. Using calcium imaging and electrophysiology in heterologous cells transfected with TRPA1, formaldehyde was shown to increase [Ca++]i with an estimated EC50 around 350 µM [77], a concentration likely to be exceeded locally in the hind paw skin following injection of a 2% formalin solution (246 mM formaldehyde). Interestingly, nociceptive behaviors during both phases of the formalin test are strongly reduced in TRPA1-knockout mice or by pharmacological blockade of TRPA1 [78].
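The concentration figures above follow from simple dilution arithmetic, sketched below. The 37% w/v formaldehyde content of stock formalin and the molar mass of 30.03 g/mol are standard values assumed here, not quoted from the review.

```python
FORMALDEHYDE_MW = 30.03   # g/mol
STOCK_W_V = 37.0          # stock formalin: 37 g formaldehyde per 100 mL (w/v)

def formaldehyde_mM(formalin_percent: float) -> float:
    """Formaldehyde concentration (mM) of a diluted formalin solution."""
    # x% formalin contains x/100 * 37 g formaldehyde per 100 mL, i.e. *10 per litre
    grams_per_litre = formalin_percent / 100.0 * STOCK_W_V * 10.0
    return grams_per_litre / FORMALDEHYDE_MW * 1000.0

# 2% formalin is thus 0.74% w/v formaldehyde, i.e. about 246 mM --
# roughly 700-fold above the ~350 uM EC50 estimated for TRPA1 in vitro.
conc_mM = formaldehyde_mM(2.0)
```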
In addition to formalin, AITC and CA are the most characterized and frequently used TRPA1 agonists in experimental pain models. AITC activates C-mechano-heat nociceptors (CMHs), leading to neuropeptide release and causing vasodilation, hyperalgesia and pain [4,66]. AITC (mustard oil) has therefore been used in experimental preclinical and clinical pain studies for many years [79][80][81][82]. Its application to the human skin elicits a sharp, burning pain sensation, followed by the development of sensitization, which is characterized by hyperalgesia (increased pain sensation to a normally painful stimulus) and allodynia (pain elicited by a non-painful stimulus) [81]. In various in vitro protocols, AITC activates human and rat TRPA1 at low micromolar concentrations (EC50 = 1.9-33.5 µM) (Table 1, Fig. (2)) [83]. Specificity of AITC for TRPA1 has been demonstrated in vivo using genetic deletion of TRPA1 in mice, where the AITC-induced pain response and inflammation were lost [17,18]. In vitro, AITC also activates TRPV1, although at much higher concentrations (EC50 = 1.8 mM) [84]. CA is well known for its use in food and as a fragrance additive, and its physiological effects are therefore well documented. In vitro reports have shown that CA activates TRPA1 in the micromolar range (EC50 = 6.5-19 µM) [83]. In healthy volunteers, topical application of a 0.2-10% (1.5-75 mM) CA solution to the forearm caused spontaneous burning pain, hyperalgesia and neurogenic inflammation [85]. The exact concentration of CA in the dermis following such an application is not known, but it cannot be excluded that it reached a level at which CA activates other thermo-TRPs in addition to TRPA1. Indeed, in recombinant systems CA is reported to block TRPM8 at millimolar concentrations (EC50 = 1.5 mM) and to activate TRPV3 in the same range [86]. o-Chlorobenzylidene malononitrile (CS) has been used as a tear gas by military and police forces worldwide for decades. Agents such as CS are electrophilic chemicals; their incapacitating effects result from instant pain and irritation of the eyes, excessive tearing and cramping of the eyelids (blepharospasm). Recently, TRPA1 was established as the molecular target for CS, which was shown to be a selective and potent agonist of human TRPA1 with an EC50 of 0.9 nM (calcium fluorescence) [87,88]. Similar values were obtained for the rat isoform (EC50 = 0.7 and 0.2 nM at human and rat TRPA1, respectively; E. Nyman, AstraZeneca R&D, unpublished data). Pain responses to CS have been assessed in humans following application to the tongue and cornea in healthy volunteers [89]. The sensory detection threshold after ocular administration was 730 nM, with an EC50 of 3.2 µM for blepharospasm. Administration on the tongue yielded an EC50 of 6.8 µM for a painful stinging and burning sensation. In rats, intradermal injection of CS (5 µL, 160 nM-16 mM) produced a dose-dependent increase in nocifensive behaviors (lifting, shaking, biting and licking of the injected paw) as well as heat hyperalgesia and mechanical allodynia (G. Martino, AstraZeneca R&D, unpublished data).
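For orientation, the EC50 values quoted in this section can be plugged into a simple Hill (logistic) concentration-response model. This is a didactic sketch only: the Hill coefficient n = 1 is an assumption, and real TRPA1 responses are further shaped by desensitization and calcium modulation.

```python
def fractional_response(conc: float, ec50: float, n: float = 1.0) -> float:
    """Hill equation: E/Emax = C^n / (C^n + EC50^n); conc and ec50 in same units."""
    return conc**n / (conc**n + ec50**n)

# At 10 uM AITC, TRPA1 (EC50 ~1.9 uM) would be ~84% activated under this model,
# while TRPV1 (EC50 ~1.8 mM = 1800 uM) would sit below 1% -- illustrating why
# low-micromolar AITC is treated as TRPA1-selective in vitro.
aitc_on_trpa1 = fractional_response(10.0, 1.9)
aitc_on_trpv1 = fractional_response(10.0, 1800.0)
```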
The Question of TRPA1 as Transducer of Noxious Cold
TRPA1 involvement in the cold sensitivity of sensory neurons is still debated [4,31,58,90].
Experiments using patch clamp and calcium fluorimetry in recombinant systems provided the first evidence that TRPA1 is a sensor of noxious cold. In recombinant systems, the activity of TRPA1 was markedly enhanced at temperatures below 17 °C or in the presence of icilin [31], suggesting that the sensation of cold temperatures, and the cold perception elicited by chemicals, are mediated via TRPA1.
In contrast, data from different groups studying the role of TRPA1 as a noxious cold sensor in native systems of cultured sensory neurons failed to link the function of TRPA1 to the transduction of noxious cold [4,16]. As described above, calcium is an important modulator of TRPA1 that is required for the full agonist-evoked response but is also involved in channel desensitization. Notably, it was suggested that differences in calcium conditions could partly explain the differing reports of TRPA1 activation by cold. In recombinant systems, such activation may occur indirectly as a result of increased [Ca++]i upon cooling, as reported in both TRPA1-expressing and control HEK293 cells [54]. In contrast, noxious cooling did not evoke unspecific [Ca++]i increases in sensory neurons, possibly impairing calcium-induced activation, which could in turn explain why cold activation of TRPA1 was not observed in intact native cells [91]. Yet uncertainty remains, since single-channel recordings demonstrated that TRPA1 could be activated by cold stimulation even under calcium-free conditions [92]. Recent data confirmed and extended support for TRPA1-mediated cold sensitivity by showing voltage dependence while still under calcium-free conditions [40]. Differences in the length of cold stimulation across individual protocols might explain the discrepancies amongst the various cold studies using whole-cell patch clamp or single-channel recordings [6,54,93].
Using TRPA1-deficient mice in behavioral studies, two groups came to conflicting conclusions, reporting either no difference between the KO mice and the controls [18] or a significant reduction of nociceptive behavior in the KO following acute noxious cold stimulation (acetone cooling and cold plate) [17]. The specific reasons for these apparent contradictions remain unknown; however, here again, methodological differences are likely to be important [77].
TRPA1 and Sensory Mechanotransduction
The role of TRPA1 as a detector of mechanical stimuli remains controversial, yet a range of evolutionary evidence showed, for example, that a TRPA1 worm ortholog was sensitive to mechanical pressure [94]. In addition, Drosophila deficient in a specific TRPA1 homolog encoded by the painless gene (EP(2)2451) showed a decreased response to intense mechanical stimuli [95]. TRPA1-deficient mice displayed a decreased behavioral response to punctate mechanical stimuli in the noxious range [96], although this was not observed in another study [18], and a markedly reduced firing of C-fibers following noxious mechanical stimulation [19]. In agreement, the selective TRPA1 antagonist HC-030031 also significantly reduced mechanically evoked action potential firing in rodent C-fibers, particularly at high-intensity forces [97]. With regard to the expression profile, TRPA1 is also present in epidermal keratinocytes, part of the mechanotransduction system [19]. Collectively, the data suggest a role for TRPA1 in mechanical transduction through selective modulation of subtypes of mechanosensitive afferents.
TRPA1 Responds to a Variety of Noxious Endogenous Reactive Compounds
In addition to being activated by exogenous physical stimuli and noxious chemicals, TRPA1 also responds to a variety of endogenous reactive compounds, including ketoaldehydes, cyclopentenone prostaglandins and reactive oxygen species released after tissue injury and during inflammation. These endogenous mediators act on nociceptive sensory nerve endings, suggesting that TRPA1 is a key sensor of tissue damage related to inflammation and oxidative stress.
In response to oxidative stress, a number of reactive electrophilic ketoaldehydes are formed via lipid peroxidation. Among those, 4-oxononenal (4-ONE), derived from oxidized ω-6 polyunsaturated fatty acids such as arachidonic and linoleic acid, can form stable Michael adducts with cysteine and lysine residues of proteins. In turn, 4-ONE is broken down enzymatically into other reactive metabolites such as 4-hydroxynonenal (4-HNE) [98]. In vitro, 4-ONE and 4-HNE have been shown to activate recombinant or native (DRG neuron) TRPA1 channels [74,[99][100][101][102] and, when injected into rodents, to induce TRPA1-dependent nocifensive behaviors as well as mechanical and cold hypersensitivity [100,103,104], although some components of the responses were also shown to be TRPA1-independent. In osteoarthritis (OA) pain patients, synovial levels of 4-HNE are significantly increased [105,106], and growing evidence supports a role for 4-HNE as a pathophysiological modulator in cartilage degradation [100,101,106,107]. Whether 4-HNE plays a direct role in triggering pain in OA remains speculative. Yet the fact that TRPA1 can be detected in the human synovial lining, and that 4-HNE accumulates during oxidative stress to reach local concentrations (~5 mM) greatly exceeding its EC50 at TRPA1 (~50 µM), would support the concept [100,108,109].
During inflammation, arachidonic acid is converted by cyclooxygenases into prostaglandins (PGs), which contribute to inflammatory pain and hyperalgesia through a direct action on their receptors. In addition, PGs can be transformed into electrophilic compounds in vivo [110]. In particular, cyclopentenone ring-containing A- and J-series prostaglandins are formed as non-enzymatic dehydration products of PGE2 and PGD2, respectively, and can be detected in humans (see [111] for a review). Among them, 15-deoxy-Δ12,14-prostaglandin J2 (15-dPGJ2), a PGD2 metabolite with an α,β-unsaturated carbonyl moiety that can form Michael adducts, has been shown to activate TRPA1 in HEK cells as well as in mouse DRG and trigeminal neurons, an effect that was absent in TRPA1-deficient animals [112,113] and could be blocked by the TRPA1 inhibitor HC-030031 [114]. Furthermore, when injected into the skin, 15-dPGJ2 evokes acute nociceptive behaviors in rodents via a TRPA1-dependent mechanism [101,112,113], indicating an additional role of PGs in inflammatory pain via TRPA1-induced nociceptor sensitization.
Nitric oxide (NO), a signaling molecule generated from arginine and oxygen by nitric oxide synthases, is involved in various biological processes including vascular signaling, immune responses and neurotransmission. NO is algogenic in humans and plays an important role in pain sensitization caused by inflammation and injury in animal models [115][116][117]. In addition to stimulating the cyclic guanosine monophosphate pathway (which in turn modulates a variety of downstream signaling targets), NO also forms stable adducts with cysteine residues and has been shown to activate both TRPA1 and TRPV1 in heterologous systems and in cultured primary sensory neurons [118][119][120][121]. In behavioral assays, peripheral NO-induced nociception was compromised when TRPV1 and TRPA1 were both deleted, providing genetic evidence that the peripheral nociceptive action of NO is mediated by both TRPV1 and TRPA1 [121].
TRPA1 Expression and Function are Altered by Inflammation
TRPA1 is not only activated by endogenous proalgesic agents from the inflammatory soup; it also undergoes an inflammation-dependent modulation of its expression and function (Fig. 1).
As mentioned above, [Ca++]i gating of TRPA1 channels can be strongly modulated downstream of receptor pathways that are activated by inflammatory mediators such as neurotrophins, bradykinin, tryptase, etc. [4,18,58,131,132], as well as by its own activity, resulting in an increase in nociceptor excitability. For instance, neurotrophins like nerve growth factor (NGF) and the glial cell line-derived neurotrophic factor (GDNF) family of growth factors are released in inflamed tissue where they participate in nociceptor activation and sensitization [124,125,133,134]. Some effects of the neurotrophins are believed to occur via alteration in expression and functional sensitivity of pain-transducing receptors including not only TRPV1 or B1 [124,[135][136][137] but also TRPA1 [63,123,126,138,139] (however, see [140]), although the ability and extent of the growth factors to potentiate TRPA1 (and TRPV1) responses seem to vary depending on the type of tissue studied (skin, muscle or colon) [125].
Bradykinin (BK) is another mediator produced in response to tissue injury, inflammation, or ischemia that activates the PLC/PKC signaling pathway, causing the release of calcium from intracellular stores. BK elicits immediate excitation of nociceptors, followed by a longer-lasting sensitization to thermal and mechanical stimuli [141,142]. The BK-evoked nociceptor excitation and thermal and mechanical hyperalgesia were strongly reduced in TRPA1-deficient mice and attenuated (mechanical hyperalgesia) by the TRPA1 antagonist AP18 [143], suggesting that TRPA1 activation mediated, or at least contributed to, the acute pain and hyperalgesia caused by BK.
Likewise, activation by tryptase and trypsin of the proteinase-activated receptor-2 (PAR2), which largely co-expresses with TRPA1 in rat DRG neurons, has been shown to functionally sensitize TRPA1 in heterologous systems and DRG cells. This effect could be blocked by PLC inhibitors or mimicked by decreasing plasma membrane PIP2 levels through antibody sequestration or PLC activation, and was confirmed in vivo by using the PAR2 agonist SL-NH2 at a subinflammatory dose that nonetheless led to an increase in AITC- or Ca++-evoked nocifensive behavior in rats [131].
Finally, the enhanced translocation of TRPA1 to the membrane of sensory neurons triggered by inflammation, resulting in higher amounts of functional TRPA1 channels, could be another effective means to regulate the sensitivity of nociceptors to TRPA1 agonists and may represent one of the mechanisms controlling TRPA1 function in response to acute activation and inflammatory signals [140].
Blocking TRPA1 is Analgesic or Antihyperalgesic in Skin, Joint and Visceral Models of Inflammatory Pain
In addition to small-molecule TRPA1 antagonists such as HC-030031, AP-18 and A-967079, which all emerged from drug discovery efforts, endogenous inhibitors of TRP channels have recently been identified. Resolvins, such as resolvin D1, D2 (RvD1, RvD2) and E1 (RvE1), are lipid mediators biosynthesized during the resolution phase of acute inflammation from ω-3 polyunsaturated fatty acids. They display potent pro-resolving and anti-inflammatory actions (see review in [144]) and have also proven to be very potent inhibitors of the TRPA1 (RvD1, RvD2) and TRPV1 (RvE1, RvD2) channels [145,146].
The ability of TRPA1 blockade by small-molecule antagonists, antisense oligonucleotides and resolvins to reduce signs of hypersensitivity has been explored in various rodent models of inflammatory pain using several endpoints. For instance, the mechanical hypersensitivity induced by intraplantar injection of FCA, carrageenan or the major pro-inflammatory cytokine TNFα, as well as by intra-articular injection of FCA, was reduced by systemic administration of HC-030031 and A-967079 at plasma exposures believed to engage TRPA1 [147,148] or upon local injection of AP-18, RvD1 or RvD2 [143,145,146,149], an effect that was abolished in TRPA1-deficient mice [143,145]. However, TRPA1 antisense administration did not reduce mechanical hypersensitivity even though it had an effect on cold allodynia [123]. In a model of osteoarthritis induced by intra-articular injection of monosodium iodoacetate (MIA), the results were inconsistent: A-967079 was reported to reverse the MIA-induced reduction in grip force [148] whereas HC-030031 failed to reverse the shift in weight bearing or to block place-preference elicited by intra-articular lidocaine (used as a measure of ongoing pain) [148,150], although both compounds were used at doses shown to reduce AITC-induced nocifensive behaviors. Systemic HC-030031, locally administered AP18, or transient knock-down of TRPA1 reduced FCA-induced cold hyperalgesia in rats [104,123,143,151] without affecting acute noxious cold sensation. Heat hyperalgesia, on the other hand, was affected neither by AP18 nor by TRPA1 antisense administration [143,145,151]. The positive effect on this symptom observed after RvD1 or RvD2 administration may be attributed to inhibition by the resolvins of thermo-TRPs like TRPV3 and TRPV1 [145,146,152].
Interestingly, although both wild-type and TRPA1-deficient mice developed mechanical hyperalgesia 24 hours after FCA injection, only the wild-type mice displayed a sustained mechanical hyperalgesia for 3 weeks following FCA [149]. Thus, endogenous activation of peripheral TRPA1 appears to play a key role in the long-lasting mechanical hyperalgesia observed after intra-articular injection of FCA, while, in the early stage of the inflammatory insult, compensatory mechanisms in TRPA1-deficient mice would mask the TRPA1 requirement [143]. In agreement, the nociceptive and hyperalgesic responses measured within 24 hours after induction of inflammation were reduced, but not abolished, by TRPA1 antagonists [104,123,147,149,151]; but see [143]. Similarly, TNFα-induced hyperalgesia was reduced but not suppressed in TRPA1-deficient mice, suggesting that mechanisms other than TRPA1 activation also contributed to the early AITC-, FCA- or TNFα-induced nociceptive responses [149].
Finally, in a mouse model of pancreatitis caused by repeated injection of cerulein, HC-030031 and the TRPV1 antagonist AMG 9810 synergistically reversed the change in exploratory behavior (believed to reflect pancreatitis-induced discomfort and pain), suggesting that reagents targeting both channels could be worth exploring in acute pancreatitis pain [128].
Although some inconsistencies exist (likely affected by differences in methodologies), several lines of evidence support a critical role for endogenous activation of peripheral TRPA1 in the development of mechanical and cold hyperalgesia following tissue injury and inflammation. TRPA1 is up-regulated during inflammation, directly activated and indirectly sensitized by endogenous inflammatory mediators.
Thus, blocking of TRPA1 activity appears to be a relevant mechanistic approach to decrease mechanical hyperalgesia, a key symptom that contributes to movement-evoked pain in patients suffering from chronic nociceptive or inflammatory pain. Another major complaint, notably from OA patients, is pain at rest, which presumably depends on sustained activity in nociceptive fibers. The reduction in spontaneous firing of spinal wide-dynamic-range (WDR) neurons after intra-articular administration of FCA, and in exploratory behavior in a model of pancreatic pain, points towards a role for TRPA1 in ongoing pain during overt inflammation [128,153]. In contrast, when inflammation dissipates, as in the chronic phase of MIA-induced arthritis, TRPA1 antagonists did not inhibit the spontaneous firing of spinal neurons or a measure of ongoing pain [150,153], indicating that they might not be very effective on pain at rest in less inflammatory conditions like advanced OA.
In summary, inhibiting peripheral activation/sensitization of nociceptor afferent endings by blocking TRPA1 channels appears to be an attractive approach to relieve some of the symptoms in patients suffering from acute or chronic nociceptive inflammatory pain conditions, including arthritic pain.
Role of TRPA1 in Neuropathic Pain
While the role of TRPA1 in inflammatory pain is supported by a wealth of concurring evidence, its contribution to the pathophysiology of neuropathic pain is more debatable.
Alteration in TRPA1 Expression and Function after Nerve Injury
In animals, the changes in TRPA1 expression profile were variable depending on the specific model of nerve injury studied. For instance, after chronic constriction injury of the sciatic nerve (CCI), TRPA1 mRNA levels in rat DRG neurons were reported to be slightly but significantly increased at day 7 post-surgery [154] or left unchanged from day 2 to 28 (R. Grant, AstraZeneca R&D, unpublished data). In mice, the expression was instead significantly reduced in DRGs at days 7 and 14, in agreement with a reduction in the number of AITC-sensitive cells [62]. In the rat spared nerve injury (SNI) model, the TRPA1 gene was down-regulated at 4 and 15 days post-surgery and returned to normal or above normal at 3 months [155]. In the mouse partial nerve injury model (Seltzer method), TRPA1 mRNA levels were strongly reduced in L4 and L5 DRGs at times when mechanical allodynia could be observed [155]. A reduction in TRPA1 level was also observed 7 days after complete sciatic nerve transection [155]. In the spinal nerve ligation (SNL) model, Noguchi's group reported an increase in TRPA1 mRNA expression in the uninjured trkA-expressing small-to-medium L4 DRG neurons 1-14 days after L5 spinal nerve ligation, aligned with the development and maintenance of cold hyperalgesia in the hind paw [123,156]. Immunohistochemical studies confirmed TRPA1 up-regulation in the L4 DRG and trafficking of the channels to small-diameter myelinated and unmyelinated peripheral terminals. In agreement, the percentages of AITC-sensitive L4 DRG cells and of peripheral Aδ-fibers were increased in SNL compared to sham and naive rats, suggesting that following nerve injury TRPA1 is up-regulated on intact Aδ-fibers, then contributing to cold hypersensitivity [157]. In contrast, TRPA1 expression was decreased in the injured L5 DRG cells in rats and in five inbred mouse strains [158]. Overall, it appears as if TRPA1 is down-regulated in injured neurons and up-regulated in neighboring uninjured neurons, the net effects
being more obviously detected after complete sciatic nerve transection or in the SNL model. Indeed, in the SNL model, all neurons from the L5 DRG are injured and all those from the L4 are spared, whereas in CCI the injured neurons distribute across several ganglia. Up-regulation of TRPA1 has also recently been reported in the streptozotocin-induced model of diabetic neuropathy in rats at time points when cold hyperalgesia was observed [159].
In humans, expression of TRPA1 was increased in small- to medium-sized DRG neurons obtained from patients having undergone brachial plexus repair following avulsion injury [63]. However, investigating the potential correlation between TRPA1 expression and the presence or absence of neuropathic pain symptoms, Morgan et al. [64] showed no significant difference between levels of TRPA1 in lingual neuromas from patients with or without symptoms of dysaesthesia, and no relationship either between TRPA1 expression and VAS scores for pain, tingling or discomfort. Furthermore, despite a net decrease of TRPA1 mRNA levels observed in some nerve injury models, animals still displayed neuropathic-like signs of hypersensitivity. Therefore, it cannot be concluded that neuropathic pain symptoms in rodents or humans are unequivocally linked to changes in TRPA1 channel expression.
Sensitization of the TRPA1 channel could also contribute to the development of neuropathic pain symptoms following nerve injury, as it does during inflammation. For instance, in the rat SNL model, NGF is synthesized and released in the degenerating L5 nerve fibers and acts upon nearby sensory fibers, potentially inducing TRPA1 up-regulation in the intact L4 DRG, thus increasing cold hyperalgesia [123]. In agreement, the magnitude of the response to AITC of L4 DRG cells and Aδ-fibers in the periphery is significantly larger in SNL compared with sham and naive rats [157]. Also, in a mouse model of chemotherapy-induced neuropathic pain, repeated administration of paclitaxel has been shown to activate PAR2 and the downstream enzymes PLC, PKCε, and PKA, resulting in the sensitization of TRPA1 along with TRPV1 and TRPV4. In turn, blocking downstream signaling pathways of PAR2 or the TRP channels attenuated paclitaxel-induced mechanical, heat, or cold hypersensitivity [160].
Blocking of TRPA1 Reverses Hypersensitivity in Rodent Models of Neuropathic Pain
The potential role of TRPA1 activation in the pathogenesis of neuropathic pain has also been addressed in rodents using pharmacological tools and antisense knock-down. For instance, cold hypersensitivity was reduced by HC-030031 in the SNI model as well as in paclitaxel- or oxaliplatin-induced neuropathy models [104,148,159,161,162], by A-967079 in the CCI model [104,148,159,161,162] and following intrathecal administration of TRPA1 antisense in the rat SNL model [123,156]. Importantly, HC-030031 and A-967079 had no effect on noxious cold sensation in naive animals, suggesting distinct roles of TRPA1 in physiological and pathological states [104,148,159,161,162]. Similarly, mechanical hyperalgesia observed following SNL, repeated paclitaxel or acute oxaliplatin administration, or in diabetic rats, was reduced by HC-030031 or a close analogue [147,[161][162][163]. In agreement, oxaliplatin-induced mechanical and cold allodynia were absent and cisplatin-evoked mechanical allodynia was reduced in TRPA1-deficient mice [104,148,159,161,162]. Paclitaxel-induced heat hyperalgesia was also reduced by HC-030031 [160].
Collectively, the data suggest that TRPA1 antagonism is a potential approach to treating symptoms of neuropathic pain. However, inconsistencies exist and the strength of evidence is so far weaker than for inflammatory pain.
Human Data Supporting TRPA1 Role in Pathological Pain
Recently, a human heritable pain syndrome has been linked to a gain-of-function mutation in the TRPA1 channel. Familial Episodic Pain Syndrome is characterized by episodes of debilitating upper-body pain, triggered by fasting and physical stress, although baseline sensory thresholds are normal. Enhanced secondary hyperalgesia to punctate stimuli upon treatment with AITC was observed. The mutated channels show a 5-fold increase in inward current on activation at normal resting potentials and, like the wild-type channel, can be activated by Ca++, cinnamaldehyde, 4-HNE and cold, and blocked by HC-030031 [164].
Emerging Role of Spinal TRPA1 Channels in Pain Pathophysiology
The main lines of evidence supporting the role of TRPA1 in pain pathophysiology are based on its expression on peripheral nerve fibers. However, recent findings indicate that presynaptic activation of TRPA1 channels expressed on central terminals of primary afferents enhances glutamate release and facilitates excitatory transmission in the substantia gelatinosa between primary afferents and projection neurons [165], thereby contributing to inflammation- or nerve injury-induced hypersensitivity. The effects of intrathecal administration of RvD1, RvD2, A-967079 or an analogue of HC-030031 on mechanical hypersensitivity in various inflammatory or neuropathic pain models [146,163,166,167], as well as on spontaneous excitatory postsynaptic currents and C-fiber-evoked long-term potentiation in the spinal cord [146], suggest an additional beneficial site of action in the spinal cord for CNS-penetrant TRPA1 antagonists. On the other hand, a recent report attributes the antinociceptive activity of acetaminophen to the agonistic properties of two of its metabolites on TRPA1 channels in the spinal cord and proposes spinal TRPA1 activation as a potential pharmacological strategy to alleviate pain [168]. Further studies are warranted to understand this apparent discrepancy.
A ROLE FOR TRPA1 IN SENSITIZATION OF VAGAL SENSORY NEURONS INNERVATING THE AIRWAYS
Asthma is a chronic condition of airway inflammation. It is caused by a combination of genetic and environmental factors, including exposure to irritants and allergens that trigger uncontrolled inflammatory reactions and dramatic sensitization of the airway sensory neurons. Common treatments involve beta-2 adrenergic agonists and corticosteroids, generally given in combination so as to maximize efficacy and limit the risks associated with standalone beta-2 adrenergic or corticosteroid treatments [169,170]. Beta-2 adrenergics contribute to smooth muscle relaxation, vasodilation and subsequent dilation of the airways, whereas corticosteroid drugs bind to intracellular receptors signaling to response elements that either up- or down-regulate inflammation-induced gene expression (up-regulation of annexin A1; down-regulation of TNFα, GM-CSF, chemokines and interleukins) [171].
Due to its expression in vagal sensory neurons innervating the airways, its gating by inflammatory mediators and the ability of its agonists to evoke coughing in animals and humans, TRPA1 has been proposed as the target of pro-tussive agents. Thus, TRPA1 blockers may find application in the treatment of allergic airway inflammation, asthma and COPD [172][173][174][175]. For instance, in models of acute asthma, the typical allergen-induced responses leading to airway hyperreactivity, such as leukocyte extravasation, mucus production and cytokine and chemokine levels, were all significantly lowered in TRPA1-deficient, but not in TRPV1-deficient, mice. A similar phenotype in rodents was also observed following pharmacological blockade using HC-030031 [176].
POTENTIAL ROLE OF TRPA1 IN ITCH
Itch, or pruritus, is defined as an unpleasant sensation that elicits the desire or reflex to scratch. As such, itch serves a protective role by warning against harmful agents in the environment. However, when it accompanies conditions such as chronic skin and systemic disorders, atopic dermatitis, psoriasis, renal or liver failure, peripheral neuropathy, etc., itch becomes chronic, causing debilitating sensory experiences with many similarities to pain. Complex interactions exist between pain and itch, both sensations sharing at least in part the same neural pathways, peripheral mediators, pathophysiological mechanisms, central processing and even treatments (see [177] for a review). Histamine is a well-known pruritogen which is mainly released by skin mast cells in response to external stimuli. Histamine-dependent itch in humans can be effectively blocked by histamine receptor antagonists and involves activation of TRPV1. Chronic itch, by contrast, is insensitive to antihistamine treatment and represents a significant unmet medical need. Recent findings have demonstrated that TRPA1, but not TRPV1, is a downstream transduction channel of histamine-independent chronic itch [178]. Oxidative stress also induces profound scratching behaviors, which are largely histamine- and TRPV1-independent, but TRPA1-dependent. Antioxidants and TRPA1 antagonists have shown efficacy in itch models in mice and may potentially be used to treat oxidative stress-induced itch conditions in humans too [179]. The recent demonstration by Liu et al. that oxidant-induced scratching behaviors were prevented in mice treated with the TRPA1 antagonist HC-030031, or in TRPA1-deficient animals, supports this hypothesis.
TRPA1 POLYMODAL ACTIVITY AND IN VITRO ASSAY DESIGNS FOR DRUG DISCOVERY
Several groups have been involved in exploring drug discovery and development projects targeted at TRPA1. In the majority of cases, the primary indications are various types of pain and respiratory disorders (COPD and asthma) [180]. Here we summarize results from various in vitro assays, mostly fluorescence-based and electrophysiology, which were used in screening and mode-of-action studies. For ease of reference for readers interested in assay designs, these parts are separated by assay type, including references to the structural determinants underlying the mode of action whenever possible.
Fluorescence-based Assays
A straightforward and common way to characterize compound activity is Ca++ imaging on multiwell-plate readers like the FLIPR™ or Hamamatsu™ instruments. In addition to calcium imaging, assay designers at RedPoint Bio Corporation used hTRPA1 recombinant cells to set up assays based on a fluorescent membrane potential probe, and used these to characterize direct thymol activation of hTRPA1 and its blockade by camphor [181]. From its structure, thymol is not expected to behave as an electrophile under physiological conditions and may not exhibit the typical traits of covalent-binding activators. At low micromolar concentrations, thymol is a pure agonist, whereas at higher concentrations it starts to act as a functional antagonist, perhaps only inducing rapid channel desensitization [44]. Indeed, a feature of some typical TRPA1 activators like AITC is that they tend to cause rapid desensitization of the channel, especially in the presence of physiological concentrations of calcium; this can make the activators functionally behave as antagonists [182,183]. In fluorescence screens, where the time resolution is limited, agonist-induced desensitization is a serious issue, as it is often difficult to distinguish at first pass an antagonist from an agonist causing rapid desensitization. The timeframe during which a given reference agonist (like AITC) will induce continued activation of the channel is a critical parameter in the design of reliable screening assays. Recently, using fluorescence screening, a team from Pfizer identified a new activator compound that did not covalently bind TRPA1 through cysteine binding, and that proved to be a potent and selective activator of both rat and human TRPA1 in vitro (PF-4840154) [184]. Out of a panel of 100 targets, this new tool only showed micromolar activity at the dopamine D3 receptor, the noradrenaline transporter and the sigma opioid receptor. Although PF-4840154 significantly blocked the hERG channel (IC50 580 nM), it was inactive at hTRPV1, hTRPV4,
hTRPM8, and other established pain targets. These characteristics make it a superior alternative for setting up in vitro assays on mouse and human TRPA1. Notably, upon intraplantar injection, PF-4840154 induced potent nocifensive behaviors in wild-type mice but not in their knock-out counterparts, behaviors that were fully reversed by pre-treatment with the TRPA1 antagonist HC-030031. Independent of the mechanism involving N-terminal cysteine binding, zinc is a pulmonary irritant that can directly activate TRPA1. Its mechanism of action involves ion influx through the TRPA1 channel followed by interaction with an intracellular C-terminal site (C983, H1021) [185][186][187]. Using FLIPR calcium fluorescence on HEK293 hTRPA1 cells, an EC50 of 2.3 µM was reported for stable Zn activation in intact cells. In addition to zinc, other heavy metals like cadmium and copper were also shown to activate TRPA1 directly and to cause C-fiber sensitization in pulmonary airways; these effects were essentially absent in knock-out mice, yet their specific potencies were not reported for recombinant in vitro TRPA1 systems [188]. The agonist properties of heavy metal ions like zinc make them another class of activators potentially useful for in vitro assay designs.
A recent report described in detail a HEK293 hTRPA1 calcium fluorescence assay based on the Fluo-4 AM probe and the FlexStation3™ reader. The authors used a non-steroidal anti-inflammatory drug, flufenamic acid (FFA), as reference agonist to activate the channel (EC50 = 55.4 µM) [189]. FFA has been reported as a non-electrophilic and non-covalent ligand for TRPA1, TRPM8, TRPV1 and TRPV3. They suggested that it has potential as a structural starting point for the development of a new class of reversible, non-reactive and non-volatile TRPA1 ligands. Regardless of the background cell line, WI-38 or inducible HEK, the potency of FFA appeared consistent when determined by calcium fluorescence on the human isoform (EC50 = 24-57 µM). Its potency at the human isoform was within a 3-fold range of that at the rat isoform expressed in oocytes, where FFA seemed to show very limited voltage dependency of TRPA1 current activation (EC50 = 78 µM at -100 mV to 148 µM at +100 mV) [190].
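As a sketch of how an EC50 such as the FFA value above can be extracted from plate-reader data, the following hypothetical example simulates normalized fluorescence responses with the Hill model and recovers the half-maximal concentration by log-linear interpolation. The dilution series, the EC50 of 55 µM and the Hill slope of 1.2 are illustrative assumptions for this sketch, not values from [189].

```python
import math

def hill(conc, ec50, n):
    """Normalized response (0..1) from the Hill equation."""
    return conc**n / (conc**n + ec50**n)

# Hypothetical 8-point half-log dilution series (µM) and simulated responses
concs = [1, 3, 10, 30, 100, 300, 1000, 3000]
responses = [hill(c, ec50=55.0, n=1.2) for c in concs]

def estimate_ec50(concs, responses):
    """Log-linear interpolation of the concentration giving a 0.5 response."""
    for (c1, r1), (c2, r2) in zip(zip(concs, responses),
                                  zip(concs[1:], responses[1:])):
        if r1 <= 0.5 <= r2:
            frac = (0.5 - r1) / (r2 - r1)
            return 10 ** (math.log10(c1)
                          + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("0.5 response not bracketed by the dilution series")

print(f"Estimated EC50 ~ {estimate_ec50(concs, responses):.0f} µM")  # ≈ 55 µM
```

In practice a four-parameter logistic fit over all points would be preferred, but the interpolation above illustrates why the dilution series must bracket the half-maximal response.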
Electrophysiology Profiling Assays
Transfected Xenopus laevis oocytes are a common transient expression system used in electrophysiological studies of ion channels. The large size of the oocytes (about 1.0 mm) eases cell handling, and the robust protein synthesis machinery of the tetraploid organism usually results in efficient surface expression of the channel of interest. Oocyte-based assays using temperature ramps were used to characterize the basis of heat/cold sensitivity and its dependence on the ARD across various species. Oocyte recordings on TRPA1 from rat, snakes, humans and their chimeras led to the characterization of two functionally different modules along the ARD that are responsible for the differential thermosensation across species (see above) [24].
With regard to automated electrophysiology, Sophion and Scottish Biomedical have reported a detailed protocol for stable recording of TRPA1 currents using proprietary HEK293 recombinant cells held at -60 mV with a voltage ramp ranging from -100 mV to +50 mV in a 400 ms pulse. The use of nominally calcium-free solutions and supercinnamaldehyde as a reference agonist seems to slow or stabilize channel desensitization, and allowed stable outward current recording after a short interval during which the current needed to reach steady-state levels [191].
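The ramp protocol described above can be sketched as a command waveform. The 20 kHz sampling rate and the 50 ms pre-/post-ramp holding segments below are illustrative assumptions, not parameters from the cited protocol; only the -60 mV holding potential and the -100 to +50 mV, 400 ms ramp come from the text.

```python
def ramp_protocol(hold_mv=-60.0, start_mv=-100.0, end_mv=50.0,
                  ramp_ms=400.0, hold_ms=50.0, rate_khz=20.0):
    """Build a holding / step-ramp / holding voltage command (values in mV)."""
    n_hold = int(hold_ms * rate_khz)
    n_ramp = int(ramp_ms * rate_khz)
    step = (end_mv - start_mv) / (n_ramp - 1)
    return ([hold_mv] * n_hold
            + [start_mv + i * step for i in range(n_ramp)]
            + [hold_mv] * n_hold)

wave = ramp_protocol()
# 1000 holding samples at -60 mV, 8000 ramp samples, 1000 holding samples
print(len(wave), wave[1000], wave[-1])  # → 10000 -100.0 -60.0
```

Such a waveform is what an automated patch-clamp platform would replay on each sweep; the step from -60 mV down to -100 mV before the ramp is part of the pulse itself.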
In the context of drug discovery projects, it will be important to identify whether endogenous TRPA1 activators have any potential to trigger pore dilation like that observed with AITC (see above), or to cause a leftward shift in the appearance of the TRPA1 outward K+ current towards physiologically relevant depolarized potentials. If so, it will be worthwhile testing whether the large-pore antagonist properties of reference compounds like ruthenium red and HC-030031 are also shared by future TRPA1 drug candidates. Additionally, it will be important to assess whether these are efficient in blocking the inward current (typically detected in Ca++ fluorescence) and the outward current at depolarized potentials in high-Ca++ conditions (typical voltage ramp assay).
Targeting the Temperature Sensitivity of TRPA1 with in vitro Assays
In mammals, TRPA1 orthologs are primarily known as chemosensors and possible cold detectors [31,91]. Yet, TRPA1 evolved into distinct temperature sensors across the animal kingdom, and discrete structural features may underlie most of the species specificities with respect to heat and cold detection [192]. In particular, mammalian and fish TRPA1 typically do not respond to heat, whereas snake, mosquito and Drosophila channels exhibit differential heat-sensitivity thresholds [193][194][195]. At the extreme end of subtle heat detection is TRPA1 from Crotalus atrox, which is able to sense infrared radiation while seemingly showing no sensitivity to cold [196].
Since the discovery that human and rat TRPA1 open at temperatures below 17 °C, assay designers have developed cold-sensing assays to screen and study the mode of action of compounds that modulate this activity [31,92]. A versatile temperature-dependent assay was reported that relied on rapid TaqMan temperature control to trigger thermo-TRP activation; results were shown to reliably translate to native rat dorsal root ganglion neurons in either 384- or 96-well formats [197].
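For readers designing such assays, temperature-dependent gating is often rationalized with a simple two-state (closed/open) thermodynamic model in which the open probability follows a Boltzmann function of temperature. The ΔH and ΔS values below are arbitrary illustrative numbers chosen so that half-activation falls near the 17 °C threshold mentioned above; they are not measured TRPA1 parameters.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def open_probability(temp_c, dH=-200e3, dS=-689.3):
    """Two-state gating: Popen = 1 / (1 + exp(dG / (R*T))), dG = dH - T*dS."""
    T = temp_c + 273.15
    dG = dH - T * dS  # J/mol; negative dG favors the open state
    return 1.0 / (1.0 + math.exp(dG / (R * T)))

# With a negative enthalpy of opening, cooling increases open probability
for t in (7, 17, 27):
    print(f"{t:2d} C -> Popen = {open_probability(t):.2f}")
```

This toy model reproduces the qualitative behavior expected of a cold-activated channel (high open probability below the threshold, low above it) and shows why a well-controlled, rapid temperature ramp is the key experimental variable in these assays.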
Recent Advances Towards Clinical Concept Testing for Pain and Respiratory Disorders
Glenmark Pharmaceuticals Ltd is developing GRC-17536, an orally available TRPA1 receptor antagonist, for the treatment of neuropathic pain and respiratory disorders (Table 2, Fig. 3). In February 2012, Glenmark announced that GRC-17536 had successfully completed a Phase I trial (single ascending dose and multiple ascending dose), with the drug being well tolerated up to the maximum dose tested. A good pharmacokinetic profile was reported, seemingly devoid of gender or age effects. In March 2012, the company disclosed plans to initiate a 4-week randomized, double-blind, parallel-group, placebo-controlled, proof-of-concept Phase II trial in 55 patients with painful diabetic neuropathy to assess the efficacy, safety and tolerability of GRC-17536. A Phase I/IIa trial in healthy adult volunteers and mild asthma patients using an inhaled version of GRC-17536 was reportedly expected to start in June 2012; the study will also include response to an allergen test [198,199]. In January 2012, Cubist Pharmaceuticals and Hydra Biosciences filed for regulatory approval to initiate human clinical studies with their small-molecule TRPA1 antagonist CB-189,625, in development for the treatment of acute pain and certain inflammatory conditions (Fig. 3, Table 2). The open-label dose-escalation Phase I study to assess the safety and pharmacokinetics of CB-189,625 in healthy volunteers was expected to start in the first quarter of 2012 [200].
In conclusion, TRPA1 has emerged as an attractive molecular target for the development of novel inflammatory and possibly neuropathic pain therapies. In addition, TRPA1 antagonists may also hold promise as treatments for certain respiratory diseases and itch conditions. Various lines of evidence support the therapeutic utility of TRPA1 antagonists, including (1) expression in relevant neuronal populations, (2) demonstration that the channel is activated and modulated by multiple inflammatory mediators and their downstream pathways, (3) the fact that its selective activation causes pain in humans and nocifensive reactions in animals while, conversely, its inhibition reverses signs of inflammatory and neuropathic pain in animals, and (4) its genetic linkage to a painful disease state in man.
Certainly, the chemical properties of existing compounds and the observed species specificity, conditioning compound progression and the choice of the surrogate species, are challenging factors for TRPA1 drug discovery. Therefore, the outcomes of forthcoming clinical proof-of-concept studies will be instrumental in furthering our understanding of the clinical value of this potential new class of analgesic or anti-inflammatory drugs.
Fig. (1). "TRPA1 scheme and major modulators". The 16 ankyrin binding domains are depicted as two stretches of black (regulatory) and grey (enhancer) modules. The Zn-binding domain in the C-terminal part is indicated by a filled star. TRP = TRP box motif. Calmodulin, major reactive cysteines, the glycosylation site and the pore loop (P) are referred to by their respective symbols (see insert). Indirect modulation by GPCR or neurotrophin receptor signaling pathways and interaction with TRPV1 are also indicated to illustrate TRPA1's role as a polymodal integrator in neuron sensitization.
Table 2. Properties of Representative TRPA1 Antagonist Compounds in Preclinical Exploration
Reversed formalin-induced behaviors in mouse at 300 mg/kg i.p. to levels comparable to gabapentin. Rat bioavailability = 37% at 2 mg/kg, reaching 0.72 µmol/L in plasma at 30 min with a half-life of 0.81 h. The compound is brain penetrant, with a brain/plasma ratio of 0.75, and is 1000-fold selective vs. other TRP channels. Reverses AITC-induced nocifensive responses and osteoarthritic pain in rat (ED50 23 mg/kg, p.o.). h = human; r = rat; agonists used in EC50 studies: a AITC, b acrolein, c $02, d BITC, e allyl isothiocyanate.
$N=2$ Super-$W_3^{(2)}$ Algebra in Superfields
We present a manifestly $N=2$ supersymmetric formulation of $N=2$ super-$W_3^{(2)}$ algebra (its classical version) in terms of the spin 1 unconstrained supercurrent generating a $N=2$ superconformal subalgebra and the spins 1/2, 2 bosonic and spins 1/2, 2 fermionic constrained supercurrents. We consider a superfield reduction of $N=2$ super-$W_3^{(2)}$ to $N=2$ super-$W_3$ and construct a family of evolution equations for which $N=2$ super-$W_3^{(2)}$ provides the second hamiltonian structure.
Introduction
In recent years, a wealth of superextensions of nonlinear W algebras has been constructed and studied from different points of view, both at the classical and quantum levels (see, e.g., [1] and references therein). An interesting class of bosonic W algebras is the so-called quasi-superconformal algebras, which include, besides the bosonic currents with the canonical integer conformal spins, currents with half-integer spins [2,3,4]. The simplest example of such an algebra is the Polyakov-Bershadsky $W_3^{(2)}$ algebra [5,6]. It is a bosonic analog of the linear N = 2 superconformal algebra (SCA) [7]: apart from two currents with spins 2 and 1 (the conformal stress-tensor and a U(1) Kac-Moody (KM) current), it contains two bosonic currents with spin 3/2. For the currents to form a closed set (with the relevant Jacobi identities satisfied), the OPE between the spin 3/2 currents must include a quadratic nonlinearity in the U(1) KM current. So $W_3^{(2)}$, in contrast to its superconformal prototype, is a nonlinear algebra.
It is natural to seek supersymmetric extensions of this type of W algebra and to see how they can be formulated in terms of superfields. The first explicit example of such an extension, the N = 2 super-$W_3^{(2)}$ algebra, was constructed at the classical level in [8] (its quantum version is given in [9]). It involves fermionic currents with integer spins 1 and 2 and contains both the N = 2 SCA and $W_3^{(2)}$ as subalgebras. Actually, it can be regarded as a nonlinear closure of these two algebras. 1 Curiously enough, the spin content of the currents of the N = 2 super-$W_3^{(2)}$ algebra is such that they cannot be immediately arranged into N = 2 supermultiplets with respect to the N = 2 SCA which is manifest in the formulation given in [8]. This means that N = 2 super-$W_3^{(2)}$, as it stands, does not admit the standard N = 2 superfield description, in contrast, e.g., to the N = 2 super-$W_3$ algebra [12,13]. One can still wonder whether some other superfield formulation exists, perhaps with composite currents involved. Recall that a superfield description is very advantageous because it radically simplifies computations and allows one to present all results in an explicitly supersymmetric, concise form.
In the present paper we show that the N = 2 super-$W_3^{(2)}$ algebra of ref. [8] admits a nice superfield description with respect to another N = 2 superconformal subalgebra which is implicit in the original formulation. An unusual novel feature of this description is that some of the relevant supercurrents are given by N = 2 superfields subjected to nonlinear constraints. Using the superfield formulation constructed, we demonstrate that the N = 2 super-$W_3$ algebra follows from N = 2 super-$W_3^{(2)}$ by a secondary hamiltonian reduction, just as $W_3$ follows from $W_3^{(2)}$ [14,11]. We also construct a family of N = 2 superfield evolution equations with N = 2 super-$W_3^{(2)}$ as the second hamiltonian structure.
Preliminaries
For the reader's convenience we review here the salient features of the N = 2 super-$W_3^{(2)}$ algebra in terms of component currents [8]. 1 Zamolodchikov's $W_3$ algebra [10] can be nonlinearly embedded into $W_3^{(2)}$ [11], so it also forms a subalgebra in N = 2 super-$W_3^{(2)}$ (in some special basis for the generating currents of the latter).
A powerful method of constructing conformal (super)algebras is the hamiltonian reduction method [16,3,17]. In this approach one writes down a gauge potential A valued in the appropriate (super)algebra g and then constrains some components of A to be equal to constants. From the residual gauge transformations of the remaining components of A one can immediately read off the OPEs of some conformal W (super)algebra, with these components as the generating currents. Since the residual gauge transformations clearly form a closed set, the Jacobi identities of the resulting W algebra prove to be automatically satisfied.
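Schematically, and as a generic summary of the standard construction rather than a formula taken from ref. [8], the residual gauge transformations preserving the constrained potential and the rule for extracting the current OPEs take the form

```latex
\delta_{\Lambda} A \;=\; \partial \Lambda + [\,\Lambda,\, A\,]\,, \qquad
\delta_{\Lambda}\,\phi(w) \;=\; \oint \frac{dz}{2\pi i}\;
\sum_{a} \lambda_{a}(z)\,\phi_{a}(z)\,\phi(w)\,,
```

where $\lambda_{a}$ are the surviving gauge parameters and the singular part of the product $\phi_{a}(z)\phi(w)$ defines the OPEs of the generating currents $\phi_{a}$.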
A straightforward application of hamiltonian reduction to the superalgebra sl(3|2) gives rise to the classical N = 2 super-$W_3$ algebra [12]. In [8] a different choice of constraints was made (it corresponds to a non-principal embedding of sl(2) into the bosonic sl(3) × sl(2) subalgebra of sl(3|2)). The residual gauge transformations of the remaining currents yield just the N = 2 super-$W_3^{(2)}$ algebra we will deal with here. More precisely, starting with the following constrained sl(3|2) gauge potential A, where {J s , J w , G + , G − , T 1 , T 2 } and S 1 ,S 1 , S,S, S 2 ,S 2 are, respectively, bosonic and fermionic currents, one can easily find the residual gauge transformations which preserve this particular form of A. They correspond to the following parameters, with Λ being a sl(3|2)-valued matrix of the parameters. The remaining twelve combinations of the parameters are expressed through these ones and the currents. After representing these transformations in the form (2.5), where φ(z) is any current, a self-consistent set of OPEs for the currents can be extracted from eq. (2.5).
To understand why this superalgebra was called N = 2 super-$W_3^{(2)}$, it is instructive to redefine the currents as in eqs. (2.7), (2.8): so redefined, they form $W_3^{(2)}$ with the spin content given in Table 1.
All the currents with the aforementioned spins, except for T s and T w , are primary with respect to the following Virasoro stress-tensor T, which has zero central charge. The currents T s and T w are quasiprimary, with central charges 3c and −3c, respectively. It can be checked that in this N = 2 super-$W_3^{(2)}$ algebra there exists no basis for the currents such that all of them are primary with respect to some (improved) Virasoro stress-tensor.
The whole set of OPEs of the N = 2 super-$W_3^{(2)}$ algebra in terms of these currents is given in the Appendix.
3 N = 2 super-$W_3^{(2)}$ algebra in terms of N = 2 supercurrents

Despite the fact that the N = 2 super-$W_3^{(2)}$ algebra has an equal number of bosonic and fermionic currents, it is unclear how they could be arranged into N = 2 supermultiplets. The main obstruction to the existence of a superfield description is that in this superalgebra the numbers of currents with integer and half-integer spins do not coincide, while any N = 2 superfield clearly contains an equal number of components with integer and half-integer spins.
To find a way out, first recall that the N = 2 super-$W_3^{(2)}$ algebra is nonlinear. This means that one may choose the basis for its generating currents in many different ways. The transformations relating different bases must be invertible, but in general they are nonlinear and can include derivatives of the currents along with the currents themselves.
Secondly, we stress that the OPEs (2.7), (2.8), (A.1) do not fix the scale of the fermionic S 1 ,S 1 , S,S, S 2 ,S 2 and bosonic G + , G − currents. Moreover, keeping in mind that all these currents possess definite charges with respect to the J w and J s U(1) currents, one can introduce a new "improved" stress-tensor with respect to which the currents G + , G − , S 1 ,S 1 , S,S, S 2 ,S 2 , while still remaining primary, have the dimensions (spins) listed in Table 2.
Thus we cannot exclude the possibility that in some nonlinear basis the generating currents could have appropriate spins to be organized into supermultiplets with respect to some new N = 2 SCA.
Fortunately, just this situation takes place for the superalgebra under consideration. To demonstrate this, let us pass to the new basis (J s ,S 2 ,S 1 ,T s ), (S 1 ,J), (G + ,S), (S, G − ), (T ,S 2 ), related to the original one as follows. All the newly defined currents, except for J, are primary with respect to the Virasoro stress-tensor T s (corresponding to the choice b = 0, g = −1 in eq. (3.1) and Table 2) and have the following spins and statistics. These transformation properties follow from the OPEs: together they form a nonlinear and actually not fully reducible representation of the N = 2 SCA defined above. Crucial for putting this representation into a more transparent, manifestly supersymmetric form is the observation that the nonlinearly transforming pairs of basic currents, namely (S, G − ) and (T ,S 2 ), can be combined with the composites B 1 , F 1 and F 2 , B 2 into two linearly transforming spin 2 N = 2 supermultiplets with opposite overall Grassmann parities.
Thus, the basic currents of N = 2 super-$W_3^{(2)}$, so extended and in accordance with their spin content, are naturally accommodated by five N = 2 supercurrents: a general spin 1 J(Z), spin 1/2 anti-chiral fermionic G(Z) and bosonic Q(Z), and spin 2 fermionic F(Z) and bosonic T(Z) supercurrents 3 . The precise relation of the components of these superfields to the currents of N = 2 super-$W_3^{(2)}$ is given by the component expansions, where D, D are the spinor covariant derivatives. The next SOPEs express the property that the remaining four supercurrents have the aforementioned spins with respect to this N = 2 SCA. Note the presence of a central term in (3.11): it reflects the property that the superfield G(Z) transforms inhomogeneously under the N = 2 SCA. All other superfields are primary with respect to the N = 2 SCA supercurrent J(Z).
In each of the supercurrents F (Z) and T (Z), the spin 3 component and one of the spin 5/2 components are composite (see (3.3)). In the superfield language, this implies that these superfields have to satisfy some constraints. Using the formulas (A.2) of the Appendix, one can check that the relations (3.3) amount to the following nonlinear constraints. For completeness, we also add the chirality conditions for G, Q. By means of eq. (3.14) one could, in principle, eliminate F (Z) in terms of T (Z), G(Z) and Q(Z). If one substitutes this expression for F (Z) into the constraint (3.13), the latter is satisfied identically. However, this expression is singular at Q(Z) = 0. We prefer to deal with two constrained supercurrents in order to have polynomial, non-singular expressions in all SOPEs. Now we are ready to construct the remaining SOPEs of N = 2 super-$W_3^{(2)}$. Taking the most general Ansatz for these SOPEs in terms of the introduced superfields, using (3.13), (3.14), (3.3) and requiring the latter to be consistent with the OPEs for the superfield components (see Appendix), we obtain the following non-trivial relations (3.16). The above SOPEs are self-consistent only on the shell of the constraints (3.13), (3.14). These constraints are first class and the Jacobi identities are satisfied only on their shell A 1 = A 2 = 0. They are consistent with the SOPEs (3.8), (3.11), (3.12), (3.16) in the sense that the SOPEs of A 1 , A 2 with all supercurrents vanish on the constraint shell (the compatibility of the whole set of SOPEs with the linear chirality conditions (3.15) is evident by construction). It should be pointed out that it is impossible to satisfy the Jacobi identities off the constraint shell unless one further enlarges the set of supercurrents. We have checked this by inserting the expressions A 1 and A 2 (3.13), (3.14) 4 in all appropriate places in the right-hand sides of the SOPEs obtained.
Thus the constraints (3.13), (3.14) are absolutely necessary for the above set of N = 2 superfields to form a closed algebra. In a forthcoming paper devoted to N = 2 superfield hamiltonian reduction [15] it will be shown that these constraints (as well as the chirality conditions (3.15)) are remnants of Hull-Spence type constraints [18] for the supercurrents of the N = 2 extension of the affine superalgebra sl(3|2) (1) .
Our final remark concerns the presence of the spin 1/2 currents S 1 and G + in the basis (3.2). At first sight, following the reasoning of ref. [19], one could think that they can be factored out to yield a smaller nonlinear algebra. However, this is not true in the present case, because an important assumption of ref. [19] does not hold, namely that the OPEs between the spin 1/2 currents contain singularities. Indeed, the OPEs of these currents are regular in our case. So in the algebra N = 2 super-$W_3^{(2)}$ this substitution is dictated by the requirement that the SOPEs of these supercurrents with G(Z) and Q(Z) (4.1) be homogeneous in Q(Z) and G(Z). The supercurrent J(Z) can be checked to generate another N = 2 SCA, such that the conformal weights of Q(Z), G(Z), T (Z) and F (Z) with respect to it equal 0, 1/2, 2 and 5/2, respectively. The constraints (4.1) and (4.2) prove to be preserved by this N = 2 SCA. Thus the superfields J(Z) and T (Z) by construction are gauge invariant with respect to the gauge transformations generated by the first-class constraints (4.1). So, according to the standard ideology of hamiltonian reduction [16,3,17] 5 , they have to form a closed superalgebra (with all the Jacobi identities satisfied) on the shell of the constraints (4.1), (4.2). The last relation follows by substituting (4.1), (4.2) into eq. (3.14). Note that with this F, eq. (3.13) is identically satisfied. Using the SOPEs of N = 2 super-$W_3^{(2)}$ we find that the resulting SOPEs for the currents J(Z) and T (Z), after substituting (4.1), (4.2), (4.4), exactly coincide with the SOPEs of the classical N = 2 super-$W_3$ algebra [13].
In the next Section we will make use of this result to construct the simplest nontrivial hamiltonian flow on N = 2 super-$W_3^{(2)}$. The hamiltonian of this flow, under the natural assumptions that it (i) respects rigid N = 2 supersymmetry and (ii) has the same scaling dimension 2 as the hamiltonian of the ordinary bosonic Boussinesq equation, is given by (5.1). Note the presence of the free parameters v 1 , ..., v 4 in (5.1). Now, using the SOPEs of N = 2 super-$W_3^{(2)}$ (the Poisson brackets in the r.h.s. of (5.2) being understood), it is straightforward to find the explicit form of the evolution equations. Due to the complexity of these equations, it is not illuminating to write them down here, and we postpone the analysis of the integrability of this system to future publications. In ref. [13] we constructed, in N = 2 superfield form, the most general one-parameter super Boussinesq equation with the second hamiltonian structure given by the classical N = 2 super-$W_3$ algebra. Making use of the results of Sect. 4, it is not difficult to show that the system of evolution equations obtained here reproduces that of ref. [13] upon the above truncation of N = 2 super-$W_3^{(2)}$ to N = 2 super-$W_3$ and with certain relations between the parameters in (5.1). Here α is the parameter entering the N = 2 super Boussinesq equation [13].
Conclusion
To summarize, we have concisely rewritten the classical N = 2 super-$W_3^{(2)}$ algebra of ref. [8] in terms of five constrained N = 2 superfields, found its superfield reduction to the N = 2 super-$W_3$ algebra [12,13], and constructed a family of N = 2 supersymmetric equations which generalize the N = 2 super Boussinesq equation of ref. [13] and for which this superalgebra provides the second hamiltonian structure. In a forthcoming publication [20] we will extend our consideration to the case of the full quantum N = 2 super-$W_3^{(2)}$ algebra. An interesting open problem is to find possible string theory implications of the N = 2 super-$W_3^{(2)}$ algebra, both in its component and superfield formulations. The fact that there exists a zero central charge stress-tensor (2.9) with respect to which almost all of the currents are primary suggests that this algebra admits an interpretation as a kind of twisted topological superconformal algebra, and so has a natural realization in terms of a BRST structure associated with some string (the $W_3^{(2)}$ one?) or superstring.
Towards a General Purpose CNN for Long Range Dependencies in $N$D
The use of Convolutional Neural Networks (CNNs) is widespread in Deep Learning due to a range of desirable model properties which result in an efficient and effective machine learning framework. However, performant CNN architectures must be tailored to specific tasks in order to incorporate considerations such as the input length, resolution, and dimensionality. In this work, we overcome the need for problem-specific CNN architectures with our Continuous Convolutional Neural Network (CCNN): a single CNN architecture equipped with continuous convolutional kernels that can be used for tasks on data of arbitrary resolution, dimensionality and length without structural changes. Continuous convolutional kernels model long range dependencies at every layer, and remove the need for the downsampling layers and task-dependent depths required in current CNN architectures. We show the generality of our approach by applying the same CCNN to a wide set of tasks on sequential (1$\mathrm{D}$) and visual data (2$\mathrm{D}$). Our CCNN performs competitively and often outperforms the current state-of-the-art across all tasks considered.
Introduction
Convolutional Neural Networks (LeCun et al., 1998) (CNNs) are a class of Deep Learning models widely used for machine learning applications. Their popularity stems from their high performance and efficiency, which has led them to achieve state-of-the-art results in several applications across sequential (Abdel-Hamid et al., 2014; Van Den Oord et al., 2016), visual (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014) and high-dimensional data (Schütt et al., 2017; Wu et al., 2019). Nevertheless, an important limitation of CNNs, and of Neural Networks in general, is that their architectures must be tailored to particular applications in order to handle different data lengths, resolutions and dimensionalities. This, in turn, has led to an extensive number of task-specific CNN architectures (Bai et al., 2018; Simonyan & Zisserman, 2014; Szegedy et al., 2015; Ronneberger et al., 2015; He et al., 2016; Qi et al., 2017; Wu et al., 2019).
Data can come in many different lengths, e.g., images can be 32x32 or 1024x1024, and audio easily has 16000 samples per second. The problem with standard CNNs is that their convolutional kernels are local, which requires a custom architecture for each length, with carefully chosen strides and pooling layers, in order to capture full context. In addition, many types of data are inherently continuous in nature and have the same semantic meaning at different resolutions, e.g., images can be captured at arbitrary resolutions and have identical semantic content, and audio can be sampled at 16kHz or 44.1kHz and still sound the same to human ears. Nevertheless, conventional CNNs are bound to a resolution and cannot be used across resolutions due to the discrete nature of their convolutional kernels. Both problems are further exacerbated when considering data of different dimensionality with the same CNN, e.g., sequential (1D), visual (2D) and high-dimensional data (3D, 4D), as different dimensionalities operate at different characteristic lengths and resolutions, e.g., a second of audio easily has length 16000, which strongly contrasts with the size of images in benchmark datasets (Krizhevsky et al., 2009; Deng et al., 2009).
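The point about locality can be made concrete with the standard receptive-field recurrence for stacked convolutions, r_l = r_{l-1} + (k_l - 1) · Π(strides before layer l). A minimal sketch (our illustration, not code from the paper; the example layer stack is hypothetical):

```python
def receptive_field(layers):
    """Receptive field (in input samples) of a stack of conv layers.

    layers: list of (kernel_size, stride) tuples, applied in order.
    Implements the recurrence r_l = r_{l-1} + (k_l - 1) * jump,
    where jump is the product of all strides applied so far.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# A small VGG-like stack: three 3-wide convs, each followed by stride-2 pooling.
print(receptive_field([(3, 1), (2, 2), (3, 1), (2, 2), (3, 1), (2, 2)]))  # 22
```

Even this six-layer stack only covers 22 input samples, which illustrates why CNNs with local kernels need carefully tuned depths and strides before a 16000-sample audio clip is fully in view.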
Towards a general-purpose CNN architecture. In this work, we aim to construct a single CNN architecture that can be used on data of arbitrary resolutions, lengths and dimensionalities. Standard CNNs require task-specific architectures due to the discrete nature of their convolutional kernels which binds the kernels to specific data resolutions and makes them ill-suited to model global context due to the large amount of parameters required to construct large discrete convolutional kernels. Consequently, in order to construct a general-purpose CNN architecture it is crucial to develop a resolution agnostic convolutional layer able to model long range dependencies in a parameter efficient manner.
Figure 1. Continuous convolutional kernels. (a) A continuous convolutional kernel is parameterized with a small neural network G_Kernel that receives coordinates c_i ∈ R^D as input and outputs the value of the convolutional kernel at that position: K(c_i) = G_Kernel(c_i) ∈ R^{N_out×N_in}. The continuous parameterization of K allows the convolutional layer to (i) model long range dependencies, (ii) handle irregularly sampled data, and (iii) be used across different resolutions. Additionally, changing the dimensionality of the coordinates c_i can be used to construct convolutional kernels for (b) sequential, (c) visual, and (d) higher dimensional data (point clouds) with the same kernel generator network.
The need for continuous parameterizations. Discrete convolutional kernels are defined with N out ×N in independent learnable weights at each kernel position. Hence, large convolutional kernels require a large number of parameters and conventional CNNs rely on local convolutional kernels in combination with task-dependent depth values and pooling layers in order to model long range dependencies. Alternatively, we can construct continuous convolutional kernels through use of a small neural network that maps positions to the value of the kernel at those positions (Romero et al. (2022b), Fig. 1a). This approach decouples the size of the convolutional kernel from the number of parameters required to construct it, thus allowing the construction of arbitrary long kernels in a parameter efficient manner. Moreover, this parameterization overcomes the discrete nature of standard kernels and allows for the construction of resolution agnostic convolutional kernels that operate on coordinates of arbitrary resolution. Consequently, the same kernel generator network -and thus the same CNN-can be used regardless of the input length and resolution. Furthermore, the same kernel generator network can be used to construct convolutional kernels for sequential D=1 (Fig. 1b), visual D=2 (Fig. 1c) and higher dimensional tasks D≥3 (Fig. 1d) simply by changing the dimensionality of the input coordinates. In summary, the properties of Continuous Convolutional Kernels allow for the construction of a single CNN architecture that can be used across data lengths, resolutions and dimensionalities.
Contributions.
• We present the Continuous CNN (CCNN): a simple, general purpose CNN that can be used across data resolutions and dimensionalities without structural modifications. Our CCNN matches and often surpasses the state-of-the-art on several sequential (1D) and visual tasks (2D), as well as on tasks with irregularly-sampled data and test-time resolution changes.
• To this end, we provide several improvements for existing continuous CNN methods (Romero et al., 2022b;a) that allow them to match current state-of-the-art methods, e.g., S4 (Gu et al., 2022). Our improvements include changes to the initialization of the kernel generator networks, and modifications to the convolutional layers and the overall structure of the CNN.
Continuous Kernel Convolutions
Continuous kernel convolutions (Romero et al., 2022b) parameterize convolutional kernels as continuous functions by using a small neural network G_Kernel : R^D → R^{N_out×N_in} as a kernel generator network. This network maps a coordinate c_i ∈ R^D to the value of the convolutional kernel at that position: G_Kernel(c_i) ∈ R^{N_out×N_in} (Fig. 1a). By passing a vector of K coordinates [c_i] through G_Kernel, a convolutional kernel of length K is constructed. Subsequently, a convolution operation takes place between an input signal x : R^D → R^{N_in} and the generated convolutional kernel K : R^D → R^{N_out×N_in} to construct an output feature representation y : R^D → R^{N_out}. That is: y = ConvNd(K, x).
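The construction above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the two-layer generator network, its sine nonlinearity, and all sizes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_kernel(coords, W1, b1, W2, b2):
    """Tiny kernel generator G_Kernel: coords (K, D) -> kernel values (K, N_out*N_in)."""
    h = np.sin(coords @ W1 + b1)   # hypothetical 2-layer generator with sine nonlinearity
    return h @ W2 + b2

D, hidden, n_in, n_out = 1, 16, 2, 3
W1 = rng.normal(size=(D, hidden)); b1 = rng.normal(size=hidden)
W2 = rng.normal(size=(hidden, n_out * n_in)); b2 = rng.normal(size=n_out * n_in)

K = 11                                    # kernel length: chosen freely, parameter count fixed
coords = np.linspace(-1, 1, K)[:, None]   # (K, 1) relative positions
kernel = mlp_kernel(coords, W1, b1, W2, b2).reshape(K, n_out, n_in)

x = rng.normal(size=(n_in, 64))           # input signal with 64 time steps
# 'valid' channel-mixing convolution: y[o, t] = sum_{i, k} kernel[k, o, i] * x[i, t + k]
T = x.shape[1] - K + 1
windows = np.stack([x[:, t:t + K] for t in range(T)], axis=1)   # (n_in, T, K)
y = np.einsum('koi,itk->ot', kernel, windows)
print(y.shape)  # (3, 54)
```

Changing K only changes how many coordinates are fed through the generator, not the number of learnable weights, which is the key property exploited by the CCNN.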
Properties
A general operation for arbitrary data dimensionalities. By changing the dimensionality D of the input coordinates c i , the kernel generator network G Kernel can be used to construct convolutional kernels of arbitrary dimensionality. Consequently, the same operation can be used to process sequential D=1, visual D=2 and higher dimensional data D≥3.
Parameter and computation efficient modelling of long range dependencies at every layer. We can use the kernel generator network G_Kernel to construct convolutional kernels as big as the input signal in order to model long range dependencies at every layer, i.e., K = [G_Kernel(c_i)]_{i∈[1,...,K]} with K = len(x). The number of parameters in G_Kernel is independent of the length of the convolutional kernel, and thus kernels of arbitrary size can be constructed under a fixed parameter count. Convolutions with large convolutional kernels can be efficiently computed using the convolution theorem, which states that a convolution in the time domain equals a pointwise product in the frequency domain.
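The convolution theorem mentioned above can be verified directly with NumPy's FFT routines (a generic sketch, not the paper's code): zero-padding both signals to the full output length turns the circular convolution computed by the FFT into the desired linear convolution, at O(n log n) instead of O(n^2) cost.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
x = rng.normal(size=n)
k = rng.normal(size=n)   # a kernel as long as the input

# Direct linear convolution, O(n^2)
direct = np.convolve(x, k)

# Convolution theorem: pointwise product in the frequency domain, O(n log n).
# Zero-pad both signals to the full output length 2n - 1.
m = 2 * n - 1
fft = np.fft.irfft(np.fft.rfft(x, m) * np.fft.rfft(k, m), m)

print(np.allclose(direct, fft))  # True
```

This is why kernels as long as the input remain computationally affordable in the CCNN setting.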
Irregularly-sampled data. For some applications, the input x may not be on a regular grid, e.g., medical data. Discrete convolutional kernels are ill-suited for such applications as their value is only known at some preset positions and not for arbitrary coordinates c i . Contrarily, continuous kernels are defined everywhere and thus can handle irregular data natively.
Equivalent responses across input resolutions. If the input signal x undergoes a resolution change, e.g., audio initially observed at 8kHz is now observed at 16kHz, convolving with a discrete convolutional kernel would yield different responses, as the kernel would cover a different subset of the input at each resolution. On the other hand, continuous kernels are resolution agnostic, and thus able to recognize an input regardless of its resolution. When presenting an input at a different resolution, e.g., a higher resolution, it is sufficient to pass a finer grid of coordinates through the kernel generator network in order to construct the same kernel at the corresponding resolution. For a signal x and a continuous convolutional kernel K sampled at resolutions r^(1) and r^(2), the convolutions at both resolutions are approximately equal up to a factor proportional to the resolution change (Romero et al., 2022b) (Eq. (1)).

Learning of hyperparameters. A promising property of continuous kernels is that they enable the learning of parameters that must otherwise be treated as hyperparameters in CNNs with discrete kernels. FlexConvs (Romero et al., 2022a), for instance, define their convolutional kernels as the product of a kernel generator network G_kernel and a trimmed Gaussian mask Gauss_{µ,σ} with learnable parameters µ, σ, i.e., K(c_i) = G_kernel(c_i) · Gauss_{µ,σ}(c_i). The Gaussian mask defines the size of the kernel and thus, by learning the mask, one can effectively learn the size of the convolutional kernel during training. In concurrent work we observe that a similar strategy can be used to additionally learn the depth and width of neural networks.

An improved residual block with continuous kernel convolutions. Recent works have shown that residual blocks (He et al., 2016) can be strongly improved by changing the nonlinearities used and the position of the normalization layers within the blocks (Xiong et al., 2020; Liu et al., 2022).
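The resolution-equivalence property can be checked numerically with a toy example (our sketch; fixed Gaussians stand in for a learned signal and generator network): sampling the same continuous signal and kernel at twice the resolution scales the discrete convolution by the resolution factor, here r^(2)/r^(1) = 2.

```python
import numpy as np

x_fn = lambda t: np.exp(-t**2)          # "continuous" input signal (hypothetical)
k_fn = lambda t: np.exp(-4 * t**2)      # "continuous" convolutional kernel (hypothetical)

def conv_at_resolution(r):
    """Sample signal and kernel at r points per unit on [-3, 3), then convolve."""
    t = np.arange(-3, 3, 1.0 / r)
    return np.convolve(x_fn(t), k_fn(t), mode='same')

y1 = conv_at_resolution(100)
y2 = conv_at_resolution(200)

# Responses at matching positions agree up to the resolution factor 200 / 100 = 2.
i = int(np.argmax(y1))
ratio = y2[::2][i] / y1[i]
print(ratio)
```

The discrete convolution approximates r times the continuous one (a Riemann sum), so doubling r doubles the response, which is exactly the proportionality factor in Eq. (1).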
Based on these observations, we modify the FlexNet architecture (Romero et al., 2022a) with a residual network composed of blocks similar to those of S4 networks (Gu et al., 2022). The CCNN architecture is shown in Figure 2.
The Continuous Convolutional Neural Network: Modelling Long Range Dependencies in ND
Depthwise separable continuous kernel convolutions. Separable convolutions have long been shown to improve the parameter and computational efficiency of CNNs (Rigamonti et al., 2013;Sifre & Mallat, 2014). More recently, their usage has shown improvement over conventional convolutions in CNNs (Chollet, 2017;Tan & Le, 2019;Knigge et al., 2021;Liu et al., 2022), due to the separation of spatial and channel dimensions, which reduces the computational and parameter complexity of the convolution and allows for wider networks and higher performance.
Based on these observations we construct a depthwise separable version of FlexConv (Romero et al., 2022a), in which a channel-wise convolution is computed with a kernel generated by a kernel generator network G_Kernel : R^D → R^{N_in}, followed by a point-wise convolution from N_in to N_out channels. This change allows for the construction of a much wider CCNN, from 30 to 110 hidden channels, without increasing the parameter or computational complexity of the network.
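The efficiency gain of the depthwise separable factorization is easy to quantify for discrete kernels (an illustrative count, ignoring biases and the fact that CCNN kernels are generated rather than stored):

```python
def conv_params(c_in, c_out, k):
    """Weights of a standard dense 1D convolution: one c_in x c_out matrix per tap."""
    return c_in * c_out * k

def separable_params(c_in, c_out, k):
    """Depthwise (one k-tap filter per input channel) + pointwise 1x1 mixing."""
    return c_in * k + c_in * c_out

# A kernel as long as a 1000-step input, with 110 channels in and out:
print(conv_params(110, 110, 1000))       # 12,100,000
print(separable_params(110, 110, 1000))  # 122,100
```

Separating the spatial and channel dimensions cuts the count by roughly two orders of magnitude here, which is the budget the CCNN reinvests in width (30 to 110 channels).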
Proper initialization of the kernel generator network G_Kernel. We observe that the kernel generator networks in previous works are not properly initialized for their purpose of parameterizing convolutional kernels (Schütt et al., 2017; Wu et al., 2019). Upon initialization, one would like the variance of the input and the output of a convolutional layer to remain equal in order to avoid exploding and vanishing gradients, i.e., Var(x) = Var(y). As such, convolutional kernels are initialized to have variance Var(K) = gain^2 / (in_channels · kernel_size), with a gain that depends on the nonlinearity used (He et al., 2015). Nevertheless, neural networks are initialized such that the unitary variance of the input is preserved at the output. Consequently, when a standard initialization method is used for a kernel generator network, the generated kernel has unitary variance, i.e., Var(K) = 1. As a result, CNNs using neural networks as kernel generator networks experience a layer-wise growth in the variance of the feature representations proportional to in_channels · kernel_size. For example, we observe that the logits of CKCNNs (Romero et al., 2022b) and FlexNets (Romero et al., 2022a) lie in the order of 1e19 upon initialization. This is undesirable, as it can lead to unstable training and the need for low learning rates. To solve this problem, we require that the variance at the output of G_Kernel equal gain^2 / (in_channels · kernel_size) rather than 1. To this end, and inspired by Chang et al. (2020), we re-weight the last layer of the kernel generator network by gain / √(in_channels · kernel_size). As a result, the variance at the output of the kernel generator network follows the initialization of conventional convolutional kernels, and the logits of CCNNs present unitary variance upon initialization.
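The re-weighting step can be sketched as follows (our toy generator, not the paper's code; the ReLU gain √2 and the layer sizes are example assumptions). Scaling the generator's output by gain / √(in_channels · kernel_size) multiplies the kernel variance by exactly gain² / (in_channels · kernel_size), matching the conventional discrete-kernel initialization:

```python
import numpy as np

rng = np.random.default_rng(2)
c_in, k, hidden = 110, 1000, 32
gain = np.sqrt(2.0)  # e.g. the ReLU gain of He et al. (2015)

# A generator with a standard unit-output-variance initialization...
W1 = rng.normal(0.0, 1.0, size=(1, hidden))
W2 = rng.normal(0.0, np.sqrt(1.0 / hidden), size=(hidden, c_in))

coords = np.linspace(-1, 1, k)[:, None]
kernel = np.tanh(coords @ W1) @ W2           # Var(K) ~ O(1): far too large for a conv kernel

# ...re-weighted so that Var(K) = gain^2 / (c_in * k), as for discrete kernels
scale = gain / np.sqrt(c_in * k)
kernel_scaled = kernel * scale

print(kernel_scaled.var() / kernel.var())    # equals gain^2 / (c_in * k)
```

With c_in · k = 110000, the variance shrinks by about five orders of magnitude, which is what keeps the logits of the CCNN at unit variance instead of exploding layer by layer.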
Experiments and discussion
Our goal is to construct a single model that can be applied to data of arbitrary length, resolution and dimensionality. We construct two CCNNs of different sizes: CCNN 4,110 (4 blocks, 110 channels) and CCNN 6,380 (6 blocks, 380 channels), and validate them on several sequential (1D) and visual (2D) benchmark datasets. For sequential data, we consider 1D pixel-level image classification (Le et al., 2015; Chang et al., 2017), speech classification (Warden, 2018) and the Long Range Arena (LRA) benchmark (Tay et al., 2021), which evaluates the capacity of models to describe long range dependencies. For visual data, we consider 2D classification of images of different sizes (Krizhevsky et al., 2009; Coates et al., 2011). A complete description of the datasets used is given in Appx. A. The hyperparameters used in our experiments as well as additional descriptions of our models are reported in Appx. B.

Results. As shown in Tabs. 1-4, our CCNN models perform well across all tasks considered. In fact, CCNNs set a new state of the art on multiple LRA tasks as well as 1D CIFAR10 pixel classification and raw speech classification on Speech Commands, while often being (much) smaller than competing approaches.
The importance of modelling long range dependencies in ND. In principle, we could treat every task as a sequential task in which no 2D structure is considered. This is done, for instance, with S4 (Gu et al., 2022), due to the complexity of defining state spaces in multidimensional spaces. Nevertheless, this comes at the cost of throwing away important information about the nature of the data. In contrast, CCNNs can easily be defined on multidimensional spaces simply by changing the dimensionality of the coordinates fed into the kernel generator networks. Interestingly, we observe that by considering the 2D nature of the Image and Pathfinder tasks in the LRA benchmark, much better results can be obtained (Tab. 3). On Pathfinder with 2D images, our largest CCNN obtains an accuracy of 96.00%, outperforming the previous state of the art by a margin of almost 10 percentage points and performing remarkably better than the CCNN on flattened images. In addition, we observe that models trained on the original 2D data show faster convergence than their sequential counterparts (Fig. 3).
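To make the cost of flattening concrete, the following sketch shows how raster-scanning a W×W image turns vertical adjacency into a long range dependency:

```python
def flat_index(row, col, width):
    # Raster-scan (row-major) flattening of a 2D pixel position.
    return row * width + col

W = 32  # e.g., a Pathfinder image of size 32x32
# Horizontal neighbours stay adjacent after flattening...
assert flat_index(5, 11, W) - flat_index(5, 10, W) == 1
# ...but vertical neighbours are pushed W steps apart,
# so "local" 2D structure becomes a long range dependency in 1D.
assert flat_index(6, 10, W) - flat_index(5, 10, W) == W
# For Path-X (128x128 images), that gap grows to 128 steps.
assert flat_index(1, 0, 128) - flat_index(0, 0, 128) == 128
```

A model operating directly on 2D coordinates keeps both neighbours at distance 1 and never pays this penalty.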
We note that 2D CNNs with small convolutional kernels, e.g., ResNet-18, were unable to solve Pathfinder due to the lack of fine-grained global context modelling resulting from intermediate pooling layers. This was also seen by Gu et al. (2020).
Conclusion
We propose the Continuous Convolutional Neural Network: a single CNN architecture able to model long range dependencies on data of arbitrary length, resolution and dimensionality. Key to this development is the replacement of discrete convolutional kernels used in standard CNNs with Continuous Convolutional Kernels. With a single architecture, our CCNN performs competitively and often outperforms the current state of the art across a variety of sequential and visual tasks.
Supplementary Material: Towards a General Purpose CNN for Long Range Dependencies in ND
A. Dataset description

Sequential and Permuted MNIST. The MNIST dataset (LeCun et al., 1998) consists of 70K gray-scale 28×28 handwritten digits divided into training and test sets of 60K and 10K samples, respectively. For validation purposes, the training dataset is further divided into training and validation sets of 55K and 5K samples, respectively.
The sequential MNIST dataset (sMNIST) presents MNIST images as a sequence of 784 pixels for digit classification. Consequently, good predictions require the model to preserve long-term dependencies of up to 784 steps in the past. The permuted MNIST dataset (pMNIST) adds an additional level of difficulty by permuting the order of all sMNIST sequences with a fixed random permutation. As a result, models can no longer rely on local information for the construction of their features, and the importance of modelling long-term dependencies becomes more pronounced.
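A minimal sketch of the construction (the seed and image below are illustrative; the key point is that one fixed permutation is shared across all samples):

```python
import numpy as np

rng = np.random.default_rng(42)   # one fixed seed for the whole dataset
perm = rng.permutation(28 * 28)   # a single random permutation of 784 pixels

def to_sequential(image):
    # sMNIST: flatten a 28x28 image into a sequence of 784 pixels.
    return image.reshape(-1)

def to_permuted(image):
    # pMNIST: apply the same fixed permutation to every flattened image.
    return to_sequential(image)[perm]

image = np.arange(28 * 28, dtype=np.float32).reshape(28, 28)
seq, pseq = to_sequential(image), to_permuted(image)
assert seq.shape == pseq.shape == (784,)
assert np.array_equal(np.sort(pseq), seq)  # same pixels, different order
```

Because the permutation is shared, models can in principle still learn the task, but only by modelling dependencies between arbitrarily distant positions.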
CIFAR10, CIFAR100 and Sequential CIFAR10. The CIFAR10 dataset (Krizhevsky et al., 2009) consists of 60K real-world 32×32 RGB images uniformly drawn from 10 classes, divided into training and test sets of 50K and 10K samples, respectively. The CIFAR100 dataset (Krizhevsky et al., 2009) is similar to the CIFAR10 dataset, with the difference that the images are now uniformly drawn from 100 different classes. For validation purposes, the training datasets of both CIFAR10 and CIFAR100 are further divided into training and validation sets of 45K and 5K samples, respectively.
Analogously to the sMNIST dataset, the sequential CIFAR10 (sCIFAR10) dataset presents CIFAR10 images as a sequence of 1024 pixels for image classification. This dataset is more difficult than sMNIST, as (i) larger memory horizons are required to successfully solve the task, and (ii) more complex structures and intra-class variations are present in the images.
Speech Commands. The Speech Commands dataset (Warden, 2018) consists of 105809 one-second audio recordings of 35 spoken words sampled at 16kHz. Following Kidger et al. (2020), we extract 34975 recordings from ten spoken words to construct a balanced classification problem. We refer to this dataset as Raw Speech Commands. In addition, we use the preprocessing steps of Kidger et al. (2020) and extract mel-frequency cepstrum coefficients from the raw data. The resulting dataset, referred to as MFCC Speech Commands, consists of time series of length 161 and 20 channels.
Long Range Arena. The Long Range Arena benchmark (Tay et al., 2021) consists of 6 tasks with lengths of 1K-16K steps, encompassing modalities and objectives that require similarity, structural, and visuospatial reasoning. The Pathfinder, Path-X and Image tasks are similar in nature to the sMNIST and sCIFAR10 tasks: they are classification tasks performed on images that are treated as sequences. The Image task corresponds to the sequential CIFAR10 dataset, with the only difference that the CIFAR10 images are treated as gray-scale images. The Pathfinder and Path-X tasks are binary tasks in which the model must predict whether two points in a binary image are connected by a line or not (see Fig. 4 for an example). The difference between the two datasets is their resolution: whereas Pathfinder has images of size 32×32, Path-X has images of size 128×128. It is important to mention that these tasks are so difficult that, even if treated as 2D signals, CNNs without global receptive fields are unable to solve them (Gu et al., 2022).
STL-10. The STL-10 dataset (Coates et al., 2011) is a subset of the ImageNet dataset (Krizhevsky et al., 2012) consisting of 13,000 96×96 real-world RGB images uniformly drawn from 10 classes, divided into training and test sets of 5K and 8K images, respectively. For validation purposes, the training dataset is further divided into training and validation sets of 4,500 and 500 samples, respectively.

Code repository and logging. Our code is written in PyTorch. We utilize wandb (Biewald, 2020), hydra (Yadan, 2019) and pytorch-lightning (Falcon et al., 2019) for logging and code structuring. Our experiments are performed on NVIDIA TITAN RTX, A6000 and A100 GPUs, depending on the size of the datasets and inputs considered, and our code is publicly available at github.com/david-knigge/ccnn.
The kernel generator network G Kernel . Our kernel generator network is parameterized as a 3-layer MAGNet (Romero et al., 2022a) with 32 hidden units for the CCNN 4,110 models, and 64 hidden units for the larger CCNN 6,380 models. The output size of the kernel generator network corresponds to the number of input channels of each layer in the network.
Normalized relative positions. The kernel generator network G Kernel can, in principle, receive arbitrary coordinates as input. However, using unitary step-wise relative positions, i.e., 0, 1, 2, ..., N, can be problematic from a numerical stability perspective, as N may grow very large, e.g., N = 16000 for the Speech Commands dataset. Consequently, based on insights from the implicit neural representation literature, e.g., Sitzmann et al. (2020); Fathony et al. (2021), we normalize the coordinates such that they lie in the space [−1, 1]^D for D-dimensional kernels. To this end, we map the largest range of unitary positions seen during training, [0, N], to a uniform linear space in [−1, 1].
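The normalization amounts to mapping the integer grid onto a uniform grid in [−1, 1]; a one-dimensional sketch:

```python
import numpy as np

def normalized_positions(n_steps):
    # Map unitary positions 0, 1, ..., N onto a uniform grid in [-1, 1].
    return np.linspace(-1.0, 1.0, n_steps)

coords = normalized_positions(16000)  # e.g., a Speech Commands sequence
assert coords[0] == -1.0 and coords[-1] == 1.0
# Neighbouring positions keep a constant, well-conditioned spacing,
# independent of how large N grows.
assert np.allclose(np.diff(coords), 2.0 / (16000 - 1))
```

For D-dimensional kernels the same map is applied independently per coordinate axis.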
B.2. Hyperparameters and training details
Optimizer and learning rate scheduler. All our models are optimized with AdamW (Loshchilov & Hutter, 2017) in combination with a cosine annealing learning rate scheduler (Loshchilov & Hutter, 2016) and a linear learning rate warm-up stage of 10 epochs.
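A pure-Python sketch of the combined schedule (the base rate and total epoch count are illustrative; the 10-epoch warm-up follows the text):

```python
import math

def lr_at(epoch, base_lr, warmup_epochs, total_epochs):
    # Linear warm-up from 0 up to base_lr over the first warmup_epochs...
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    # ...then cosine annealing down to 0 over the remaining epochs.
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

base_lr, warmup, total = 1e-3, 10, 100
assert lr_at(0, base_lr, warmup, total) == base_lr / warmup  # ramping up
assert lr_at(warmup, base_lr, warmup, total) == base_lr      # peak rate
assert lr_at(total, base_lr, warmup, total) < 1e-9           # annealed away
```

In practice one would use the corresponding PyTorch scheduler; the sketch only shows the shape of the curve.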
Best hyperparameters found. We perform a hyperparameter search on the learning rate, dropout rate, weight decay, and ω_0 of our CCNNs for each task considered. The best hyperparameters found are reported in Tables 5 and 6. | 2022-06-08T01:15:59.535Z | 2022-06-07T00:00:00.000 | {
"year": 2022,
"sha1": "4be5febe15459af39268e8fd6b91440b596fd6d6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "238e4958773a5d9d3260f05e2532996b8b7dbaea",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
225142738 | pes2o/s2orc | v3-fos-license | Multimodal Imaging Characteristics and Diagnostic Approach to Ancient Schwannoma in a Pediatric Patient
Ancient schwannoma is an extremely rare benign, peripheral nerve sheath tumor. Despite its benign nature, its characteristic heterogeneous appearance and degenerative changes commonly lead to misdiagnosis of malignancy. Although schwannomas are extremely uncommon in the pediatric population, these neoplasms have been associated with underlying conditions such as neurofibromatosis type two, and appropriate recognition is important to ensure close monitoring of potential future symptoms secondary to other tumors. We report the imaging and laboratory findings of an ancient schwannoma of the vagus nerve in a 10-year-old female, the first documented case of such a tumor in a pediatric patient, and discuss its characteristic findings and diagnostic considerations. Awareness of this rare tumor can help promote correct diagnosis and avoidance of costly, high-risk diagnostic methods.
Introduction
Schwannomas are benign, neurogenic tumors originating from Schwann cells surrounding peripheral nerves. These rare lesions can arise from peripheral nerves surrounded by Schwann cells in any area of the body. They are most often found in the head, neck, and extremities [1][2][3]. The vast majority of schwannomas occur in adults, with fewer than 10% diagnosed in patients younger than 21 years [1][2][3]. It is relatively common for schwannomas to involve cranial nerves, but schwannomas of the vagus nerve are rare, especially in children, with only 15 cases of pediatric vagal schwannoma reported in the literature [1,2].
Schwannomas are divided into different subtypes based on their histologic characteristics. Ancient schwannoma is an uncommon subtype of these rare neoplasms, histologically characterized by degenerative changes and diffuse hypocellular patterning, which often leads to misdiagnosed malignancy despite the tumor's benign nature [4,5]. Only a few cases of ancient schwannoma have been documented in the literature, most often in elderly patients with long-standing tumors [4]. In this report, we describe the first documented case of an ancient vagal schwannoma in a pediatric patient.
Case Presentation
A 10-year-old female presented to her primary care provider with a three-day history of sore throat and cervical lymphadenopathy. The sore throat improved after four days of antibiotic therapy, but persistent right cervical lymphadenopathy warranted follow-up at her primary care clinic. Aside from periodic low-grade contralateral posterior neck pain and intermittent difficulty in swallowing, the patient was asymptomatic. White blood cell count, mononucleosis spot test, C-reactive protein (CRP), and erythrocyte sedimentation rate (ESR) were unremarkable, and Epstein-Barr virus (EBV) studies were consistent with prior infection. A right neck ultrasound (US) revealed a solid hypervascular mass deep to the right sternocleidomastoid muscle (SCM) and posterior to the carotid artery and internal jugular vein, as seen in Figure 1. A computed tomography (CT) scan was obtained and found a heterogeneously enhancing, well-defined mass displaying little surrounding inflammation, reactive prominence of right posterior triangle lymph nodes, and mass effect on the right internal jugular vein (Figure 2), suggesting lymph node enlargement but warranting follow-up to rule out a neoplastic process. Based on these results, the patient was directed to present to the Emergency Department at our institution. Outside imaging was reviewed and, after unremarkable CRP, complete blood count, and urine and blood metanephrine studies performed at our institution, magnetic resonance imaging/angiography (MRI/MRA) was obtained. This identified a well-defined 3.0 x 2.3 x 4.1 cm ovoid, heterogeneous mass within the right carotid space. On postcontrast sequences, there was avid enhancement with internal areas demonstrating less prominent enhancement and no restricted diffusion.
The mass displaced the right internal jugular vein anterolaterally and the internal carotid artery anteromedially, with associated prominent and mildly enlarged cervical lymph nodes felt to be reactive in nature (Figure 3). Given the location and imaging characteristics, a diagnosis of vagal schwannoma was favored. Fine-needle aspiration (FNA) was performed with cytologic and histologic findings consistent with schwannoma, and surgical excision was performed. The anatomic pathology report noted an ancient schwannoma with characteristic atypia and degenerative changes.
Discussion
Ancient schwannomas are variants of benign peripheral nerve sheath tumors that differ from classic schwannomas based on their histologic degenerative features and nuclear atypia [4,5]. Perivascular hyalinization, calcification, cystic necrosis, and degenerative nuclei are all characteristic features of these neoplasms. These changes are not known to hold prognostic significance; however, they can often lead to a misdiagnosis of malignant soft tissue tumors [5]. Thus, an understanding of the histologic features of this neoplasm as well as their characteristic appearance on imaging is paramount to ensuring a correct diagnosis.
The multimodal imaging (US, CT, MRI/MRA) performed for this patient is illustrative of the imaging characteristics of ancient schwannomas and makes this case unique in comparison to its counterparts, which classically include only one or two imaging modalities. US in Figure 1A shows a heterogeneous ovoid mass. The CT scan seen in Figure 2 showed a well-defined mass with enhancement in the areas surrounding degeneration, findings consistent with ancient schwannoma [5]. In classic schwannoma, these enhancing degenerative areas may not be observed, and a target-like appearance made by fibrocollagenous central enhancing areas and peripheral myxomatous regions may be seen instead. MRI, seen in Figure 3, showed a well-defined, heterogeneous lesion with heterogeneous enhancement around cystic areas, and supported the diagnosis of ancient schwannoma over the classic subtype. Hypocellular, disorganized, myxomatous areas known as Antoni B areas show high signal intensity on T2-weighted images, while their cellular, organized Antoni A counterparts show low-to-intermediate signal intensity [5,6]. The high-intensity patterning in this MRI suggests the presence of increased hypocellular Antoni B areas with smaller Antoni A areas, perhaps a result of their degeneration to cysts or necrosis [5]. These findings are characteristic of ancient schwannomas and are not usually seen in classic schwannomas. Furthermore, this lesion lacks the "salt and pepper" MRI appearance classically seen in paragangliomas [7]. CT and MRI both show evidence of anterolateral internal jugular vein displacement and anteromedial carotid displacement, consistent with a mass within the carotid sheath. This supports a vagal over a cervical sympathetic chain lesion, which tends to displace both the internal jugular vein and carotid artery without separating them.
Although imaging can assist in the diagnosis of ancient schwannoma, histopathology is the only method of confirmation. In this case, the diagnosis of schwannoma was made via ultrasound-guided FNA, and the ancient subtype was confirmed post-excision. FNA was chosen because it could aid in diagnosis, triage the urgency of surgical excision, and be performed with local anesthetic only. The use of FNA in schwannoma diagnosis remains controversial due to challenges in representative specimen acquisition and difficulties in making a confident diagnosis based on a limited sample [2,8]. Despite these potential challenges, FNA was instrumental in the workup for accurate surgical planning and provided the diagnosis in a minimally invasive fashion.
Neurogenic tumors make up only 2% of benign pediatric neoplasms, but they are commonly associated with underlying conditions such as neurofibromatosis types one and two (NF-1, NF-2) [3,6]. Schwannomas in particular are characteristic of NF-2, and while bilateral vestibular schwannomas are the hallmark of the disease, schwannomas of other peripheral nerves may arise and warrant further testing. Cases such as this, which involve atypical, faster-growing schwannomas in a pediatric patient, support subsequent close monitoring as well as genetic workup. | 2020-10-28T19:16:40.842Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "30b6809d9d3fc45c0f4c153fdf77435a7886ba2c",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/39942-multimodal-imaging-characteristics-and-diagnostic-approach-to-ancient-schwannoma-in-a-pediatric-patient.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "86aa35ae72f52f891c32b367718a564fb515cfa7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15763626 | pes2o/s2orc | v3-fos-license | Adenosine Kinase Inhibition Protects against Cranial Radiation-Induced Cognitive Dysfunction
Clinical radiation therapy for the treatment of CNS cancers leads to unintended and debilitating impairments in cognition. Radiation-induced cognitive dysfunction is long lasting; however, the underlying molecular and cellular mechanisms are still not well established. Since ionizing radiation causes microglial and astroglial activation, we hypothesized that maladaptive changes in astrocyte function might be implicated in radiation-induced cognitive dysfunction. Among other gliotransmitters, astrocytes control the availability of adenosine, an endogenous neuroprotectant and modulator of cognition, via metabolic clearance through adenosine kinase (ADK). Adult rats exposed to cranial irradiation (10 Gy) showed significant declines in performance of hippocampal-dependent cognitive function tasks [novel place recognition, novel object recognition (NOR), and contextual fear conditioning (FC)] 1 month after exposure to ionizing radiation using a clinically relevant regimen. Irradiated rats spent less time exploring a novel place or object. Cranial irradiation also led to reduction in freezing behavior compared to controls in the FC task. Importantly, immunohistochemical analyses of irradiated brains showed significant elevation of ADK immunoreactivity in the hippocampus that was related to astrogliosis and increased expression of glial fibrillary acidic protein (GFAP). Conversely, rats treated with the ADK inhibitor 5-iodotubercidin (5-ITU, 3.1 mg/kg, i.p., for 6 days) prior to cranial irradiation showed significantly improved behavioral performance in all cognitive tasks 1 month post exposure. Treatment with 5-ITU attenuated radiation-induced astrogliosis and elevated ADK immunoreactivity in the hippocampus. These results confirm an astrocyte-mediated mechanism where preservation of extracellular adenosine can exert neuroprotection against radiation-induced pathology. 
These innovative findings link radiation-induced changes in cognition and CNS functionality to altered purine metabolism and astrogliosis, thereby linking the importance of adenosine homeostasis in the brain to radiation injury.
INTRODUCTION
The adverse neurocognitive side effects of radiotherapy used to treat CNS cancers are unintended and largely unavoidable. It is now well-documented that major changes occur in the brain following exposure to clinical radiotherapy protocols, including severe morphologic and physiological damage that coincides with substantial depletion of CNS stem cell populations (Monje et al., 2002; Mizumatsu et al., 2003; Rola et al., 2004; Limoli et al., 2007; Parihar et al., 2014a). These changes occur along with reductions in dendritic complexity and synaptic density of more mature neurons (Parihar and Limoli, 2013; Parihar et al., 2014b). Consequently, cranial radiotherapy causes substantial decrements in short- and long-term learning and memory function that persist well after exposure (Greene-Schloesser and Robbins, 2012; Greene-Schloesser et al., 2013). We have previously shown in rodent models that exposure to radiation leads to long lasting reductions in neural stem cell (NSC) proliferation, prolonged oxidative stress, inhibition of neurogenesis, elevated CNS inflammation and cognitive dysfunction (Acharya et al., 2009, 2010, 2011; Lan et al., 2012; Parihar et al., 2014a). While these factors are likely to contribute to the disruption of CNS function, our current understanding of the molecular and cellular mechanisms underlying radiation-induced damage in the brain, and how they impact neurocognition, is limited.
Astroglial activation is known to be a major consequence of radiation-induced chronic injury (Zhou et al., 2011; Ballesteros-Zebadua et al., 2012; Osman et al., 2014). Astrocytes form complex networks by contacting thousands of synapses, and any disruption of astrocytic function following exposure to radiation will disrupt the global homeostasis of the brain (Giaume et al., 2010; Pannasch et al., 2011). Astrocytes also constitute a 'sink' for the metabolic clearance of neurotransmitters and the signaling molecule adenosine (Boison, 2007, 2008, 2009, 2013; Halassa et al., 2007). Adenosine is a ubiquitous modulator of synaptic transmission and neuronal activity, exerting its functions via activation of Gi/o protein-coupled A1 and A3 and Gs-coupled A2A and A2B receptors (Boison, 2009, 2013; Boison et al., 2010; Diogenes et al., 2014). Mechanisms and physiologic functions of adenosine receptors have been studied extensively (for details see: Boison, 2013). A shift in the A1/A2A receptor ratio/activation during radiation-induced CNS injury may reinforce the excitatory tone at synapses and contribute to cognitive dysfunction. Therefore, adenosine regulates global brain function under normal physiological settings and acts under pathophysiological conditions to provide neuroprotection.
Due to the widespread distribution of adenosine receptors in the brain, a tight regulation of endogenous levels of adenosine is a necessity (Boison, 2007, 2008, 2009, 2013). Astrocytes play a key role in regulating the levels of extracellular adenosine through cytosolic adenosine kinase (ADK) (Boison, 2007, 2008, 2009, 2013). ADK, which phosphorylates adenosine to 5′-AMP, is considered to be the key metabolic enzyme for the regulation of extracellular adenosine in the brain (Lloyd and Fredholm, 1995). Thus, inhibition or knockout of ADK leads to rapid increases in extracellular adenosine, while overexpression of ADK leads to a reduction of the synaptic adenosine tone. During astrogliosis, ADK is overexpressed, thereby limiting the availability of synaptic adenosine and fostering neurodegeneration (Boison, 2007, 2008, 2009, 2013). Thus, metabolic regulation of adenosine by astroglial-synaptic compartments directly impacts neuronal plasticity. Importantly, relatively little is known about the impact of radiation exposure on adenosine metabolism, astroglial and synaptic function, and its correlation with cognitive function. We hypothesized that adenosine-dependent metabolic regulation is a key mechanism in ionizing radiation-induced neurodegeneration and cognitive dysfunction. Using a specific ADK inhibitor, this proof-of-principle study delineates the protective role of adenosine in attenuating radiation-induced cognitive decline.
Animals, Irradiation and 5-ITU Treatment
All animal procedures described are in accordance with NIH guidelines and were approved by the University of California Institutional Animal Care and Use Committee. Four month old male athymic nude (ATN) rats (Cr:NIH-Foxn1rnu, strain 316; Charles River, San Diego) were maintained in sterile housing conditions (20°C ± 1°C; 70% ± 10% humidity; 12 h:12 h light and dark cycle) and had free access to sterilized diet and water. Rats were divided into 4 experimental groups (8-10 animals per group): 0 Gy receiving vehicle (Con), 0 Gy receiving 5-iodotubercidin, 5-ITU (Con + 5-ITU), 10 Gy head-only irradiation receiving vehicle (IRR) and 10 Gy head-only irradiation receiving 5-ITU (IRR + 5-ITU). Animals showing signs of eye infection and/or neophobic behavior were excluded from the study. In order to augment adenosine signaling in the brain, we decided to use a pharmacological approach based on the well-characterized ADK inhibitor 5-iodotubercidin (5-ITU), which induces mild and transient sedation after brain penetration. The use of a pharmacological agent allows us to test a possible clinical route of therapeutic adenosine augmentation and to prepare for future cell transplantation approaches, which will require the use of the immunocompromised ATN rats used in the present study. The ADK inhibitor 5-ITU (HY-15424, NSC 113939, MedChem Express, Princeton, NJ, USA) was made up fresh daily by dissolving in saline with 2% ethanol (v/v, Sigma, St. Louis, MO, USA). Animals received either vehicle (2% ethanol in saline, i.p.) or 5-ITU daily (3.1 mg/kg, i.p.) for 6 days in order to precondition the brain with neuroprotective adenosine. One hour after the last 5-ITU injection, animals received 0 or 10 Gy head-only X-rays.
For cranial irradiation, animals were anesthetized with isoflurane (5% for induction and 2% for maintenance of anesthesia), placed ventrally on the treatment table (XRAD 320 irradiator, Precision X-ray, North Branford, CT, USA) without restraint, and positioned under a collimated (1.0 cm² diameter) beam for head-only irradiation delivered at a dose rate of 1.10 Gy/min. 5-ITU dosing was based on our previous studies (Fedele et al., 2005; Williams-Karnesky et al., 2013). Neither irradiation nor 5-ITU treatment resulted in a change in the body weight of animals. All behavioral and immunohistochemical analyses were carried out at 1 month post-irradiation.
Behavior Testing
To determine the effect of 5-ITU treatment on radiation-induced alterations in hippocampal- and frontal cortex-dependent cognition, rats from each group were subjected to cognitive testing 1 month after irradiation. Behavioral testing was conducted over 3 weeks and included two open arena, spontaneous exploration tasks (novel place recognition, NPR, and novel object recognition, NOR) followed by a fear conditioning (FC) task. Behavioral testing closely followed our previously described protocols (Acharya et al., 2009, 2015a; Christie et al., 2012) in immunocompromised animals. To avoid infections in our strain of rats, we avoided water-based test paradigms such as the Morris water maze and opted for NOR and NPR as open arena tests. All behavioral testing data were collected by independent, blinded observers and the average of these data was used to compute the results for each task. Animals were first subjected to the NPR task followed by the NOR task. For the NOR and NPR tasks, the 'head direction to zone' function in Ethovision XT (Noldus) was used to track object exploration. An animal was considered to be exploring an object when its head was oriented toward it and its nose was within a 1-cm radius. All experimenters were blinded to the experimental condition and animal identification. Furthermore, an additional observer blinded to all experimental conditions re-scored the behavioral data (video files), thereby independently confirming the automated tracking results of Ethovision XT. The average of both scores was used to compute all behavioral data. We did not observe animals climbing on the objects or any neophobic behavior. NPR and NOR data are presented as a discrimination index (DI), calculated as ([Novel location exploration time/Total exploration time] − [Familiar location exploration time/Total exploration time]) × 100.
A positive index indicates that rats spent more time exploring novelty (i.e., switched objects or locations), while a negative score indicates that rats exhibited little or no preference for exploring novelty. The FC task was administered in three sequential phases over 3 days, including a training phase, a context test and a cue test, as described previously (Christie et al., 2012; Acharya et al., 2015a).
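Assuming total exploration time is the sum of the novel and familiar exploration times, the DI described above reduces to a simple function of those two times (sketch; the example times are made up):

```python
def discrimination_index(novel_time, familiar_time):
    # DI = ([novel/total] - [familiar/total]) * 100; positive values
    # indicate a preference for the novel object or location.
    total = novel_time + familiar_time
    return 100.0 * (novel_time - familiar_time) / total

assert discrimination_index(30.0, 10.0) == 50.0   # strong novelty preference
assert discrimination_index(10.0, 10.0) == 0.0    # no preference
assert discrimination_index(10.0, 30.0) == -50.0  # avoids novelty
```

The index is bounded in [−100, 100] and is independent of how long a given animal explored overall.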
Immunohistochemistry
Following completion of behavioral testing, animals were euthanized and perfused (intracardiac) with 4% paraformaldehyde (Acros Organics) made in phosphate buffered saline (100 mM, pH 7.4, Gibco), brains were cryoprotected (10-30% sucrose gradient) and sectioned coronally (30 µm thick) using a cryostat (Leica Microsystems, Germany). For the dual-immunofluorescence analysis of ADK and glial fibrillary acidic protein (GFAP), the following antibodies were used: rabbit anti-ADK (from the same batch that was previously characterized and validated on knockout tissue; Gouder et al., 2004), mouse anti-GFAP (EMD Millipore), goat anti-rabbit or anti-mouse conjugated with Alexa Fluor 488 or 594 (Life Technologies/Invitrogen) and DAPI (Sigma-Aldrich). Representative sections (3-4 sections/animal, four animals/group) through the middle of the hippocampus were selected for staining and stored in Tris-buffered saline (TBS, 100 mM, pH 7.4, Sigma-Aldrich) overnight. Free floating sections were first rinsed in TBS followed by Tris-A (TBS with 0.1% Triton-X-100, Sigma-Aldrich), blocked with 10% normal goat serum (NGS with Tris-A, Sigma-Aldrich) and incubated overnight in a mixture of rabbit anti-ADK (1:3000) and mouse anti-GFAP (1:500) antibodies prepared in 3% NGS and Tris-A. The next day, the sections were treated with a mixture of goat anti-rabbit Alexa Fluor 488 (1:750) and goat anti-mouse Alexa Fluor 594 (1:500 dilution each) made with Tris-A and 3% NGS for 1 h. The sections were light protected, washed with Tris-A, and counterstained with DAPI nuclear dye (1 µmol/L in TBS, 15 min) for visualization of hippocampal morphology. Immunostained sections were rinsed in TBS and mounted on clean gelatin coated slides using SlowFade Anti-fade Gold mounting medium (Life Technologies/Invitrogen). ADK positive cells were visualized under fluorescence as green and GFAP as red fluorescence.
Confocal Microscopy, Image Processing and 3D Quantification of Immunoreactivity
Immunostained sections were imaged using a laser-scanning confocal microscope (Nikon Eclipse Ti C2) equipped with a 40× PlanApo oil-immersion lens (1.3 NA) and an NIS-Elements AR module (v4.30, Nikon). Thirty z-stacks (1024 bit depth) at 1 µm spacing from three different fields (318 µm × 318 µm area) in each section were imaged from the dentate gyrus. ADK immunofluorescence was imaged with 493 nm excitation and 518 nm emission, and GFAP with 592 nm excitation and 617 nm emission. Images were deconvoluted using the AutoQuant software (version X3.0.4, Media Cybernetics, Rockville, MD, USA) with 1.26867 × 1.26867 × 1 µm spacing, and wavelengths set at 447 nm (DAPI), 510 nm (ADK) and 594 nm (GFAP). An adaptive, 3D blinded deconvolution method was used (Figure 3). AutoQuant automatically creates and stores deconvoluted images for direct import into the Imaris module (version 8.1.2, Bitplane, Inc., Zurich, Switzerland). The 3D algorithm-based surface rendering and quantification of fluorescence intensity for ADK and GFAP were carried out in Imaris at 100% rendering quality. Each channel was analyzed separately. 3D surface rendering detects immunostained puncta (ADK) or cell processes (GFAP) satisfying pre-defined criteria, verified visually for accuracy (Figure 3). A channel mean intensity filter was applied and minimum thresholds were used for all the experimental groups. The pre-set parameters were kept constant throughout the subsequent analysis of ADK and GFAP immunoreactivity. The quantification of astrocyte number (GFAP co-labeled with DAPI) was facilitated using the Co-localization and Spot tools of the Imaris module. ADK and GFAP data were expressed as mean immunoreactivity (percentage) relative to unirradiated controls. The method is summarized in Figure 3.
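Expressing immunoreactivity relative to unirradiated controls is a simple percentage normalization (sketch; the intensity values below are hypothetical, in arbitrary units):

```python
def percent_of_control(group_mean_intensity, control_mean_intensity):
    # Mean fluorescence intensity of a group, expressed as a percentage
    # of the unirradiated control group's mean intensity.
    return 100.0 * group_mean_intensity / control_mean_intensity

control, irradiated = 820.0, 1435.0  # hypothetical mean ADK intensities (a.u.)
assert percent_of_control(control, control) == 100.0      # control = 100%
assert percent_of_control(irradiated, control) == 175.0   # elevated ADK signal
```

The same normalization applies to both the ADK and GFAP channels, since each is referenced to its own control mean.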
Statistical Analysis
Statistical analyses were carried out using GraphPad Prism (v6). Data were assessed for normal distribution, and one-way ANOVA was used to test for significance between control and irradiated groups receiving either vehicle or 5-ITU treatment. When overall group effects were found to be statistically significant, a Bonferroni multiple comparisons test was used to compare the IRR group with the individual experimental groups. For analysis of FC data, repeated measures two-way ANOVA was performed. All analyses considered a value of P ≤ 0.05 to be statistically significant.
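The pipeline described above (one-way ANOVA across the four groups, then Bonferroni-corrected comparisons of IRR against each other group) can be sketched in Python with SciPy; all group values below are invented for illustration, not the study's measurements:

```python
# Illustrative sketch of one-way ANOVA + Bonferroni-corrected
# pairwise comparisons of the IRR group against the other groups.
from scipy import stats

groups = {
    "Con":       [0.42, 0.38, 0.45, 0.40, 0.36, 0.44, 0.41, 0.39],
    "Con+5-ITU": [0.40, 0.37, 0.43, 0.39, 0.35, 0.42, 0.38, 0.41],
    "IRR":       [0.10, 0.05, 0.12, 0.08, 0.02, 0.09, 0.06, 0.11],
    "IRR+5-ITU": [0.35, 0.30, 0.38, 0.33, 0.28, 0.36, 0.31, 0.34],
}

# Overall group effect
f_stat, p_overall = stats.f_oneway(*groups.values())
print(f"overall: F = {f_stat:.2f}, p = {p_overall:.4g}")

# Post hoc only if the omnibus test is significant
if p_overall <= 0.05:
    others = [g for g in groups if g != "IRR"]
    for g in others:
        t, p = stats.ttest_ind(groups["IRR"], groups[g])
        p_adj = min(p * len(others), 1.0)  # Bonferroni correction
        print(f"IRR vs {g}: adjusted p = {p_adj:.4g}")
```

The Bonferroni step simply multiplies each raw p-value by the number of post hoc comparisons (capped at 1), which is the correction Prism applies in this design.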
FIGURE 1 | Adenosine kinase (ADK) inhibition by systemic 5-iodotubercidin (5-ITU) treatment protects against radiation-induced cognitive dysfunction. Adult rats received 5-ITU (3.1 mg/kg, i.p., daily for 6 days) and were irradiated (0 or 10 Gy, head only) 1 h after the last injection. Animals were divided into four experimental groups: 0 or 10 Gy whole brain irradiated receiving either vehicle or 5-ITU (Con, Con + 5-ITU, IRR, IRR + 5-ITU). (A,B) 1 month post-irradiation, animals were tested on spatial and episodic memory retention using the NPR and NOR tasks followed by fear conditioning (FC). The tendency to explore a novel place (NPR) or object (NOR) was derived from the Discrimination Index (DI). (A,B) Whole brain irradiation (IRR) shows significant behavioral deficits on NPR and NOR tasks compared to controls (Con and Con + 5-ITU) as indicated by impaired preference to a novel place or object. Irradiated animals treated with 5-ITU (IRR + 5-ITU) show significant preference for the novelty when compared with irradiated (IRR) animals receiving vehicle. (C) 5-ITU treatment also improves behavior on the hippocampal-dependent contextual FC task. The baseline freezing levels were comparable across groups, and all groups showed elevated freezing behavior following a series of 5 tone-shock pairings. The context test was administered 24 h later, and IRR animals showed significantly decreased freezing compared to controls (Con and Con + 5-ITU). Irradiated animals receiving 5-ITU showed a significant elevation in freezing behavior that was indistinguishable from the Con group. Data are presented as mean ± SEM. (N = 8-10 animals/group). P-values are derived from ANOVA and Bonferroni's multiple comparisons test. ***P < 0.001; **P < 0.01; *P < 0.05 compared with the IRR group.
Novel Place Recognition (NPR)
One month post-IRR, rats were habituated in an open field arena and then tested on the NPR task (Figure 1A). The ability to explore a novel spatial location on the NPR task is dependent on intact hippocampal function (Save et al., 1992; Mumby et al., 2002; Barker et al., 2007; Barker and Warburton, 2011). The total exploration of both objects during the familiarization and test phases was comparable between all groups for this task. The DI was calculated to measure preference or indifference for exploring novelty. A positive DI indicates a preference, or more time exploring the novel place, while a negative DI indicates more time exploring the familiar place. Following a 1 h retention interval between the familiarization and test phases, a significant overall group effect was found for the DI [F (3,28) = 5.88, P = 0.003]. In the test phase, IRR animals spent significantly less time exploring the novel place compared to Con (P = 0.001), Con + 5-ITU (P = 0.01) and IRR + 5-ITU groups (P = 0.05, Figure 1A). Unirradiated animals receiving vehicle (Con) or 5-ITU (Con + 5-ITU) treatment showed comparable novel place exploration. Furthermore, after the 1 h retention interval, irradiated animals treated with 5-ITU (IRR + 5-ITU) did not differ from either Con or Con + 5-ITU animals.
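A common formulation of the DI normalizes the difference in exploration times by the total exploration time; the exact formula used in the study is not stated here, so the sketch below is an assumption:

```python
def discrimination_index(t_novel, t_familiar):
    """DI assumed as (novel - familiar) / (novel + familiar):
    positive values indicate preference for the novel place or object,
    values near zero indicate indifference, negative values indicate
    more time spent on the familiar one."""
    total = t_novel + t_familiar
    if total == 0:
        raise ValueError("no exploration recorded")
    return (t_novel - t_familiar) / total

print(discrimination_index(30.0, 10.0))  # 0.5 -> clear novelty preference
```

Because the index is bounded between -1 and 1 and independent of total exploration time, it lets groups with different activity levels be compared directly.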
These data indicate that ADK inhibition by 5-ITU treatment prior to cranial IRR improved object location exploration on the NPR task as compared to irradiated animals receiving vehicle.
Novel Object Recognition
After NPR testing, rats were habituated and then tested on the NOR task 1 month post-IRR (Figure 1B). Impairment in prefrontal cortex and hippocampal function manifests as an inability to discriminate a novel from a familiar object in the NOR task (Barker et al., 2007; Barker and Warburton, 2011). The total exploration times for both objects did not differ between the experimental groups for this task. In the test phase, a significant overall group difference was found between the four cohorts for the DI [F (3,28) = 8.95, P = 0.001]. After a 5 min retention interval between the familiarization and test phases, Con and Con + 5-ITU rats showed a preference for the novel object (Figure 1B). However, irradiated rats receiving vehicle (IRR) showed a significantly diminished preference to explore the novel object compared to either Con or Con + 5-ITU animals (P < 0.01). The novel object exploration for the Con and Con + 5-ITU animals did not differ. 5-ITU treated irradiated animals (IRR + 5-ITU) exhibited significantly improved performance on the NOR task compared to the IRR group (P = 0.01). The DIs for Con, Con + 5-ITU and IRR + 5-ITU groups were statistically indistinguishable. Thus, 5-ITU treatment improved novel object exploration behavior in irradiated animals.
In summary, for each of the open-arena episodic memory tasks (NPR and NOR), a preference toward novelty (as indicated by the DI) was significantly greater for the Con, Con + 5-ITU and IRR + 5-ITU groups in comparison with the IRR group (Figures 1A,B), demonstrating the protective effect of ADK inhibition. These spontaneous exploration tasks (NPR, NOR) rely on the innate curiosity of an animal to explore a 'new object placement' or a 'new object.' Other factors such as fatigue, depression and/or anxiety may also affect the overall performance on these tasks, although differences in exploration during either the habituation or familiarization phases of these tasks were not found. To account for these possible confounds, animals were subsequently tested in the FC task to interrogate hippocampal function using a task not reliant on spontaneous exploration.
Radiation-Induced Elevation in ADK and Astrogliosis
To assess the impact of cranial irradiation on the status of ADK and astrocytes, immunoreactivity for ADK and GFAP was assessed via dual-immunofluorescence confocal microscopy (Figure 2). Representative confocal micrographs revealed a marked impact of cranial irradiation on ADK and GFAP immunoreactivity. Compared to Con and Con + 5-ITU groups, irradiated animals (IRR) showed increased expression of ADK in the hippocampal granule cell layer (GCL), sub-granular zone (SGZ) and dentate hilus (DH) at 1 month post-IRR (Figure 2B). Concurrently, GFAP staining in unirradiated controls (Con and Con + 5-ITU) shows morphological characteristics consistent with resting astrocytes (Figure 2C). Astrocytes in the IRR group display enlarged cell bodies with thicker and longer processes; this is consistent with hypertrophic, reactive astrocytes, or astrogliosis (Figure 2C).
High resolution, 3D algorithm-based quantification of ADK, GFAP and astrocyte number from the confocal z stacks was facilitated by blinded deconvolution (AutoQuant) and subsequent analysis using the Imaris module (Figure 3). ADK and GFAP quantitative immunofluorescence revealed a significant increase in the immunoreactivity of the IRR group compared to controls (Con and Con + 5-ITU, Figure 4). Cranial radiation exposure (IRR group) significantly elevated ADK levels by 1.5-fold (P = 0.01 vs. Con and P = 0.02 vs. Con + 5-ITU group) in the hippocampus 1 month after exposure (Figure 4A). In parallel, hippocampal GFAP immunoreactivity was elevated ∼2-fold in the IRR group (P = 0.001, Figure 4B) without a significant change in the total number of astrocytes (Figure 4C) at 1 month. However, irradiated animals receiving 5-ITU treatment (IRR + 5-ITU) showed a significant reduction in ADK immunoreactivity (P = 0.02) and astrogliosis (P = 0.001) throughout the hippocampus compared to IRR animals. These qualitative (Figure 2) and quantitative (Figure 4) data demonstrate that pharmacological ADK inhibition can substantially protect against radiation-induced neuropathology.
DISCUSSION
Our findings demonstrate that adenosine's well-known protective role also extends to the attenuation of radiation-induced cognitive impairments. Our previous data demonstrated persistent, long-term cognitive impairments from 1 to 8 months following a single IRR exposure (Acharya et al., 2015b; Parihar et al., 2014b, 2015), and suggest that global disruption of homeostatic functions in the brain combines to compromise cognitive performance over protracted post-IRR intervals. Several neurodegenerative conditions (Alzheimer's, Parkinson's, ALS, epilepsy) share two key features with radiation-induced neuropathology: (i) astrogliosis as a histopathological hallmark and (ii) the onset of cognitive impairment (Bell and Zlokovic, 2009; Palop and Mucke, 2009; Aarsland and Kurz, 2010; Rusina et al., 2010). Astrocytes play a key role in regulating the levels of extracellular adenosine via cytosolic ADK to form a metabolic reuptake system. Our data show that cranial irradiation triggers astrogliosis and ADK overexpression 1 month after exposure, which could lead to enhanced metabolic clearance of adenosine and a resulting adenosine deficiency. Radiation-induced synaptotoxicity, astrogliosis, and adenosine deficiency in turn influence cognitive function (Boison and Aronica, 2015).

FIGURE 2 | Cranial irradiation elevates adenosine kinase (ADK) immunoreactivity and astrogliosis. Immunofluorescence analysis demonstrates that at 1 month post-treatment, compared to controls (Con and Con + 5-ITU), exposure to cranial irradiation (10 Gy) leads to elevated ADK immunoreactivity (A,B; IRR group; ADK, green; DAPI nuclear counterstain, blue) that is reduced to control levels in irradiated animals treated with 5-ITU (IRR + 5-ITU). Representative confocal micrographs show the presence of reactive astrocytic cell bodies (A,C; glial fibrillary acidic protein; GFAP, red) in the hippocampal dentate hilus (DH), sub-granular zone and granule cell layer (GCL), indicating astrogliosis.
IRR + 5-ITU animals showed reduced ADK and GFAP immunoreactivity compared to IRR animals. Scale bar: 30 µm.
Our data critically test our hypothesis that adenosine augmentation prior to irradiation is protective against radiation-induced neuropathology. Animals receiving the ADK inhibitor prior to cranial IRR showed improved behavioral performance, as assessed in three distinct tasks of cognitive function (Figure 1). Treatment with 5-ITU prior to irradiation (IRR + 5-ITU) prevented the development of radiation-induced memory impairments on the NPR and NOR tasks at 1 month post-exposure. In contrast to irradiated rats receiving vehicle (IRR), the DIs of irradiated animals with 5-ITU treatment (IRR + 5-ITU) were indistinguishable from unirradiated controls, where both controls (Con) and 5-ITU injected animals (Con + 5-ITU) showed a significant preference for exploring the novel place or object. Moreover, unirradiated animals receiving 5-ITU were statistically indistinguishable from the controls receiving vehicle. In these preventative studies we chose a time point of analysis (4 weeks after irradiation) that reflects the delayed onset of cognitive dysfunction after radiation therapy (Tofilon and Fike, 2000) in combination with the prophylactic use of an ADK inhibitor. Whether prophylactic ADK inhibition affects cognitive function at different time points post-irradiation, or whether post-irradiation treatment with an ADK inhibitor might be of therapeutic benefit, has not been addressed here but might be interesting to investigate in future work.
The effectiveness of ADK inhibition to prevent radiation-induced behavioral deficits was further confirmed using the contextual FC task (Figure 1C), which engages the hippocampus and does not rely on spontaneous exploration (Phillips and LeDoux, 1992; Winocur et al., 2006). Irradiated animals (IRR) spent significantly less time freezing than Con and Con + 5-ITU cohorts during the context phase of the FC task. These data suggest that irradiation disrupted long-term (24 h) memory for the tone-shock (context) association that has been shown to rely on intact hippocampal function (Phillips and LeDoux, 1992; Winocur et al., 2006). Importantly, animals treated with 5-ITU prior to irradiation (IRR + 5-ITU) exhibited intact freezing behavior, and were statistically indistinguishable from Con and Con + 5-ITU animals in their contextual fear memory. This finding indicates that radiation-induced deficits in hippocampal-dependent long-term memory function may be prevented by ADK inhibitor-induced adenosine augmentation at the time of irradiation. The amount of post-training freezing observed was comparable between all experimental cohorts, suggesting that experimental procedures did not affect initial acquisition of the conditioned freezing response and memory consolidation.

FIGURE 3 | Work flow of quantitative immunofluorescence. Blinded deconvoluted volumes of z stacks (step 1) were uploaded into Imaris (v8.1.2) for quantification of GFAP and ADK immunoreactivity (step 2). 3D algorithm-based surface rendering of individual channels (pseudo-colored, steps 3-4) provides quantitative analyses of fluorescence intensity (GFAP, ADK), and the Spot tool and Co-localization modules provide quantification of the number of astrocytes (steps 4-5). The data bin (step 6) provides quantitative immunofluorescence data for the individual channels. Scale bars: 5 µm.
Similarly, irradiation or 5-ITU treatments did not affect freezing behavior during the cue test phase, indicating intact amygdala-dependent acquisition and memory formation (Phillips and LeDoux, 1992;Winocur et al., 2006). The specific deficits observed in contextual fear memory tasks are consistent with the impairments in the NPR and NOR tasks and suggest that cranial irradiation disrupts hippocampal and frontal cortex function, and pre-IRR treatment with an ADK inhibitor prevents radiation-induced cognitive deficits.
Our data clearly show that pharmacological inhibition of ADK can prevent a decline in cognition following cranial irradiation. In the present study, we demonstrate that in the irradiated brain (IRR), ADK is co-expressed in GFAP-positive reactive astrocytes, which are characterized by a hypertrophic morphology with larger soma and increased length and width of astrocytic stellae compared to unirradiated controls (Figures 2A,B). The total number of GFAP+ astrocytes did not differ between control and irradiated groups (Figure 4C). These findings indicate that astrogliosis is accompanied by ADK overexpression. Quantification of ADK and GFAP immunoreactivity by high-resolution confocal microscopy showed a marked rise in fluorescence intensity in the irradiated hippocampus (Figures 4A,B), whereas pre-treatment with the ADK inhibitor 5-ITU prevented increases in ADK and GFAP immunoreactivity in the irradiated brain. It is likely that pre-treatment with 5-ITU attenuated the initial radiation-induced injury and therefore the very processes that eventually cause increases in GFAP and ADK immunoreactivity, although epigenetic mechanisms (Williams-Karnesky et al., 2013; Boison, 2016) might also be implicated. At the doses used, 5-ITU is not known to exert any cytotoxic or apoptotic effects on astrocytes (Ugarkar et al., 2000).
FIGURE 4 | Continued
immunoreactivity show that compared to controls (Con and Con + 5-ITU), irradiation significantly increased the ADK (A) and astrogliotic cell bodies (B) in the hippocampal dentate hilus, granule cell layer, sub-granular zone and CA3/CA1 subfields. Compared with the irradiated cohort (IRR), animals receiving 5-ITU (IRR + 5-ITU) had significantly lower ADK and GFAP immunoreactivity in all hippocampal subfields. The reduced ADK and GFAP immunofluorescence was comparable to controls (Con). The number of astrocytes per hippocampal section did not change after irradiation or 5-ITU treatment (C). All data are presented as mean ± SEM. (N = 4 animals per group). *P < 0.02; **P < 0.01; ***P < 0.001 compared with the IRR group (ANOVA and Bonferroni multiple comparisons test).
Chronic inflammation is a hallmark of the irradiated brain that is linked with cognitive decline (Zhao and Robbins, 2009; Moravan et al., 2011; Belarbi et al., 2013; Acharya et al., 2014c; Parihar et al., 2014a). Our results suggest that: (1) increased ADK expression is associated with astrogliosis in the irradiated brain, and the resulting ADK-induced adenosine deficiency may in turn contribute to radiation-induced neuropathology; and (2) chronic upregulation of ADK in the irradiated brain can be prevented by the transient prophylactic administration of an ADK inhibitor, a finding in line with the absence of radiation-induced cognitive impairments. Past studies have shown that 5-ITU is effective at increasing extracellular adenosine levels in the brain (Pazzagli et al., 1995; Boison and Stewart, 2009; Boison, 2013), which supports our current findings suggesting that inhibition of ADK prior to irradiation is neuroprotective through a similar mechanism. Inhibition of ADK reduced synaptotoxicity in the hippocampus by modulating adenosine receptors, indicating an important role of ADK in the regulation of basal extracellular adenosine (Pazzagli et al., 1995; Gouder et al., 2004; Boison and Stewart, 2009; Boison, 2013). (3) Lastly, the contribution of ADK expression to radiomimetic neuropathology would favor the development of adenosine-based therapeutic interventions, such as stem cell therapies to augment adenosine signaling locally via transplanted adenosine-releasing cells.
CONCLUSION
Our experimental data support the overall concept that a combination of neurotoxicity, astrogliosis, and elevation of ADK, resulting in a deficiency of extracellular adenosine, can directly cause a broad spectrum of comorbid symptoms that are collectively present across several neurological conditions (Aronica et al., 2013; Boison and Aronica, 2015). If radiation-induced adenosine deficiency, triggered by ADK upregulation, is sufficient to precipitate neurocognitive impairments, then therapeutic adenosine augmentation (molecular, cellular or pharmacological) should ameliorate those symptoms. More work is needed to assess whether a neuroprotective treatment interferes with the therapeutic efficacy of radiotherapy or whether transient treatment with ADK inhibitors post-irradiation is as effective as pre-irradiation treatment against radiation-induced CNS dysfunction. ADK inhibitors represent some of the most promising adenosine elevating agents (Kowaluk et al., 2000; McGaraughty et al., 2005; Boison, 2013). Our experimental data support the concept that such therapeutic approaches might be useful as prophylactic pre-treatment to avoid radiation-induced cognitive impairment.
"year": 2016,
"sha1": "e89ea5aed358a398b4135ec6e67fa4ec11d4539b",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnmol.2016.00042/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d183daba7fec45c364ec769f616373c067d5922b",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
Placenta-mediated pregnancy complications in women with a history of late fetal loss and placental infarction without thrombophilia: risk of recurrence and efficacy of pharmacological prophylactic interventions. A 10-year retrospective study
Abstract Purpose To evaluate the risk of recurrence of severe placenta-mediated pregnancy complications and compare the efficacy of two different anti-thrombotic regimens in women with a history of late fetal loss without thrombophilia. Patients and methods We performed a 10-year retrospective observational study (2008–2018) analyzing a cohort of 128 women who suffered a fetal loss (>20 weeks of gestational age) with histological evidence of placental infarction. All the women tested negative for congenital and/or acquired thrombophilia. In their subsequent pregnancies, 55 received prophylaxis with acetylsalicylic acid (ASA) only and 73 received ASA plus low molecular weight heparin (LMWH). Results Overall, one-third of all pregnancies (31%) had adverse outcomes related to placental dysfunction: pre-term births (25% <37 weeks, 5.6% <34 weeks), newborns with birth weight <2500 g (17%), and newborns small for gestational age (5%). The prevalence of placental abruption, early and/or severe preeclampsia, and fetal loss >20 weeks was 6%, 5%, and 4%, respectively. We found a risk reduction for combination therapy (ASA plus LMWH) compared with ASA alone for delivery <34 weeks (RR 0.11, 95% CI: 0.01–0.95, p = 0.045) and a trend for the prevention of early/severe preeclampsia (RR 0.14, 95% CI: 0.01–1.18, p = 0.0715), while no statistically significant difference was observed for composite outcomes (RR 0.51, 95% CI: 0.22–1.19, p = 0.1242). An absolute risk reduction of 5.31% was observed for the ASA plus LMWH group. Multivariate analysis confirmed a risk reduction for delivery <34 weeks (RR 0.32, 95% CI 0.16–0.96, p = 0.041). Conclusion In our study population, the risk of recurrence of placenta-mediated pregnancy complications is substantial, even in the absence of maternal thrombophilic conditions. A reduction of the risk of delivery <34 weeks was detected in the ASA plus LMWH group.
Introduction
Placenta-mediated complications (including early/severe preeclampsia, placental abruption, small-for-gestational-age (SGA) fetus, and/or low birth weight) related to fetal loss represent an important health issue, with serious repercussions on women's health and on the unborn child. The recurrence risk in a patient with a history of placenta-mediated obstetrical complications is about 30-50% [1-4]; thus, these patients require careful follow-up and effective interventions to prevent further obstetrical complications. The association between thrombophilia and fetal loss raised expectations about the potential clinical efficacy of anti-thrombotic prophylaxis in this setting, but limited evidence has been reported so far and no agreement among experts has been reached about the optimal regimen. In particular, the role of this prophylactic treatment in patients without congenital and/or acquired thrombophilia is not clear.
Our study aimed to investigate the risk of recurrence of placenta-mediated complications in women with a history of fetal loss after the 20th week in a previous pregnancy and without evidence of a congenital or acquired thrombophilic state. Furthermore, in the same group of patients, we investigated the efficacy of two prophylactic pharmacological regimens (ASA alone and ASA + LMWH) in preventing placenta-mediated complications and improving obstetrical outcomes in a subsequent pregnancy.
Materials and methods
We performed an observational retrospective study analyzing women who delivered their first child between 2008 and 2018 at the Sant'Anna Hospital, Turin (Italy), a large maternity hospital with over 6,000 annual deliveries, recognized as a referral center for complicated pregnancies and premature newborns. Patients were identified and recruited using antenatal and obstetric databases. Case notes and laboratory results were reviewed by a team of physicians with maternal-fetal medicine expertise to select patients (N = 128) according to the following inclusion criteria: 1. Fetal loss at >20 weeks, associated with one or more of the following conditions: early-onset or severe preeclampsia, HELLP syndrome, placental abruption, and fetal growth restriction (FGR) [5,6]. According to NICE guidelines [7], severe preeclampsia was defined as a form of preeclampsia with severe hypertension that does not respond to treatment or is associated with ongoing or recurring symptoms (severe headaches, visual scotomas, nausea or vomiting, epigastric pain, oliguria) as well as progressive deterioration in laboratory blood tests such as rising creatinine or liver transaminases or falling platelet count, or failure of fetal growth or abnormal Doppler findings. 2. Negative testing for both congenital (antithrombin deficiency, protein C and protein S deficiency, factor V R506Q polymorphism, factor II G20210A polymorphism) and acquired thrombophilia (lupus anticoagulant and anticardiolipin antibodies IgG and IgM, anti-beta 2 GP1 IgG and IgM, essential thrombocythemia). The tests were carried out after the index pregnancy diagnosis and repeated during the 1st trimester to screen for acquired thrombophilia.
At our institution, during the study timeframe, this subset of patients was treated either with low-dose acetylsalicylic acid (ASA) plus low molecular weight heparin (LMWH) (n = 73) or with low-dose ASA alone (n = 55) for the prevention of placenta-mediated complications. All patients received the anti-thrombotic treatment from the beginning of the subsequent pregnancy (positive pregnancy test), and it was stopped at the 34th week of gestational age. The specific regimen was selected by the attending obstetrician on a case-by-case basis. In particular, 3 physicians took care of these patients within the high-risk pregnancy service of our hospital during the time period of the present study and prescribed all the anti-thrombotic treatments.
ASA was administered at the dose of 1 mg/kg/day and subcutaneous LMWH (enoxaparin sodium) at a prophylactic dose of 4,000 IU or 6,000 IU daily, according to patient weight (< or >90 kg). In our institute, this dosage of ASA was chosen because of the available evidence suggesting that 1-2 mg/kg results in a dose-dependent inhibition of thromboxane synthesis in platelets, without significantly affecting endothelial prostacyclin formation [9].
All women gave their consent to antithrombotic prophylaxis and for the use of clinical data. The Institutional Review Board approved the study (protocol n 0108566) and results are reported according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines for observational studies [5]. All clinical investigations have been conducted according to the principles expressed in the Declaration of Helsinki and following Good Clinical Practice rules.
All women underwent a placental biochemical function test early in the 2nd trimester, a serial biophysical assessment with ultrasound imaging including uterine and umbilical artery Doppler at the 20th week, monthly biochemical tests, and clinical surveillance; surveillance was increased if FGR was suspected.
All women were followed as outpatients throughout pregnancy to delivery and, in case of complications, admitted to the hospital. For every pregnancy, we collected time and mode of delivery, birth weight, APGAR score, and diagnosis of preeclampsia, thrombosis, placental abruption, stillbirth, miscarriage, or fetal demise. The newborn weight percentile was reported according to the standard Italian birth weight charts, adjusted for sex, parity, and gestational age [6]. SGA was defined as a weight below the 10th percentile. A composite adverse outcome was defined based on five clinically and equally meaningful components: prematurity <34 weeks, severe or early-onset preeclampsia, SGA <10th percentile, pregnancy loss >20 weeks, and placental abruption.
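The five-component composite outcome defined above amounts to an any-of flag over the individual endpoints; a minimal sketch, in which the field names are hypothetical and not the study's actual database schema:

```python
def composite_adverse_outcome(p):
    """True if the pregnancy met any of the five composite components."""
    return any([
        p["delivery_weeks"] < 34,               # prematurity <34 weeks
        p["severe_or_early_preeclampsia"],      # severe/early-onset PE
        p["birth_weight_percentile"] < 10,      # SGA <10th percentile
        p["fetal_loss_after_20_weeks"],         # pregnancy loss >20 weeks
        p["placental_abruption"],
    ])

example = {
    "delivery_weeks": 36,
    "severe_or_early_preeclampsia": False,
    "birth_weight_percentile": 8,   # SGA component met
    "fetal_loss_after_20_weeks": False,
    "placental_abruption": False,
}
print(composite_adverse_outcome(example))  # True
```

Counting pregnancies for which this flag is true gives the proportion compared between treatment arms with the chi-square test described below.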
In all cases of fetal loss, a pathological examination of the placenta and fetus was performed.
The single-center follow-up helped reduce variability bias in the assistance provided during pregnancy and delivery. A bias in treatment starting time was excluded, given that all patients started treatment within the first trimester. Moreover, all patients took the therapy up to the endpoint of 34 weeks of gestational age, except in cases where delivery occurred before this cutoff.
The associations between the allocated treatment regimen (ASA alone vs ASA plus LMWH) and maternal age, BMI >30, and ethnic group were analyzed to exclude potential biases.
The statistical analysis was carried out on the study group as a whole and on the ASA- versus ASA + LMWH-treated patients. Continuous variables were analyzed with Student's t-test. Differences in dichotomous outcomes between the two study groups were analyzed with the use of the chi-square test or Fisher's exact test when the anticipated cell frequencies were below five. We compared the proportion of patients experiencing one or more of the composite outcome events using an unadjusted chi-square test of proportions. To determine the effect of key prognostic factors, a univariate analysis was performed and a multivariate logistic regression model was developed comparing outcomes as a whole and within the two groups. Descriptive data are presented as percentages or means, standard deviation (SD), and standard error of the mean (SEM), to facilitate comparison. All statistical tests were two-sided and significance was set at a p-value <0.05. Confidence intervals not including 1.00 were regarded as statistically significant.
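The relative risks with 95% confidence intervals reported in the Results can be derived from 2×2 counts; a minimal sketch using the Katz log-normal approximation, with hypothetical event counts rather than the study's actual numbers:

```python
import math

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of group A vs group B with a Katz-method 95% CI
    (log-normal approximation of the standard error of ln(RR))."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical counts: 2/73 events with ASA+LMWH vs 8/55 with ASA alone
rr, lo, hi = relative_risk(2, 73, 8, 55)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

As noted above, a confidence interval that does not include 1.00 is read as statistically significant for the RR.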
Results
Overall, a 94% live birth rate was observed among the subsequent pregnancies. About two-thirds (69%, 88/128) of them had an uneventful clinical course with the delivery of a healthy newborn, while 40 (31%) pregnancies had at least an adverse outcome related to placental dysfunction.
The two treatment groups did not differ in their clinical and sociodemographic features, except for a higher proportion of previous spontaneous abortions <12 weeks in the group treated with ASA + LMWH (p = 0.02) (Table 1).
Analysis of maternal/fetal outcomes according to treatment regimen
A risk reduction in favor of combined prophylaxis was observed in terms of prematurity <34 weeks (RR 0.11, 95% CI: 0.01-0.95, p = 0.0455), while a trend was detected for early/severe preeclampsia (RR 0.14, 95% CI: 0.01-1.18, p = 0.0715). Mean birth weight and gestational age at delivery of live births were similar between the two groups. There was no difference in all other fetal outcomes (Table 2). A multivariate logistic regression analysis partially confirmed the results of the univariate analysis. In particular, the risk reduction for prematurity <34 weeks remained significant after multivariate analysis (RR 0.32, 95% CI 0.16-0.96, p = 0.041). All premature births <34 weeks were C-sections for placenta-mediated complications (5 for early/severe preeclampsia, 2 for placental abruption).
Discussion
Successful pregnancy outcomes depend on the efficiency of placental circulation. Placental vascular thrombosis and abnormal placental development are at least partly responsible for placenta-mediated pregnancy complications, and the risk of recurrence is substantial. Women with prior severe preeclampsia have a 25% to 65% risk of recurrent preeclampsia, a 3% risk of placental abruption, and a 10% risk of SGA <10th percentile [1-4,10].
Our study focused on a very specific subset of patients with common clinico-pathological features: obstetric history of fetal loss >20 weeks, histologically proven placental infarction and no known predisposing factors for the development of thrombosis. However, it should be noted that this latter finding should not lead physicians to overlook the increased risk of recurrence of placenta-mediated complications in subsequent pregnancies as shown by our data. A detailed obstetric history provides a high predictive value of future complications and is a simple and useful tool for clinicians. Unfortunately, although multiple potential antithrombotic prophylactic treatments are currently available, their efficacy seems to be limited and the choice of the optimal regimen for these women has not yet been adequately addressed and it is largely empirical.
Previous data show that low-dose aspirin administered from the 1st trimester is associated with small relative risk reductions in patients with prior preeclampsia, its efficacy being higher in those women who experienced early preeclampsia and FGR [10,11].
LMWH appears to be another promising preventive therapy for these serious pregnancy complications and it may share with ASA some additional mechanisms of action unrelated to the anticoagulant activity, including modulation of trophoblast proliferation and invasiveness, pro-angiogenic effects [12,13] and suppression of complement pathway activation [14,15].
Several studies have tried to demonstrate the usefulness of anticoagulant prophylaxis to prevent adverse obstetrical outcomes, but the results are heterogeneous and based on limited evidence. A meta-analysis of the main currently available randomized controlled trials (RCTs) found that prophylactic LMWH might be useful only in a subgroup of patients with prior late and severe placenta-mediated pregnancy complications [15,16].
A possible benefit of the combination of ASA plus LMWH has been reported: some recently published RCTs and systematic reviews suggest either a significant reduction of early-onset recurrent hypertensive disorders in thrombophilic women or a trend toward efficacy [17,18]. It is plausible that the combination of ASA plus LMWH may be advantageous in patients with more severe placenta-mediated pregnancy complications, but the data are still heterogeneous and conflicting [19,20].
Our study observed an overall 31% risk of placenta-mediated pregnancy complications in this subset of patients despite a live birth rate of over 90%. Given data from previously published studies, it was no surprise that the preterm delivery of SGA infants is an important predictor of the subsequent risk of stillbirth, preeclampsia, and preterm delivery [3]. In particular, we found a relatively high number of SGA infants in this cohort of patients, probably due to the high intrinsic risk of these pregnancies, despite their not showing evidence of the most common acquired and congenital thrombophilic alterations.
The group of patients treated with ASA plus LMWH showed a significant advantage only in terms of the prevention of preterm delivery <34 weeks, and this finding was also confirmed by multivariate analysis. A trend for the prevention of preeclampsia development was detected in the univariate analysis, while no differences were observed in terms of adverse fetal outcomes. Interestingly, all deliveries <34 weeks were by Cesarean section because of placenta-mediated complications (early/severe preeclampsia or placental abruption), supporting a preventive role of combination therapy with ASA plus LMWH against this event.
Our study showed a marginal, nonsignificant advantage for the combined treatment in terms of complicated pregnancies (any complication), with an absolute risk reduction of 5.31%.
The main strength of our study is that we selected a homogeneous, very high-risk group of non-thrombophilic women with histologically confirmed placenta-mediated pregnancy complications. This is relevant because the results of other studies derived from the analysis of heterogeneous groups of women, including both thrombophilic and non-thrombophilic patients, with prior placenta-mediated pregnancy complications of varying severity [21][22][23]. Finally, the same expert pathologist performed all the fetal autopsies and histological examinations of the placenta samples (GB). The main limitation of the present study is intrinsic to its retrospective non-randomized design, leading to potential biases in terms of treatment allocation. Concerning this issue, we observed a higher rate of abortions at <12 weeks of gestational age in the combined treatment group, a characteristic which could potentially have affected treatment selection. Despite these limitations, the study addresses a relevant real-world issue and tackles the unmet clinical need of establishing effective preventive treatments for this rare population.
Conclusions
Close surveillance is mandatory in this specific group of patients since the risk of recurrence and adverse obstetric outcomes remains higher during subsequent pregnancies. The combination of ASA + LMWH compared with ASA alone seems to provide a benefit in terms of risk reduction of prematurity <34 weeks. However, these results should be interpreted with caution due to the study design and the low number of analyzed events. Despite the rarity of the studied population, it would be highly desirable to obtain data from prospective studies to clarify the best therapeutic strategy.
Association of Optic Neuritis with Neuromyelitis Optica Spectrum Disorder and Multiple Sclerosis in Korea
Purpose: To describe the clinical characteristics and course of optic neuritis (ON) and its association with neuromyelitis optica spectrum disorder (NMOSD) and multiple sclerosis (MS) in Korea. Methods: In this retrospective case series, 125 eyes of 91 Korean patients with ON were included. The medical documents of adult patients with ON were retrospectively reviewed. Patients were assigned to idiopathic ON, NMOSD, and MS groups according to the presence of an association with NMOSD or MS for subgroup analysis. Clinical characteristics, disease course, and visual and systemic prognosis were analyzed. Results: During the mean follow-up of 3.7 years, 73 patients were diagnosed as idiopathic ON, 14 patients were diagnosed as NMOSD, and four patients developed definite MS. At the final visit, 13 (13%) of 100 eyes with idiopathic ON, nine (43%) of 21 eyes with NMOSD, and one (25%) of four eyes with MS had a severe visual loss of 20 / 200 or less. The mean Expanded Disability Status Scale score was 3.1 ± 1.5 in the NMOSD group and 1.8 ± 1.5 in the MS group at the final visit. In the NMOSD group, 50% of patients showed severe visual loss in at least one eye or were unable to ambulate without assistance at the final visit (5.3 ± 4.4 years after the initial episode of ON). Conclusions: Fourteen percent of patients showed positive results on the NMO-immunoglobulin G test, and 50% of patients with NMOSD showed severe visual loss in at least one eye or were unable to ambulate without assistance. The proportion of MS was relatively low in Korean ON patients.
Optic neuritis (ON) is one of the common optic neuropathies in young adults, with an annual incidence of 5.1 / 100,000 in industrialized countries [1]. It is also a common manifestation of multiple sclerosis (MS). Various reports have shown differences in the clinical expression of ON in patients of Western or Asian countries [2][3][4][5][6][7]. In recent decades, neuromyelitis optica (NMO), a demyelinating neurologic disorder, has been found to be strongly associated with ON, especially in Asians. It is a severe idiopathic inflammatory disease of the central nervous system that predominantly affects the optic nerves and spinal cord [8]. Many studies have shown that NMO is more prevalent in Blacks, Asians, and Indians compared to Whites [9][10][11][12][13]. Because of its poor prognosis, early and aggressive treatment is important for patients with NMO. However, there has been a limited number of studies regarding the proportion of NMO and its prognosis in Asian patients with ON. In addition, serologic tests for NMO are largely performed according to the discretion of each clinician. Clinical studies on patients who were diagnosed as NMO through serologic tests were largely limited in Korea. The objective of this study was to characterize the clinical presentation of ON in Korean patients and its association with NMO and MS over 10 years.
Materials and Methods
A retrospective review was performed for a group of ON patients who were followed-up at the Neuro-ophthalmology Department of Samsung Medical Center from March 2007 to August 2016 according to the tenets of the Declaration of Helsinki. All patients who were initially referred from the Neurology Department were not included in this review to avoid inclusion of patients who were already suspected of having a neurologic disorder, such as NMO spectrum disorder (NMOSD) or MS, at initial presentation. The study was approved by the institutional review board of Samsung Medical Center (2018-07-144). Informed consent was not obtained in this study. Identifying information about participants was not presented in this study.
All patients enrolled had experienced a clinical episode of ON. A history of acute ON episodes was confirmed by medical chart review. Diagnosis of an ON episode was based on clinical symptoms such as gradual visual loss over several days with or without pain on eye movement, documented findings of decreased visual acuity, visual field defect, loss of color vision, relative afferent pupillary defect in case of unilateral involvement, and compatible fundus examination with or without signs of abnormal optic nerve enhancement on magnetic resonance imaging (MRI). Patients with any of the following conditions were excluded: age of less than 20 years, age of 70 years or more, history of any form of neurological impairment or previous diagnosis of neurologic disorder such as NMOSD or MS prior to the first ON episode, any history of systemic vasculitis, malignancy, or other ocular pathologies that could affect visual function including retinal disease and optic neuropathies other than ON such as glaucoma.
Patients with ON were assigned to the idiopathic ON, NMOSD, or MS group according to the presence of an association with NMOSD or MS for subgroup analysis.
NMOSD was defined as suggested by Wingerchuk et al. [14]. The diagnosis of MS was based on the McDonald criteria [15].
All patients underwent a full ophthalmologic assessment including visual acuity test, color vision test with Ishihara charts, slit lamp biomicroscopy, fundus examination, and routine laboratory tests including complete blood cell count, electrolytes, chemistry profiles, erythrocyte sedimentation rate, C-reactive protein, serologic tests for human immunodeficiency virus and syphilis, and fluorescent antinuclear antibody. The visual field was tested using a Humphrey Field Analyzer with a 30-2 SITA-standard protocol. The test for NMO-immunoglobulin G (IgG) was performed either immediately after serum sampling or after storage at -70°C. NMO-IgG was measured using a commercially available cell-based immunofluorescence assay kit from Euroimmun AG (Lubeck, Germany) according to the manufacturer's instructions [16]. Brain and orbit MRI of participants were reviewed. Whether the protocol used for image acquisition and image quality was appropriate to analyze optic nerves was determined for every image by one of our authors (KAP).
If the presentation of ON recurred < 30 days from the first attack, it was considered to be a relapse. If it occurred ≥30 days later, it was considered a new attack or a recurrence as defined by McDonald et al. [17]. The disease severity in patients with NMOSD or MS was assessed via the Expanded Disability Status Scale (EDSS) score [18].
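The 30-day relapse/recurrence convention above can be encoded directly; a trivial sketch, with the function name ours:

```python
def classify_episode(days_since_first_attack):
    """Classify a repeat ON episode by the 30-day cutoff used in the text."""
    # < 30 days from the first attack -> relapse; >= 30 days -> new attack/recurrence
    return "relapse" if days_since_first_attack < 30 else "recurrence"

print(classify_episode(14), classify_episode(90))  # -> relapse recurrence
```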
Statistical analyses were performed using SAS ver. 9.4 (SAS Institute, Cary, NC, USA). To compare baseline characteristics and visual outcomes between idiopathic ON, NMOSD, and MS groups, a Generalized Estimating Equation with an exchangeable working correlation matrix was used for repeated measures from the same subject. To assess factors affecting severe visual loss, univariate and multivariate analyses were performed.
Results
In this study, 125 eyes of 91 patients were included. The demographics and clinical characteristics of the patients in the present study were compared with those reported in other Asian studies and in the Optic Neuritis Treatment Trial (ONTT) (Table 1) [2,4,6,7,18]. Their mean age was 43 ± 13 years (range, 20 to 69 years). Fifty-seven (63%) ON patients were female. Ocular pain was noted in 61 (56%) eyes. Disc swelling was present in 60 (53%) eyes. The mean follow-up duration was 3.7 ± 3.3 years (range, 0.5 to 16 years). Twenty-seven (30%) patients had ON in both eyes simultaneously or sequentially during the follow-up period and 19 (21%) patients presented with bilateral and simultaneous involvement at initial presentation. The initial visual acuity ranged from 20 / 20 to no light perception. Sixty-four (51%) eyes presented with an initial visual acuity of 20 / 200 or worse.
Of 91 patients, 13 (14%) had positive results in the NMO-IgG test. Among the 49 patients who underwent an NMO-IgG test, 13 (27%) were positive. Of the 42 patients who visited after 2011, when we first started to check NMO-IgG routinely in ON patients, seven (17%) showed positive results; however, seven of the patients did not undergo an NMO-IgG test. Among the seven patients who did not undergo an NMO-IgG test, six recovered with a normal vision of 20 / 20 and a normal visual field without recurrence. Only one patient with unilateral involvement had no light perception vision during the 6 years of follow-up without any improvement. Among 35 patients who underwent an NMO-IgG test during the initial visit after 2011, the test positivity was 20% (7 / 35) (Fig. 1).
Images of brain MRI were available for 85 patients. The images revealed that periventricular lesion (PVL) was present in 11 (13%) patients at the initial presentation. Regarding optic nerve enhancement, 92 eyes were evaluated with orbit MRI within 2 weeks of onset using a contrast enhancement and fat suppression technique with a magnified view of the optic nerve. Of these 92 eyes, 68 (74%) showed optic nerve enhancement.
During the follow-up, 73 (80%) patients were diagnosed as idiopathic ON and 14 (15%) were diagnosed as NMOSD. Four (4%) developed definite MS based on clinical grounds. Clinical characteristics in patients according to subgroups are shown in Table 2. There were significant differences in initial visual acuity and the presence of PVL ( p = 0.008 and p = 0.023, respectively) according to ON subgroups. Initial visual acuity was the worst in patients with NMOSD. The number of eyes with an initial visual acuity of 20 / 200 or worse was 45 (46%) out of 97 with idiopathic ON, 16 (76%) out of 21 ON patients with NMOSD, and four (100%) out of four ON patients with MS. The presence of PVL was found more often in MS patients (75%).
All but two patients received intravenous methylprednisolone sodium succinate (250 mg four times daily for 3 to 5 days, followed by oral prednisolone [1 mg/kg daily] for 11 days). One patient did not receive any treatment because disease severity was mild, and the other patient was pregnant. If the NMO-IgG test was positive, long-term immunosuppressive treatment was initiated at the Neurology Department after tapering oral prednisolone more slowly than in the above-mentioned regimen. Patients who developed clinically definite MS during follow-up were administered disease-modifying treatment. Visual outcomes according to ON subgroups are shown in Table 3. There were no significant differences in the final logarithm of the minimum angle of resolution of visual acuity among the ON subgroups. There was a marginally significant difference in the proportion of severe visual loss with a visual acuity of 20 / 200 or less at the final visit among ON subgroups (p = 0.052). Fourteen (14%) eyes in patients with idiopathic ON, nine (43%) eyes in patients with NMOSD, and one (25%) eye in patients with MS had severe final visual loss. Factors affecting severe visual loss at the final visit were analyzed (Table 4). Significant predictors of final severe visual loss included initial visual acuity, initial color vision, and initial mean deviation on visual field test (p < 0.001, p = 0.010, and p = 0.007, respectively). Variables with a p-value of less than 0.1 were included in the multivariate analysis. Initial color vision and initial visual field loss were not included in the final multivariate analysis because of their close relationship with visual acuity. Only the initial visual acuity was found to be a significant factor affecting severe visual loss at the final visit based on multivariate analysis (p < 0.001).
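For reference, the logMAR scale mentioned above (logarithm of the minimum angle of resolution) is simply the base-10 logarithm of the reciprocal of the decimal acuity; a one-line conversion from a Snellen fraction, with the function name ours:

```python
import math

def snellen_to_logmar(numerator, denominator):
    """logMAR = log10(1 / decimal acuity); e.g. 20/20 -> 0.0 and 20/200 -> 1.0."""
    return math.log10(denominator / numerator)

print(snellen_to_logmar(20, 200))  # -> 1.0
```

On this scale, larger values mean worse acuity, and the 20 / 200 severe-visual-loss threshold used in the text corresponds to logMAR 1.0.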
Among the four patients with MS, three had PVL at the initial presentation. One patient who had no PVL developed neurologic deficit and PVL on MRI and was diagnosed as MS later. Among 14 patients with NMOSD, one patient had multiple PVL at the initial presentation.
Recurrence was noted in 31 (34%) patients. The mean time interval from initial presentation to recurrence was 3.3 ± 6.3 years. Twenty (22%) patients experienced recurrence of ON in either eye or both eyes within 1 year, three (3%) patients had a recurrence at 1 to 3 years after the onset of ON, four (4%) patients had recurrence at 10 years to 24 years after the onset of ON. In one (1%) patient, the exact duration between recurrence and the onset of ON was unclear. Among subgroups, the recurrence rate was 32% in patients with idiopathic ON and 57% in ON patients with NMOSD. There was no recurrence among four patients with MS during the follow-up period. The time interval from the first episode to the first recurrence was 3.3 ± 6.3 years in patients with idiopathic ON. It was 3.7 ± 6.6 months in ON patients with NMOSD.
Six (43%) of 14 ON patients with NMOSD experienced one or more attack of longitudinally extensive transverse myelitis at various degrees within 7 ± 8 months (range, 0 to 24 months) after the first ON episode. The mean EDSS in ON patients with NMOSD was 3.1 ± 1.5 (range, 0.0 to 7.0) at the final visit (5.3 ± 4.4 years after the initial onset of ON). One patient had a disability affecting ambulation at the final visit. The patient used a wheelchair unaided (EDSS 7.0). Two (14%) patients had permanent legal blindness in both eyes and 3 (21%) patients had permanent legal blindness in one eye. In total, seven (50%) patients with NMOSD showed severe visual loss in at least one eye or were unable to ambulate without assistance at the final visit. Three (75%) of four ON patients with MS experienced various degrees of decrease in sensory or pyramidal functions within 11 ± 15 months (range, 1 to 29 months) after the initial onset of ON. The mean EDSS in ON patients with MS was 1.8 ± 1.5 (range, 0.0 to 3.0) at the final visit (4.9 ± 3.8 years after the initial onset of ON). One patient showed permanent blindness in one eye at the final visit.
Discussion
Clinical characteristics of patients in this study were similar to those of previous studies in other Asian countries, including a lower percentage of the female gender, lower percentage of patients with pain, higher percentage of patients with severe initial visual loss, and lower percentage of patients with PVL and MS at presentation compared to data in the ONTT [2][3][4][5][6][7][19].
In this study, retrobulbar optic nerve enhancement was noted in 74% of eyes, which was greater than that (33%) reported in a study by Wang et al. [4]. The difference could be partly due to differences in the characteristics of participants. It might also be related to the location of ON [20].
In the report by Wang et al. [4], disc swelling, which suggested anterior ON, was present in 65% of patients. In the present study, disc swelling was noted in 53% of patients. This difference in the presence of disc swelling could be associated with the different age distribution of the participants between the two studies. Some studies conducted on Asians have analyzed the results of pediatric and adult ON patients together. Pediatric ON patients tend to have more anterior ON than adult ON patients [21].
During the follow-up in this study, only 4% of ON patients developed definite MS on clinical grounds, consistent with other Asian results showing a relatively low percentage of MS in ON patients [2][3][4][5][6]. The proportion of patients with NMO-IgG antibody was 14%. The proportion of NMOSD was much higher than that of MS in the participants of this study. Although no specific study has presented the proportion of NMOSD among Asian ON patients, many studies have shown that NMO is more prevalent in Black, Asian, and Indian populations compared to Whites [9][10][11][12][13]. The reason for this difference in etiology might be genetic and environmental influences [7]. With regard to the poor prognosis and need for long-term immunosuppression in NMO, the high proportion of NMOSD in patients with ON in our study population was of concern. It is noteworthy that the proportion was among patients whose first presentation was ON. It is also remarkable that 21% of NMOSD patients did not show bilateral presentation or severe visual loss at the initial presentation. It is known that early diagnosis and intensive treatments are necessary for patients with NMO. Our data suggest that it is worthwhile performing an NMO-IgG test for every patient at their first attack of ON to allow the initiation of early treatment. The proportion of patients with PVL at the initial presentation was 13%, which was similar to that (14%) reported in Japan [2]. Among our study population, 36% of patients with PVL developed MS or NMOSD during the follow-up period. Although the proportion of patients with PVL on MRI at the initial presentation was relatively low in our study population, the presence of PVL in Asian patients also could be regarded as a warning sign for a possible association with other neurologic disorders such as MS and NMOSD.
Recurrence was noted in 34% of ON patients in this study. Most cases that recurred (65%) did so within 1 year. However, recurrence was noted even 24 years after the first episode of ON in one case. The recurrence rate was 32% in patients with idiopathic ON and 57% in ON patients with NMOSD. Probably due to the small number included, there was no recurrent case in the MS group. In our study, 26% of patients with recurrent ON developed NMOSD during the follow-up period. Based on these results, an NMO-IgG test needs to be performed in recurrent cases.
According to the ONTT data [19,22], the prognosis of ON is generally good, with 95% of patients having visual recovery ≥20 / 40. In this study, 73% of eyes achieved 20 / 40 visual acuity or better and 19% attained 20 / 200 or less vision. The difference in results could be due to different proportions of Asian patients with ON associated with NMOSD. In this study, 14% of patients in the idiopathic ON group and 43% of patients in the NMOSD group had a severe visual loss at the final follow-up. In multivariate analysis, the initial visual acuity was found to be a significant factor affecting severe visual loss at the final visit. This result was consistent with previous studies [23,24].
After the first ON episode, 43% of ON patients with NMOSD experienced one or more attacks of longitudinally extensive transverse myelitis at various degrees within 0 to 24 months. The mean EDSS in ON patients with NMOSD was 3.1 at the final visit. In this study, 50% of patients showed a severe visual loss in at least one eye or were unable to ambulate without assistance within 5.3 ± 4.4 years after the initial onset of ON. With regard to MS in this study, because there were only four patients who developed definite MS, caution should be taken when generalizing the characteristics of these patients to others. There were three (75%) patients with MS who showed various degrees of decreased sensory or pyramidal functions other than visual symptoms, although the severity was relatively mild (EDSS range, 0.0 to 3.0). This study has several limitations. First, this study was retrospective in nature. There were also significant variabilities in the length of follow-up for different patients. Second, all patients with NMOSD and MS received some form of long-term immunosuppressive or immunomodulatory treatment. Therefore, the results of this study could not reflect the natural history of these patients after the ON event. Third, due to the relative rarity of MS in patients whose initial presentation was ON in this study, the analysis of this subgroup was largely limited because of the small number of subjects. Fourth, there might be other disorders related to ON that were not diagnosed. For example, we only performed a serologic test for NMO without assessing myelin oligodendrocyte glycoprotein [25,26]. Finally, all study participants were Asians. Therefore, we could not compare differences between races directly in this study. Because all data were obtained from one ethnicity, the direct application of these data to other races should be approached with caution.
In conclusion, we analyzed the characteristics of ON in Korean patients, including test positivity for NMO-IgG and visual and systemic prognosis in these patients according to the presence of an association with NMOSD or MS.
In our study, 14% of patients showed positive results in an NMO-IgG test and 50% of patients with NMOSD showed severe visual loss in at least one eye or were unable to ambulate without assistance at the final visit within 5.3 years of the initial onset of ON. Due to the poor prognosis and debilitating course of ON, an NMO-IgG test needs to be performed for every ON patient at the first attack to allow the early initiation of treatment in Asian patients.
Asymptotics of Toeplitz Matrices with Symbols in Some Generalized Krein Algebras
Let $\alpha,\beta\in(0,1)$ and \[ K^{\alpha,\beta}:=\left\{a\in L^\infty(\T): \sum_{k=1}^\infty |\hat{a}(-k)|^2 k^{2\alpha}<\infty, \sum_{k=1}^\infty |\hat{a}(k)|^2 k^{2\beta}<\infty \right\}. \] Mark Krein proved in 1966 that $K^{1/2,1/2}$ forms a Banach algebra. He also observed that this algebra is important in the asymptotic theory of finite Toeplitz matrices. Ten years later, Harold Widom extended earlier results of Gabor Szeg\H{o} for scalar symbols and established the asymptotic trace formula \[ \operatorname{trace}f(T_n(a))=(n+1)G_f(a)+E_f(a)+o(1) \quad\text{as}\ n\to\infty \] for finite Toeplitz matrices $T_n(a)$ with matrix symbols $a\in K^{1/2,1/2}_{N\times N}$. We show that if $\alpha+\beta\ge 1$ and $a\in K^{\alpha,\beta}_{N\times N}$, then the Szeg\H{o}-Widom asymptotic trace formula holds with $o(1)$ replaced by $o(n^{1-\alpha-\beta})$.
Let sp A denote the spectrum of an operator A. If f is an analytic function in an open neighborhood of sp A, then we will simply say that f is analytic on sp A.
We assume that the reader is familiar with basics of trace class operators and their operator determinants (see Gohberg and Krein [10,Chap. 3 and 4] or Section 3). If A is a trace class operator, then trace A denotes the trace of A and det(I − A) denotes the operator determinant of I − A.
The following result was proved by Widom [26, Theorem 6.2] (see also [6, Section 10.90]). It extends earlier results by Szegő (see [11]) and is now usually called the Szegő-Widom asymptotic trace formula.

Theorem 1.1. Let $a \in K^{1/2,1/2}_{N\times N}$ and let $f$ be analytic on $\operatorname{sp} T(a) \cup \operatorname{sp} T(\tilde a)$. Then
\[ \operatorname{trace} f(T_n(a)) = (n+1)G_f(a) + E_f(a) + o(1) \quad\text{as}\ n\to\infty, \]
where $G_f(a)$ and $E_f(a)$ are constants determined by $a$ and $f$ (the latter via a contour integral over $\partial\Omega$), and $\Omega$ is any bounded open set containing $\operatorname{sp} T(a) \cup \operatorname{sp} T(\tilde a)$ on the closure of which $f$ is analytic.
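The first-order part of the trace formula can be sanity-checked numerically in a toy scalar case. The sketch below (symbol, test function, and helper names are ours, chosen only for illustration) takes $a(e^{it}) = 2\cos t$ and $f(x) = x^2$, for which $G_f(a) = \frac{1}{2\pi}\int_0^{2\pi} 4\cos^2 t\,dt = 2$; the difference $\operatorname{trace} f(T_n(a)) - (n+1)G_f(a)$ then settles at an $n$-independent constant, the $E_f(a)$ term.

```python
import numpy as np

def toeplitz_matrix(fourier, n):
    """(n+1) x (n+1) Toeplitz matrix T_n(a): entries T[j, k] = a-hat(j - k)."""
    return np.array([[fourier(j - k) for k in range(n + 1)] for j in range(n + 1)])

# Toy symbol a(e^{it}) = 2 cos t, i.e. a-hat(+-1) = 1 and all other coefficients 0.
def ahat(k):
    return 1.0 if abs(k) == 1 else 0.0

G = 2.0  # G_f(a) for f(x) = x^2 with this symbol
for n in (50, 100, 200):
    T = toeplitz_matrix(ahat, n)
    correction = np.trace(T @ T) - (n + 1) * G
    print(n, correction)  # stays at -2.0: the n-independent E_f(a) term
```

In this particular example the correction equals $-2$ exactly for every $n$, so the $o(1)$ remainder vanishes identically; for generic symbols it merely tends to zero.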
Our main result is the following refinement of Theorem 1.1, which gives a higher order asymptotic trace formula.

Theorem 1.2. Let $\alpha, \beta \in (0,1)$ with $\alpha + \beta \ge 1$ and let $a \in K^{\alpha,\beta}_{N\times N}$. Then the asymptotic trace formula of Theorem 1.1 holds with $o(1)$ replaced by $o(n^{1-\alpha-\beta})$.
Notice that higher order asymptotic trace formulas are known for other classes of symbols: see [25] for $W \cap K^{\alpha,\alpha}$ with $\alpha > 1/2$ (here $W$ stands for the Wiener algebra of functions with absolutely convergent Fourier series), [14] for weighted Wiener algebras, [15] for Hölder-Zygmund spaces, and [16] for generalized Hölder spaces. All these classes consist of continuous functions only. More precisely, they are decomposing algebras of continuous functions in the sense of Budyanu and Gohberg. An invertible matrix function in such an algebra admits a Wiener-Hopf factorization within the algebra. The proofs of [14,15,16] are based on a combination of this observation and an approach of Böttcher and Silbermann [3] (see also [4,] and [6, Sections 10.34-10.40]) to higher order asymptotic formulas for Toeplitz determinants with Widom's original proof of Theorem 1.1 (see [26] and [6, Section 10.90]). As far as we know, Vasil'ev, Maximenko, and Simonenko have never published a proof of the result stated in the short note [25]; however, their result can be proved by the same method.
Generalized Krein algebras $K^{\alpha,\beta}$ may contain discontinuous functions. To study them we need a more advanced factorization theory in decomposing algebras of $L^\infty$ functions developed by Heinig and Silbermann [13]. We present the main results of this theory in Section 2 and then apply them to $K^{\alpha,\beta}$ with $\alpha + \beta \ge 1$ and $\max\{\alpha,\beta\} > 1/2$. Under these assumptions, if both Toeplitz operators $T(a)$ and $T(\tilde a)$ are invertible, then $a$ admits simultaneously canonical right and left Wiener-Hopf factorizations $a = u_- u_+ = v_+ v_-$ in $K^{\alpha,\beta}_{N\times N}$. The factors and their inverses in these factorizations are stable under small perturbations of $a$ in the norm of $K^{\alpha,\beta}_{N\times N}$.
We will use this fact in Section 4 for factorizations of $a - \lambda$, where $\lambda$ belongs to a compact neighborhood $\Sigma$ of the boundary of a set $\Omega$ containing $\operatorname{sp} T(a) \cup \operatorname{sp} T(\tilde a)$. Section 3 contains some preliminaries on trace class operators and their determinants. Further we formulate the Borodin-Okounkov formula under weakened smoothness assumptions. This is an exact formula which relates determinants of finite Toeplitz matrices $\det T_n(a)$ and operator determinants of $I - Q_n H(b)H(\tilde c)Q_n$, where $Q_n H(b)H(\tilde c)Q_n$ are truncations of the product of the Hankel operators $H(b)$ and $H(\tilde c)$ with $b := v_- u_+^{-1}$ and $c := u_-^{-1} v_+$. Here $Q_n := I - P_n$ and $P_n$ is the finite section projection.
If $a - \lambda \in K^{\alpha,\beta}_{N\times N}$, then we can effectively estimate the speed of convergence of the trace class norm of $I - Q_n H[b(\lambda)]H[\widetilde{c(\lambda)}]Q_n$ to zero as $n \to \infty$ uniformly in $\lambda \in \Sigma$. This speed is $o(n^{1-\alpha-\beta})$. Combining this estimate with the Borodin-Okounkov formula for $a - \lambda$ and then applying Widom's "differentiate-multiply-integrate" arguments with respect to $\lambda \in \Sigma$, we prove Theorem 1.2 in Section 4.
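For orientation, the scalar version of the identity alluded to above, in the form popularized by Borodin, Okounkov, and Böttcher (a sketch from standard sources, not a quotation of this paper's statement), reads

```latex
\det T_n(a) \;=\; G(a)^{n+1}\,E(a)\,
\det\bigl(I - Q_n H(b)H(\tilde c)\,Q_n\bigr),
\qquad b := v_- u_+^{-1},\quad c := u_-^{-1} v_+,
```

where $G(a)$ and $E(a)$ are the constants of the strong Szegő limit theorem. As $n \to \infty$ the operator determinant on the right tends to 1, which is what makes the truncation estimates described above useful.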
Wiener-Hopf factorization and generalized Krein algebras
2.1. Wiener-Hopf factorization in decomposing algebras. For a unital algebra $A$, let $GA$ denote its group of invertible elements.
Mark Krein [17] was the first to understand the Banach algebraic background of Wiener-Hopf factorization and to present the method in a crystal-clear manner. Gohberg and Krein [9] proved that every $a \in GW_{N\times N}$ admits a Wiener-Hopf factorization. Later Budyanu and Gohberg developed an abstract factorization theory in decomposing algebras of continuous functions. Their results are contained in [7, Chap. 2]. Heinig and Silbermann [13] extended the theory of Budyanu and Gohberg to the case of decomposing algebras which may contain discontinuous functions. The following definitions and results are taken from [13] (see also [4, Chap. 5]).
Let $A$ be a Banach algebra of complex-valued functions on the unit circle $\mathbb{T}$ under a Banach algebra norm $\|\cdot\|_A$. The algebra $A$ is said to be decomposing if it possesses the following properties: (a) $A$ is continuously embedded in $L^\infty$; (b) $A$ contains all Laurent polynomials; (c) $PA \subset A$ and $QA \subset A$. Using the closed graph theorem it is easy to deduce from (a)-(c) that $P$ and $Q$ are bounded on $A$ and that $PA$ and $QA$ are closed subalgebras of $A$. For $k \in \mathbb{Z}$ and $t \in \mathbb{T}$, put $\chi_k(t) := t^k$. Given a decomposing algebra $A$, put

The integers $\kappa_i$ are usually called the right (resp. left) partial indices of $a$; they can be shown to be uniquely determined by $a$. If $\kappa_1 = \dots = \kappa_N = 0$, then the Wiener-Hopf factorization is said to be canonical. A decomposing algebra $A$ is said to have the factorization property if every matrix function in $GA_{N\times N}$ admits a right Wiener-Hopf factorization in $A_{N\times N}$. Let $R$ be the restriction to the unit circle $\mathbb{T}$ of the set of all rational functions defined on the whole plane $\mathbb{C}$ and having no poles on $\mathbb{T}$.
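For orientation, a right Wiener-Hopf factorization of $a \in GA_{N\times N}$ has the standard shape (a sketch of the usual definition; the precise membership conditions on the factors $a_\pm^{\pm 1}$ are those of [13] and are not reproduced here):

```latex
a(t) \;=\; a_-(t)\,\operatorname{diag}\bigl(t^{\kappa_1},\dots,t^{\kappa_N}\bigr)\,a_+(t),
\qquad t \in \mathbb{T}, \qquad \kappa_i \in \mathbb{Z},
```

with $a_+^{\pm 1}$ lying in the "analytic" subalgebra built from $PA$ and $a_-^{\pm 1}$ in the "anti-analytic" one built from $QA$; a left factorization interchanges the order of the factors.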
Theorem 2.1. Let A be a decomposing algebra. If at least one of the sets R ∩ PA and R ∩ QA is dense in PA (respectively, in QA), then A has the factorization property.
2.2. Stability of factors and their inverses under small perturbations. Let A be a Banach algebra equipped with a norm ‖·‖_A. We will always consider an admissible norm ‖·‖_{A_{N×N}} on A_{N×N}; recall that a Banach algebra norm is said to be admissible (see [ ]). The following result can be extracted from a stability theorem for factors and their inverses in the Wiener-Hopf factorization in decomposing algebras given in [20, Theorem 6.15]. There it was assumed, in addition, that the decomposing algebra is continuously embedded in the set of all continuous functions. However, the result is also true for decomposing algebras in the sense of Heinig and Silbermann adopted in this paper.

Hence, if α, β ∈ (0, 1) and α + β ≥ 1, then the corresponding stability estimates hold. The following result was proved by Krein [18] for α = β = 1/2.
Proof. The statement is proved by analogy with [2, Lemma 7.7]. By [2, Lemma 6.1], the projections P and Q are bounded on K^{α,β}. Hence K^{α,β} is a decomposing algebra. Assume that β > 1/2. Taking into account that K^{α,β} is contained in the Besov spaces B^α_2 and B^β_2, from [22, Sections 3.5.1 and 3.5.5] one can deduce that R ∩ PK^{α,β} is dense in PK^{α,β}. Analogously, if α > 1/2, then R ∩ QK^{α,β} is dense in QK^{α,β}. Theorem 2.1 then gives the factorization property of K^{α,β}. Suppose α = max{α, β}. It is clear that one has β ≥ 1/2 or α ≥ 1/2; in the first case, from (2.1) it follows that the required density holds.

3. The Borodin-Okounkov formula

One can show that, for every A ∈ C_1(H) and for every orthonormal basis {φ_j}_{j=0}^∞ of H, the series Σ_{j=0}^∞ ⟨Aφ_j, φ_j⟩_H converges absolutely and that its sum does not depend on the particular choice of {φ_j}_{j=0}^∞. This sum is denoted by trace A and is referred to as the trace of A. It is well known that |trace A| ≤ ‖A‖_{C_1(H)}. The Hilbert-Schmidt norm of an operator A ∈ C_2(H, K) can be expressed in the form ‖A‖²_{C_2(H,K)} = Σ_{j,k=0}^∞ |⟨Aφ_j, ψ_k⟩_K|², where {φ_j}_{j=0}^∞ and {ψ_k}_{k=0}^∞ are orthonormal bases of H and K, respectively. We will need the following version of the Hölder inequality: if B ∈ C_2(H, K) and A ∈ C_2(K, H), then AB ∈ C_1(H) and ‖AB‖_{C_1(H)} ≤ ‖A‖_{C_2(K,H)} ‖B‖_{C_2(H,K)}. Let A be a bounded linear operator on H of the form I + K with K ∈ C_1(H). If {λ_j(K)}_{j≥0} denotes the sequence of the nonzero eigenvalues of K (counted up to algebraic multiplicity), then det(I + K) := ∏_{j≥0} (1 + λ_j(K)).
In the case where the spectrum of K consists only of 0 we put det(I + K) = 1. On the other hand, taking into account the above, the needed estimate holds because ‖A‖_{C_1(H)} < 1.
The Borodin-Okounkov formula under weakened hypotheses.
For a ∈ L^∞_{N×N} and n ∈ Z_+, define the operators P_n : Σ_{k=0}^∞ â(k)χ_k ↦ Σ_{k=0}^n â(k)χ_k and Q_n := I − P_n.
The operator P_n T(a)P_n : P_n H²_N → P_n H²_N may be identified with the finite block Toeplitz matrix T_n(a) = (â(j − k))_{j,k=0}^n. In June 1999, Its and Deift raised the question whether there is a general formula that expresses the determinant of the Toeplitz matrix T_n(a) as the operator determinant of an operator I − K, where K acts on ℓ²({n + 1, n + 2, …}). Borodin and Okounkov showed in 2000 that such a formula exists (in fact, it was known much earlier: in 1979, Geronimo and Case used it to prove the strong Szegő limit theorem). Further, in 2000, several different proofs of it were found by Basor and Widom and by Böttcher. We refer to the book by Simon [24] for further details. Applying [6, Proposition 2.14], we get
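The strong Szegő limit theorem mentioned above can be checked numerically for a concrete symbol. The sketch below uses the scalar symbol a(e^{it}) = 1 + r² − 2r cos t (so â(0) = 1 + r², â(±1) = −r, all other coefficients zero), for which the Szegő constants are G(a) = 1 and E(a) = 1/(1 − r²); both the symbol and the constants are a standard textbook example, not taken from this paper, and det T_n(a) is computed by brute force rather than through the operator determinant.

```python
import numpy as np

def toeplitz_det(m, r):
    """Determinant of the m-by-m truncation T_{m-1}(a) for the symbol
    a(e^{it}) = 1 + r^2 - 2 r cos t: a tridiagonal Toeplitz matrix with
    diagonal 1 + r^2 and off-diagonals -r."""
    A = np.zeros((m, m))
    np.fill_diagonal(A, 1.0 + r * r)
    idx = np.arange(m - 1)
    A[idx, idx + 1] = -r
    A[idx + 1, idx] = -r
    return np.linalg.det(A)

r = 0.5
E = 1.0 / (1.0 - r * r)  # Szego constant E(a); here G(a) = 1
for m in (2, 5, 10, 20):
    # closed form for this symbol: det T_{m-1}(a) = (1 - r^(2(m+1)))/(1 - r^2),
    # so the determinants converge to E(a) exponentially fast
    print(m, toeplitz_det(m, r), E)
```

Since the symbol here is analytic, the Borodin-Okounkov correction term decays exponentially; for the Krein-class symbols of the paper the decay is only o(n^{1−α−β}).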
Then the constants above are well defined, and from these equalities and [10, Chap. IV, Section 1.6] the corresponding identity follows.

By ℓ^γ_2 we denote the Hilbert space of all sequences x = {x_k}_{k=0}^∞ such that Σ_{k=0}^∞ (k + 1)^{2γ} |x_k|² < ∞. Clearly, the sequence {e_k/(k + 1)^γ}_{k=0}^∞, where (e_k)_j = δ_{kj} and δ_{kj} is the Kronecker delta, is an orthonormal basis of ℓ^γ_2. If γ = 0, we will simply write ℓ_2 instead of ℓ^0_2. In this subsection we will estimate Hilbert-Schmidt norms of truncations of Hankel operators acting between ℓ_2 and ℓ^γ_2. Notice that one can identify Hankel operators acting on H² and on ℓ_2. For φ = {φ_j}_{j=0}^∞ and n ∈ Z_+, define the truncation accordingly. For a ∈ K^{α,β} and n ∈ N, put the truncated norms as above; then:

(a) If α ≥ γ + 1/2, then there exists a positive constant M(α, γ) depending only on α and γ such that, for all sufficiently large n, the stated estimate holds.

(b) If β ≥ γ + 1/2, then there exists a positive constant M(β, γ) depending only on β and γ such that, for all sufficiently large n, the analogous estimate holds.
We will also need a quantitative version of the above result for truncations of the product H(b)H(c̃).
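To get a feel for estimates of this kind, the following sketch measures the Hilbert-Schmidt (Frobenius) norm of a row-truncated Hankel matrix for a made-up model symbol with coefficients b̂(m) = (m + 1)^{−1−α}; for such power decay one expects ‖Q_n H(b)‖_{C_2} of order n^{−α}, so with α = 1 doubling n should roughly halve the norm. The symbol and the finite truncation size are test assumptions, not data from the paper.

```python
import numpy as np

def truncated_hankel_hs(n, alpha=1.0, size=1500):
    """Frobenius norm of Q_n H(b) for the model Hankel matrix
    H(b)[j, k] = bhat(j + k + 1) with bhat(m) = (m + 1)**(-1 - alpha),
    keeping only rows j >= n (computed on a finite size-by-size section)."""
    j = np.arange(n, size)[:, None]
    k = np.arange(size)[None, :]
    block = (j + k + 2.0) ** (-1.0 - alpha)
    return float(np.sqrt((block ** 2).sum()))

# For alpha = 1, each doubling of n should roughly halve the norm.
for n in (16, 32, 64):
    print(n, truncated_hankel_hs(n))
```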
Let Ω be a bounded open set containing the set sp T(a) ∪ sp T(ã) on the closure of which f is analytic. From (2.2) and [6, Theorem 7.20] it follows that Ω contains the spectrum (eigenvalues) of T_n(a) for all sufficiently large n. Further, Corollary 2.4 and [6, Theorem 2.94] imply that the spectrum of a in K^{α,β}_{N×N} is contained in Ω. Hence f(a) ∈ K^{α,β}_{N×N} and f(T_n(a)) is well defined whenever f is analytic on sp T(a) ∪ sp T(ã).
Anaerobic oxidation of aldehydes to carboxylic acids under hydrothermal conditions
Examples of anaerobic oxidation of aldehydes in hydrothermal solutions are reported. The reaction, using iron(III) nitrate as the oxidant, occurs under mild hydrothermal conditions and generates carboxylic acids in good yields. This method differs from previous studies, which use atmospheric oxygen as the oxidant.
Oxidation of aldehydes is considered one of the most important organic transformations. It is a fundamental reaction that is not only critical for biological metabolism but is also involved in the synthesis of chemicals during industrial processes. 1,2 For example, aldehydes can be biologically oxidized to carboxylic acids by aldehyde dehydrogenases, which inactivate and detoxify the aldehydes to allow for easier excretion from the body. 3,4 Industrially, aldehyde oxidation has been used in the manufacturing of cosmetic products, plasticizers, fibers, and biomass-derived chemicals. 5,6 Because of these widespread processes and applications, efficient and green methods for oxidizing aldehydes to carboxylic acids are a pressing need.
Traditionally, aldehydes are oxidized stoichiometrically, often using hazardous oxidants such as the Cr(VI)-based Jones reagent, the Ag(I)-based Tollens' reagent, the Cu(II)-based Fehling's reagent, or permanganate-based oxidants. 7-10 These conventional methods also generate stoichiometric amounts of waste by-products, which are often toxic and expensive to recycle. 5 To alleviate these problems, chemists have focused on developing "greener" processes for catalytic oxidation of aldehydes. For example, previous studies have used water as an environmentally friendly solvent for oxidation of aldehydes, 1,2,11-13 which avoids the use of harmful organic solvents. However, this method can suffer from the need for large amounts of additives and expensive or toxic catalysts. Recently, studies have focused on finding alternative catalysts that are more naturally abundant and low-cost. For example, salts of metals such as iron and copper have been applied as catalysts for the oxidation of aldehydes in aqueous solutions. 8,11 Currently, most investigations of aldehyde oxidation focus on aerobic oxidation, i.e., using molecular oxygen (O2) to oxidize aldehydes into the corresponding carboxylic acids. 2,8,11,14 Autoxidation of aldehydes with O2 also takes place under ambient conditions, usually involving a free radical chain reaction to form a peracid, followed by the Baeyer-Villiger oxidation. 15,16 In comparison, fewer studies have explored the anaerobic oxidation pathway for aldehydes, especially in catalyst-free aqueous environments. Hydrothermal systems, however, may provide a unique environment for anaerobic organic redox reactions. Our recent research on alcohols, carboxylic acids, and amides has shown that organic oxidations can readily occur in the presence of metal salts such as Cu(II) under O2-absent hydrothermal conditions. [17][18][19] In those reactions, water serves as a green solvent, while Cu(II) acts as an efficient oxidant.
The metal-promoted hydrothermal reactions also mimic natural geochemical processes on Earth, which motivates the new "geomimicry" concept of using Earth-abundant metals as oxidants/reductants for green organic reactions. 18,20,21 In this study, we investigated the oxidation of aldehydes to carboxylic acids in an anaerobic and mild hydrothermal environment, using simple Cu(II) and non-toxic Fe(III) salts as the oxidizing agents. The optimal reaction conditions and the substrate scope were both studied.
Hydrothermal experiments were conducted in sealed fused silica tubes under aqueous conditions of 200 °C and 15 bar (Psat, calculated using SUPCRT92) 22 in the absence of O2, following a previously developed method. 17,23,24 We started our investigation by examining the oxidation of benzaldehyde (compound 1) under hydrothermal conditions using various Cu(II) and Fe(III) salts, including CuCl2, CuSO4, Cu(OAc)2, Cu(NO3)2, FeCl3, Fe2(SO4)3, and Fe(NO3)3 (Table 1). In pure water without additives, only 4% of benzaldehyde reacted at 200 °C after 24 h, forming benzoic acid with a yield of 3% (entry 1). This slow reaction indicates that benzaldehyde is relatively inert under anaerobic hydrothermal conditions, which is consistent with a previous study reporting <1% conversion for benzaldehyde at 250 °C after 36 h. 25 In the presence of CuCl2, CuSO4, and Cu(OAc)2, both the benzaldehyde conversion (6-9%) and the acid yield (5-8%) at 200 °C were slightly increased after 24 h (entries 2-4). However, their effects were not as prominent as that of Cu(NO3)2, which increased the yield to 10% and 26% after 0.5 and 2 h, respectively (entries 5 and 7). NaNO3 was also tested and gave a yield of 10% after 2 h (entry 9), suggesting that the aldehyde oxidation could be driven by the nitrate ions, but not as strongly as by the copper-nitrate combination. In the presence of FeCl3 and Fe2(SO4)3, the acid yields were 10-13% at 200 °C after 24 h (entries 10 and 11), similar to those with CuCl2 and CuSO4. The most dramatic effect came from Fe(NO3)3, which converted 64% of benzaldehyde with a boosted yield of 60% at 200 °C after only 0.5 h (entry 15). Increasing the reaction time to 2 h allowed Fe(NO3)3 to fully oxidize benzaldehyde, reaching the highest acid yield (98%, entry 17) among all the conditions studied.
Halving the starting concentration of Fe(NO3)3 lowered the yield to 90% (entry 18), while decreasing the reaction temperature also reduced the yield significantly (entries 12-14). Interestingly, combining FeCl3 with NaNO3 exhibited a yield as high as that with Fe(NO3)3 (entries 19 and 20), which suggests that the key to this aldehyde oxidation is having both Fe(III) and nitrate ions present. Additionally, nitrates of redox-neutral metals such as Mg(NO3)2 and Ca(NO3)2 were also investigated (entries 21 and 22), which showed significantly lower yields than those of Fe(NO3)3 or Cu(NO3)2. These results further indicate that both the redox-active metals and nitrate ions play an oxidizing role in this reaction.
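The conversion/yield pairs quoted above also imply a selectivity toward benzoic acid (yield divided by conversion) that can be tallied directly. The snippet below uses only the numbers explicitly stated in the text; reading "fully oxidize" (entry 17) as 100% conversion is our assumption.

```python
# (entry, condition, conversion %, yield %) as quoted in the text
table1 = [
    (1,  "none (pure water), 24 h", 4,   3),
    (15, "Fe(NO3)3, 0.5 h",         64,  60),
    (17, "Fe(NO3)3, 2 h",           100, 98),  # "fully oxidize" read as 100% conversion
]

def selectivity(conv, yld):
    """Selectivity toward benzoic acid = 100 * yield / conversion (%)."""
    return 100.0 * yld / conv

for entry, cond, conv, yld in table1:
    print(entry, cond, round(selectivity(conv, yld), 1))
```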
After identifying the optimal reaction conditions, we then tested a group of 9 different functionalized aldehydes to examine the scope of this method (Table 2). All of these experiments were conducted using 2 equiv. Fe(NO3)3 at 200 °C and 15 bar for 2 h. Compared to the 98% yield from benzaldehyde, the aromatic aldehyde with an electron-withdrawing group (compound 2) gave a yield of 82%, whereas those with electron-donating groups such as p-tolualdehyde (compound 6) and 4-methoxybenzaldehyde (compound 7) gave much lower yields of 54% and 46%, respectively. However, the relatively low yields from the electron-rich aldehydes are mainly due to the less-complete conversion of the starting material, which is expected to increase at longer reaction times. Halogen-substituted aldehydes such as 2-bromobenzaldehyde (compound 3) and 4-bromobenzaldehyde (compound 4) gave acid yields of 47% and 66%, respectively, indicating the tolerance of this method for halogens and also a potential steric effect. Non-aromatic aldehydes such as hydrocinnamaldehyde (compound 5) and cyclohexanecarboxaldehyde (compound 8) also gave moderate yields of 45% and 58%, respectively. The unsaturated aldehyde cinnamaldehyde (compound 9) also worked with this method, but the yield (30%) was lower than that of hydrocinnamaldehyde. This result suggests that the C=C double bond potentially interferes with the oxidation, which seems consistent with the finding from another study on silver-catalyzed aldehyde oxidation. 2 In addition, other aldehydes such as 2-naphthaldehyde (compound 10) resulted in a 41% yield under the same experimental conditions, demonstrating that this method is also applicable to fused-ring aldehyde structures. We also propose a tentative mechanism for this Fe(III)-involved aldehyde oxidation. As shown in Scheme 1, aldehydes are expected to undergo either hydration followed by oxidation, or oxidation followed by hydration, to form a radical cation intermediate.
After losing a proton, the radical cation could be subsequently oxidized by Fe(NO3)3 to form a carbocation before the corresponding acid is produced. This reaction mechanism is also based on mechanisms proposed in previous studies, where Cu(II) was found to act as an oxidant for the oxidation of benzyl alcohol and benzaldehyde under similar hydrothermal conditions. 17,20 In the present study, however, the electron-donating -CH3 and -OCH3 substituted benzaldehydes resulted in lower conversions than the -CF3 substituted benzaldehyde (Table 2), which does not seem to support the formation of a positively charged intermediate. It is thus more likely that hydrate formation is the rate-determining step, since the substituent effect on hydrate formation could be opposite to that on oxidation. 20,26 More ring-substituted structures and detailed kinetics studies are needed to further pinpoint the reaction mechanism.
In addition, we performed thermodynamic calculations for the anaerobic oxidation of aldehydes under a range of hydrothermal conditions. Using acetaldehyde as a model aldehyde, the results show that the logarithm of the equilibrium constant (log Keq) for oxidation of acetaldehyde to acetic acid in pure water was between 0 and 1 under the hydrothermal conditions and slowly increased with temperature (Fig. S1, ESI†). In contrast, the log Keq for the Fe(NO3)3-promoted acetaldehyde oxidation was orders of magnitude higher, which suggests the reaction is highly favorable when Fe(NO3)3 is present (Fig. S1, ESI†). Although the actual log Keq values for the aromatic aldehydes could be quite different from that of acetaldehyde, the increasing trend of log Keq upon the addition of Fe(NO3)3 may hold true for other aldehyde structures. The results of the geochemical modeling are consistent with our experimental observations.
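The quantity log Keq reported in such calculations follows from the standard Gibbs energy of reaction via ΔG° = −RT ln Keq. A minimal sketch of this conversion is below; the ΔG° value used is a placeholder for illustration, not a number from the study.

```python
import math

R = 8.314  # gas constant, J / (mol K)

def log10_Keq(delta_g_joules, temp_kelvin):
    """log10 of the equilibrium constant from the standard Gibbs energy:
    Delta G = -R T ln Keq  =>  log10 Keq = -Delta G / (R T ln 10)."""
    return -delta_g_joules / (R * temp_kelvin * math.log(10))

# Placeholder reaction free energy (NOT from the paper) evaluated at the
# 200 C (473.15 K) condition used in the experiments:
print(log10_Keq(-50_000.0, 473.15))
```

A more negative ΔG° (e.g. when Fe(NO3)3 serves as the oxidant) translates directly into a larger log Keq, matching the trend described above.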
In conclusion, we have reported the first examples of oxidation of aldehydes with Fe(NO3)3 in water under anaerobic hydrothermal conditions. The oxidation is green and relatively clean, with high yields of carboxylic acids achieved within hours. Different aldehyde structures were tested, showing the good versatility of the reaction. This method also represents one of the "geomimicry" approaches that use Earth-abundant geological materials for organic synthesis and green chemistry applications. Future investigation of the reaction mechanism and the potential effects of other metal salts is anticipated.
Conflicts of interest
There are no conflicts to declare.

(Table footnote a: yields determined by gas chromatography.)
Scheme 1. Proposed mechanism for the anaerobic oxidation of aldehydes to carboxylic acids with Fe(NO3)3 as the oxidant.
A new chromosome-scale duck genome shows a major histocompatibility complex with several expanded multigene families
Background The duck (Anas platyrhynchos) is one of the principal natural hosts of influenza A virus (IAV): it harbors almost all subtypes of IAVs and resists many IAVs that cause extreme virulence in chicken and human. However, the response of the duck's adaptive immune system to IAV infection is poorly characterized due to the lack of a detailed gene map of the major histocompatibility complex (MHC). Results We herein report a chromosome-scale Beijing duck assembly generated by integrating Nanopore, Bionano, and Hi-C data. This new reference genome, SKLA1.0, covers 40 chromosomes, improves the contig N50 of the previously most contiguous duck assembly (ZJU1.0) by more than 5.79-fold, surpasses the chicken and zebra finch references in sequence contiguity, and contains a complete genomic map of the MHC. Our 3D MHC genomic map demonstrated that the gene family arrangement in this region is primordial; however, families such as AnplMHCI, AnplMHCIIβ, AnplDMB, NKRL (NK cell receptor-like genes) and BTN underwent gene expansion events, making this area complex. These gene families are distributed in two TADs, and genes sharing the same TAD may work in a co-regulated mode. Conclusions These observations support the hypothesis that the duck's adaptive immunity has been optimized with expanded and diversified key immune genes, which might help the duck combat influenza virus. This work provides a high-quality Beijing duck genome for biological research and sheds light on new strategies for AIV control. Supplementary Information The online version contains supplementary material available at 10.1186/s12915-024-01817-0.
Background
Newly emerging or re-emerging influenza outbreaks caused by the influenza A virus (IAV) continue to pose global public health threats. IAVs are responsible for millions of severe cases and 290,000-650,000 deaths in humans each year according to the World Health Organization. Ducks serve as the principal natural reservoir for IAVs and harbor all hemagglutinin (HA) and neuraminidase (NA) subtypes of IAVs, with the exception of the H17N10 and H18N11 subtypes. Many IAVs, including highly pathogenic H5 subtype viruses, circulate in ducks and cause little harm, while they are responsible for respiratory and systemic disease when transmitted to other hosts such as chicken and human [1,2]. The long co-evolution between duck and IAVs has undoubtedly fine-tuned the host immune system to combat influenza virus. Previous studies have examined non-major histocompatibility complex (non-MHC) elements such as β-defensins, type I interferon, RIG-I and pro-inflammatory cytokines to explain the duck's disease-resistance strategies in response to IAV infection [3-6]. However, due to high gene density, GC content, and sequence diversity, the MHC, which is associated with disease resistance, is hard to assemble [7,8], thus limiting our understanding of how ducks combat IAVs.
High-throughput sequencing technologies and traditional assembly tools have not enabled proper assembly of highly repetitive and GC-rich sequences such as the MHC. Therefore, avian MHC gene maps have tended to be constructed through sequencing of MHC-containing fosmid or BAC clones instead of de novo assembly [9]. The first avian genomic MHC map was the chicken minimal and essential one on chromosome 16. This map, spanning 92 kb and harboring 19 genes, was later extended to 242 kb containing 46 genes [10,11]. After that, the MHC-B regions of four galliformes (turkey, quail, golden pheasant, and black grouse) were reported to be highly syntenic to that of chicken, except for expansion of a few gene families such as BG, MHCIIB, and MHCIα and inversion of gene loci such as TAPBP, TAP1, and TAP2 [12-14]. In contrast, study of the MHC in waterfowl is limited: two fragments of the duck MHC map were first published by Moon et al. and Ren et al. [15,16]. Limited by the shortage of gene information in the duck MHC, only a few functional studies on the duck MHC have been performed [16,17]. Recently, the development of single-molecule sequencing (third-generation sequencing), such as Pacific Biosciences Single Molecule Real-Time sequencing or Oxford Nanopore sequencing, has made it possible to obtain reads with lengths of hundreds of kilobases, thus largely improving the assembly quality of repetitive regions of genomes [18,19].
Here we apply 95-fold Nanopore long reads, 117-fold 150-bp paired-end Illumina genomic reads, 216-fold optical map reads and 234-fold PE150 Hi-C reads to generate a highly contiguous chromosome-scale Beijing duck reference genome (SKLA1.0) with a complete MHC genomic map. We use this high-quality genome to understand the evolution of innate and adaptive immune genes in duck and describe features relevant to resistance to influenza virus.
Results
The chromosome-scale genome assembly has high contiguity and completeness

Before assembly, we estimated the genome heterozygosity of the C18 duck; the heterozygosity is as low as 0.58% (Additional file 1: Table S1 and Additional file 2: Fig. S1-S3). We generated a high-quality genome sequence for the Pekin duck (called duck hereafter), a native breed in China, using a hierarchical and hybrid approach. Using 71-fold normal and 24-fold ultra-long Nanopore reads, we assembled the duck genome into 151 contigs covering a total length of 1.22 Gb with a contig N50 of 32.81 Mb (Additional file 1: Tables S2-S3). These 151 contigs were then polished with 912 million 150-bp Illumina paired-end reads, corrected and integrated with high-quality optical maps (Additional file 1: Tables S4-S5). This effort generated 69 scaffolds with a scaffold N50 of 72.53 Mb (Additional file 1: Table S6). A total of 274 Gb of PE150 Hi-C data was used to order and orient the duck scaffolds, correct mis-joined sections and merge overlaps, which generated 40 super-scaffolds (Additional file 1: Table S7). We further performed gap filling using 95-fold corrected Nanopore reads to remove gaps and generated the final duck assembly (SKLA1.0), representing 1.16 Gb of genomic sequence, which is ~99.11% of the estimated genome size (Table 1). Since the duck karyotype contains 80 chromosomes (diploid, 2n = 80), we inferred that this duck assembly covered all chromosomes except W (Additional file 1: Table S8). Moreover, we compared our SKLA1.0 assembly with our previous duck assembly BGI_duck_1.0, the available duck assembly with the highest contiguity (ZJU1.0), and two high-quality avian reference genomes (the chicken GRCg6a and the zebra finch bTaeGut1.4.pri). These analyses indicated that the SKLA1.0 assembly represents a major improvement over the BGI_duck_1.0 and ZJU1.0 genomes in contiguity, completeness and chromosome size. The contiguity and completeness of SKLA1.0 are also higher than those of the zebra finch bTaeGut1.4.pri and the chicken GRCg6a (Fig. 1a-d and Table 1).
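Contig N50, the statistic used throughout this comparison, is the length L such that contigs of length ≥ L together cover at least half of the assembly. A minimal sketch of that computation (a generic illustration, not the authors' pipeline):

```python
def n50(lengths):
    """Smallest contig length L such that contigs of length >= L
    together cover at least half of the total assembly size."""
    total = sum(lengths)
    acc = 0
    for L in sorted(lengths, reverse=True):
        acc += L
        if 2 * acc >= total:
            return L

# Toy contig lengths; the real SKLA1.0 assembly has 151 contigs with
# a contig N50 of 32.81 Mb.
print(n50([40, 30, 20, 10]))  # -> 30
```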
To determine whether evolution at the gene family level could account for the different susceptibilities to AIVs, we examined gene family differences between the principal natural hosts and incidental hosts of AIVs, comparing one amphibian, three mammals and four birds (Additional file 2: Fig. S4). This effort showed that the duck has significantly expanded 25 and contracted 13 immune-related gene families when compared to the chicken (Additional file 1: Table S12). Interestingly, the cell fate determining protein mab21-related, complement C4-A-related, MHC class II-related, butyrophilin (BTN), death inducer-obliterator 1, MHC class I and C-type lectin superfamily members were significantly expanded, while the properdin, B-cell lymphoma 3 protein, and zinc finger and BTB domain-containing protein 1 families were significantly contracted in duck.
Duck has an extensive and complex major histocompatibility complex
The MHC is crucial for initiating T cell-mediated immune responses to infection. Classical MHC class I molecules (MHCI) present peptides predominantly derived from proteins in the cytoplasm and nucleus to CD8+ T cells, and classical MHC class II molecules (MHCII) present peptides predominantly derived from proteins in intracellular vesicles in contact with the extracellular space to CD4+ T cells [20-22]. However, the function of MHC genes in duck is not well characterized due to the lack of a complete MHC genomic map. We herein generate a 4.13-Mb contig (referred to as chromosome 17) containing the complete duck MHC genomic sequence. After annotation, we find that the duck MHC spans about 1.82 Mb and encodes 183 gene loci (Fig. 2a and Additional file 3: Data S1).
Since MHC-B maps in galliformes share good synteny with each other, we compared the duck MHC with those of two galliformes, namely chicken and quail. The data show that the duck MHC is more extensive and complex than the galliform MHCs. For chicken, its "minimal and essential" MHC contains only 41 loci within 0.28 Mb in the chicken GRCg6a reference (GCF_000002315.6) (Additional file 2: Fig. S5). The quail MHC map, assembled from both BAC clones [14,23] and NGS (NCBI genome version: GCF_001577835.2), indicated that the organization of gene families in quail is very similar to that of the chicken MHC. We further verified the sequences of 5 fragments in the duck MHC using Sanger sequencing (Additional file 1: Table S13) and compared the duck MHC genomic map to those of seven amphibians, eight reptiles, five birds and two mammals (Fig. 2b and Additional file 1: Tables S14-S16). This analysis indicates that the duck MHC map is of high quality and is characterized as a primordial, expanded and complex region. Firstly, the duck MHC is organized in a "MHCIII-MHCI-MHCII" arrangement and has the conserved ZNF692-TRIM10 subregion (Additional file 2: Fig. S5). Secondly, duck MHCI and MHCII genes are separated by duplicated BTN and DMB genes, with MHCIII genes flanked by tandemly duplicated ZNF and BTN genes (Fig. 2a). Thirdly, we defined the C-D block as the region between the COL11A2 (C, collagen type XI alpha 2) gene and the DAXX (D, death domain associated protein) gene, which encompasses 14 genes (COL11A2-RXRB-SLC39A7-HSD17B8-RING1-VPS52-RPS18-WDR46-B3GALT4-PFDN6-RGL2-TAPBP-ZBTB22-DAXX). The C-D block is located in the extended class II subregion of the human MHC gene map. We defined the D-P block as the region between the DDR1 (D, discoidin domain receptor tyrosine kinase 1) gene and the PPP1R11 (P, protein phosphatase 1 regulatory inhibitor subunit 11) gene, which includes 12 genes (DDR1-S100A16-GTF2H4-DHX16-C6ORF136-ATAT1-MRPS18B-PPP1R10-ABCF1-GNL1-RIM39-PPP1R11). The D-P block is located in the classical class I subregion of the human MHC gene map (Additional file 1: Table S17). The duck MHC has retained the D-P block, the BTN gene family, and the C-D block, which are present in the human MHC but are almost lost in chicken (Fig. 2b and Additional file 1: Table S17). Fourthly, the duck MHC is organized in a primordial pattern, similar to previously proposed primordial MHC organizations based on evolutionary analysis [24-26]. Moreover, the duck MHC is similar to those of amphibians and reptiles. In addition, duck contains the ancient NKR framework of genes, which are adjacent to the MHC genes (Fig. 2b), as in the previously reported proto-MHC in ectotherms [25]. We further performed gene expression profile analysis and found that, compared to control individuals, 81 MHC genes showed significantly differential expression in ducks infected by H5N1 IAV, supporting the idea that many duck MHC genes are associated with the host immune response to IAV infection (Additional file 3: Data S1-S6).
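The C-D and D-P blocks defined above can be encoded as ordered gene lists for quick membership and size checks; the gene order below is taken verbatim from the text, while the small helper is only an illustration.

```python
# Gene order of the two blocks exactly as defined in the text
CD_BLOCK = ["COL11A2", "RXRB", "SLC39A7", "HSD17B8", "RING1", "VPS52",
            "RPS18", "WDR46", "B3GALT4", "PFDN6", "RGL2", "TAPBP",
            "ZBTB22", "DAXX"]
DP_BLOCK = ["DDR1", "S100A16", "GTF2H4", "DHX16", "C6ORF136", "ATAT1",
            "MRPS18B", "PPP1R10", "ABCF1", "GNL1", "RIM39", "PPP1R11"]

def block_of(gene):
    """Return which block (if any) a gene belongs to."""
    if gene in CD_BLOCK:
        return "C-D"
    if gene in DP_BLOCK:
        return "D-P"
    return None

print(len(CD_BLOCK), len(DP_BLOCK), block_of("TAPBP"))  # 14 12 C-D
```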
Characteristics of duck MHCIA genes and TAPBP gene
The MHCI heterotrimer consists of an invariant light chain (β2M), a polymorphic heavy chain (MHCIα, encoded by MHCIA), and an antigen peptide, and presents peptides through interaction with TAPBP (tapasin). MHCI-peptide complexes evoke a CTL (cytotoxic T cell) response by interacting with the CD8α molecule and the TCR (T cell receptor) [7]. The chicken dominantly expresses one MHCIA gene (BF2), which is associated with disease susceptibility [27]. In contrast, duck has a significantly expanded MHCIA gene repertoire, where five MHCIA genes are expressed in four IAV target organs (lung, ileum, jejunum and duodenum) and one immune organ (spleen) (Additional file 2: Fig. S6).

(Displaced legend of Fig. 2b: blocks NKC, D-P, and C-D. These blocks may not necessarily be completely separate, and in some species one block may overlap with another block. The proto-MHC represents the ancient MHC map in vertebrates, inferred from alignment of the MHC gene maps of duck and 22 other species as well as a previously proposed ancient MHC map. The dashed line denotes that the BTN and NKC blocks are found in amphibian MHCs, but their location is not consistent among these amphibian species.)

Human HLA-C/E have a conserved motif which can interact with members of the KIR family (killer cell immunoglobulin-like receptors) on natural killer cells to regulate NK killing activity, and there is a potential inference that two motifs in chicken BF1 may interact with the NK cell receptor [28-31]. It is hard to determine whether duck MHCIαs may function like human HLA-C/E and chicken BF1, since there are many variants in the key motifs (residues 71-82 and 141-149) (Fig. 3a and Additional file 2: Fig. S7) [15,17,28,32-35]. According to previous research on MHCIα proteins [15,32-34], duck MHCIαs have two conserved sites with negatively charged side chains (Q222 and E223) which interact with three conserved duck CD8A residues (homologous to S34, Y51 and S53 of human CD8A), four residues (R83, T140, K143, and W144) in pocket F, and four residues (Y7, Y58, Y156, and Y168) in pocket A which participate in the hydrogen-bonding network (Additional file 2: Fig. S7) [15,17,28,32-35]. The homologous residue 83 is R in non-mammalian vertebrates but Y in mammals, and the R83 residue may allow the C-terminus of the peptide to extend beyond the peptide-binding groove [34]. In addition, like the human HLA-A2 protein [35], duck MHCIαs have the conserved Cys-Cys-Trp structural triads (Additional file 2: Fig. S7) [15,17,28,32-35]. These observations suggest that all duck MHCIαs, except UCA, which is missing a region of the α3 domain, might present peptides to CTLs as chicken BF2 does (Fig. 3a and Additional file 2: Fig. S7) [15,17,28,32-35].
The peptide repertoire presented by chicken BF2 is positively correlated with the opening size of its binding groove [36,37]. Interestingly, duck MHCIαs are predicted to have large opening sizes of their peptide-binding pockets (Fig. 3b, Additional file 1: Table S18 and Additional file 2: Fig. S8). Duck MHCIαs are divergent in the electrostatic potential of their peptide-binding groove and the lipophilic potential of their B pocket (Fig. 3c, d and Additional file 2: Fig. S9-S12) [15]. These observations suggest that a divergent duck MHCIα repertoire might allow binding of more types of peptides, increasing the duck's immune surveillance for new and dangerous pathogens. This hypothesis was supported by short-peptide docking analyses, which predicted that duck MHCIαs have a larger number (76 to 104) of haemagglutinin-binding peptides of A/chicken/Sheny/0606/2008 (SY/08) H5N1 than chicken BF2 haplotypes (B4: 7; B21: 20) (Fig. 3e).
Due to low genome quality, the duck TAPBP gene was not annotated for a long period of time [38]. In this genome version, we annotated a duck TAPBP gene, which loads the MHCI complex with peptide and shares 64% amino acid identity with chicken TAPBP. The duck TAPBP protein contains two tandem Ig domains and has an editing loop blocking the F pocket of MHCIα from binding peptide (Fig. 3f, g). RNA-seq analysis indicated that the duck TAPBP gene is well expressed in duodenum, ileum, and liver tissues (Fig. 3h). Moreover, alternative splicing analysis showed that the duck TAPBP gene encodes six transcripts (Fig. 3i). Finally, we extracted the UTR regions and predicted promoter sequences of duck TAPBP (Additional file 1: Table S19).
Researchers have carried out a large amount of valuable work on duck MHC class I genes. Moon et al. [15] found that duck UAA is the dominantly expressed class I molecule, like chicken BF2, while the other duck MHCIA genes (UBA, UCA, UDA, and UEA) were not found to be expressed, with defects in their promoters, coding sequences, or 3′UTRs. Chan et al. showed that the promoters of UBA, UCA, and UEA failed to drive a reporter gene in a heterologous cell line and that UDA was expressed only when the miRNA let-7 family was absent [39]; thus these four genes were considered either non-classical or pseudogenes. Previous work based on limited samples or a single cell line failed to consider the issue of allelic polymorphism [40,41]. However, the highly polymorphic nature of these genes makes their characteristics and function more complex: UBA has very low expression at the RNA level; UAA is the predominantly expressed gene at the RNA level in most cases; and expression of UCA, UDA, and UEA varies greatly among ducks, with one of them in some individuals even reaching an expression level comparable to that of UAA (Additional file 3: Data S2-5). In our MHC map, UCA has a large deletion in the α3 region, but that does not mean all ducks have a defective UCA gene like the C18 duck used in this project. UCA genes reported by other researchers [15] and resequencing data from other ducks indicate that many ducks have a complete UCA gene. Gene variation analysis in this region indicates that the five MHCIA genes and the TAPBP gene are all polymorphic (Additional file 4: Data S7).
Duck expanded both the classical MHCIIB genes and the non-classical MHCIIB genes
The MHCII repertoire contains several distinct classical and non-classical MHCII isotypes, each with characteristic α and β chains encoded by A and B genes in mammals [42]. Interestingly, gene expansion events occurred in both the duck classical and non-classical MHCII gene families. Ducks contain a single A gene of each type (AnplMHCIIA and AnplDMA) but a large number of B genes (nine AnplMHCIIBs and fifteen AnplDMBs) (Fig. 2a, Fig. 4a, b, and Additional file 2: Fig. S13-14). The expression patterns of these genes differ: AnplMHCIIA and AnplDMA are highly expressed in four target tissues of IAVs (ileum, jejunum, duodenum, and lung) and one immune organ (spleen); AnplMHCIIBs and AnplDMB1 are expressed moderately and tissue-specifically; and AnplDMB2 genes have a low expression baseline (Fig. 4c and Additional file 3: Data S2). In mammals, the β1 domain of the MHCIIβ chain pairs with the α1 domain of MHCIIα to bind peptides, and studies in mammals indicated that the β2 domain contacts the D1 domain of the CD4 protein [43,44]. CD4, together with the T cell receptor, stimulates T cells to express cytokines and to directly stimulate B cells [43,45]. Similarly, duck AnplMHCIIβ proteins contain β1 and β2 domains. Based on previous studies of chicken BLBs and human HLA-DRB genes [20,43,45], we mapped the key residues interacting with the MHCIIα chain, the CD4 protein, and the antigen peptide, or forming the P1 pocket, onto duck AnplMHCIIβ proteins (Fig. 4d). Multiple sequence alignment of the duck AnplMHCIIβ paralogs indicated that large numbers of variations are located in the β1 domain (except in AnplMHCIIβ5 and AnplMHCIIβ9), especially at residues around the P1 pocket (Fig. 4d and Additional file 2: Fig.
S15) [20,43,45]. The P1 pocket has been reported to affect peptide selection, stabilization of the empty MHCII peptide-binding groove, and DM susceptibility in humans [20,46,47]. More experiments are needed to validate the function of these duck AnplMHCIIβ paralogs. Among the variant sites, three contained loss-of-function mutations. H81N, as in AnplMHCIIβ5, disturbs the conserved H-bond formation between the β chain and the peptide backbone and impairs the ability to exchange ligand [46]. G86Y, as in AnplMHCIIβ9, fills the P1 pocket, preventing conformational changes and greatly reducing responsiveness to the DM heterodimer [47,48]. Substitution of the neutral non-polar amino acid F89 by the neutral polar amino acid Y89, as in AnplMHCIIβ5 and AnplMHCIIβ8, might influence stabilization of the P1 pocket (Fig. 4d and Additional file 2: Fig. S15) [20,43,45,47]. In contrast, the β2 domains are relatively monomorphic, with five variant sites and 70 identical sites. Eleven sites are identical across the AnplMHCIIβs, and ten of them are conserved with the homologous sites of human HLA-DR1B or chicken BLB2; in mammals, these sites are known to contact the D1 domain of CD4 molecules [45].
Mammalian DMβ proteins contain four critical parts: the β1 domain, which interacts with DMα and MHCII (i.e., HLA-DR) to form a heterotetramer; the β2 domain, which stabilizes the overall topology of the HLA-DM-HLA-DR1 heterotetramer; the transmembrane domain (TM), required for HLA-DM catalytic activity; and the YTPL tyrosine-based endocytosis motif in the cytoplasmic domain, required for DM sorting in endosomes [47,49-51]. Surprisingly, only one (AnplDMβ2-14) of the fifteen duck AnplDMβs has all four parts and can efficiently catalyze peptide exchange like human HLA-DMβ and chicken DMβ2. Thirteen DMβ2s cannot catalyze peptide exchange because they lack the TM domain, the β1 domain, or the YTPL motif (Fig. 4e and Additional file 2: Fig. S16).
NKRP1-like NK cell receptor genes and their ligand-like genes are expanded in the duck MHC
The mammalian NKC (natural killer gene complex) in the MHC paralogous region encodes many (22-50) C-type lectin-like NK cell receptors (CTLRs) that regulate NK cell activation [52]. Among them, the NKRP1 subfamily (with three members) modulates NK cell activation by interacting with NKC-encoded CLEC2 glycoproteins in humans [53]. Interestingly, we found that both NKRP1-like genes (referred to as NKRP1L-1 to NKRP1L-17) and their ligands (referred to as CLEC2L-1 to CLEC2L-20) are significantly expanded in the duck MHC (Fig. 2a, Fig. 5a, and Additional file 2: Fig. S17). This is in sharp contrast to the case in chicken, which contains one NKRP1-like receptor/ligand pair (BN-K/B-lec) and one other ligand (CLEC3) in its MHC [55].
Mammalian NKRP1 and CLEC2 proteins contain two critical domains. The cytoplasmic domain includes a tyrosine-based signaling motif: the hem-ITAM (hemi-immunoreceptor tyrosine-based activation motif) stimulates, whereas the ITIM (immunoreceptor tyrosine-based inhibition motif)/ITSM (immunoreceptor tyrosine-based switch motif) inhibits, phagocytosis, cytokine production, and cytotoxicity [56]. The single extracellular C-type lectin domain (CTLD) forms a compact structure that uses a conserved "WI/TGL" motif to bind ligands/receptors [53].
Interestingly, duck NKRP1-like genes are remarkably diversified in the tyrosine-based signaling motif: one contains a hem-ITAM-like (DDYXXL) motif of undefined function, three lack the tyrosine-based motif, four contain an ITIM or ITSM motif, and seven carry a consensus D/EGYXXL hem-ITAM motif, which becomes phosphorylated and rapidly recruits SYK to mediate cytolysis of malignant cells (Fig. 5b and Additional file 2: Fig. S18) [53,55,56]. This differs from human NKRP1 genes, which contain a hem-ITAM domain, and from the chicken BN-K gene, which contains an ITIM domain.
In the CTLD, duck NKRP1-like proteins are conserved in the "WI/TGL" motif, with six invariant cysteine residues forming the core of the extracellular C-type lectin-like domain. However, they are diversified at three sites (homologous to S171, S182, and E183 of human NKp65) that form a hydrogen-bond network with their CLEC2 ligands (Fig. 5b and Additional file 2: Fig. S18) [53,55,56]. This is matched by diversity at the hydrogen-bond network site (homologous to R138 of human KACL) in the duck CLEC2 ligands. Perhaps having a multigene family of C-type lectin NK cell receptor/ligand pairs in ducks (as in mice) is a different strategy from having a single receptor/ligand gene pair with many allelic polymorphisms, as in chickens. Detailed sequence analysis indicated that duck CLEC2s are conserved at the critical site for forming the CLEC2 homodimer (homologous to F84 of human KACL) (Fig. 5c and Additional file 5: Fig. S19). Genomic structure showed that nine pairs of duck NKRP1-like/CLEC2 genes lie next to each other, like human NKRP1A/LLT1, NKp80/AICL, and NKp65/KACL (Fig. 5d). Further prediction suggested that the nine NKRP1-like/CLEC2 dimers are very similar to human NKRP1A/LLT1 (5J2S:A/4QKH:A) in their tertiary structures, with small RMSD (global root mean square deviation, 0.33-1.35 Å) and high GMQE (global model quality estimation score, 0.66-0.84) (Additional file 1: Table S18). The total expression level of these inhibitory NK receptor-like genes is higher than that of the activating ones in duck (Fig. 5e).
Expanded duck BTNs surround and are co-regulated with MHCI and MHCII
BTN and BTNL (BTN-like) genes belong to the Ig superfamily and mostly comprise a membrane-distal IgV domain, a membrane-proximal IgC domain, a transmembrane region, and an intracellular C-terminal B30.2 domain. Previous studies indicate that BTN and BTNL proteins inhibit the activation and proliferation of αβ T cells such as CD4+ T cells and CD8+ T cells [57]. Recent studies revealed new functions in the development [58] and activation [59-61] of γδ T cells. Unexpectedly, a large number (43) of BTNs surround the MHCI, MHCII, MHCIII, TAP1, TAP2, and four ZNF (zinc finger protein) genes in the duck MHC owing to lineage-specific duplications (Fig. 2a, Fig. 6a and Additional file 5: Fig. S20a-b). Duck BTNs (except BTN-10 and BTN-12) share a similar domain structure with human BTNs, and most of them are expressed at high levels in three IAV target tissues, namely ileum, jejunum, and duodenum (Additional file 5: Fig. S21). Twenty-two BTNs were significantly upregulated by 2.55- to 16.68-fold in lung or spleen, while two BTNs were significantly downregulated by 3.97- to 40.22-fold in the lungs of H5N1-virus-infected ducks compared with controls (Fig. 6b).
Using Hi-C data, we identified topologically associated domains (TADs), which serve as a structural scaffold for the establishment of the regulatory landscape. TADs represent a functionally privileged scale in chromosomes, and genes within a TAD tend to be co-regulated [65,66]. We found two TADs in the duck MHC map; 18 BTNs, the MHCIs, MHCIIs, TAP1, TAP2, and TAPBP are located in the second TAD (Fig. 6c). These 18 duck BTN genes might work in a co-regulated manner with MHCI-related and MHCII-related genes, which play key roles in the initiation of CD8+ T cells and CD4+ T cells. The location of these 18 BTN genes may thus contribute to regulating the activity of CD8+ T cells and CD4+ T cells.
BTN uses the B30.2 domain to recognize phospho-antigens and then activate γδ T cells [59]. This domain was first defined as a region of linear-sequence similarity between BTN and TRIM (tripartite motif-containing) genes [67-69]. A phylogenetic tree built from the B30.2 domains showed that duck BTNs cluster with a group of mammalian BTN/BTNL genes (Fig. 6d and Additional file 5: Fig. S20b).
Discussion
After challenging ducks and chickens with the same dose of H5N1 virus, ducks showed a much lower death rate and milder symptoms than chickens. To explore this sharp difference between the duck and chicken immune systems, we de novo assembled a markedly improved Pekin duck genome draft, SKLA1.0, with 40 chromosomes. We then compared duck and chicken at the genome and transcriptome levels and found that ducks have expanded their classical MHCI, NKRL, BTN, classical MHCII, and non-classical MHCII genes.
The arrangement of gene families may contribute to functional divergence
The class III region is sandwiched between the class I and class II regions in mammals, but not in most non-mammalian vertebrates [24]. From an evolutionary perspective, these two kinds of arrangement contribute to functional divergence. In chicken, the TAP genes lie next to the MHCI genes they serve; co-evolution of a pathway of polymorphic interacting genes can work effectively only if those genes are closely linked in the bird's genome. In this circumstance, only the MHCI gene nearest the TAP genes is dominantly expressed in most tissues and plays a more important role than the MHCI genes farther from TAP. This model may have drawbacks: for an individual, the antigens loaded and recognized by its TAP and MHCI are very limited, which makes it susceptible to infection. In human, MHCI and TAP genes are not adjacent, and the TAP genes pump a wide variety of peptides; some of these peptides will be appropriate for any human MHCI protein, and each MHC haplotype carries a multigene family of class I molecules. Although duck UAA is next to the TAP genes, the four other MHCI genes are not. In chicken, two class I genes (BF1 and BF2) flank the two TAP genes (TAP1 and TAP2), and these two MHCI genes are highly polymorphic [70]. In duck, the MHCI genes near or far from the TAP genes all have relatively low allelic polymorphism, similar to human (Additional file 4: Data S7). Perhaps the duck MHCI genes have evolved into an intermediate state between chicken MHCI genes and human MHCI genes.
The MHCIII region contains a cohort of genes including those involved in activation of the complement system (C2, BF, C4), inflammation and cell stress (TNFa, HSPA1A, and HSC70), and Ig-SF members (1C7) [71,72]. Some genes in this region are functionally relevant to the MHCII genes; for example, antibodies can use complement to regulate antibody responses [73], and HSC70 plays a central role in modulating antigen transport within cells to control MHC class II presentation during nutrient stress [71]. Human MHCII and MHCIII are adjacent to each other and may have co-evolved to give rise to many complicated antiviral strategies. However, duck MHCII and MHCIII are separated by MHCI, and the 3D MHC map also indicates that duck MHCII and MHCIII are hardly ever located in the same TAD. In this arrangement, MHCII may not co-evolve with MHCIII. Perhaps MHCII will instead co-evolve with the BTN gene cluster, since they are in the same TAD, and thus change the antibody response.
Besides resistance, tolerance may be another strategy employed by duck during IAV infection
Based on gene functional studies in human and chicken [30,32,74], the expanded MHCI and MHCII genes in duck may help recognize invading pathogens and initiate immune cells such as CD8+ T cells and CD4+ T cells. Moreover, the expanded BTN and NKR(-like) genes in duck may help activate other types of immune cells, such as γδ T cells and NK cells [53,55,69,75]. Activation of these cells tends to increase the potency of immunity and inflammation. However, inflammation is weak in duck during IAV infection (Additional file 5: Fig. S23-26). In addition, antibodies play important roles in eradicating viruses, yet ducks mount a weak antibody response compared with chickens (Additional file 5: Fig. S27-28 and Additional file 1: Table S22) [76,77], which is another puzzle. There are two distinct host strategies for dealing with infection: antiviral resistance and disease tolerance [8,78]. The former works by detection, neutralization, destruction, or expulsion of the pathogens, while the latter reduces the negative impact of infection on host fitness. Since resistance alone cannot explain the weak inflammation and weak antibody response, we explored whether ducks also employ tolerance measures.
The CD8 protein that interacts with classical class I molecules in mammals is a heterodimer of CD8α (encoded by CD8A) and CD8β (encoded by CD8B) that enables cytotoxicity by CTLs [79], while CD8αα homodimers in mice interact with the non-classical class I molecules called TL to deliver various signals in the intestine and thymus [80-83]. CD8A1 is a variant gene of CD8A that cannot activate CD8+ T cells but contributes to inflammation [84]. Duck has far fewer CD8A1 genes (1 vs. 24) with lower expression levels than chicken, which may contribute to the low inflammation level in duck (Additional file 1: Table S20, S23-S24 and Additional file 5: Fig. S29-30). Gene expansion gave rise to 13 fragmental AnplDMB2s which, lacking key domains, fail to edit peptides and to cooperatively present peptides with MHCII heterodimers. Moreover, these fragmental AnplDMB2s might negatively affect peptide presentation by forming defective DM dimers and MHCII-DM tetramers (Additional file 5: Fig. S31). Besides activating NKRP1-like receptors, duck also expanded four inhibitory NKRP1-like receptors, and the expression level of the inhibitory receptors is even higher than that of the activating ones. These observations support the idea that ducks might use diverse inhibitory NKRP1-like receptors to restrain NK cell activation (Fig. 5e). Besides activating γδ T cells, BTNs inhibit the proliferation and activity of αβ T cells, such as CD4+ T and CD8+ T cells [57,62,63].
These measures may not directly contribute to eliminating virus; however, they tend to protect the host from cytokine storm and excessive tissue damage. A more reliable model for duck may therefore combine both resistance and tolerance strategies (Additional file 5: Fig. S32).
Conclusions
To understand the antiviral strategy of duck, we assembled a chromosome-level reference genome (SKLA1.0) and performed immuno-pathological and transcriptomic analyses. This new duck reference genome covers 40 chromosomes with a contig N50 of 32.90 Mb, surpassing the two current model organism genomes (chicken and zebra finch) in sequence contiguity. We also successfully assembled the complete duck MHC and verified its accuracy. Moreover, we compared the duck MHC gene map to those of fish, amphibians, reptiles, land birds, and mammals. This analysis indicated that the arrangement of gene families in the duck MHC is primordial, with diversified AnplMHCI, AnplMHCIIβ, AnplDMB, NKRL, and BTN genes. These expanded genes are tightly organized in their linear and 3D architecture, with 183 genes contained within a 1.82-Mb region and 111 of them present in only two TADs. These important immune-related genes may help ducks better resist influenza viruses.
Genome assembly
Clean normal and ultra-long Nanopore reads were assembled into contigs using the NextDenovo software (https://github.com/Nextomics/NextDenovo, version 2.1-beta.0). Contigs were then polished for three rounds using Illumina clean reads with the NextPolish software (version 1.2.3) [85]. In parallel, the Bionano map was assembled with the Bionano Solve software (https://bionanogenomics.com/support/software-downloads, version 3.2.1) using clean BNX files and refined according to the polished contigs. Subsequently, hybrid scaffolds were generated by integrating the Bionano maps and polished contigs. Hybrid scaffolds were further assigned to chromosomes to develop the draft genome after polishing, splitting, sealing, and merging with clean Hi-C reads using Trimmomatic (version 0.36) [86], the Juicer software (version 1.5) [87], and the 3D-DNA package (version 180922) [88] with default parameters. After that, the raw genome draft was manually refined according to the contact matrices of the Hi-C data using Juicebox (version 1.13.01) [89]. Finally, three rounds of gap filling were performed on the refined genome draft using normal and ultra-long Nanopore reads with the GapCloser software (version 0.56) [90] to produce the final high-quality chromosome-level genome.
Assembly and verification of MHC genomic sequence
The MHC represents one of the most polymorphic and complex regions in vertebrate genomes, with MHCI, MHCII, and MHCIII genes being extremely difficult to assemble owing to a high level of repetitive and GC-rich sequence. We performed de novo assembly using 81× Nanopore reads and 9× ultra-long reads (Additional file 1: Table S2). This produced only five small fragments carrying MHC genes, ranging from 100 kb to 1 Mb. To obtain a complete MHC map, we sequenced additional ultra-long reads (17×) (Additional file 1: Table S2). Using these three read datasets, we assembled a contig containing the complete MHC genomic sequence and named it chromosome 17 (chr17).
We assessed the consensus accuracy of the MHC by comparing the chr17 map to the Bionano map constructed from 259.60 Gb of BNX data using the Bionano Solve software (https://bionanogenomics.com/support/software-downloads, version 3.2.1). We found that the MHC genomic sequence on chr17 (0.50 to 2.09 Mb) was completely consistent with the Bionano maps (Additional file 5: Fig. S33). We further visualized the Hi-C data for the MHC using the Juicebox software (version 1.13.01) [89] and found that this region had even coverage, suggesting that the MHC genomic sequence was of good quality (Additional file 5: Fig. S34). Moreover, we performed sequence alignment between a duck BAC (bacterial artificial chromosome) containing fragmented MHC sequences (AY885227.1) and our MHC sequence using the MAUVE software (version 2.3.1) with default parameters. This indicated that our duck MHC sequence was consistent with the available fragmented MHC sequences. Finally, we selected the CLEC subregion of the MHC (containing 36 tandemly repeated genes) to design five primer pairs for PCR amplification (Additional file 1: Table S13). PCR products were purified according to the manufacturer's protocols (OMEGA, Norcross, GA, USA) and sequenced by Sanger sequencing. This effort verified that the MHC genomic sequence was assembled correctly.
Annotation and synteny analysis of the MHC
Three kinds of gene annotation, namely de novo prediction, homology-based prediction, and transcriptome-based prediction, were carried out on the chr17 data. We integrated all gene models, performed protein sequence alignment using the BLAST software with an e-value < 1e−10, and carried out domain prediction using InterProScan (https://www.ebi.ac.uk/interpro/about/interproscan/) with default settings. We further manually removed redundant genes and those without the conserved domains of homologous genes. MHC gene information was then collected from amphibians, reptiles, waterfowls, land birds, and mammals in the NCBI Gene database for synteny analysis. According to previous studies and MHC gene maps [25,91], we divided the MHC into seven conserved blocks, referred to as the D-P block, NKC subregion, BTN gene family, MHCIII gene family, MHCI gene family, MHCII gene family, and C-D block (Additional file 1: Tables S14-S16). We compared the duck MHC gene map to those of these species to characterize the duck MHC.
Gene expansion and contraction analysis
Protein sequences of seven species (chicken, zebra finch, emu, Egyptian rousette, greater horseshoe bat, human, and tropical clawed frog) retrieved from NCBI, together with the protein sequences in SKLA1.0, were grouped using the OrthoMCL pipeline [92]. Gene group IDs were collected by uploading human and chicken protein IDs to the PANTHER database (http://www.pantherdb.org/), and groups with the same PANTHER family ID were combined. This produced a total of 6606 gene families, including 1800 single-copy gene families and 4806 multi-copy gene families. We then performed multiple sequence alignments using the Prank software (version 14063), constructed maximum-likelihood trees using the IQ-TREE software (version 1.6.5) [93], viewed the resulting phylogenetic trees using the FigTree software (http://tree.bio.ed.ac.uk/software/figtree, version 1.42), and detected gene expansion/contraction in the multi-copy gene families using the CAFE software (version 4.2.1) [94] with default parameters.
Structure prediction, pocket size calculation, and molecular docking
Protein templates were searched using the SWISS-MODEL website. Structures of duck MHCIα (UAA to UEA) were built by point mutation and optimization with the Discovery Studio software (version 2019) according to a template (PDB: 5GJX). Structures of chicken BF2 proteins were downloaded from the PDB database [32,36,95]. Structures of other proteins were predicted from templates using the I-TASSER website (https://zhanglab.ccmb.med.umich.edu/I-TASSER/) (Additional file 1: Table S18). The opening size of the active pocket was calculated using the PyMOL software (https://pymol.org/2/, version 4.2.0). The HA protein sequence of the A/chicken/Sheny/0606/2008 (SY/08) H5N1 virus was divided into peptides in silico according to the motif lengths reported in the literature [17]. Peptides were docked into the peptide-binding pocket of the MHCIα protein structures using the GalaxyPepDock website (https://galaxy.seoklab.org/cgi-bin/submit.cgi?type=PEPDOCK), and docking models were filtered according to the interactions between the B and F pockets of MHCI and the peptide. Electrostatic potential (EP) and lipophilic potential (LP) maps were estimated using the MOLCAD program in the SYBYL software (version X2.1.1).
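As a minimal sketch of the in silico peptide generation step, the following tiles a protein sequence into all 8-10 aa windows (the 8-10 aa lengths follow the docking analysis in Fig. 3e; the toy sequence and function name are illustrative, not the SY/08 HA):

```python
def tile_peptides(protein_seq, lengths=(8, 9, 10)):
    """Enumerate every 8-, 9-, and 10-mer along a protein sequence,
    mimicking the in silico division of HA into candidate MHCI binders."""
    peptides = []
    for k in lengths:
        for start in range(len(protein_seq) - k + 1):
            peptides.append(protein_seq[start:start + k])
    return peptides

# a 12-residue toy sequence yields 5 + 4 + 3 = 12 candidate peptides
candidates = tile_peptides("MEKIVLLFAIVS")
```

Each candidate would then be docked and filtered by its B- and F-pocket contacts as described above.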
Promoter prediction for TAPBP gene
The gene promoter region is reported to lie between 2000 bp upstream of the transcription start site and the translation start site (ATG). We chose the region chr17:1843226-1845294 for promoter prediction using the BDGP Neural Network Promoter Prediction website (https://www.fruitfly.org/seq_tools/promoter.html) with default settings.
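For illustration, assuming 1-based coordinates on the + strand, the promoter window described above (2000 bp upstream of the TSS through the ATG) can be sliced out of a chromosome sequence as follows (the function name and toy sequence are hypothetical):

```python
def promoter_window(chrom_seq, tss, atg, upstream=2000):
    """Slice the candidate promoter: from `upstream` bp before the
    transcription start site (TSS) through the translation start (ATG).
    `tss` and `atg` are 1-based positions on the + strand."""
    start = max(1, tss - upstream)
    return chrom_seq[start - 1:atg]

# toy chromosome: the window spans positions 5..12 (TSS-4 through the ATG)
prom = promoter_window("ACGTACGTACGT", tss=9, atg=12, upstream=4)
```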
Identification of topologically associated domain (TAD)
Topologically associated domains (TADs) represent a major type of chromatin organization, with sizes ranging from tens of kilobases to several megabases, and they are conserved among species. TADs are characterized by pronounced long-range associations between loci located in the same domain and less frequent interactions between loci located in adjacent domains. Thus, TADs have two basic features: (1) self-association of regions within the TAD; (2) insulation between regions in neighboring TADs. Different methods of identifying TADs can be employed according to these features, exploiting the low number of interactions at TAD boundaries and the higher numbers inside TADs [96-98]. Here, we used the insulation square analysis method to call TADs.
Hi-C reads from the duck sample sequenced in this study were mapped to our duck genome SKLA1.0, processed, and iteratively corrected using the HiC-Pro software (version 2.11.1) [99]. We then used the Perl script matrix2insulation.pl from Cworld (https://github.com/dekkerlab/cworld-dekker, version 0.0.1) to detect TAD boundaries from the Hi-C data at 20-kb resolution. Briefly, we calculated the mean interaction across each bin by sliding a 400 kb × 400 kb (20 bins × 20 bins) square along the matrix diagonal and estimated insulation scores for all 20-kb diagonal bins across each chromosome. The 20-kb valleys/minima of the insulation score were defined as TAD boundaries, and all boundaries with a boundary strength < 0.1 were discarded. Regions between the boundaries were then extracted using the insulation2tads.pl script and defined as TADs. TAD maps were generated using HiCExplorer (version 3.6) [100].
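The insulation-square idea can be sketched on a toy contact matrix: average the contacts inside a square slid along the diagonal and treat valleys as candidate boundaries. This is a simplified stand-in for Cworld's matrix2insulation.pl and omits its normalization and boundary-strength filtering:

```python
import numpy as np

def insulation_scores(contacts, window=20):
    """Mean interaction in a (window x window) square slid along the
    diagonal; each score counts contacts crossing bin i, so local
    minima mark candidate TAD boundaries."""
    n = contacts.shape[0]
    scores = np.full(n, np.nan)
    for i in range(window, n - window):
        scores[i] = contacts[i - window:i, i:i + window].mean()
    return scores

# two dense 30-bin blocks with no cross-block contacts: the insulation
# valley falls exactly at the block boundary (bin 30)
toy = np.zeros((60, 60))
toy[:30, :30] = 1.0
toy[30:, 30:] = 1.0
boundary = int(np.nanargmin(insulation_scores(toy, window=10)))
```

At 20-kb resolution a 20-bin window corresponds to the 400 kb × 400 kb square used above.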
Viral infection
A wild-type highly pathogenic H5N1 avian influenza virus, A/chicken/Sheny/0606/2008 (SY/08, clade 7), isolated from cloacal swabs of chickens and stored at −80°C in a biosecurity level 2+ laboratory approved by China Agricultural University, was used in this study [101]. The SY/08 virus was propagated in 10-day-old fertilized chicken eggs (Beijing Merial Vital Laboratory Animal Technology) in a total volume of 0.1-0.3 mL at 37°C for 48-72 h; allantoic fluids were harvested, and HA inhibition (HI) tests were performed using a panel of reference sera [101]. Viral titers of positive allantoic fluids were estimated as 50% egg infectious doses (EID50) using the Reed and Muench method [102]. SY/08 virus with titers up to 10^8.5 EID50 was used for the subsequent animal infection study.
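For reference, the Reed and Muench 50% endpoint calculation reduces to the following (the egg counts in the example are invented for illustration, not data from this study):

```python
def reed_muench_id50(log10_dilutions, infected, total):
    """Reed & Muench 50% endpoint. `log10_dilutions` (e.g. [-5, -6, -7, -8])
    runs from least to most dilute; returns log10 of the ID50 dilution."""
    uninfected = [t - i for i, t in zip(infected, total)]
    # Reed-Muench convention: accumulate infected from the most dilute end,
    # uninfected from the least dilute end
    cum_inf = [sum(infected[i:]) for i in range(len(infected))]
    cum_uninf = [sum(uninfected[:i + 1]) for i in range(len(uninfected))]
    pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for i in range(len(pct) - 1):
        if pct[i] >= 50 > pct[i + 1]:
            # proportionate distance between the bracketing dilutions
            pd = (pct[i] - 50) / (pct[i] - pct[i + 1])
            step = log10_dilutions[i + 1] - log10_dilutions[i]
            return log10_dilutions[i] + pd * step
    raise ValueError("50% endpoint not bracketed by these dilutions")
```

With toy counts of 6, 4, 1, and 0 infected eggs out of 6 at dilutions 10^-5 to 10^-8, the 50% endpoint falls between 10^-6 and 10^-7, i.e. a titer of about 10^6.36 EID50 per inoculated volume.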
Two groups of 24-day-old specific pathogen-free (SPF) Shaoxing ducks (Vital River Laboratory, Beijing, China) were inoculated intranasally with 600 μL of 10^8.5 EID50 SY/08 H5N1 virus or PBS by dripping into the trachea. Lung tissues and blood (n = 3 per group per time point) were collected at 12, 24, and 48 h post infection. The same experiments were also carried out in chickens.
RNA sequencing
Total RNA was extracted from about 100 mg of each lung tissue using the Qiagen RNeasy kit, and RNA samples with an RNA integrity number (RIN) ≥ 8.9 and a 28S:18S rRNA ratio > 1.0 were used to construct cDNA libraries according to the manufacturer's instructions. The libraries were then sequenced on the HiSeq 4000 system (TruSeq SBS Kit-HS V3, Illumina). Adaptor sequences and low-quality reads (Q value < 20) were filtered from the raw RNA-seq data using the Trimmomatic software (version 0.33) [86]. Clean reads were aligned to our duck genome SKLA1.0 using the HISAT2 software (version 2.1.0) [103] with default parameters. Multi-mapping reads were removed, and only uniquely mapped reads were used to count gene expression with the featureCounts script in the Subread package (http://subread.sourceforge.net/, version 2.0.0) under the setting "-g gene". Counts of uniquely mapped reads were used to calculate FPKM (fragments per kilobase of transcript per million mapped fragments) values using the R package edgeR (version 3.2) with default parameters. Differentially expressed genes were determined with the DESeq2 algorithm [104] under thresholds of P-value ≤ 0.05 and |log2(fold change)| ≥ 1. Gene expression profiles were visualized as heatmaps generated with the R packages ggplot2 and pheatmap.
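The FPKM computation and the differential-expression thresholds stated above amount to the following sketch (the count values and gene lengths are illustrative):

```python
import numpy as np

def fpkm(counts, gene_lengths_bp):
    """FPKM = fragments / (gene length in kb) / (library size in millions).
    `counts` is a genes x samples matrix of uniquely mapped fragment counts."""
    counts = np.asarray(counts, dtype=float)
    kb = np.asarray(gene_lengths_bp, dtype=float)[:, None] / 1e3
    millions = counts.sum(axis=0)[None, :] / 1e6
    return counts / kb / millions

def is_deg(pvalue, log2_fold_change, p_cut=0.05, lfc_cut=1.0):
    """The thresholds used in the text: P <= 0.05 and |log2 fold change| >= 1."""
    return pvalue <= p_cut and abs(log2_fold_change) >= lfc_cut

# two genes, one sample: library size 400 fragments
vals = fpkm([[100], [300]], [1000, 2000])
```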
Immuno-pathological analysis
Lung tissues were fixed in 4% paraformaldehyde for 24 h, processed for paraffin embedding, and sectioned at 4 μm. Lung sections were stained with hematoxylin and eosin (H&E). Other lung sections were immunohistochemically stained as follows: sections were subjected to antigen retrieval by heating the slides to 95°C for 20 min in 0.01 M citrate buffer and blocked with serum; sections were then labeled with a rabbit polyclonal NP antibody (Abcam) overnight at 4°C, followed by incubation with a goat anti-mouse IgG biotin-conjugated affinity-purified antibody for 1 h at 37°C. Immune complexes were visualized using diaminobenzidine tetrahydrochloride (ZSGB-BIO, Beijing, China).
MPO assays (NJJCBIO, A044-1-1) were performed according to the manufacturer's instructions. Lung specimens (10 mg) collected from ducks infected with the SY/08 H5N1 virus and from control individuals were homogenized in 190 μL of homogenization medium. The suspension was heated for 15 min and then incubated with chromogenic agent for 30 min. MPO activity in the solution was quantified with a microplate reader (TECAN GENios). MPO experiments were also carried out in chickens under the same conditions.
Generation of a recombinant attenuated H5N1 virus and antibody test
A recombinant attenuated SY08ΔHA H5N1 virus, which expresses a mutated HA protein containing an amino acid deletion (G325) in the HA cleavage site (HACS) region along with the seven other SY/08 viral proteins, was constructed by reverse genetics as described previously [101]. The SY08ΔHA virus was verified by Sanger sequencing and propagated in 10-day-old specific pathogen-free (SPF) chicken embryos. Two groups of 67-day-old SPF Shaoxing ducks were inoculated intranasally with 600 μL of 10^8.5 EID50 SY08ΔHA H5N1 virus or PBS by dripping into the trachea. Serum was collected from 10 individuals in each group, and antibody titers were tested using hemagglutination inhibition (HI) assays on days 5, 7, 9, 12, and 14 post inoculation.
Spleen tissues of 10 ducks from each group were collected on days 7 and 14 post inoculation, and total RNA was extracted using Trizol (Invitrogen, Rockville, MD, USA). cDNA was generated using the high-capacity cDNA reverse transcription kit (Invitrogen, Rockville, MD, USA). Quantitative PCR was performed to quantify gene expression using the primers listed in Additional file 1: Table S22. Differential expression between samples was calculated using the 2^−ΔΔCt method and normalized to the expression level of the GAPDH gene (internal reference gene). The antibody test experiments were also carried out in chickens under the same conditions.
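The 2^−ΔΔCt normalization described above reduces to a few lines (the Ct values in the example are invented):

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Livak 2^-ddCt: normalize the target gene's Ct to the reference gene
    (GAPDH here), then to the control sample."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_treated - dct_control)

# target Ct drops from 25 to 22 while GAPDH stays at 18: an 8-fold induction
fold = ddct_fold_change(22.0, 18.0, 25.0, 18.0)
```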
Fig. 1
Fig. 1 Comparison of genome quality among duck, chicken, and zebra finch. Assessment was carried out on the following genome versions: duck SKLA1.0; our previous duck genome (BGI duck 1.0); the available duck genome with the highest contiguity (ZJU1.0); and two high-quality avian genomes (chicken GRCg6a and zebra finch bTaeGut1.4.pri). a Treemaps of five genome assemblies scaled by contig length. b Mapping ratios of 83 RNA-seq data sets to three duck assemblies. c Mapping ratio of 1,625,932 transcripts to three duck assemblies. Transcripts were from de novo assembly of RNA-seq reads and corrected PacBio Iso-seq reads. d Completeness of five assemblies and reference protein sets estimated by the BUSCO software. e Number of coding genes and transcripts annotated in the five assemblies. f Percentage of genes with gaps in flanking sequence
Fig. 2
Fig. 2 Landscape and comparison of the SKLA1.0 MHC. a The arrangement order of genes in the duck MHC. The duck MHC is about 1.82 Mb and encodes 183 coding genes. MHCI, MHCII, MHCIII, BTN, and NKC gene families are colored in blue, pink, purple, yellow, and green, respectively. The D-P and C-D blocks are in green and brown dotted rectangles, respectively. b Comparison of MHC maps in amphibians, reptiles, waterfowls, landfowls, and mammals. The MHC map includes seven blocks, namely MHCI, MHCII, MHCIII, BTN, NKC, D-P, and C-D. These blocks are not necessarily completely separate, and in some species one block may overlap with another. The proto-MHC represents the ancient MHC map in vertebrates, inferred from alignment of MHC gene maps from duck and 22 other species as well as a previously reported ancient MHC map. The dashed line denotes that BTN and NKC blocks are found in the amphibian MHC, but their location is not consistent among these amphibian species
Fig. 3
Fig. 3 Characteristics of MHCI and TAPBP genes in duck. Duck and chicken have five and two MHCI genes, respectively. "*B4" and "*B21" represent different chicken MHC haplotypes. Lung and plasma tissues of control and infected ducks at 12 h, 24 h, and 48 h post inoculation were collected (n = 5). a Comparison of duck MHCIα proteins to chicken ones. Conserved residues interacting with CD8A or antigen peptide are highlighted in gray and green, respectively. Two motifs that may interact with the NK cell receptor [28] are shown in the orange rectangle. b Opening size of the peptide-binding pocket of duck MHCIα proteins and chicken BF2 proteins. c Electrostatic potential of duck MHCIα proteins. The area circled by white dotted lines is the peptide-binding pocket. d Lipophilic potential of duck MHCIα proteins. Pocket B is circled by white dotted lines. e Peptide recognition spectrum of duck MHCIα and chicken BF2 proteins. Short peptides (8–10 aa) were randomly extracted from the HA protein sequence and docked into the binding groove of MHCIα proteins. Dots represent peptides that could be bound by MHCIα. f Sequence alignment and domains of duck and chicken TAPBP proteins. g Predicted interaction model between duck MHCI and TAPBP. h Expression heatmap of the duck TAPBP gene. C1–C7 represent 7 individual ducks. Detailed sample information is in Additional file 3: Data S6. i Transcripts of the duck TAPBP gene
Fig. 4
Fig. 4 Expansion of MHCIIB and DMB genes in duck. a, b Maximum likelihood trees of MHCIIB (a) and DMB (b) genes. Amphibian MHCIIB (classical MHCII β chain) and DMB (non-classical MHCII β chain) genes were set as the outgroup. Numbers on the tree branches are bootstrap percentages from 1000 iterations. Abbreviated species information is in Additional file 1: Table S20. c Expression of MHCII and DM genes in eight duck tissues. The MHCII heterodimer has two chains, MHCIIα (encoded by MHCIIA) and MHCIIβ (encoded by MHCIIB). The DM heterodimer consists of two chains, DMα (encoded by DMA) and DMβ (encoded by DMB). Detailed sample information is in Additional file 3: Data S6. d Multiple sequence alignment and domains of duck MHCIIβ proteins. "-" denotes a gap, and "•" indicates the same amino acid as the first reference sequence. e Schematic of 14 DMβ2 proteins
Fig. 5
Fig. 5 Expansion of NKRP1-like genes and their ligand-like genes in the duck MHC. a Maximum likelihood tree of NKC-like genes. Duck NKC-like genes clustered into two groups, namely groups II and V [54]. In group V, 15 duck NKC-like genes (referenced as NKRP1L-1 to NKRP1L-15) grouped with mammalian NKRP1 genes, and 19 duck NKC-like genes (referenced as CLEC2L-1 to CLEC2L-19) grouped with mammalian NKRP1 ligands. Numbers on the tree branches are bootstrap percentages from 1000 iterations. Abbreviated species information is in Additional file 1: Table S20. b, c Partial multiple sequence alignment of NKC-like proteins in the duck MHC (see Additional file 2: Fig. S18 and Additional file 5: Fig. S19 for details). "•" and "-" represent the same amino acids and a gap, respectively. d Distribution of 9 pairs of NKRP1-like/CLEC2-like genes in the duck MHC. e Expression of NKRP1-like genes in ducks. Total gene expression of activating and inhibitory NKRP1-like genes is shown in the bottom two rows. Detailed sample information is in Additional file 1: Table S9
Fig. 6
Fig. 6 Expansion of the BTN gene family in duck. a, d Maximum likelihood trees based on the full length of BTN genes (a) and the B30 domain of BTN genes (d). Numbers on branches are bootstrap percentages from 1000 iterations. The full maximum likelihood tree is shown in Additional file 5: Fig. S20b. Abbreviated species information is in Additional file 1: Table S20. b Gene expression of BTN genes in ducks infected with H5N1 virus at 12 hours (12hpi), 1 day (1dpi), 2 days (2dpi), and 3 days (3dpi) after inoculation. Influenza virus strains, tissues, and time points are marked under the map. Detailed sample information is in Additional file 3: Data S6. c Two topologically associated domains (TADs) of the duck MHC using Hi-C data from six individuals
Table 1. Comparison of assembly contiguity statistics among three duck, chicken, and zebra finch genomes

SUPPF_0000004299. RNA-seq data of lung tissues from ducks in the treatment (infected with the SY/08 H5N1 virus) and control (infected with PBS) groups at 12 and 24 h post infection are under accession numbers SRR18934916–SRR18934927. All data sets and research materials are available by contacting the corresponding author.
Efficacy of acupuncture and its influence on the emotional network in adult insomnia patients: protocol for a randomized controlled clinical trial
Introduction Insomnia disorder (ID) is characterized by dissatisfaction with the quantity or quality of sleep and is often accompanied by negative emotions such as anxiety and depression. Patients with insomnia become trapped in a vicious circle of bad moods and poor sleep. Resting-state functional magnetic resonance imaging (r-fMRI) studies have shown abnormalities in emotion-related brain networks in patients with ID, and reducing negative emotions has been shown to improve sleep quality. As a traditional alternative therapy, acupuncture has been demonstrated to be effective not only in improving sleep quality but also in stabilizing emotions; however, its mode of action needs further exploration. Therefore, a clinical trial was designed to explore the effect of acupuncture in improving sleep and mood and to directly investigate the regulation of the emotional network using fMRI.

Methods and analysis A total of 60 participants with ID will be randomly allocated at a ratio of 1:1 to a spirit-regulating group or a control group receiving acupuncture at non-effective acupoints. All participants will receive 3 acupuncture treatment sessions per week for 4 weeks. In addition, 30 healthy individuals will be included in the healthy group. The primary outcome is the Pittsburgh Sleep Quality Index (PSQI). Secondary outcomes are the Hamilton Anxiety Scale (HAMA), the Hamilton Depression Scale (HAMD), the Hyperarousal Scale (HAS), the Fatigue Scale-14 (FS-14), r-fMRI data, a sleep diary, and actigraphy. The data will be collected prior to treatment, following treatment, and during the 12-week follow-up period; a sleep diary will be kept during the entire process.

Ethics and dissemination This protocol has been approved by the Research Ethical Committee of Beijing Hospital of Traditional Chinese Medicine (Beijing TCM Hospital). The results will be published in peer-reviewed journals or presented at academic conferences.
Trial registration Chinese Clinical Trials Register ChiCTR1800015282. Protocol version: Version 1.0. Date: Dec. 2020
Background
Insomnia disorder (ID) is essentially characterized by difficulty falling asleep or maintaining sleep despite adequate opportunity and circumstances. Patients typically report dissatisfaction with the quantity and/or quality of sleep and complain of impairments in daytime social functioning and work [1]. A survey found that more than half of questioned adults worldwide had sleep difficulties and that 22.1% met the diagnostic criteria for insomnia disorder (DSM-IV) [2]. In China, a meta-analysis suggests that the prevalence of ID has reached 15% [3].
ID is the second most prevalent mental illness worldwide and one of the leading causes of depression, anxiety, dementia, and other mental disorders. With the growing number of insomniacs, the incidence of psychological disorders is increasing [4,5]. Persistent ID can be extremely burdensome and can evoke anxiety, a lack of self-satisfaction, and even a fear of sleep. In turn, these negative emotions worsen insomnia, ultimately creating a vicious cycle [6]. A significant correlation between depression and ID has been revealed [7]; insomnia is a predictor of the onset of mental disorders [8], while individuals who have difficulties in emotional regulation are prone to ID [9].
R-fMRI indirectly reflects the level of human cortical activity according to changes in cerebral vascular blood oxygen supply. It has been suggested that there exists abnormal activity in multiple brain regions and networks in patients with ID, such as the default mode network (DMN), dorsal attention network, and sensory-motor network [10,11].
Emotional reception and expression are mostly associated with the amygdala, medial prefrontal cortex (mPFC), insula, anterior cingulate cortex (ACC), and thalamus [12,13], brain regions collectively referred to as the emotional network (EN) [12,14]. Previous studies have shown that the left mPFC and insula are associated with sleep maintenance [15,16]. Zhu et al. [17] found that the functional connectivity (FC) values of the left ACC and right insula are closely related to the severity of anxiety in ID patients. Another study suggested that the abnormal structure and function of the insula may negatively impact sleep and mood [18] and that the connection between the anterior insula and the left dorsolateral PFC is critical to antidepressant and anti-insomnia effects [19,20]. Pang et al. [21] suggested that diminished mPFC activity is a neurobiological marker of cognitive impairment and chronic ID. Sanford et al. [22] also provided evidence that the mPFC plays an important role in the regulation of sleep and cognition. These studies suggest a strong relationship among insomnia, bad moods, and abnormal EN functional connectivity.
The main first-line treatments for ID are pharmacotherapy and cognitive behavioral therapy (CBT). Dominant anti-insomnia medicines include benzodiazepines, melatonin, and orexin receptor antagonists [23], which are often accompanied by side effects such as "hangovers," fatigue, and dependence. Some antidepressants are used in the treatment of insomnia, indicating that improving mood is beneficial for sleep regulation. CBT can also improve the poor mood of insomnia patients, but its clinical use is limited by poor compliance [24].
As a traditional alternative therapy, acupuncture has been proven to be effective in insomnia according to evidence-based data [25][26][27]. A systematic evaluation showed that acupuncture can decrease the PSQI score, with a lower incidence of adverse events than that seen with anti-insomnia medicine [28,29]. It has also been demonstrated that acupuncture can improve sleep quality, relieve daytime sleepiness [30], and decrease HAMA, HAMD-17, and Self-Rating Depression Scale (SDS) scores [31,32]. This suggests that acupuncture can not only improve sleep quality but also relieve dysphoria.
According to traditional Chinese medicine, the method of spirit regulation is key to treating mental and emotional diseases with acupuncture. Spirit-regulating acupuncture has been proven to calm the mind, decrease hyperarousal and daytime fatigue, and adjust emotions [33].
Acupuncture may function by regulating brain activity in ID patients. Previous studies have revealed that acupuncture can decrease the excessive FC between the amygdala and hippocampus, posterior cingulate gyrus, lingual gyrus, and occipital lobe in insomniacs [11,34]. Zhou et al. [35] found that electroacupuncture stimulation of the Shenmen and Sanyinjiao acupoints can activate the anterior ventral thalamic nucleus, caudate nucleus, shell nucleus, medial pallidum, and reticular nucleus of the thalamus in ID patients. Moreover, acupuncture at the Baihui acupoint for insomnia may function by activating signals in the hypothalamus and temporal lobe areas and adjusting signals in the frontal lobe areas to relieve anxiety and depression and improve sleep [36].
Hence, we hypothesize that spirit-regulating acupuncture will regulate activity within the EN, thus calming the mood and improving sleep quality. Therefore, we propose to investigate the effect of acupuncture on EN activity in ID patients with a view to elucidating the underlying central neural mechanism.
Study design
This is a single-center, randomized, sham-controlled, observer- and patient-blinded trial involving two parallel groups using a 1:1 allocation ratio. Thirty age- and sex-matched healthy individuals who sleep well will be recruited as controls for the fMRI data analysis. A diagram of the trial design is provided in Fig. 1 and Table 1.
Objective
To assess the efficacy of acupuncture in improving sleep quality and mood in ID patients and to investigate the changes in EN activity following spirit-regulating acupuncture treatment.
Hypotheses
We hypothesize the following:

1. In comparison with the non-effective acupoints acupuncture group, the spirit-regulating group will display significantly relieved symptoms of insomnia, anxiety, and depression following treatment, as measured by validated scales.
2. In the spirit-regulating group, EN function will be modified, based on fMRI examination, as compared with the control group.
Participants and recruitment
A total of 60 patients diagnosed with ID according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) will be recruited from the outpatient Acupuncture Department of Beijing TCM Hospital or using advertisements via the internet or social media. The clinicians will be responsible for enrolling participants who are willing to undergo acupuncture treatment for ID. The assistant researchers will assess and record participant baseline statuses. After written informed consent has been obtained, eligible participants will be randomly allocated according to a random number table.
Randomization and allocation concealment
Table 1 Enrolment, intervention, and measurement schedule

Random numbers will be generated by the SAS statistical analysis system and sealed in opaque envelopes. According to the group code, the ID patients will be randomly assigned to one of the two groups and receive different treatments (spirit-regulating acupuncture or non-effective acupoints acupuncture). Random numbers will be assigned by telephone by an individual not involved in this study. Qi Zhang will generate the allocation sequence, Tongfei Jiang will enroll participants, and Jing Guo will assign participants to interventions and perform the acupuncture operations.
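The protocol generates its allocation sequence in SAS; a minimal illustrative equivalent in Python is sketched below. The permuted-block design and block size of 4 are assumptions for illustration, not details taken from the protocol.

```python
import random

def randomization_sequence(n_per_arm, block_size=4, seed=20201201):
    """Permuted-block 1:1 allocation list of 'A' (spirit-regulating) and
    'B' (non-effective acupoints) assignments. Balance is exact when
    2 * n_per_arm is a multiple of block_size."""
    assert block_size % 2 == 0
    rng = random.Random(seed)          # fixed seed -> reproducible sequence
    seq = []
    while len(seq) < 2 * n_per_arm:
        block = ['A'] * (block_size // 2) + ['B'] * (block_size // 2)
        rng.shuffle(block)             # randomize order within each block
        seq.extend(block)
    return seq[:2 * n_per_arm]

seq = randomization_sequence(30)
print(seq.count('A'), seq.count('B'))  # → 30 30, a balanced 1:1 allocation
```

Blocking keeps the two arms close to balance throughout enrollment, which matters when recruitment is sequential.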
Blinding
Owing to the characteristics of acupuncture, the acupuncturists involved in this trial cannot be blinded to the assignments. The non-effective acupoints acupuncture design can, however, guarantee a good blinding effect for the participants. Assessors and statisticians involved in data collection and analysis will be blinded to the assignments.
Inclusion criteria
Patients who meet the diagnostic criteria for DSM-5 and the following requirements will be enrolled.
Exclusion criteria
If any of the following criteria are met, patients will be excluded from the study.
Intervention
All acupuncture procedures will be performed by one acupuncturist who is a licensed TCM practitioner. The frequency and duration of treatment in the two groups will be the same. Subjects will be prohibited from receiving any sleep-assisting drugs or physical therapy during the course of the experiment.
Spirit-regulating group
The Baihui (DU20), Shenting (DU24), Sishencong (EX-HN1), Shenmen (HT7), Benshen (GB13), Neiguan (PC6), and Sanyinjiao (SP6) acupoints will be used. The positioning standard refers to the National Standard for Acupuncture and Moxibustion Meridian Point Positioning promulgated by the National Standard GB12346-90 of China. All patients will lie on a bed in the supine position, and acupoints will be sterilized using 75% alcohol on a cotton swab. The acupuncturist will insert disposable stainless-steel needles (Huatuo, Suzhou, China; 0.25 × 40 mm) into the acupuncture points. For DU20, DU24, EX-HN1, and GB13, the needle will be inserted 10 mm horizontally; for HT7 and PC6, 5 mm vertically; and for SP6, 10 mm vertically, accompanied in all cases by twisting of the needle to produce a sensation of Deqi (the key to effective acupuncture). Participants will undergo a 30-min treatment per session. There will be 3 acupuncture treatment sessions per week for 4 consecutive weeks, giving a total of 12 sessions.
Non-effective acupoints acupuncture group
In the non-effective acupoints acupuncture group, the Binao (LI14), Shousanli (LI10), Fengshi (GB31), Futu (ST32), and Liangqiu (ST34) acupoints will be subjected to vertical insertion of 1-2 mm, avoiding the sensation of Deqi. Acupuncture at non-effective acupoints on the superficial surface of the skin can achieve a blinded effect, and the patients will be convinced that effective acupuncture has been performed. Participants will also undergo a 30-min treatment per session. The frequency of treatment will be the same as that in the spirit-regulating group. If a subject suffers a serious adverse injury from acupuncture during the experiment (e.g., severe local infection, injury to vital vessels, or puncture of organs or nerves), the experiment may be terminated and the subject may voluntarily withdraw (Figs. 2 and 3).
Primary outcome
The primary outcome measure will be the change in the PSQI score at week 4 as compared with baseline. The PSQI is the most commonly used indicator for the evaluation of sleep quality in insomniacs with respect to sleep duration, efficiency, depth, and abnormal sleep sensations. The higher the score, the worse the sleep. Reference PSQI bands: 0-5, sleep quality is very good; 6-10, sleep quality is okay; 11-15, average sleep quality; 16-21, sleep quality is very poor [37].
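The reference bands above map directly onto a small scoring helper; this is a trivial sketch for illustration, with the category labels taken verbatim from the text.

```python
def psqi_category(score):
    """Map a global PSQI score (0-21) to the reference bands used in the text."""
    if not 0 <= score <= 21:
        raise ValueError("PSQI global score must be between 0 and 21")
    if score <= 5:
        return "very good"
    if score <= 10:
        return "okay"
    if score <= 15:
        return "average"
    return "very poor"

print(psqi_category(4), psqi_category(12))  # → very good average
```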
Secondary outcomes
The Hamilton Anxiety Scale (HAMA) The HAMA is a reliable and valid anxiety evaluation questionnaire. Anxiety is the main adverse emotion in most ID patients, and it has been proven that insomnia is highly comorbid with anxiety [38]. The HAMA is composed of 14 questions and is used as an efficacy index to evaluate the severity of anxiety. The higher the score, the more anxious the patient.

The Hamilton Depression Scale (HAMD) The HAMD is composed of 17 questions and is used to evaluate the severity of depressed mood. The higher the score, the more depressed the patient. In our trial, patients with mild depression symptoms (HAMD score less than 7) will be selected for inclusion.
The Hyperarousal Scale (HAS)
Insomniacs have higher levels of cortical arousal. A total of 26 self-assessment items are included in the HAS, which is used to assess patient hyperarousal status. The higher the score, the higher the level of cortical arousal.
The Fatigue Scale (FS-14)
Patients with insomnia are more likely to experience fatigue and weakness. The FS-14, containing two parts regarding physical and mental fatigue, shows patient physical and psychological fatigue statuses. The higher the score, the greater the fatigue.
The sleep diary
Patients will be asked to fill out the sleep diary from 1 week prior to the start of acupuncture treatment until the end of the follow-up period. This diary includes the time of falling asleep and waking up, sleep quality, and factors that affected sleep.
Actigraphy
Participants will be requested to wear an actigraph unit (MTI Health Services Company, Pensacola, FL, USA) on the left wrist. Actigraphy is an objective indicator that reflects sleep time and quality in ID patients. We will collect actigraphy data for three 1-week periods (before the intervention, at the end of the intervention period, and at the end of the follow-up period).
All fMRI data will be acquired at the Imaging Department of Beijing TCM Hospital by the same skilled and professional technician using a Siemens Trio 3.0 Tesla MRI scanner. A professionally trained member of the medical staff will explain the r-fMRI precautions and procedure to the participants 30 min prior to examination. All participants will undergo whole-brain conventional structural imaging and fMRI scans. Conventional r-fMRI scan parameters are as follows: voxel size = 3.0 × 3.0 × 3.0 mm³, 32 axial slices, thickness = 3.0 mm, repetition time = 2000 ms, echo time = 30 ms, flip angle = 90°, field of view = 220 × 220 mm², and matrix size = 94 × 94. During acquisition of fMRI data, all participants will be asked to lie on the MRI bed, stay relaxed, quiet, and still but awake with open eyes, and to try not to think about anything. Each participant will be examined three times during the entire process: prior to treatment, following treatment, and after the follow-up period.

Fig. 2 Locations of the acupoints in the spirit-regulating group
We will select brain areas within the emotional network (EN): the amygdala, medial prefrontal cortex (mPFC), anterior cingulate cortex (ACC), insula, thalamus, and hypothalamus as seed points (Talairach coordinates). The mean time-series of these seed points will be extracted, and voxel FC analysis will be performed for each seed point.
Safety assessment
Acupuncture-related adverse events include severe sharp pain (visual analog scale ≥ 7), hematoma or bleeding at the site of needle insertion, nausea, and cold sweats during treatment. If the subjects experience other needle-related discomfort, it will be recorded and assessed throughout the study in both groups. No harm is anticipated, and no compensation is provided for trial participation. If subjects need it, we will continue to give sleep health instructions as post-trial care.
Data management and monitoring
To ensure standardization, clinical training sessions will be held for each investigator prior to the start of the trial. The training will include proper application of the random number table, making diagnoses, understanding the inclusion and exclusion criteria, and completing the case report forms. Importantly, acupuncture will be performed by the same acupuncturist according to standard acupuncture point positioning. In this way, we can improve inter-observer consistency among researchers and ensure the reliability of the clinical research findings. Data recording will be required to be timely, accurate, complete, and standardized. The information collector will collect the personal information and experimental data of the participants; data collected during the course of the research will be kept strictly confidential and accessed only by members of the trial team (or individuals from the sponsor organization or center sites, where relevant to the trial). Participants will be allocated an individual trial identification number, and their details will be stored on a secure database. The statistician will be responsible only for data analysis and will not know the source of the information; these measures keep the data safe and prevent leaks. The sponsor of this trial will conduct an interim analysis in due course and will have access to the clinical data. Anonymized trial data may be shared with other researchers for international prospective meta-analyses. The data of this study will be supervised by the Research Department of Beijing TCM Hospital. Auditors will monitor the progress of the experiment every 14 days and record the progress of the trial, a process that will be independent of the investigators and sponsor. Any data required to support the protocol can be supplied on request.

Fig. 3 Locations of the acupoints in the non-effective acupoints acupuncture group
In addition, fMRI scanning will be performed using the same scanner at Beijing TCM Hospital. All subjects will be scanned in a unified state: open-eyed, avoiding thinking, remaining motionless, and staying awake. To obtain better compliance, the following measures will be applied: (1) following the voluntary principle, whereby the investigator will fully introduce the purpose of the study, the process, and the possible effects of acupuncture treatment, and the subjects will enroll voluntarily and have the right to drop out at any time after agreeing to participate in the study, without any discrimination or retaliation; (2) signing a patient informed consent form; (3) making an effort to establish a good relationship between the doctor and the patients; and (4) recording their contact details for follow-up.
Protocol amendments
The study plan of this project has been registered with approval number ChiCTR1800015282. If any changes need to be made to the protocol, we will first notify the sponsor and funder; the principal investigator (PI) will then notify the centers, and a copy of the revised protocol will be sent to the PI to add to the Investigator Site File. Any deviations from the protocol will be fully documented using the breach report form. The PI of this study will update the protocol in the clinical trial registry.
Sample size
In this trial, the PSQI score is the primary outcome. In our previous pilot study [39], the PSQI score decreased significantly, by 4.43 ± 3.60 in the acupuncture group and by 1.30 ± 2.58 in the control group. Using a t test of two independent samples with unequal variances in PASS (α = 0.05, 1 − β = 0.9) and assuming a withdrawal rate of 20%, each group requires 29 subjects. To improve the reliability of the trial, we will recruit a total of 60 ID patients, with 30 in each group.
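As a rough cross-check on the PASS calculation, the sample size can be approximated with a standard-library sketch using the normal approximation and a pooled-SD effect size. The protocol's own calculation assumed unequal variances, so the numbers differ slightly; the formula and z constants below are standard, and the pilot means/SDs are taken from the text.

```python
import math

Z_ALPHA_2 = 1.959964  # z for two-sided alpha = 0.05
Z_BETA    = 1.281552  # z for power = 0.90

def per_group_n(mean1, sd1, mean2, sd2, dropout=0.20):
    """Approximate per-group n for a two-sample t test (normal approximation).
    Returns (complete cases per group, enrolled per group after dropout)."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    d = abs(mean1 - mean2) / pooled_sd                # Cohen's d
    n = 2 * ((Z_ALPHA_2 + Z_BETA) / d) ** 2           # complete cases per group
    return math.ceil(n), math.ceil(math.ceil(n) / (1 - dropout))

# Pilot data: PSQI change 4.43 ± 3.60 (acupuncture) vs 1.30 ± 2.58 (control)
print(per_group_n(4.43, 3.60, 1.30, 2.58))  # → (22, 28), near the protocol's 29
```

The slightly larger protocol figure of 29 per group is expected, since PASS's unequal-variance t test is more conservative than this normal approximation.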
Clinical data analysis
Data from the trial will be evaluated using the SPSS software V21.0 (IBM SPSS Statistics, IBM Corp, Somers, NY, USA). Multiple imputation will be applied to complete missing data. All demographic and baseline characteristics will be analyzed using appropriate approaches: a chi-square test will be used to compare demographic information between the two groups, and differences between group means will be assessed using repeated-measures analysis of variance. The main objective is to assess the difference in the change in PSQI score between the groups from baseline to week 4; an independent-samples t test will be used for this comparison. For secondary outcome data such as HAMA, HAMD, HAS, and FS-14, an independent-samples t test (p < 0.05) will be used to compare differences between the two groups, and a paired t test (p < 0.05) will be used to compare patients in the same group before and after treatment. All randomized subjects, whether or not they receive treatment in their assigned group, will be included in that group for the statistical analysis of efficacy; the trial thus follows intention-to-treat (ITT) principles [40].
Functional MRI data analysis
Data preprocessing and calculations of FC will be performed using DPABI [41] in MATLAB_R2018a (Mathworks, Inc., Natick, MA, USA). The raw data will be preprocessed as follows: format conversion, removal of the first 10 time points, slice timing, realignment, normalization, smoothing, covariates regressor application, and filtering.
After completion of data preprocessing, the next step will be FC analysis. The amygdala, mPFC, ACC, insula, thalamus, and hypothalamus will be selected as seed points (Talairach coordinates); the mean time-series of these seed points will be extracted, and voxel-wise FC analysis will be performed for each seed point. The Pearson correlation coefficient between the average time-series of each brain region within the EN and that of the seed point will be calculated, and its value will be used as the FC strength. Regions with statistically significant FC strength with a seed site will be considered functionally relevant to that seed site. To approximate a normal distribution, the resulting correlation coefficients will be converted to z values using Fisher's r-to-z transformation. Sociodemographic information, including age, years of education, PSQI score, HAMA score, and HAMD score, will be used to assess the differences between the two groups using a two-sample t test. The difference in gender between the two groups will be analyzed using a chi-square test. For MRI data, a two-sample t test will be used to compare FC values between the spirit-regulating group and the non-effective acupoints acupuncture group. The cluster-level correction threshold will be set at P < 0.05 and considered statistically significant. Multiple-comparison correction will be performed using AlphaSim. Statistical analysis of the data will be performed using the MATLAB, DPABI, and SPSS software. A paired t test will be used for intra-group comparisons; an independent-samples t test will be used for inter-group comparisons; and Pearson's correlation analysis will be used to describe the relationship between imaging data and scale scores.
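The seed-based FC statistic described above reduces to a Pearson correlation between two mean time-series followed by Fisher's r-to-z transformation. A standard-library sketch is shown below; the time-series are toy numbers for illustration, not real BOLD data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length time-series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def fisher_z(r):
    """Fisher r-to-z transform so FC values approximate a normal distribution."""
    return math.atanh(r)  # equivalent to 0.5 * ln((1 + r) / (1 - r))

seed_ts  = [0.1, 0.4, 0.3, 0.8, 0.6]   # toy seed-region mean time-series
voxel_ts = [0.2, 0.5, 0.2, 0.9, 0.7]   # toy voxel time-series
r = pearson_r(seed_ts, voxel_ts)
print(round(r, 3), round(fisher_z(r), 3))  # → 0.961 1.954
```

The transform stretches correlations near ±1, which is why group comparisons of FC maps are run on z values rather than raw r values.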
Discussion
To the best of our knowledge, this is the first clinical trial to explore the influence of acupuncture on emotional network (EN) function. By analyzing the correlation between EN functional connectivity and clinical evaluations, we expect to reveal the neurological mechanism of spirit-regulating acupuncture treatment for ID. This is a randomized, observer- and patient-blinded, controlled clinical trial sponsored and financially supported by the National Natural Science Foundation of China. According to traditional Chinese medicine, poor sleep quality is closely related to negative emotions such as depression and anxiety. Previous research has shown that acupuncture is effective both in improving sleep and in tranquilizing the mind in ID patients.
We hypothesize that acupuncture may relieve symptoms of depression and anxiety by adjusting the emotional brain regions in ID patients, resulting in an improvement in sleep. The effect of acupuncture on emotional conditions and the underlying neural mechanism have been sparsely studied; thus, we designed this trial to identify the relationship between the emotion-regulating effect of acupuncture and the regulation of EN function.
In our trial, the subjective and objective outcomes including PSQI, HAMA, HAMD, HAS, FS-14, actigraphy data, and a sleep diary will be assessed. In this way, the effect of acupuncture will be evaluated from multiple perspectives. Moreover, we have established an 8-week follow-up to observe the sustained effects of acupuncture.
In addition to neurobehavioral assessment, fMRI technology has been selected to reveal the neural mechanism of acupuncture in relieving insomnia and adverse emotions. It has been shown that there is abnormal FC within the EN in insomniacs [42,43] and that acupuncture may regulate the activity of emotion-related brain areas in ID patients [34-36]. In our study, FC analysis will be used to explore the effect of acupuncture on EN function in ID patients.
Inevitably, there are also limitations to our trial. Firstly, superficial and minimal needling at acupoints unrelated to insomnia has been chosen as the control; however, this method inevitably has some nonspecific physiological effects. A non-penetrating method would be more suitable, but it is difficult to implement because subjects are acquainted with acupuncture. An ideal control for real acupuncture with fewer physiological effects should be developed in the future. Secondly, according to our previous clinical observations, participants in the non-effective acupoints acupuncture group may show poorer compliance than those in the spirit-regulating group; hence, we will maintain close and friendly contact with all patients to minimize the dropout rate. Thirdly, the sample size for this trial is 30 patients per group, which may be underpowered for confirmation of the study hypothesis. In the future, a study with a larger sample size will be designed to give more confidence in the data.
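For context on the power concern (our own back-of-envelope sketch, not part of the protocol; the 1.96 critical value and the effect sizes are illustrative assumptions), a normal-approximation power calculation for a two-sample comparison looks like this:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power_two_sample(d, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided two-sample t-test at alpha = 0.05:
    power ~ Phi(d * sqrt(n/2) - z_crit), with d the standardized effect size."""
    return normal_cdf(d * math.sqrt(n_per_group / 2.0) - z_crit)
```

With 30 patients per group, a medium effect (d = 0.5) gives power near 0.5, which is consistent with the authors' concern that the trial may be underpowered.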
In conclusion, the results of our trial are expected to demonstrate modulation of the EN by spirit-regulating acupuncture, which may reveal the potential central mechanism of acupuncture in the treatment of ID.
Trial status
This trial is currently in the recruitment phase.
Dissemination
The results will be communicated to our funder, the monitoring committee, and other relevant groups via publications, results reported in databases, data-sharing arrangements, and WeChat social media, or through the sponsor. Authorship information is available, and the public can access the full protocol, shared data sets, and statistical code.
Informed consent materials
Informed consent form is available from the corresponding author on request.
Authors' contributions GJ designed the study. JTF, QZ, and GJ wrote the manuscript for this trial. YF and FZ will conduct the acupuncture operation. All authors read and approved the final manuscript.
Funding
The National Natural Science Foundation of China (81774391), Beijing Natural Science Foundation (7212170), and the Beijing Key Laboratory of Acupuncture Neuromodulation, Beijing, China (BZ0437), funded this study.
Availability of data and materials
All study-related data will be stored securely at the Beijing TCM Hospital. The datasets analyzed during the present study are available from the corresponding author for future secondary analysis.
Tectonic Model of the Sinai Peninsula Based on Geophysical Investigations
Introduction
Sinai Peninsula lies in the northern part of Egypt, between the Gulfs of Suez and Aqaba at the southern end, and the Mediterranean Sea at the northern end. This region is considered to be an active seismic area due to the presence of the triple junction of the Gulfs of Suez and Aqaba and the Red Sea (Khalil, 1998). Many studies have been undertaken toward understanding the subsurface geological structure of this area. It is considered as a part of a Tertiary cratonic rift between northeastern Africa and the Arabian Peninsula. The rifting phase essentially ceased during the early-middle Miocene (18-14 Ma), when continental separation became more oblique due to the predominant movements of the left-lateral transform fault that extends north-eastward through the Gulf of Aqaba to the Dead Sea (Patton et al., 1994; USGS, 1998; Robert et al., 2006). The dynamics of the Sinai Peninsula have been investigated based on the geometrical configuration of the basement rocks as revealed by magnetic analysis, the pressure-tension tectonic forces resulting from seismological focal-mechanism solutions, as well as the horizontal movements detected by a GPS network (John and Peter, 1969; Mcintyre, 1991; Rabeh and Miranda, 2008).
Geological setting
The geology of the Sinai Peninsula ranges from Precambrian basement rocks to Quaternary deposits. According to the surface geologic map (Fig. 1; after Khalil, 1998; McClay et al., 1998; and Egyptian Geological Survey, 1993), the Quaternary deposits cover the northern part and the coasts along the Gulf of Suez and the Mediterranean Sea. The Mesozoic limestone covers a wide area of the central part of the Sinai Peninsula, while the Precambrian rocks outcrop over wide areas in the southern part of the Peninsula. The regional stratigraphy of the southern part of the peninsula (Darwish and El Azaby, 1993) shows that the sedimentary sequences overlying the basement comprise rocks from Cambrian to Quaternary. Post-rift sediments include the erosional surface that marks the base of the strata deposited during thermal or post-rift subsidence phases (Purser and Bosence, 1998). The syn-rift strata were deposited in active fault-controlled depocenters of the evolving rift, and the pre-rift strata were deposited prior to rifting according to the paleoenvironment. The upper surface of the pre-rift strata is the syn-rift unconformity or a superimposed post-rift unconformity, according to the geotectonic evolution of the basin.
The Gulf of Aqaba transform apparently lessened extension in the southern part of the gulf, and restricted active rifting to the central area (Steckler et al., 1988; Bosworth et al., 1998). The northern end comprises Precambrian basement rocks, Paleozoic sediments of the Carboniferous and Permian, and Mesozoic, Tertiary, and Quaternary deposits. According to Barakat (1982), the geologic sequence can be described from bottom to top as sandstone intercalated with shale and claystone, with dolomitic limestone of the Jurassic occurring at the top. Its maximum thickness reaches about 2200 m. The Cretaceous sediments are divided into Lower and Upper Cretaceous, the former consisting of sandstone with intercalations of clay and limestone, and the latter of thick limestone. This sequence is about 520 m thick. The Tertiary sediments consist of thick limestone with claystone (465 m thick), while the Quaternary is represented by sand and gravels with a maximum thickness of about 100 m. (Fig. 1: surface geology after Khalil, 1998; McClay et al., 1998; and Egyptian Geological Survey, 1993; RTP land magnetic map after Rabeh, 2008.)
Many studies have been performed to understand the subsurface geo-structure of the area. It is considered as a part of a Tertiary cratonic rift between northeastern Africa and the Arabian Peninsula. The rifting phase essentially ceased during the early-Middle Miocene (18-14 Ma), when continental separation became more oblique due to the dominant movements on the left-lateral transform fault that extends through the Gulf of Aqaba northeastward to the Dead Sea (Patton et al., 1994; USGS, 1998). The Gulf of Suez region has long been recognized as one of the best examples of long-axis segmentation with different dip polarities (Colleta et al., 1988; Moustafa, 1993; Bosworth, 1994; Patton et al., 1994; McClay et al., 1994, 1998). It displays examples of interaction between extensional tectonics and sedimentation (Gawthorpe et al., 1997; Gupta et al., 1999; Sharp et al., 2000). It is remarkably non-volcanic, with only a few late pre-rift to early syn-rift basic dykes and isolated basaltic features (Bosworth and McClay, 2001). Four distinct depocenters (sub-basins) separated by complex accommodation zones occur within the Gulf of Suez and northwestern Red Sea (Bosworth and McClay, 2001). Each sub-basin is asymmetric, bounded on one side by a NW-trending border fault system with large throws, generally 3-6 km (Gupta et al., 1999). This provides a good picture of the tectonic position of the Peninsula. The Precambrian basement rocks appear in the southern part of the peninsula while the depositional depocenter dips towards the northern part (cf. Fig. 2, after Guiraud and Bosworth, 1999). This is due to the compressional forces associated with the Suez rifting, as shown in this chapter.
Geophysical evaluation
Using the integrated interpretation of seismological, GPS, potential-field, geological and well-logging data, several non-outcropping fault zones have been recognized and tentatively mapped in the study area (Rabeh and Miranda, 2008). Based on Grant & West (1965), the Linsser technique (1967), and the horizontal gradient method, we were able to delineate the subsurface fault trends from the RTP land magnetic map (cf. Fig. 3). The Euler deconvolution method, published by Reid et al. (1990), serves to determine source positions and depths of the geomagnetic inhomogeneities. This method confirmed the existence of the deduced structures, as the Euler solutions cluster along these structures. The different directions were then grouped into segments of 10° of azimuth each. These groups are represented, according to the tectonic movements/forces prevailing in the studied area, by rose diagrams (cf. Fig. 3).
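For reference, the Euler deconvolution relation mentioned above has the standard form (reproduced from the general literature, e.g. Reid et al., 1990, not from this chapter's own equations):

```latex
(x - x_0)\frac{\partial T}{\partial x}
+ (y - y_0)\frac{\partial T}{\partial y}
+ (z - z_0)\frac{\partial T}{\partial z}
= N\,(B - T)
```

where $(x_0, y_0, z_0)$ is the source position, $T$ the observed total field at $(x, y, z)$, $B$ the regional background field, and $N$ the structural index; solving this over moving data windows yields the clustered depth solutions described above.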
The results indicate that the N35°-45°W tectonic trend (related to Gulf of Suez and Red Sea tectonics) is more predominant in the southern part of the Sinai Peninsula than the N35°-65°E tectonic trend (related to the Syrian trend), while this order is reversed in the northern part. The Aqaba trend (N15°-25°E) comes in third order of predominance and prevails in the southern part, while the E-W trend (related to the Mediterranean tectonics) is predominant in the northern part. The stress-tension axes prevailing in the studied area, derived from the focal mechanism solutions of the events located in the southern Gulf of Suez, suggest a pure normal faulting mechanism with a NE-SW trending tension axis, whereas the mechanism along the Gulf of Aqaba reflects a strike-slip mechanism with left-lateral motion along a NW-SE plane (cf. Fig. 4). The stress fields based on the deduced focal mechanisms of the different seismic zones have been selected, and the average directions of the pressure axis (P-axis) and tension axis (T-axis) are calculated for each zone (Abu El Enean, 1997). The distribution of the P and T axes along the studied area (cf. Fig. 4) shows a dominant T-axis trending N45°E. The surface faults illustrate that extensional stresses act in the region. These results were confirmed by recent analysis of GPS data (Rabeh and Miranda, 2008). The kinematic model that explains the implications of deformation, stress and tectonic activities in the Sinai Peninsula was interpreted through an integrated study using land magnetic surveys, seismology and geodynamics as well as geological analysis. The most predominant tectonic trend is the N35°-45°W direction. This trend originated from the opening of the Gulf of Suez and is normal to the NE-SW tension axis (Said, 1990).
It comes in the first order of predominance, while the N45°-65°E trend (connected to Syrian Arc tectonics) comes in the second order in the southern part of the Peninsula. This order is reversed in the northern part. The N25°-35°E trend, which is related to Gulf of Aqaba tectonics, can be detected in the southern part, whereas the E-W tectonic trend (related to the Mediterranean tectonics) prevails in the northern part; these are of third-order predominance. These forces were confirmed by the stress-tension relations derived from focal mechanism solutions of seismological data. Moreover, the results obtained by the magnetic and seismological interpretations have been confirmed by GPS data analysis, which indicates that the velocity of the Sinai Peninsula ranges from 1.8 to 2.3 ± 0.5 mm/yr in the NE direction. Finally, the integrated analysis of the magnetic, seismic and GPS interpretations produces a kinematic model for the Sinai Peninsula (cf. Fig. 5).
Sensitivity Experiments on the Role of Water Vapor in the Eastward Propagation of MJO
In this study, we employed the nudging assimilation of the Weather Research and Forecasting (WRF) model to conduct a set of sensitivity experiments on the role of water vapor in the Madden-Julian Oscillation (MJO) eastward propagation, focusing on the eastward-propagating 30-60d low-frequency component in the tropical atmosphere from the Indian Ocean to the western Pacific Ocean during September-November 2004. Using 11 different cumulus parameterization schemes, the simulation results show that the ability of the regional climate model to simulate the MJO eastward propagation is sensitive to the cumulus scheme: a suitable scheme can reproduce the MJO eastward propagation characteristics well, while most schemes show no skill for the MJO eastward propagation. When the water vapor in the model domain was assimilated using reanalysis data with the nudging technique, we found that the low-frequency evolution of the tropical zonal wind exhibits MJO features well, and that the low-frequency phase of water vapor leads the zonal wind by about 6-7 days. This suggests that the atmospheric water-vapor distribution is the key factor for the eastward propagation of the MJO, and that the water-vapor field acts via its effect on atmospheric stability. When atmospheric temperature assimilation was conducted, there was almost no improvement in the skill of the MJO simulation.
Introduction
The Madden-Julian Oscillation (MJO) is the dominant intra-seasonal oscillation mode in the tropics (Madden and Julian 1972), which has a planetary-scale spatial structure dominated by zonal wavenumbers 1-3, propagates eastward, and exhibits a broad-band oscillation cycle of 30-60d (Li et al. 2007). Many previous studies have shown that similar low-frequency oscillations are found everywhere in the world, and they can propagate in different directions (Li and Li 1997; Chen et al. 2001; Li 2014). The activity and anomalies of the MJO affect regional weather and climate. For example, the convergence of the southward-propagating MJO from the mid and high latitudes of East Asia and the northward-propagating MJO in the Yangtze and Huai river basins of China will produce persistent precipitation in the Yangtze River basin and North China in summer, and these regions are more prone to flooding in years of strong MJO activity (Yang and Li 2003). In addition, the intensity of precipitation in East China varies with the propagation of the MJO: when the tropical MJO travels eastward to the Indian Ocean, precipitation in East China increases, whereas when the MJO travels to the western Pacific, precipitation in East China decreases. Meanwhile, there are seasonal differences in the MJO's effects on precipitation (Jia et al. 2011). Therefore, it is important to study the propagation mechanism of the MJO for better precipitation prediction in East China.
Simulation studies of intra-seasonal oscillations in the tropical atmosphere have shown that numerical models have difficulty in capturing the characteristics of MJO activity; none of the models participating in the Atmospheric Model Intercomparison Project (AMIP) could accurately characterize the main features of the MJO (Slingo et al. 1996). At present, most models produce intra-seasonal oscillation periods that are too short and oscillation intensities that are too weak, and are unable to describe the seasonal differences of the MJO, or even its continuous eastward propagation (Kim et al. 2009).
Various theories have been proposed to explain the MJO eastward propagation. The theoretical model proposed by Emanuel (1987) and Neelin (1987) emphasizes the effect of east-west asymmetry of surface heat flux on the generation and eastward propagation of the MJO. However, this theory assumes that surface mean winds are easterly winds of a certain strength, whereas observations show that surface easterly winds are prevalent only over the central-eastern Pacific and Atlantic Ocean; thus, it cannot explain the generation and propagation of the MJO throughout the tropics. Based on a series of assumptions, some studies considered the MJO as a convection-coupled Kelvin wave, but numerical simulations yielded mostly unstable modes with short wavelengths and too-fast eastward propagation. Wang and Li (1994) considered the MJO as a Kelvin-Rossby wave coupled with convection and boundary-layer friction effects. More recent theories emphasized the role of sea-air interactions (Wang and Xie 1998) and water-vapor distribution (Maloney 2009; Hsu and Li 2011; Sobel and Maloney 2013).
Regional climate models (RCMs) are an effective tool for obtaining high-resolution regional information on weather-climate evolution (Xu et al. 2019). The nudging method used in dynamic downscaling simulations with RCMs is a way to maintain small- and medium-scale dynamic characteristics in RCMs while preserving large-scale features, so that the model simulation results approximate the real conditions (Wang and Kotamarthi 2013). Sperber (2003) found that in the specific humidity and vertical velocity fields of the MJO, the specific humidity profile and vertical motion profile are similar, both tilting westward with height; that is, there is a zonal asymmetry with respect to the tilt axis. Maloney (2009) showed that before the occurrence of lower-level easterly wind anomalies, the column-integrated moist static energy (MSE) accumulates prior to precipitation, and with the occurrence of westerly wind anomalies, MSE discharges during and after precipitation. The MSE anomalies occurring in the lower troposphere are mainly regulated by the specific humidity anomaly. Hsu and Li (2012) demonstrated that the distinct zonally asymmetric distribution of the water-vapor field in the boundary layer with respect to the convective center is the key to the maintenance of the MJO eastward propagation. In model simulations, however, different cumulus parameterization schemes have different effects on the simulation of the water-vapor field. It has been shown that a model's lack of ability to simulate the MJO is largely influenced by the model's cumulus parameterization scheme (Duvel et al. 2013).
In this paper, we use different cumulus parameterization schemes to numerically simulate an individual case of MJO eastward propagation, to verify the sensitivity of the Weather Research and Forecasting (WRF) model to cumulus parameterization schemes in simulating MJO eastward propagation, and then use the nudging assimilation of the WRF model to investigate the effects of different spatial distributions of atmospheric variable fields on the simulated MJO eastward propagation process.
The article is organized as follows. In Sect. 2, we present the model, datasets and method used. In Sect. 3, we analyze the simulation results of different cumulus parameterization schemes. In Sect. 4, we describe the nudging simulation and analyze the simulation results. Conclusions are presented in Sect. 5.
Model
The WRF model is a fully compressible non-hydrostatic model with a terrain-following hydrostatic-pressure vertical coordinate, and uses the staggered Arakawa C grid, which is beneficial for improving accuracy in high-resolution simulations. In this study, the WRF V4.0 model is used.
Datasets
The data used in this paper include daily reanalysis products from the National Centers for Environmental Prediction and National Center for Atmospheric Research (NCEP-NCAR), with a resolution of 2.5° × 2.5° (referred to as Re1), for the period from 1 January 1985 to 31 December 2019.
Bandpass filtering
Bandpass filtering is often used for extracting low-frequency signals. It has been pointed out that, for longer time series, the Lanczos filter has the distinct advantage of effectively suppressing spurious Gibbs waves due to finite truncation while having narrow transition bands (Duchon 1979). Therefore, we used a 100-point Lanczos filter to extract the 30-60d component of atmospheric variables.
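A sketch of such a band-pass Lanczos filter is given below (our own illustrative implementation of Duchon-style weights, not the authors' code; the choice n = 100, giving a 201-point window, is our reading of "100-point"):

```python
import numpy as np

def lanczos_bandpass_weights(n, t_low, t_high):
    """2n+1 Lanczos band-pass weights passing periods between t_low and
    t_high days, built as the difference of two sigma-smoothed low-passes."""
    fc_hi = 1.0 / t_low    # high cutoff frequency (cycles per day)
    fc_lo = 1.0 / t_high   # low cutoff frequency (cycles per day)
    k = np.arange(-n, n + 1)
    sigma = np.sinc(k / n)                       # Lanczos factor suppressing Gibbs waves
    w_hi = 2.0 * fc_hi * np.sinc(2.0 * fc_hi * k) * sigma
    w_lo = 2.0 * fc_lo * np.sinc(2.0 * fc_lo * k) * sigma
    return w_hi - w_lo

def frequency_response(w, f):
    """Response of symmetric weights w at frequency f (cycles per day)."""
    n = (len(w) - 1) // 2
    k = np.arange(-n, n + 1)
    return float(np.sum(w * np.cos(2.0 * np.pi * f * k)))
```

Convolving a daily series with these weights (e.g. `np.convolve(x, w, mode='valid')`) retains the 30-60d band while strongly damping both the mean and high-frequency variability.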
Correlation coefficient
The correlation coefficient between two fields is calculated as the standard Pearson correlation over all grid points of the two fields.
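The equation itself did not survive extraction; the standard Pearson pattern correlation between two fields $x$ and $y$ sampled at $M$ grid points, which we assume is what was intended, is:

```latex
r = \frac{\sum_{i=1}^{M}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}
         {\sqrt{\sum_{i=1}^{M}\left(x_i - \bar{x}\right)^{2}}\,
          \sqrt{\sum_{i=1}^{M}\left(y_i - \bar{y}\right)^{2}}}
```

where $\bar{x}$ and $\bar{y}$ are the spatial means of the two fields.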
Case selection and nudging simulation
The MJO has a planetary-scale spatial structure dominated by zonal wavenumbers 1-3 and eastward propagation. To study the eastward propagation characteristics of the MJO, we select a typical 30-60d propagation process in the tropics (10°S-10°N) from the Indian Ocean to the western Pacific Ocean (30°-130°E) in the autumn (September-November) of 2004 as the study object, based on 30-60d filtered time-longitude maps of tropical (10°S-10°N) mean zonal winds at 200 and 850 hPa from 1985 to 2019 of the Re1 data (not shown). Figure 1 shows the time-longitude maps of the tropical mean zonal winds at 200 and 850 hPa from the Indian Ocean to the western Pacific Ocean in September-November 2004. We can see that the 30-60d low-frequency components of the zonal winds at both 200 hPa (Fig. 1a) and 850 hPa (Fig. 1b) have obvious eastward propagation characteristics; the amplitude of the zonal winds at 200 hPa is stronger than that at 850 hPa, but the continuous eastward propagation is more obvious at 850 hPa. The results for the 30-60d low-frequency component of the zonal winds at 850 hPa will therefore be our focus in the subsequent analysis.
Nudging assimilation is an option in the WRF model to assimilate the simulated variables toward the large-scale driving field during the simulation, and different variables can be selected. When using reanalysis data as the driving force for the WRF simulation, the variables selected to be nudged in the model domain will be assimilated toward the reanalysis data, ensuring that the simulation does not drift far from the large-scale driving field. The physical parameterizations used in this study include the WSM6 microphysics scheme (Hong and Lim 2006), the Rapid Radiative Transfer Model (RRTM) (Mlawer et al. 1997) for longwave radiation, the Dudhia scheme (Dudhia 1989) for shortwave radiation, the Eta similarity scheme (Monin and Obukhov 1954; Janjić 1994, 1996, 2002), the five-layer thermal diffusion scheme (Dudhia 1996) for land surface processes, and the Mellor-Yamada-Janjić boundary layer scheme (Mesinger 1993; Janjić 1994). Eleven cumulus parameterization schemes are selected for the simulations (Tab. 1). More details of the physical parameterizations are described in the WRF user's guide (Skamarock 2019). The skills of the schemes in simulating the MJO in September-November 2004 are determined by calculating the correlation coefficients of the 30-60d component amplitude distributions on these maps from simulations and observations. According to Hsu and Li (2012), the asymmetric water-vapor distribution between the east and west sides of the maximum amplitude of the MJO is a necessary condition for its eastward propagation. The water vapor is assimilated in the worst cumulus parameterization scheme for simulating the eastward propagation of the MJO; that is, the simulated water-vapor mixing ratio is nudged to the observed value (the reanalysis field, hereafter) during the simulation, and the improvement of the model for the eastward propagation of the MJO is examined.
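Conceptually, analysis nudging adds a Newtonian relaxation term to each nudged variable's tendency. The toy scalar example below is ours (WRF's actual implementation applies the term on the full model grid with spatial and temporal weighting); the coefficient 3e-4 s⁻¹ is the WRF default quoted later in the experimental design:

```python
def step_with_nudging(x, x_analysis, physics_tendency, g=3e-4, dt=60.0):
    """One explicit time step of dx/dt = F(x) + G * (x_analysis - x):
    the model tendency plus Newtonian relaxation toward the analysis."""
    return x + dt * (physics_tendency + g * (x_analysis - x))

# Toy integration: with zero physics tendency the state relaxes
# exponentially toward the analysis value on a 1/G ~ 1 h timescale.
x, x_analysis = 0.0, 1.0
for _ in range(600):            # 600 steps of 60 s = 10 h
    x = step_with_nudging(x, x_analysis, physics_tendency=0.0)
```

After a few relaxation timescales the nudged variable is essentially locked to the analysis, which is why nudging water vapor forces the model's moisture field to track the reanalysis.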
Experimental design
The nudging assimilation used in this paper works as follows: during the simulation, the simulated field in the model region is assimilated using the reanalysis field. The parameterization schemes of the physical processes for the nudging simulation are the same as those used in Sect. 2.3.3, and the analysis field for nudging is also updated every six hours. Since experiments using different nudging coefficients show that the simulation results are not sensitive to the nudging coefficients, the nudging assimilation for CPS3 takes the default value of the model nudging coefficient, 0.0003, for the analysis nudging simulation of the water-vapor field (Ndg_q). Experiment Ndg_q reproduces the patterns of the simulations (Fig. 6a, Fig. 6b) and observations (Fig. 3a, Fig. 4a), respectively, and simulates the MJO better than experiment CPS5. The importance of the water-vapor field in influencing the eastward propagation of the MJO zonal wind field is further explored below by comparing the evolution of the vertical profiles of the 30-60d component zonal wind, temperature, and specific humidity fields. As seen in Fig. 7, due to the inappropriate cumulus parameterization scheme adopted in experiment CPS3, the simulated 30-60d component of the zonal wind at five-day intervals shows distribution characteristics throughout the troposphere that differ from the observations; it not only fails to simulate the normal eastward propagation of the MJO, but even shows westward propagation characteristics (the first and second columns of Fig. 7). Experiment Ndg_q, on the other hand, ensures that after the water-vapor distribution is assimilated, the atmospheric thermodynamic and dynamic adjustment processes make the simulated zonal wind propagate eastward as observed (the last column of Fig. 7). Figure 8 shows the height-longitude maps of the mean specific humidity 30-60d component corresponding to Fig. 7. It can be seen that the observations show the obvious eastward propagation of the MJO (the first column of Fig. 8).
Analysis of model results

Since the water-vapor field is continuously nudged toward the observed field during the assimilation, experiment Ndg_q is able to simulate the eastward propagation of the specific humidity 30-60d component relatively well (the last column of Fig. 8), and experiment CPS5 is also able to simulate the eastward propagation of the specific humidity 30-60d component relatively well (the third column of Fig. 8), except that the simulated intensity is too strong. However, experiment CPS3 gives completely inconsistent results: the simulated low-frequency disturbances are basically stationary most of the time, and their wavelength is only about 1/3 of the actual MJO wavelength (the second column of Fig. 8).
Comparing Figs. 7 and 8, we can see that the low-frequency propagation characteristics of water vapor and zonal wind are very similar, and the phases of the two fields are basically the same: the positive-perturbation zonal-wind region is accompanied by the positive-perturbation water-vapor region. This is similar to the conclusion of Sperber (2003) that the positive water-vapor anomaly in the mid troposphere has approximately the same phase as the MJO convection. However, the low-frequency perturbation of water vapor leads that of zonal wind by 5-8 days; therefore, having sufficient water vapor available to produce wet convection may be a factor enabling the eastward propagation of the low-frequency zonal-wind perturbation. Li (1985) first introduced the conditional instability of the second kind (CISK) theory into the study of atmospheric low-frequency oscillations, and proposed a cumulus convective heating feedback mechanism for tropical atmospheric low-frequency oscillations. Lau and Peng (1987) introduced mobile wave-CISK as the generation mechanism of tropical low-frequency oscillations, which can better explain the slow eastward propagation of the tropical atmospheric MJO along the equator. All these theoretical works clarify the role of tropical wet convection in the generation and propagation of the MJO, and the sensitivity experiments in this paper provide verification for these theories. In addition, the presence of a westward tilt of the low-frequency components of water vapor and zonal winds throughout the troposphere, that is, a zonally asymmetric distribution with respect to the tilting axis, confirms that a suitable water-vapor distribution is a key factor in ensuring the observed eastward propagation of the tropical atmospheric MJO, as pointed out by Hsu and Li (2012).
Figure 9 shows the height-longitude maps of the mean temperature 30-60d component, corresponding to Fig. 7. Although Fig. 9 also presents the observed eastward propagation characteristics of the low-frequency temperature perturbation (the first column of Fig. 9), its spatial structure is relatively complex compared to the eastward propagation characteristics of the zonal wind (the first column of Fig. 7) and specific humidity (the first column of Fig. 8) low-frequency perturbations, and the intensity of the perturbation shows irregular variation in both the horizontal and vertical directions. Similarly, both experiments CPS5 (the third column of Fig. 9) and Ndg_q (the last column of Fig. 9) can simulate the eastward propagation of low-frequency temperature perturbations, but the average correlation coefficients of the simulated and observed intensity distributions of the temperature low-frequency perturbations on the height-longitude maps at different moments are only 0.14 and 0.28, respectively, much smaller than the corresponding correlation coefficients of specific humidity and zonal winds. In contrast, experiment CPS3 (the second column of Fig. 9) does not simulate the observed eastward propagation characteristics of the temperature low-frequency disturbance.
In addition, similar to experiment Ndg_q, which nudges only specific humidity, temperature is also nudged in experiment CPS3, but the analysis of the results shows that the temperature nudging assimilation only improves the low-frequency propagation characteristics of temperature to a large extent, while the eastward propagation characteristics of the low-frequency zonal wind and low-frequency specific humidity do not improve significantly (figure omitted). Therefore, compared to the humidity distribution, the temperature distribution is not a key factor controlling the low-frequency MJO eastward propagation.
Mechanism analysis
From Fig. 8, it can be seen that there is a zonal asymmetry in the specific humidity field relative to the tilting axis during the evolution from 15 September to 15 October. To demonstrate the importance of atmospheric stability in the propagation of the MJO, we investigate the role of water vapor in influencing the propagation of the MJO through its effect on atmospheric stability by examining the evolution of the equivalent potential temperature θe.
θe is determined by both temperature and humidity. If the atmosphere is initially moist but unsaturated and θe decreases with height (∂θe/∂z < 0), the atmosphere is potentially unstable. If such an atmosphere reaches saturation through sufficient lifting, the entire atmospheric column becomes unstable. Figure 10 shows the height-longitude maps of the 30-60d component of θe corresponding to Fig. 7. It shows that both observations and simulations are similar to water vapor in terms of the low-frequency characteristics (Fig. 8), and differ significantly from the low-frequency characteristics of temperature (Fig. 9). Figure 11 shows the 30-60d component of the convective instability parameter at 850 hPa. The observed convective instability parameter (Fig. 11a) shows eastward propagation, but compared to the 850 hPa zonal wind (Fig. 1b), the propagation is not continuous and there are some westward-propagation periods. Experiment CPS3 (Fig. 11b) shows westward propagation, contrary to the observation. Experiment CPS5 (Fig. 11c) simulates part of the eastward propagation, but the simulation degrades to the east of 80°E. Experiment Ndg_q (Fig. 11d) can basically simulate the eastward propagation of the low-frequency convective instability parameter at 850 hPa, but the simulated intensity is weak in some periods. These experimental results indicate that the water-vapor field can maintain the MJO propagation by affecting the atmospheric stability, while the temperature field has little effect on the MJO eastward propagation, so enhancing the model's ability to simulate atmospheric stability plays an important role in improving the simulation of MJO eastward propagation.
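As an illustrative sketch, the equivalent potential temperature and a convective-instability diagnostic can be computed as follows. The constants and the simplified exponential formula are common textbook approximations, not necessarily the exact formulation used in this paper:

```python
import math

def theta_e(t_k, p_hpa, q_kgkg):
    """Approximate equivalent potential temperature (K) from temperature,
    pressure and specific humidity, using the simplified textbook form
    theta_e ~ theta * exp(Lv * r / (cp * T))."""
    Lv, cp, Rd = 2.5e6, 1004.0, 287.0            # assumed standard constants
    r = q_kgkg / (1.0 - q_kgkg)                  # water-vapor mixing ratio
    theta = t_k * (1000.0 / p_hpa) ** (Rd / cp)  # potential temperature
    return theta * math.exp(Lv * r / (cp * t_k))

def convective_instability(theta_e_lower, theta_e_upper):
    """Positive when theta_e decreases with height, i.e. the column is
    potentially unstable and saturation by lifting would destabilize it."""
    return theta_e_lower - theta_e_upper
```

Because humidity enters θe through the exponential term, a moist lower layer beneath a drier layer immediately yields a positive instability parameter even at uniform temperature, which is why the θe field tracks the specific-humidity field much more closely than the temperature field.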
The convective instability parameter at 850 hPa (Fig. 11a) and the time-longitude map of zonal wind (Fig. 1b) show that the low-frequency propagation characteristics of the two are similar, but there is a lead-lag relation in time. Figure 12 shows the time-lag correlation coefficients between the four mean convective instability parameters given in Fig. 11 and the time-longitude map of the 30-60d component of the observed zonal wind (Fig. 1b). The correlation coefficient is largest when the observed convective instability parameter leads the zonal wind by 6-7 days (Fig. 12a), which is consistent with the water-vapor low-frequency disturbance estimated from Figs. 7 and 8 leading the zonal-wind low-frequency disturbance by 5-8 days, again demonstrating that the water-vapor field can contribute to the propagation of the MJO by affecting the atmospheric stability and that the tropical moist convection located east of the MJO disturbance plays a key role. The simulated results of experiment Ndg_q are the closest to the observations, but the correlation coefficient between the two is greatest when the convective instability parameter leads the zonal wind by about 10 days (Fig. 12d). Although the simulated results of experiment CPS5 are better than those of experiment CPS3, CPS5 still does not portray the evolution of the convective instability parameter well: its convective instability parameter leads the zonal wind by about 18 days (Fig. 12c), while the simulated convective instability parameter of experiment CPS3 even lags behind the zonal wind by 3-4 days (Fig. 12b). 
Figure 12a also shows that the correlation coefficient is greatest when the convective instability parameter leads the zonal wind by 6-7 days, while the negative correlation is greatest when it lags the zonal wind by 13-14 days; that is, convective instability exists in the lower troposphere 6-7 days before the occurrence of the low-level easterly anomaly and 13-14 days after the occurrence of the westerly anomaly. This confirms the finding of Maloney (2009) that column-integrated MSE accumulates before intraseasonal precipitation, prior to the onset of low-level easterly anomalies, while MSE releases energy during and after precipitation, during the onset of westerly anomalies.
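The lead-lag diagnostic behind Fig. 12 can be sketched generically as a Pearson correlation computed at a range of time offsets. This is a minimal one-dimensional version; the paper correlates full time-longitude maps, which this sketch does not reproduce:

```python
import numpy as np

def lag_correlation(x, y, max_lag):
    """Pearson correlation of x leading y by `lag` time steps.

    Positive lag pairs x[t] with y[t + lag], i.e. x leads y.
    Returns a dict {lag: r} for lags in [-max_lag, max_lag].
    """
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out
```

Applied to the instability-parameter and zonal-wind series, the lag at which the correlation peaks gives the lead time quoted above (6-7 days in the observations).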
Conclusions
In this study, we focus on the eastward propagation of the MJO in the tropical atmosphere from the Indian Ocean to the western Pacific Ocean in the autumn of 2004 (September-November). The nudging assimilation capability of the WRF model is used to conduct sensitivity tests of the role of water vapor in the eastward propagation of the MJO. The role of water-vapor disturbance in the eastward propagation of the MJO is revealed by comparing the simulation results with observations. The following conclusions are obtained.
The regional climate model is sensitive to the cumulus parameterization scheme when simulating the eastward propagation of the MJO; an unsuitable scheme will not simulate the eastward propagation of the MJO at all. For the individual case of MJO eastward propagation studied here, the Tiedtke scheme can simulate the MJO eastward propagation well, while the Grell-Freitas scheme has almost no skill.
When the Grell-Freitas scheme is used and the water-vapor field in the model is constrained by assimilating observations, the model is able to describe the eastward propagation of the MJO much better. Moreover, the low-frequency water-vapor phase is ahead of the zonal-wind phase. In contrast, nudging the temperature in the model cannot reasonably reproduce the eastward propagation of the MJO, which confirms that the tropical atmospheric water-vapor distribution, rather than temperature, is the main factor determining the eastward propagation of the MJO.
The evolution characteristics of the equivalent potential temperature and specific humidity during the MJO propagation are basically consistent, and both show a westerly dip, that is, a zonal asymmetry with respect to the tilting axis, while there are large differences from the evolution characteristics of the temperature field. After nudging the water-vapor field in the model domain, the simulated evolution of the convective instability parameter at 850 hPa during the MJO is improved, and it is 6-7 days ahead of the zonal wind. Therefore, the water-vapor field affects the propagation of the MJO by influencing the atmospheric stability, while temperature has little effect on the eastward propagation of the MJO. Time-lag correlation coefficients of observation (a) and simulated results by CPS3 (b), CPS5 (c) and Ndg_q (d) between time-longitude maps of 30-60d components of observed zonal winds (Fig. 1b) and | 2021-09-27T18:43:27.118Z | 2021-08-17T00:00:00.000 | {
"year": 2021,
"sha1": "74432ee508c9093e7c957d68a25a7e0131c7ff7a",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-792750/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1322ba36ed13e5d6d2422f6aa636a7a486a239cb",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
4079220 | pes2o/s2orc | v3-fos-license | Measuring Vulnerable Population's Healthy and Unhealthy Food Access in Austin, Texas
Food deserts—areas with a significant low-income population experiencing low accessibility to healthy food sources—have been well studied in terms of their connection to obesity and its related health outcomes. Measuring food accessibility is the key component in food desert research. However, previous studies often measured food accessibility based on large geographic units (e.g. census tract, zip code) with few transportation modes (e.g. driving or taking public transit) and limited vulnerable population measures. This paper aims to demonstrate a new method to measure food access for different vulnerable population groups at a smaller geographic scale with different transportation modes. In detail, this paper improves on previous studies from the following three perspectives: (1) Measuring food accessibility with a smaller geographic scale: block group vs. census tract, which on average include 1000 people vs. 4000 people; (2) Measuring food accessibility with different transportation modes: walking, biking, transit, and driving vs. driving only; and (3) Measuring food accessibility for different vulnerable population groups. The proposed method was tested in the city of Austin, which is the capital of Texas and the 11th largest city in the US, and measured people's accessibility to both healthy and unhealthy food sources within the city. The methods can be applied to address food accessibility issues in other cities or regions.
Introduction
Food deserts-areas with a significant low-income population experiencing low accessibility to healthy food sources-have been well-studied in terms of their connection to obesity and its related health outcomes. Food access as the key component in food desert studies has traditionally been measured as the physical distance between the centroids of spatial units of analysis (e.g., census tracts or the 1-km grid as the neighborhood), or between the centroids of spatial units housing the population and the closest supermarket or large grocery store.
The methodological limitations of past studies included the use of coarse levels of income data aggregation, such as zip codes or census tracts, which could overlook the stronger demand for healthy food from smaller geographic areas. Second, vulnerable populations were usually measured based on few methods and there were limited comparisons of the final findings. Third, most studies focused on driving as the default transportation mode. These limitations traditionally led to vague, rough and even inaccurate food desert identification. The proposed study sought to improve previous food access research by demonstrating a GIS-based method quantifying different transportation food access for different vulnerable groups at a smaller geographic unit (block group), which is smaller than census tracts and typically has a population of 600 to 3,000 people [1].
Data Collection and Analysis
Three categories of spatial data were utilized to perform a food accessibility analysis in Austin: vulnerable population data; food establishments; and transportation networks. Vulnerable populations were identified at the census block group level using the 2010 US Census and 2012 American Community Survey data [1,2] and were based on the following four criteria: poverty rate greater than or equal to 20 percent; median family income not exceeding 80 percent of metro-area median family income; at least 40 percent at or below double poverty level; and, more than 30 percent without access to personal vehicle [3][4][5][6].
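The four screening criteria above can be sketched as a simple flagging function. The field names here are illustrative placeholders, not the actual Census variable codes:

```python
def vulnerability_flags(bg):
    """Flag a block group under the four criteria used in this study.

    `bg` is a dict with keys: poverty_rate, median_income,
    metro_median_income, share_below_2x_poverty, share_no_vehicle
    (rates and shares expressed as fractions).
    """
    return {
        "poverty_20pct": bg["poverty_rate"] >= 0.20,
        "income_80pct_metro": bg["median_income"] <= 0.80 * bg["metro_median_income"],
        "below_2x_poverty_40pct": bg["share_below_2x_poverty"] >= 0.40,
        "no_vehicle_30pct": bg["share_no_vehicle"] > 0.30,
    }
```

Keeping the four criteria as separate flags, rather than collapsing them into one score, is what later allows the study to compare how each definition changes the identified vulnerable population.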
The food establishment permit data was collected from the City of Austin Department of Public Health. The dataset lists each food source by name, location, and detailed classifications (e.g. grocery store, supermarket, convenience store, etc.). Based on previous research, these food establishments were classified as healthy food sources (supermarkets/grocery stores, and farmers' markets) and unhealthy food sources (fast food restaurants and quick service restaurants, convenience stores, corner stores) [5,7]. The accuracy of data for both healthy food sources and unhealthy food sources were verified through Google Maps. In addition, unlisted large-scale food sources (e.g. supermarkets) were identified by cross-referencing published store locations with the permitted food establishment dataset. These healthy and unhealthy food establishments were geocoded in GIS based on the permitted addresses (Figures 1 and 2).
Austin transportation network GIS data was collected from the Austin GIS and Maps
Department and the Capital Metropolitan Transportation Authority. The dataset contained information regarding streets, bicycle infrastructure, sidewalks, transit routes and stops. Using the above Austin transportation network data, four separated GIS transportation networks were built for motor vehicle, bicycle, transit, and pedestrian routes in ArcGIS. The automobile network was generated using the complete City of Austin street network shapefile. The bicycle network excluded highways, freeways, and on/off-ramps from the City of Austin streets shapefile. Pedestrian street network-defined as surface streets with sidewalk infrastructure or a street with a speed limit no greater than 35 miles per hour-were also generated using the City of Austin streets shapefile. The transit network was established by using a modified Capital Metro transit route and stop shapefile.
The travel time between stops was calculated by using the average route circulation times as published in the Capital Metro schedule book [8] and cross-verified with Google Transit data in the City of Austin.
Using the ArcGIS Network Analyst tool, ten-minute network buffers were generated for all City of Austin food establishments in each transportation network. Time impedance was used for the automobile and transit network service zones. The transit network buffer incorporated up to a half-mile walk along the pedestrian network as a requirement for transit accessibility [9,10]. Distance proxies were used to represent the bicycle and pedestrian network buffers: ten minutes of travel was represented by either a two-mile bicycle ride or a half-mile walk [11]. The combination of the individual transportation networks created the overall transportation service areas for both healthy and unhealthy food establishments in the City of Austin.
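The buffer rules above can be sketched as follows. The distance proxies come from the text; the transit rule shown is one plausible reading in which the walk to a stop counts toward the ten minutes, with an assumed average walking speed:

```python
# Ten-minute reach proxies stated above: a two-mile bicycle ride or a
# half-mile walk stands in for ten minutes of travel.
TEN_MIN_REACH_MILES = {"walk": 0.5, "bike": 2.0}

def within_reach(network_dist_miles, mode):
    """True if a network distance falls inside the ten-minute buffer."""
    return network_dist_miles <= TEN_MIN_REACH_MILES[mode]

def transit_accessible(walk_to_stop_miles, in_vehicle_min, walk_speed_mph=3.0):
    """One plausible transit-buffer rule: at most a half-mile walk to a
    stop, with walk time plus ride time within ten minutes.
    walk_speed_mph is an assumed average walking speed, not from the paper."""
    if walk_to_stop_miles > 0.5:
        return False
    walk_min = walk_to_stop_miles / walk_speed_mph * 60.0
    return walk_min + in_vehicle_min <= 10.0
```

In the actual workflow these thresholds are applied along the respective GIS networks (not straight-line distances), so the sketch only captures the decision rule, not the routing.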
Results
The City's auto-centric Land Development Code has made Austin a city dependent upon the personal motor vehicle. Therefore, as expected, owning a vehicle in Austin, Texas, provides high levels of access to any food source: regardless of the vulnerability indicator examined, more than 95% of any given population has access to both healthy and unhealthy food (Figures 1 and 2). By contrast, unhealthy food sources are much more accessible than healthy food sources by alternative travel modes in Austin. Vulnerable populations can walk to almost three times more unhealthy food stores than healthy food sources (Table 1). Such differences can be clearly observed in Figures 3 and 4, where the first three criteria identified many more vulnerable population groups and covered much larger areas (red, green, pink) than the fourth method (blue).
Discussion
Food deserts and healthy food accessibility represent a supply-side issue (lack of healthy food sources) within a demand-side problem (citizens' access to food sources). The lack of a comprehensive, consensus method to measure food accessibility is thwarting attempts to implement public and land use policies in order to combat the problem. This research improved the previous major food accessibility measures [5,12] from the following perspectives.
Unit of Analysis and Origin of Buffers
Previous food desert research often measured healthy food accessibility at the census tract level and used the "distance from the centroid to a given healthy store" [4,5,13] to represent food accessibility. Although the census tract is useful for providing census data regarding economically at-risk populations, the lack of regularity in the size and shape of census boundaries does not lend itself well to representing spatial accessibility. The inconsistent and irregular sizes and shapes of census tracts can result in over-identification or under-identification of food deserts. To guard against this potential error, it is better to generate network buffers from each food source rather than from individual census tract centroids, and to measure vulnerable populations at the block group or a smaller scale.
Mode of Access
Although the motor vehicle is the primary form of transportation in the United States and more than 90 percent of workers commute to work in privately owned cars [14], it is still important to measure food accessibility with alternative transportation modes. Not all are able to drive and vulnerable populations are more likely to rely on transit or other transportation modes for grocery shopping [15,16]. To better measure people's access to different food sources, different transportation buffers were generated based on walking, biking, transit, and driving.
Different Vulnerable Population Definitions
This study identifies vulnerable populations based on different criteria used in previous studies and measured access to healthy and unhealthy food sources for these different population groups.
By varying vulnerable population definitions, this research added one more dimension to food desert identification, helping researchers to compare and contrast food access results and helping governments to better assess food desert problems in a given city or region.
Conclusion
This paper demonstrated a method to measure people's access to food sources with different transportation modes. Using the block group as the unit of analysis helps researchers to better identify vulnerable populations and neighborhoods. Establishing network buffers based on different transportation networks originating from food sources can better capture people's true access to healthy and unhealthy food sources. Both public health and urban planning benefit from more accurate spatial analysis techniques, which can better determine people's access to different food sources. Varying vulnerable population definitions and understanding how different transportation modes impede food access for vulnerable populations will allow planners to better allocate transportation resources to the most needed areas. | 2018-04-03T01:07:32.579Z | 2016-09-06T00:00:00.000 | {
"year": 2016,
"sha1": "49dbb1f24470c9292e45d1a729b39d20e2c69b35",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3934/publichealth.2016.4.722",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "49dbb1f24470c9292e45d1a729b39d20e2c69b35",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
229500347 | pes2o/s2orc | v3-fos-license | Development and Validation of Simple, Rapid and Sensitive High-Performance Liquid Chromatographic Method for the Determination of Butenafine Hydrochloride
Aims: The current paper reports a simple, rapid, sensitive, accurate, and precise Reverse-phase high performance liquid chromatography (RP-HPLC) method with wide range of estimation to determine butenafine hydrochloride in nanosponges. This method has been validated as per ICH norms. Study Design: Experimental design with influence of variables such as mobile phase composition, flow rate, temperature and wavelength on the chromatographic peaks. Place and Duration of Study: Department of Pharmaceutics, College of Pharmacy, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia between Jan 2020 and March 2020. Original Research Article Ansari et al.; JPRI, 32(29): 116-125, 2020; Article no.JPRI.62582 117 Methodology: Separation was achieved by utilizing the most commonly used reverse phase column (C-18, 5 μm, 150 mm x 4.6 mm) set at 30oC and quantified by UV detection at 280 nm after isocratic elution from a mobile phase (70:30 v/v of methanol: phosphate buffer pH 3.0) flowing at 1 ml/min. Results: A sharp and symmetrical peak was observed at 4.08 ± 0.01 minutes. The low variation in peak area and retention time (1.12% and 0.29%, respectively) and a high number of theoretical plates (>2000) indicated this method’s efficiency and suitability. The least square linear regression analysis (Y = 9265.5 X + 1961.4) showed excellent correlation (r = 0.999 ± 0.0003) between concentration and peak area of butenafine hydrochloride through a wide concentration range of 1– 50 μg/ml. The limits of detection and quantification (LOD and LOQ) were 0.18 μg/ml and 0.57 μg/ml, respectively. The assay or determinations were accurate, precise and reproducible with mean accuracy and mean relative standard deviation of precision of 101.53 ± 0.43% and 0.51 ± 0.11% respectively. Conclusion: The developed RP-HPLC method was simple, sensitive, reproducible with wide range of estimation of butenafine hydrochloride in the nanosponges. 
The proposed method could be used for the analysis of butenafine hydrochloride in the conventional pharmaceutical formulations such as tablets, syrup, creams including novel formulations such as nanoparticles, nanosponges, nanoemulsions. The proposed method overcomes the specificity, sensitivity and reproducibility related issues of ultraviolet-visible spectroscopy.
INTRODUCTION
Fungal infections are reported to affect over a billion of people worldwide [1]. These infections may be superficial, mucosal and systemic or invasive. The superficial fungal infections are caused by a group of fungi known as dermatophytes such as Trichophyton, Microsporum and Epidermophyton that affect skin, hair and nails etc [2]. The mucosal and systemic fungal infections are caused by fungi such as Candida, Aspergillus and Pneumocystis which affect almost every organ system. The superficial fungal infections may be mild to moderate while systemic fungal infection may be life threatening when left untreated. The invasive fungal infections are known to kill over 1.5 million people globally [3]. Treatment options for fungal infections may be broadly classified as topical and systemic antifungal agents to treat superficial and systemic fungal infection, respectively. Based on the chemical nature, commonly used antifungal agents include polyene derivatives (nystatin, amphotericin, etc.) which binds to ergosterol of fungal cell membrane and make it leaky; azole derivatives (imidazoles-clotrimazole, miconazole, ketoconazole etc., and triazoles-fluconazole, itraconazole, voriconazole etc.) that prevents conversion of lanosterol to ergosterol by inhibition of lanosterol 14α-demethylase; and allylamine derivatives (amorolfine, naftifine, and terbinafine) which inhibit squalene epoxidase [4].
Butenafine hydrochloride is a novel synthetic small antifungal molecule chemically related with benzyl amine and naphthalene, as shown in Fig. 1. It has a molecular formula of C23H27N·HCl and a molecular weight of 353.93 [5]. It is a potent and broad-spectrum antifungal agent. It selectively inhibits fungal squalene epoxidase, disabling synthesis of ergosterol, an important intermediate of fungal cell membrane synthesis [6]. It is used as an antifungal cream to treat ringworm (Tinea corporis), jock itch or ringworm of the groin (Tinea cruris), and athlete's foot or ringworm of the feet (Tinea pedis) [7]. Clinical trials of butenafine exhibited better efficacy than terbinafine, which is a chemically related antifungal drug with a similar mechanism of action [8,9].
This study developed and validated a simple, rapid, sensitive, accurate, and precise high-performance liquid chromatography (HPLC) method for the determination of butenafine hydrochloride in a newly developed butenafine-loaded nanosponge. The UV-spectroscopic methods are rapid and simple; however, the lack of required sensitivity and reproducibility can become an issue for spectroscopic analysis. Moreover, the inability of UV-spectroscopy to deal with interfering materials in pharmaceutical formulations, such as excipients, impurities, residual solvents, and degraded compounds, is another challenge. There is a report of a UV-spectroscopic method for the determination of butenafine hydrochloride in a pharmaceutical formulation [10]. However, it exhibited low sensitivity and a narrow linearity range (10-60 μg/ml). There were no reports that the spectrum of the formulation or the degradation samples ruled out interference during analysis. HPLC methods are considered a widely used technique to determine substances in pharmaceutical formulations and biological samples due to their high selectivity, sensitivity, accuracy, and reproducibility [11]. There are a few HPLC methods available for the quantification of butenafine in dosage forms such as creams; however, these methods have either low or short linearity ranges like 0.09-0.45 μg/ml [12], thus requiring multifold dilutions of test samples, or low sensitivity with narrow linearity ranges like 80-400 µg/ml [13] or 100-300 µg/ml [14], thus not suitable for samples containing lower amounts of the target compound.
In this paper, we report a simple, sensitive, accurate, and precise HPLC method for the determination of butenafine hydrochloride utilizing the most commonly used reverse phase column (C-18, 5 μm, 150 mm x 4.6 mm) and an organic modifier (methanol). The method was validated as per ICH norms; thus, it can be reproducibly used to determine butenafine hydrochloride in pharmaceutical formulations [15].
Materials
Butenafine hydrochloride was purchased from Sigma Aldrich USA. HPLC grade solvents such as acetonitrile, methanol, orthophophoric acid and buffer component such as monobasic potassium phosphate was obtained from Panareac, Spain. Ultrapure water was obtained from Milli Q, Millipore.
Liquid Chromatography
The liquid chromatographic system was comprised of a separating module with efficient solvent and sample management system (Alliance e2695, Waters Co., MA, USA), column heater (Waters, alliance, 2695, Waters Co., MA, USA), and UV detector (Waters 2487). Empower Pro 2 (version 6.20) was employed for acquisition and data collection. Chromatographic separation was achieved on a C-18 reverse phase column Hypersil ODS (5 μm, 150 mm x 4.6 mm I.D, Thermo Fisher scientific, Waltham, MA, USA) maintained at 30ºC.
Calibration Standards and Quality Control Samples
An accurate amount of butenafine hydrochloride was dissolved in HPLC grade methanol to prepare a stock solution with a concentration of 100 μg/ml. The stock solution was then diluted to prepare working standards with concentration ranges from 1-50 µg/ml. Three quality control (QC) samples at three concentration levels were prepared to serve as lower quality control (LQC, 2 µg/ml), medium quality control (MQC, 20 µg/ml), and higher quality control (HQC, 40 µg/ml) samples.
Sample Preparation
The butenafine-loaded nanosponges were weighed and dissolved in methanol with the help of an ultrasonicator. The obtained solution was appropriately diluted with the mobile phase. Ten µl of the prepared sample was injected in triplicate onto the HPLC column for separation and evaluation of butenafine hydrochloride.
Method Development
The preliminary chromatographic parameters included a strong mobile phase (90% v/v of methanol and 10% v/v water) flowing at a rate of 1 ml/min through a standard column set at ambient temperature, followed by UV detection at a wavelength of 254 nm, to achieve a response after injecting 10 µl of a working standard of 10 µg/ml of butenafine hydrochloride in methanol. Next, 10 mM monobasic potassium dihydrogen phosphate was added to the aqueous phase and acidified using orthophosphoric acid to a pH of 3.0 to minimize peak tailing [16]. Further improvement in the size and shape of the peak was achieved by varying the proportions of the organic phase, the column temperature, and the wavelength of detection. The optimized mobile phase consisted of 70 volumes of methanol and 30 volumes of 10 mM monobasic potassium dihydrogen phosphate buffer adjusted to a pH of 3.0 with orthophosphoric acid. The freshly prepared mobile phase was degassed by sonication and filtered using a regenerated cellulose membrane filter (0.45 micron). Ten µl of calibrators, QC samples, or test samples were injected into the column. The isocratic separation and elution was achieved on a C-18 column set at 30ºC from the mobile phase flowing at 1.0 ml/min. Drug peaks were detected by a UV detector set at 280 nm.
Validation of Method
The assay method has been validated for parameters such as system suitability, linearity, sensitivity, accuracy and precision, as per ICH norms [15].
System suitability
The system suitability was first assessed by injecting and analyzing six replicates of butenafine hydrochloride at the lowest working standard of 1 µg/ml. The peak areas of the responses and the retention times were recorded. The system and method were considered suitable if the relative standard deviation (% RSD) of the peak area and retention time was within ± 2%.
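The ±2% acceptance check on replicate injections can be sketched as follows (the replicate values in the usage example are made up for illustration):

```python
import statistics

def percent_rsd(values):
    """Relative standard deviation (%) of replicate measurements."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

def system_suitable(peak_areas, retention_times, limit=2.0):
    """Pass if the %RSD of both peak area and retention time is within the limit."""
    return percent_rsd(peak_areas) <= limit and percent_rsd(retention_times) <= limit
```

With, say, six replicate areas of 1000, 1010, 995, 1005, 998 and 1002, the %RSD is about 0.5%, comfortably inside the ±2% criterion.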
Linearity of assay method
The linearity of the assay method was determined by applying simple linear regression on the responses obtained after injecting 10 µl of working standard solutions within the range of 1-50 µg/ml of butenafine hydrochloride in triplicate. Calibration plots were constructed by plotting concentrations of calibration standards versus peak areas of the respective responses. A simple linear regression was applied and the correlation coefficient was calculated to evaluate the linearity of the plot.
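The calibration step can be sketched with an ordinary least-squares fit. The synthetic data in the test reproduces the reported line Y = 9265.5X + 1961.4 exactly, purely to illustrate the mechanics:

```python
import numpy as np

def calibrate(conc, area):
    """Fit the least-squares line area = slope * conc + intercept and
    return (slope, intercept, r), where r is the correlation coefficient."""
    slope, intercept = np.polyfit(conc, area, 1)
    r = float(np.corrcoef(conc, area)[0, 1])
    return slope, intercept, r

def back_calculate(area, slope, intercept):
    """Concentration of an unknown from its peak area via the calibration line."""
    return (area - intercept) / slope
```

In routine use, `back_calculate` is what converts the peak area of a test sample into a butenafine hydrochloride concentration.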
Detection and quantitation limits
The limit of detection (LOD) and limit of quantification (LOQ) were calculated using the calibration line as LOD = 3.3σ/S and LOQ = 10σ/S, respectively, where σ is the standard deviation of the intercept and S is the slope of the line.
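The standard-deviation method reads directly as code; note the conventional pairing of the factor 3.3 with LOD and 10 with LOQ:

```python
def detection_limits(sigma_intercept, slope):
    """ICH standard-deviation method: LOD = 3.3*sigma/S, LOQ = 10*sigma/S,
    where sigma is the SD of the regression intercept and S the slope."""
    lod = 3.3 * sigma_intercept / slope
    loq = 10.0 * sigma_intercept / slope
    return lod, loq
```

Whatever σ and S are, the ratio LOQ/LOD is fixed at 10/3.3 ≈ 3.0, which is consistent with the reported values of 0.18 and 0.57 µg/ml.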
Accuracy
The accuracy of the method was determined by injecting quality control samples of butenafine hydrochloride at three levels (2, 20 and 40 µg/ml) in triplicate. Responses were evaluated and accuracy of method was established based on % recovery of quality control samples.
Precision
The precision of the method was determined by injecting quality control samples of butenafine hydrochloride at three levels (2, 20, and 40 µg/ml) in triplicate during the same day (intraday or intra-assay precision) and at different days (inter-day, inter-assay or intermediate precision). The intra-assay precision or repeatability was evaluated by calculating relative standard deviations (RSD) of the responses observed on day 1 and day 2; whereas, intermediate precision was evaluated by calculating the overall RSD on day 1 and day 2 together.
Optimization of Method
The method was optimized by varying several parameters sequentially and observing the responses. For instance, the composition of the mobile phase (the proportion of organic phase, and the buffer and pH condition of the aqueous phase), the flow rate, the column temperature, and the detection wavelength were varied to optimize the size (sensitivity) and shape of the chromatographic peak during the development phase. Table 1 shows the optimum chromatographic settings.
System suitability
The system suitability test of instruments and methods was performed first, before analysis, as it is considered an important test for all chromatographic methods. The system suitability test is performed to verify that the system is suitable for the analysis. The equipment, electronics, analytical operations, and samples to be analyzed are all considered parts of the system [17]. The repeatability of peak response (precision of peak area, peak height, and retention time), resolution factor, capacity factor, tailing factor, and column efficiency are some commonly used system suitability tests. System suitability tests verify that the chromatographic system provides acceptable and reproducible results to ensure the reliability of chromatographic data. As per USP, system suitability tests must meet a predefined standard before any sample analysis is performed [18]. In the case of system suitability failures, all the analytical data should be rejected. Table 2 presents the results of the system suitability parameters. The variation in peak area and retention time was found to be 1.12% and 0.29%, respectively. Furthermore, the number of theoretical plates was > 2000, which indicates the efficiency of the column and the suitability of the system (Table 2).
Linearity
The calibration curves were prepared in triplicate by plotting peak area against concentration. Table 3 shows the mean calibration data, i.e., the calibration standards and their corresponding responses (mean ± SD, n=3) along with the relative standard deviation (%RSD). The %RSD of the responses was less than 1.00%, which indicates excellent reproducibility of the chromatograms; responses with a %RSD below 2% are considered reproducible [19].
The linearity of the method was evaluated by simple linear regression analysis. The method was found to be linear within the range of 1-50 µg/ml with an excellent correlation (r² = 0.999 ± 0.0003), as shown in Fig. 2. Table 4 shows the data for the simple linear regression analyses of the standard plots (n=3), including the linearity range, regression equation, coefficient of correlation, slope, and intercept.
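The least-squares fit of peak area against concentration used for the linearity assessment can be sketched as follows. The calibration points below are hypothetical stand-ins for the data in Table 3, not the paper's actual responses:

```python
import statistics

conc = [1, 5, 10, 20, 30, 40, 50]                 # µg/ml (hypothetical standards)
area = [52, 255, 512, 1020, 1534, 2041, 2552]     # peak areas (hypothetical)

mx, my = statistics.mean(conc), statistics.mean(area)
sxx = sum((x - mx) ** 2 for x in conc)
sxy = sum((x - mx) * (y - my) for x, y in zip(conc, area))
slope = sxy / sxx
intercept = my - slope * mx

# Coefficient of determination (r^2) of the fitted line
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(conc, area))
ss_tot = sum((y - my) ** 2 for y in area)
r2 = 1.0 - ss_res / ss_tot
```

Unknown sample concentrations are then back-calculated from the fitted line as (area − intercept) / slope.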
Limit of detection (LOD) and limit of quantitation (LOQ)
The chromatograms of samples containing a very small amount of analyte may exhibit a response at the retention time of the analyte either due to the analyte itself or due to baseline noise (a fluctuating baseline). It is therefore important to establish the LOD and LOQ: the lowest amounts of analyte that can be detected or quantified, respectively, by the method with defined accuracy and precision. Several methods exist to establish these limits, namely the visual method, the signal-to-noise method, and the standard deviation method. In this validation, the LOD and LOQ were determined using the standard deviation method because it is the easiest and quickest. The LOD and LOQ were calculated as 0.18 and 0.57 µg/ml, respectively. The LOD indicates that if the analyte concentration in a sample is below this limit, the observed response cannot be attributed with certainty to the presence of analyte; it may be due to baseline noise alone or to traces of analyte together with baseline noise. The LOQ of 0.57 µg/ml is satisfactory, as it lies below the lowest calibration standard [20].
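The standard-deviation method referenced above commonly follows the ICH Q2 relations LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the standard deviation of the response (e.g., of the intercept or of blank measurements) and S is the calibration slope. A minimal sketch with hypothetical σ and slope values, not the paper's:

```python
def lod(sigma: float, slope: float) -> float:
    """Limit of detection by the standard deviation method (ICH Q2)."""
    return 3.3 * sigma / slope

def loq(sigma: float, slope: float) -> float:
    """Limit of quantitation by the standard deviation method (ICH Q2)."""
    return 10.0 * sigma / slope

# Hypothetical response SD and calibration slope
sigma, slope = 2.9, 51.0
detection_limit = lod(sigma, slope)       # ~0.19 µg/ml with these inputs
quantitation_limit = loq(sigma, slope)    # ~0.57 µg/ml with these inputs
```

By construction the LOQ is always 10/3.3 ≈ 3× the LOD, which matches the ratio of the reported 0.57 and 0.18 µg/ml limits.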
Accuracy of determination
The accuracy of the determinations used in this study was evaluated by calculating the percentage recovery of 9 quality control samples of butenafine hydrochloride. The quality control samples at three concentration levels, i.e., low, mid, and high (2, 20, and 40 µg/ml, respectively), were freshly prepared in triplicate. Table 5 presents the responses of the recovery study. The mean percentage recoveries of the samples at the low, mid, and high levels were 100.97 ± 0.80, 102.23 ± 0.16, and 100.84 ± 0.37, respectively. The percentage recoveries of all 9 samples were between 100.28 and 102.24%, while the overall recovery was found to be 101.53 ± 0.43%, which indicates the accuracy of the method.
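The percentage recovery is simply the found (back-calculated) concentration divided by the nominal concentration. A small sketch with hypothetical found concentrations, not the study's Table 5 values:

```python
import statistics

def percent_recovery(found: float, nominal: float) -> float:
    """Percentage recovery of a quality control sample."""
    return 100.0 * found / nominal

# nominal -> found concentration (µg/ml); hypothetical QC results
qc_results = {2.0: 2.02, 20.0: 20.45, 40.0: 40.34}
recoveries = [percent_recovery(found, nominal) for nominal, found in qc_results.items()]
mean_recovery = statistics.mean(recoveries)
```

Recoveries within roughly 98-102% of nominal are typically taken as evidence of an accurate assay.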
Precision of assay
The precision of the assay was evaluated at two levels: (1) intra-day (intra-assay precision or repeatability) and (2) inter-day (inter-assay or intermediate precision). The quality control samples at the low, mid, and high concentrations (2, 20, and 40 µg/ml, respectively) were freshly prepared in triplicate and analyzed. Table 6 presents the responses as peak area, mean area, standard deviation, and relative standard deviation. The samples at higher concentration levels exhibited better precision than those at lower concentrations. The intra-assay precision (repeatability) of the quality control samples ranged from 0.15 to 0.79% on day 1 and varied slightly on day 2, ranging from 0.20 to 1.22%. The intermediate precision was found to be 0.51%. These results suggest that the method is highly precise and can be reproduced in any laboratory, as the precision at all levels was below 2%.
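The %RSD figures used for both precision levels can be computed with a few lines of code. The replicate peak areas below are hypothetical, standing in for the Table 6 responses:

```python
import statistics

def rsd(values):
    """Relative standard deviation (%) of replicate responses."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

day1 = [1021.4, 1019.8, 1023.1]   # replicate peak areas, 20 µg/ml QC (hypothetical)
day2 = [1018.2, 1025.0, 1020.6]

intra_day = rsd(day1)             # repeatability within one day
intermediate = rsd(day1 + day2)   # intermediate precision, pooled across both days
```

Both values stay well under the 2% threshold the text uses as its acceptance criterion.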
Determination of Butenafine Hydrochloride Loaded in Nanosponges
The butenafine hydrochloride loaded in the nanosponges was determined using the validated assay method. Fig. 3 depicts a typical chromatogram of butenafine hydrochloride in the nanosponges. The retention time of butenafine hydrochloride in the nanosponge samples was the same as that observed in the standard and quality control samples. Moreover, the peak was sharp, symmetrical, and well resolved, and there was no interference from any excipients of the formulation.
CONCLUSIONS
The proposed method is rapid, simple, sensitive, accurate, and precise for the isocratic elution and UV determination of butenafine hydrochloride. The low relative standard deviations in the system suitability, accuracy, and repeatability studies indicate that the developed method is reproducible. The method is simple because it uses the most widely employed column and mobile phase, and rapid owing to its short runtime of 5 minutes. In addition, it exhibits high sensitivity and excellent linearity, and covers a broad determination range with excellent accuracy and precision.
Providers' definitions of quality and barriers to providing quality care: a qualitative study in rural Mpumalanga Province, South Africa

Background: South Africa requires high-quality primary health care (PHC) to retain patients and optimize outcomes. While prior research has identified implementation challenges within the PHC system, there is less understanding of how providers define quality, their perceptions of barriers to providing quality care, and how they overcome these barriers. This study assesses provider views on quality at primary care clinics in a rural sub-district of Mpumalanga Province. Methods: We conducted in-depth interviews with providers in early 2019 on the value of quality metrics for providers and patients, what indicators they would use to assess clinic performance, and barriers and facilitators of delivering care. Interviews were conducted in Shangaan, audio-recorded, and translated into English. A deductive approach was used to develop a provisional coding schema, which was then refined using an inductive approach in response to patterns and themes emerging from the data. Results: Twenty-three providers were interviewed (83% female, 65% professional nurses). Providers did not give a single standard definition of quality care. Clinic structure and resources emerged as a key issue, as providers linked deficiencies in infrastructure and support to deficits in care delivery. Providers identified mitigating strategies including informal coordination across clinics to address medication and equipment shortages. Common across the providers' discussion was poor communication between the district, PHC supervisors, and implementers at the facility level. Conclusion: Providers connected deficits in quality of care to inadequate infrastructure and insufficient support from district and provincial authorities; mitigating strategies across clinics could only partially address these deficits. The existence of a national quality measurement program was not broadly reflected in providers' views on quality care.
Background: South Africa requires high-quality primary health care (PHC) to retain patients and optimize outcomes. While prior research has identified implementation challenges within the PHC system, there is less understanding of how providers define quality, their perceptions of barriers to providing quality care, and how they overcome these barriers. This study assesses provider views on quality at primary care clinics in a rural sub-district of Mpumalanga Province. Methods: We conducted in-depth interviews with providers in early 2019 on the value of quality metrics for providers and patients, what indicators they would use to assess clinic performance, and barriers and facilitators of delivering care. Interviews were conducted in Shangaan, audio-recorded, and translated into English. A deductive approach was used to develop a provisional coding schema, which was then refined using an inductive approach in response to patterns and themes emerging from the data. Results: Twenty-three providers were interviewed (83% female, 65% professional nurses). Providers did not give a single standard definition of quality care. Clinic structure and resources emerged as a key issue, as providers linked deficiencies in infrastructure and support to deficits in care delivery. Providers identified mitigating strategies including informal coordination across clinics to address medication and equipment shortages. Common across the providers’ discussion was poor communication between the district, PHC supervisors, and implementers at the facility level. Conclusion: Providers connected deficits in quality of care to inadequate infrastructure and insufficient support from district and provincial authorities; mitigating strategies across clinics could only partially address these deficits. The existence of a national quality measurement program was not broadly reflected in providers’ views on quality care. 
These findings underscore the need for effective district and national approaches to support individual facilities, accompanied by feedback methods designed with input from frontline service providers.
INTRODUCTION
South Africa currently faces a quadruple burden of disease driven by coexisting infectious diseases (human immunodeficiency virus [HIV]/AIDS, tuberculosis), non-communicable diseases (vascular illness, diabetes, cancer), avertable maternal and child mortality, and high levels of violence and injuries 1-4 that collectively place a heavy burden on the primary health care (PHC) system. 5 The South African government is organized into 3 levels: national, provincial, and district. 5 PHC is provided through the district health system, and health sector governance is centered within the provincial health departments, while funding and policy guidelines are made at the national level. 6 Decentralization of health care has increased access to healthcare facilities, but has also intensified problems of disparity in poor, rural areas 7 leading to low-quality health care delivery. This is particularly the case in areas that were apartheid-era Bantustans, which have historically suffered from underfunding and lack of resources. 8,9 While the South African government aims to provide universal health coverage, it must also ensure high quality of care across the public health system to reap its benefits. 10,11 High-quality health care is important for retaining patients and optimizing outcomes for those in need of continuous clinical services. High-quality health systems consistently provide care that improves or maintains health, are valued and trusted by all people, and respond to changing population needs. 12 The foundations of high-quality health systems include the population and their health needs and expectations, governance of the health sector and partnerships across sectors, platforms for care delivery, workforce numbers and skills, and tools and resources; these foundations inform quality processes of care, which lead to quality impacts. 12 In South Africa, PHC services are delivered through clinics and community health centers (CHCs).
5 Services offered in PHC facilities include maternal and childcare, immunization, family planning, syndromic treatment of sexually transmitted infections, HIV counseling and testing (HCT), and care for chronic diseases. CHCs operate 16-24 hours a day, providing additional maternity services and accident and emergency services. Clinics typically offer services 8-9 hours a day. 5 Both CHCs and clinics offer services 7 days a week.
The national government has introduced numerous policy reforms and initiatives 13,14 including the Ideal Clinic Realization and Maintenance (ICRM) program 15 to improve quality of PHC services. 10 ICRM works to support facility-level quality improvement through provision of manuals and training as well as a district support team. However, poor communication between national government, international funders and policy developers, and poor oversight of actual PHC service delivery, continue to create deficits in the performance of district health systems. 8,10 In 2016, a modeling study suggested that of the estimated 97,000 preventable deaths in South Africa, 51,000 (53%) were attributable to poor quality of care, through incorrect management or inability to retain the patient in care, among those utilizing the health system. 16 Research assessing overall health system quality and patient experience has identified numerous implementation challenges and deficiencies within the PHC system, including unequal distribution of resources, management and leadership 2,9,17 ; service delivery issues such as long wait times 2,8,18 ; and poor hygiene and infection control. 2,19 A number of studies have documented providers' insights into challenges in providing quality care across sub-Saharan Africa. 5,17,[20][21][22][23][24] Missing from the current literature is an understanding of what providers perceive as quality care, particularly situated within a framework of high-quality health systems defined by the ICRM framework and the Lancet Global Health Commission on High-Quality Health Systems. 10,12 We conducted a qualitative research study within a resource-constrained, rural South African setting to identify what providers define as quality care and the barriers they face in providing quality care.
This research can identify gaps in the existing foundations of care needed to provide quality services and define priorities to advocate for resources and improve quality care at the provider level. 25,26
Study setting
The Agincourt Health and Socio-Demographic Surveillance System (HDSS) research area is operated by the Medical Research Council (MRC)/Wits University Rural Public Health and Health Transitions Research Unit (Agincourt) in the rural Bushbuckridge sub-district of Ehlanzeni District in Mpumalanga Province. The Agincourt HDSS is located about 500 km northeast of Johannesburg, near the border of Mozambique, and is home to roughly 115,000 individuals living in 31 contiguous villages. 27 Within the Agincourt HDSS, approximately 1 in 5 adults is living with HIV, 28 over half of adults 40 years and older have elevated blood pressure, and 10% have diabetes. 29 As of 2019, there are 9 health facilities in the study area (3 CHCs and 6 government satellite clinics). Three referral hospitals are situated 25 and 45 km from the study setting. 27 The MRC/Wits-Agincourt Unit maintains a longitudinal household census of area residents' socio-demographic status, and hosts a range of research studies, including clinical trials and cohort studies. 27 This research on quality of care was nested within a larger study being conducted in the site on community mobilization for HIV treatment as prevention, described elsewhere. 30 We conducted a cross-sectional qualitative study among health care workers active in public health facilities within the Agincourt HDSS study area.
Study population and sample-Health care providers were purposively sampled from the 9 PHC facilities in the Agincourt HDSS. Based on data from a recent clinic assessment, 31 the 3 CHCs were staffed by 14 to 25 nurses and 4 to 7 lay counselors, while the 6 clinics had between 6 and 14 nurse positions and 1 or 2 lay counselors. 32 On average, CHCs saw 850 antiretroviral therapy (ART) visits and 350 HCT visits per month, while clinics saw 370 ART visits and 120 HCT visits per month; the majority of providers across all facilities had seen more than 30 patients overall on their last working day. 33 Inclusion criteria for providers included being over 18 years of age and currently employed as a professional nurse, enrolled nurse, or lay HIV counsellor at a health facility within the Agincourt HDSS. In order to reflect facility and staff size in our sample, at the 3 larger 24-hour CHCs, we aimed to interview 3 providers, while we aimed to interview 2 to 3 providers at the 6 smaller clinics.
Data collection and analysis
Data collection took place between February and May 2019. The in-depth interview guide (Supplementary Data 1) was written in English, translated into the local language, Shangaan, and back-translated in order to be reviewed by all members of the study team. The interview guide included open-ended questions eliciting provider perspectives on the value of health care quality metrics for providers and patients, what indicators they use to measure clinic performance, and barriers and facilitators of delivering quality care. An analysis of qualitative data from interview questions focused on ART is published elsewhere. 34 Interviews were scheduled over the phone at the providers' choice of time given their clinical schedules. They were held in person at the providers' clinic and were approximately one hour in length. Interviews were conducted by an individual experienced in conducting qualitative research in the Agincourt HDSS study area, who had completed high school and was fluent in Shangaan. Interviews were audio-recorded, translated, and transcribed into English. The study manager and qualitative interviewer reviewed all transcripts together in English for clarity prior to finalizing and coding them.
Transcripts were uploaded into NVivo qualitative analysis software (QSR International Pty Ltd. Version 12).
Coding was conducted using a thematic content analysis approach. A deductive approach was used to develop a provisional coding schema based on study questions. We used the high-quality health systems framework 12 to preliminarily identify barriers to care. Domains of quality care 35 were also used to categorize indicators of care. A sub-set of transcripts was coded using this provisional schema in conjunction with an inductive approach that identified patterns and themes emerging from the transcripts. 36 The codebook was then reviewed and revised by members of the research team, including the study manager and 3 PhD researchers, all of whom are American researchers with extensive experience working in the study area and in other parts of South Africa. All transcripts were coded using the finalized codebook by the study manager, an American researcher who lived in the study area for 2 years. Illustrative statements of themes and sub-themes are provided and have been de-identified for inclusion in this report.
All procedures performed in studies involving human participants were in accordance with the ethical standards of the relevant institutional and national research committees and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
RESULTS
We completed qualitative interviews with 23 participants from 9 facilities in the Agincourt HDSS, 82.6% of whom were female, which is representative of the healthcare workforce in this area. The majority of providers were professional nurses (65%), with 5 enrolled nurses and 1 lay counselor. All providers approached to participate provided written consent. Due to providers' clinical demands, one provider had to end the interview early, and one provider took a brief break in the interview to attend to a clinic-related issue.
Providers discussed many barriers to providing quality care, which fell under the foundational categories of workforce and tools. These themes and related sub-themes are discussed below.
Barriers to providing quality care
Understaffing undermines provider capacity-Providers discussed how understaffing diminished their ability to provide quality care through the creation of bottlenecks in service delivery, as well as the impacts of understaffing on their health and wellbeing. Almost all providers spoke of how they did not have enough staff working in their facility, particularly a lack of nurses but also filing clerks. Professional nurses were often required to take on duties such as checking vital signs that could be performed by enrolled nurses if they were available. Providers described how they had to hurry through patient visits and provide sub-optimal care, and some revealed they sometimes told patients to simply return the following day due to shortage of staff. As one provider said: "According to policy and guidelines it says, 'All chronic patients should have their urine taken and tested every visit.' But due to shortages, it becomes impossible… all chronic patients should be assessed fully from head to toe on a daily basis. How long would they wait if we practice that?" (Clinic 4, professional nurse) Providers discussed challenges with scheduling leave days and staffing throughout the week; weekends and Mondays were routinely understaffed despite high patient volume. Providers noted that there was no staffing buffer in the event of patient emergencies that occupied more experienced staff or for routine gaps such as lunch breaks, provider illness, or maternity leave. Providers were often forced to complete duties outside of their scope of work, such as collecting files and cleaning the clinics, as a result of these shortages: "On weekends there is no data capturer. The nurses have to do all the work… We don't have cleaners. We don't have grass cutters. As nurses we have to see that the yard is clean and we also have to clean the clinic. Again, we are the ones who have to retrieve files. This is taking our time." 
(Clinic 4, professional nurse) Another scheduling issue was that of lay counsellors, whose hours are shorter than typical facility hours (weekday mornings only) as they are not part of the formal employment system of the Department of Health, and are meant to provide HCT on a part-time basis. 37 As a result, patients who come for HCT outside of the lay counsellors' hours have to be seen by a professional nurse.
Many providers discussed their health and wellbeing suffering as a result of understaffing, with specific mentions of depression, elevated stress, physical pain, exhaustion, and interpersonal or marital problems. One noted that these detriments to providers' wellbeing made them feel their own health was not a priority of the Department of Health. Another provider discussed how she had difficulty taking her own HIV medication due to lack of time to eat: "The shortage of staff is a serious problem at this clinic… we cannot take the lunch or breakfast breaks that we need as we are on treatment. If we carry on like this, we will collapse as this treatment requires us to eat now and again." (Clinic 9, professional nurse)
Staff development approaches can be ineffective and counterproductive-
Providers discussed how the current approaches to training and supervision were ineffective and sometimes counterproductive. While providers identified a need for additional skills training, many raised issues with the in-service training model, in which senior providers were called to attend workshops and were responsible for relaying information back to staff. Providers reported concerns about the accuracy and comprehensiveness of information being conveyed back to them, if at all. The burden of trainings and understaffing at healthcare facilities were intersecting issues. Providers discussed being chronically understaffed because nurses were at trainings or workshops, and one provider stated that they opted not to go to workshops as they knew it would leave facility staff overwhelmed. Providers also mentioned that being understaffed when they returned to the facility meant they did not have time to disseminate information learned in training. Providers cited a need for additional skills training, including care for patients with tuberculosis and HIV, as well as training on new medications: "You find that treatment is there but we don't know how to use it. You find that the treatment comes with a pamphlet and we have to read it. But it would be better if someone was there to demonstrate it to us. Seeing it is better than reading… we all need trainings when it comes to treatment." (Clinic 5, professional nurse) The majority of providers also discussed issues with support and supervision at their facility, primarily at the district level. Providers said that when they did receive district supervision, it was overly critical and demotivating. Some felt that district supervisors only came when there was a serious issue at the clinic and were only there to "point fingers" or "shout and make noise." Providers expressed dissatisfaction with the way performance bonuses were given by the district. 
Some discussed simply not receiving the performance bonuses they were promised, or feeling that bonuses are distributed unfairly. Some providers spoke about how lay counsellors, who provide HCT, had recently gone on "go-slow" (working reduced hours and only serving a set number of patients) to demand their performance bonuses, which they felt had not been fairly distributed.
Facility infrastructure and limited space impact ability to provide care-Providers revealed how problems with facility infrastructure and limited space impacted their ability to provide quality care. Some providers interpreted poor facility infrastructure as an indication of the government's lack of concern for its constituents. Descriptions of inadequate space included insufficient meeting rooms, cramped service rooms, overcrowded reception areas with limited space for patients to wait comfortably, and lack of shelter for patients who had to wait outside. Structural issues in and around the clinic also impacted providers' ability to provide quality care; providers cited worries about their safety and patient safety due to issues such as lack of secure fencing, leaking roofs, and, in one clinic, a bat infestation in a collapsed roof. Periodic electricity outages were also noted in several clinics. Providers at one clinic described how the lack of filing cabinets jeopardized patient confidentiality, and providers avoided opening new files to save space.
Poor infrastructure also had a significant impact on confidentiality of patient care. Providers were concerned that patients could see or hear what was going on in consultation rooms because of the facility layout and size; in 3 facilities consultation rooms were separated only by a curtain. Providers mentioned having to take patients' vital signs in the waiting area, where they could not guarantee privacy. Another said there may often be 2 providers in one room seeing patients at the same time. Others were concerned that the facility layout made it difficult for patients to maintain confidentiality after testing for HIV: "If you cry, that side, they will hear you. If you come out, there is no other door for you to use when coming out. The doors are looking at each other and people will see that you have a problem. The infrastructure is the problem." (Clinic 5, professional nurse) Providers also cited issues with water and sanitation, including lack of clean water, broken toilets, and deficient cleaning materials, as impacting quality of care. Providers reporting issues with their water supply also claimed that the municipal water tanker responsible for delivering their water did not come regularly. Without water and proper cleaning materials, custodial staff (or providers, in clinics without custodial staff) could not clean the facility: "[It] is easy for people to get infected with tuberculosis here as the place is not clean." (Clinic 5, professional nurse)

Lack of equipment and medication reduces ability to provide quality care-Providers discussed how insufficient equipment impacted their ability to provide competent care. Specific items mentioned include hemoglobin meters, beds with stirrups, wheelchairs, incubators, a child's scale, diapers, an autoclave to sterilize equipment, pregnancy tests, otoscopes, batteries, linen savers, HIV rapid test kits, and air conditioning units (important for optimal storage of medication).
The lack of equipment and/or faulty equipment caused delays in patient care, wasting patients' time and making visits longer as providers had to share equipment.
Providers discussed lacking medication, including cough medicine, diphtheria and tetanus vaccines, injectable contraceptives, blood pressure medication, and ART. Providers linked these issues to problems with deliveries of medication from the Mpumalanga Department of Health. They reported that orders placed with the medication depot were not fulfilled, fulfilled late, or fulfilled in different quantities than ordered. One provider linked this routine shortage of chronic medications (i.e., blood pressure medication, ART) contributing to patients' poor adherence: "Sometimes they come and you see that this person is really sick, but there is no treatment... sometimes we are going to the nearest clinic to ask but also those clinics have limited treatment for their patients. We are worried about this issue. We keep on reporting and tell [the Department of Health] what we have done, but still they will tell you that the depot doesn't have treatment at the moment." (Clinic 2, enrolled nurse)
Indicators of quality care
Providers discussed different components of quality care rather than sharing a single common definition. The emergent themes from these interviews are detailed below.
Clinic infrastructure-Providers from facilities with a newer clinic infrastructure identified this as an indicator of quality care. Providers from other facilities mentioned infrastructure at their clinics as an indicator of poor quality, which also impacted patient experience. As one provider stated: "I think when it comes to the clinic itself, I'm not able to provide good patient care. Our clinic is not open and it is small. The clinic environment is not attractive." (Clinic 6, professional nurse) Lack of resources-Lack of resources in the clinic, including medication, equipment, cleaning supplies, and staff were all associated with less ability to provide quality care. Shortage of medication was cited as an indicator of care quality, with providers from 3 different facilities discussing how their lack of medication was indicative of low-quality care. Providers also discussed how the lack of medical equipment meant that they could not provide quality care to their patients: "I think what can help me to provide good quality care or to do my work well is when I have equipment. We don't have enough but with the little that we have, we are trying." (Clinic 5, professional nurse) Some providers also spoke about staff shortages and how it impacted their ability to provide quality care. Other providers spoke of resource availability more generally as an indication of good quality care and a source of pride.
Respectful care-Providers cited positive staff behavior as an indicator of quality, including communicating in a positive and open manner, explaining treatments, and conducting adequate counselling. Conversely, providers who reported colleagues having "attitudes" or showing lack of empathy for patients indicated that their care was of poor quality. Respectful care also included maintaining patient confidentiality. Some providers discussed confidentiality as a factor enabling patients to come to clinic and adhere to medication.
"When it comes to HIV patients and confidentiality, we are providing high-quality compared to other clinics of Bushbuckridge. We are the best and I know that." (Clinic 9, professional nurse) Time spent in the clinic-Time spent in the clinic, including short waiting times for services and longer face-to-face visits with providers, were seen as indicators of quality care. Some clinics with short wait times credited the central chronic medication dispensing and distribution (CCMDD) program (part of the national differentiated care facility decongestion initiative) for their ability to provide quicker service to HIV-positive patients: "I would say our clinic is the best when comes to treating patients who are HIV-positive. Particularly the chronic [care]…the treatment is packed with the [recipient's] name on the outside of the package… they don't stay for more than an hour." (Clinic 1 [CHC], lay counsellor) Providers also cited their lack of ability to spend time with patients as an indicator of poor quality. Some discussed how they did not have enough staff to attend to the high patient volume, leading to long wait times and rushed care. Several providers discussed how time constraints led to diminished or complete lack of counseling, including on how to take their antibiotics properly, or HCT; or skipping procedures like Pap smears or getting sputum samples.
Adherence to clinical guidelines-Providers discussed the importance of knowing and adhering to clinical guidelines, and of attending district-supported trainings and workshops in informing quality patient care. For example: "Our utilization statistics are also low; this is proof that we don't provide good quality care. If we were, we would have higher numbers." (Clinic 5, professional nurse) Patient data are not used to define quality care-Patient outcomes data were not broadly discussed as indicators of quality care. A few providers identified patient utilization of the facility, particularly by patients from other villages who may be bypassing their nearest healthcare facility, as an indicator of quality care. As one provider explained: "According to health law, this is not a good place where people can get their treatment from. That is why many people are going to (other CHC). This is showing that we are missing something." (Clinic 5, enrolled nurse) Providers also discussed seeing patients' health improve after receiving treatment from their facility as an indicator of quality care. Despite no overarching definition of quality being shared among providers
Mitigating strategies
While challenges in providing quality care have been documented, less has been documented about how providers overcome these barriers. An emergent theme in our analysis was the set of mitigating strategies that providers used to address barriers to providing quality care.
Reallocation of resources within the clinic-To maximize clinic space and maintain confidentiality, several facilities moved their lay counsellors to a space that could be devoted to HIV counselling and testing, such as a mobile unit, the meeting room, or the nurses' accommodation on site. Facilities with more space also recommended designating one building for CCMDD both to keep the queue moving quickly and to streamline treatment pickup for all chronic patients. To address understaffing, providers took shorter lunch breaks and split staff into teams to balance leave days. For example: "We work as 2 teams. One team is off on Wednesday and we are working from 7 to 7. The other team is starting to work on Wednesday. We are doing like that because of the offs. We must work and after we rest." (Clinic 3 [CHC], enrolled nurse)
Sharing resources across clinics-To make up for resource shortages, providers shared equipment across facilities, bought their own supplies (e.g., batteries, soap, toilet paper), and sometimes went as far as driving a patient to the hospital in their own car if an ambulance was not available. Many providers expressed a sense of duty to help their patients, despite the shortage of resources and personal costs. Providers also tried to circumvent the system by ordering quantities of medication greater than needed for the expected patient population, sharing medication between facilities, or prescribing medications in smaller quantities at a time than recommended by guidelines (e.g., 1 month of ART instead of 3, in order to supply more patients). Providers in the Agincourt HDSS area discussed using a WhatsApp group across facilities to discuss supply of ART in particular: "With HIV treatment (stock-outs) were not happening as we are trying by all means to ask for it from nearby clinics. They are assisting us. We have a WhatsApp group that we use to talk to each other. 
If we have a shortage of this treatment, we will WhatsApp it so everyone in our group will know." (Clinic 5, professional nurse)
DISCUSSION
This qualitative study of providers in rural PHC facilities in South Africa elucidated their definitions of high-quality care, the barriers they face in providing care, and the mitigating strategies they employ in response to these barriers. Definitions of quality were focused on structure and resources, as well as some process elements-patient experience and competent care. Few providers identified patient outcomes like treatment success or retention in treatment as indicators of quality. In identifying barriers to high-quality care, providers linked the deficiencies in infrastructure and support to deficits in care delivery, such as long wait times and short visits due to limited staffing or privacy breaches due to insufficient space to maintain confidentiality. Finally, providers identified mitigating strategies such as coordination across clinics to address medication shortages in individual facilities. Interwoven throughout the providers' discussion was the poor communication between the district, PHC supervisors, and implementers at the facility level. This manifested in myriad ways: for example, lack of responsiveness from the depot with regard to medication stock-outs at the clinics, or a training model that did not meet providers' needs.
The disconnect between national policy and clinic-level implementation was further highlighted in discussions with providers around indicators of quality. Although the ICRM program was implemented across South Africa in 2016, collection and utilization of these indicators does not appear to be ongoing in rural PHC clinics in the Agincourt HDSS. Throughout the interview process, there were challenges in clearly defining the concept of metrics and indicators in both English and Shangaan. The concept of quality measurement did not seem to resonate with providers, even though they could discuss at length the quality challenges they observed. Patient outcome metrics were largely not addressed, although some providers discussed service numbers (such as HCT), despite the national push to gather data. There is a demonstrated gap in translation from the metrics that are being collected and pushed to the provincial and national departments of health to what providers are using on the ground. This is reflected by a 2019 study conducted in Gauteng and Mpumalanga, which found that PHC facility managers from these provinces reported lack of involvement in conceptualization of the ICRM program. 38 There is significant demand on providers and programs to report data, primarily on clinic volume and utilization, rather than patient-centered quality outcomes. 39,40 This may result in providers focusing on meeting reporting targets rather than how indicators can be used to inform or support patient care; prior health systems research has found that a focus on bureaucratic accountability can diminish providers' capacity to center patient needs. 41 However, there are ongoing collaborative learning platforms in the study site, such as the Verbal Autopsy Participatory Action Research Programme, which aim to engage local providers and stakeholders in health services research. 42 For quality improvement efforts to be meaningful, there is a need for data literacy education and training at the clinic operational and provider levels, involvement of providers in the development of indicators, and a clear feedback loop allowing what providers see as critical information to be reflected in the national quality measurement strategy.
Providers discussed many barriers to providing quality care, which fell under the foundational categories of workforce and tools. These barriers were all tied back to lack of human and material resources available to the facilities, which impacted every element of care. Understaffing, need for additional training, poor facility infrastructure, lack of clinical care space and resources, and the challenges in providing care associated with these deficiencies have previously been documented in South Africa. Our research frames these gaps in terms of foundational elements of quality care, and the strategies providers use to complete their duties in spite of them, in order to highlight the need for attention and intervention at the meso (district) and macro (province/national) levels to equip PHC facilities with the resources they need.
The first major barrier identified was understaffing. Previous research in South Africa has identified shortage of human resources 2,8,18 and high patient-staff ratios 43 as causes of service delivery issues, including a study conducted with health systems stakeholders in the Agincourt HDSS. 17 It has been noted that health workers' heavy workload is a reason for both short- and long-term sickness absence, 20 and that among health care providers in sub-Saharan Africa, burnout is highest among nurses and is associated with their work environments. 44 In previous research, 42% of providers in the Agincourt HDSS surveyed reported planning to leave their job within the next 2 years (in some clinics, up to 81% of staff). 33 Our research supports these findings that facilities are understaffed, leading to over-crowding and rushed or incomplete services, and negatively impacting providers' health and wellbeing. Providers across the 9 facilities discussed making referrals to hospitals or other PHC facilities as a common mitigating strategy for addressing shortages of staff, medical supplies, and medications. Referring patients to other facilities experiencing similar staffing shortages and lacking resources can only exacerbate these existing service delivery issues. This practice can also consume patient time and money, and affect retention in care, as reaching the second facility may require additional effort, or patients may choose not to follow the referral. Increasing the number of lower-level nurses as well as cleaners, file clerks and data capturers could facilitate more effective task-shifting and allow higher-level nurses to focus on their clinical duties.
Providers also discussed challenges with scheduling throughout the week, including leave and training days, as contributing to their heavy workload. Another study from the MRC/ Wits-Agincourt Unit identified the practice of holding clinics in the mornings leading to flooding the facilities at certain times of day and exacerbating staffing problems. Those researchers suggested making flexible appointments as a no-cost measure to reduce such crowding 17 ; however, this would require providing additional staff to monitor appointments as well as the materials and space required to support this approach. Another strategy for improvement would be to allocate staff more evenly throughout the week, including weekends, rather than focusing staffing on Tuesdays-Thursdays as discussed in our research.
The second major barrier was staff feeling they did not receive enough skills training. Our research illuminates how the current system for training is not responsive to providers' needs and realities. Other work in the Agincourt HDSS has shown a need for additional training on HIV treatment as prevention, 31 and providers discussed a need for trainings in skills and new treatment guidelines. 34 The current training model, where one senior staff member is sent to a training and then expected to disseminate information to the rest of the staff, does not appear to meet providers' needs. Sending a master trainer to provide in-service trainings in the clinics themselves could be a viable option to address the challenge of knowledge dissemination without taking providers away from their facilities. If online trainings are a route to be explored, then mobile data allowing providers to access the internet on their cell phones should also be provided.
Poor facility infrastructure was the third major barrier identified to providing quality care. Healthcare associated infections have been noted in past research due to aging infrastructure and inadequate environmental cleaning. 43 Other research has found poor hygiene and infection control in South African healthcare facilities. 2,19,34 Structural issues noted by providers such as leaking roofs, collapsed ceilings, insect and bat infestations, all require immediate attention and maintenance to ensure safe environments for providers and patients. One provider felt the poor infrastructure was indicative of the South African government's lack of attention to rural areas. Indeed, problems with facility infrastructure in the Agincourt HDSS may date back to apartheid, when the area was a "homeland" with a smaller population, in which limited and poor quality services were provided. 17 These facilities must now combat deteriorating infrastructure and growing populations. Mpumalanga Department of Health stakeholders have noted that facility maintenance is not within their control, but rather falls under the provincial Department of Public Works, Roads, and Transport, 17 thus requiring hard to attain cross-departmental coordination.
The fourth major barrier identified was a chronic lack of equipment, medication, and other resources in the facilities, also documented in previous studies in South Africa 2,43,45 and within the Agincourt HDSS study area. A 2018 study intervention to improve hypertension care in PHCs in the Agincourt HDSS was unsuccessful, in part due to unreliable blood pressure machines and cuffs, intermittent drug shortages, and lack of space. 46 Mitigating strategies to combat this lack of resources, such as relying on peer-provider networks and drug substitution, 45 are not sufficient to address the problem of chronic medication shortages. Even when facilities are able to share, providers often have to use their own money for transport to pick up the shared supplies. Some providers fear that dispensing shorter supplies of medication such as ART is problematic for patients who may have to travel far and at personal expense to get to the facility, and that it may deter them from coming back. There is an evident need for better coordination between facilities and the depot, and between the depot and their suppliers.
The limitations of this study should be addressed. First, this research was conducted in a rural sub-district of Mpumalanga, and our findings may not be generalizable to other districts or provinces throughout South Africa. Second, we did not interview a random sample of providers, and provider availability was subject to limitations based on clinical responsibility. However, we were able to interview approximately 1 in 6 providers across the 9 clinics in the Agincourt research area. Third, engaging providers in research of this kind is a challenge due to their heavy workload and the time required to complete in-depth interviews. Finally, a US researcher conducted the analysis; while they were closely familiar with the study area they may have misinterpreted or missed important contextual details during this process.
In conclusion, PHC providers in the Agincourt HDSS study area are faced with significant barriers to providing quality care driven by budget constraints and underfunding, which lead to deficits in workforce, staffing, tools and resources. These barriers to providing care are compounded by the disconnect between policymakers and implementation, and systems that are unresponsive to their needs. Given the breadth of existing evidence documenting shortages in foundations of quality care at PHC facilities in South Africa, future research should include providers as partners in quality improvement efforts, letting their perspectives inform initiatives and planning. The identified issues in resource allocation, support, and supervision fit with the Lancet Global Health Commission's recommendations 12 and other research conducted in Mpumalanga Province 47 for health systems to focus less on individual provider/clinic interventions and more on meso (district) and macro (provincial/national) approaches. This research elucidates the gaps in foundations of quality care, serving as a reminder that new clinic-level programs or initiatives are unlikely to succeed until the cracks in these foundations are addressed by the district and provincial Departments of Health.
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.
"year": 2021,
"sha1": "e8daf098e9ada3a501880d9423b1e1f1eef3066a",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.35500/jghs.2021.3.e1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "acbcc981021b69c33e29c4bcfcebb595338261e2",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine",
"Political Science"
]
} |
Exosomal MATN3 of Urine-Derived Stem Cells Ameliorates Intervertebral Disc Degeneration by Antisenescence Effects and Promotes NPC Proliferation and ECM Synthesis by Activating TGF-β
Objective Low back pain (LBP) is one of the top three causes of disability in developed countries, and intervertebral disc degeneration (IDD) is a major contributor to LBP. In the process of IDD, there is a gradual decrease in nucleus pulposus cells (NPCs) and extracellular matrix (ECM). Exosomes are important exocrine mediators of stem cells that can act directly on cells for tissue repair and regeneration. In this study, we determined the antisenescence, cell proliferation promotion, and ECM modulation effects of human urine-derived stem cell (USC) exosomes (USC-exos) on degenerated intervertebral discs and explored the underlying mechanism. Methods and Materials USCs were identified by multipotent differentiation and flow cytometry for mesenchymal stem cell- (MSC-) specific surface protein markers. USC-exos were isolated from the conditioned medium of USCs by ultracentrifugation and then analyzed by transmission electron microscopy (TEM), particle size analysis, and western blotting (WB) for exosome marker proteins. The effects of USC-exos on NPC proliferation and ECM synthesis were assessed by Cell Counting Kit-8 (CCK-8), WB, and immunofluorescence (IF) analyses. The protein differences between normal and degenerative intervertebral discs were mined, and the temporal and spatial variations in matrilin-3 (MATN3) content were determined by WB and IF in the intervertebral disc tissues. The candidate molecules that mediated the function of USC-exos were screened out and confirmed by multiple assays. Meanwhile, the mechanism by which the candidate protein in USC-exos promotes NPC proliferation and regulates ECM synthesis was explored. In addition, the effects of USC-exos on ameliorating intervertebral disc degeneration in rats were examined by computed tomography (CT), magnetic resonance imaging (MRI), and histological analyses.
Results The flow cytometry results showed that USCs were positive for CD29, CD44, and CD73, which are USC surface-specific markers, but negative for CD34 and CD45. In addition, USCs showed osteogenic, adipogenic, and chondrogenic differentiation potential. USC-exos exhibited a cup-shaped morphology, with a mean diameter of 49.7 ± 7.3 nm, and were positive for CD63 and TSG101 and negative for calnexin. USC-exos could promote NPC proliferation and ECM synthesis. The protein content of the matrilin family was significantly reduced in degenerative intervertebral discs, and the decrease in MATN3 was the most significant. USC-exos were found to be rich in MATN3 protein, and exosomal MATN3 was required for USC-exos-induced promotion of NPC proliferation and ECM synthesis, as well as alleviation of intervertebral disc degeneration in the rat IDD model. In addition, the effects of MATN3 in USC-exos were demonstrated to be achieved by activating TGF-β, which elevated the phosphorylation levels of SMAD and AKT. Conclusions Our study suggests that reduced MATN3 can be considered a characteristic of intervertebral disc degeneration. USC-exos may represent a potentially effective agent for alleviating intervertebral disc degeneration by promoting NPC proliferation and ECM synthesis by transferring the MATN3 protein.
Introduction
Low back pain (LBP) is a very common problem experienced by most people at a certain time in their life, and it is among the top three causes of disability in developed countries [1][2][3]. The definite causes of low back pain remain unclear; however, intervertebral disc degeneration (IDD) has been documented to be a major contributor to LBP and is the pathological basis for spinal instability, disc herniation, and other spinal degenerative diseases, which cause a considerable burden to society and families and thus are the major global public health issues [4,5].
In disc degeneration, the main pathological change is a gradual reduction in the total NPCs and extracellular matrix (ECM). NPCs are the main functional cells responsible for ECM synthesis. The homeostatic imbalance between anabolism and catabolism leads to the loss of collagen and proteoglycan [6,7]. Collagen type II (COL2) and proteoglycan (predominantly aggrecan (ACAN)) are crucial ECM components for discs to maintain proper function, particularly for the nucleus pulposus [8,9]. ACAN is a biological macromolecule formed by one or more glycosaminoglycan (GAG) chains covalently connected to a core protein. It is the main noncollagen component of the intervertebral disc. The glycosaminoglycans contained in ACAN are mainly chondroitin sulfate and keratan sulfate. These rich and unique molecular characteristics allow the intervertebral disc to retain water and withstand pressure. One of the reasons for the damage and degeneration of the intervertebral disc is the degradation and loss of ACAN. COL2 is one of the most important collagen components in intervertebral discs. COL2 is the main collagen in cartilage, accounting for more than 50% of the extracellular matrix of cartilage. COL2 is mainly expressed by chondrocytes and is abundantly present in the nucleus pulposus [10,11]. In the ECM, COL2 and ACAN are the two most representative components; therefore, this study measured the expression of COL2 and ACAN to characterize ECM conditions. Finding ways to rebalance disordered COL2 and ACAN expression and increase their synthesis is considered key to slowing down or even reversing intervertebral disc damage.
Recently, increasing evidence has revealed that mesenchymal stem cells (MSCs) can release exosomes, which are specialized extracellular vesicles that could provide therapeutic benefits [12,13]. Exosomes are membranous vesicles with a diameter of 50-200 nm, and they contain multiple cellular components, such as proteins, nucleic acids, and lipids. Exosomes act as a cell-free mediator and transfer particular cytokines into recipient cells to achieve their therapeutic paracrine effects in inhibiting senescence, modulating metabolism, and promoting regeneration [14]. Thus, stem cell exo-somes may have potential applications as effective cell-free therapeutic agents [15].
However, MSCs have a limited source and their collection causes certain trauma to the body, which limits their application. Human urine-derived stem cells are stem cells with multidifferentiation potential obtained from human urine. These cells have a wide range of sources, can be obtained conveniently, safely, and noninvasively, raise no ethical concerns, and thus represent a better choice for obtaining exosomes [16][17][18]. MATN3 is a member of the matrilin protein family; it is mainly distributed in cartilage cells and plays an important role in the synthesis of cartilage ECM. Variations in MATN3 or decreases in its content can lead to cartilage and intervertebral disc degeneration. In previous studies, we found that USC-exos are rich in this protein.
A number of studies have shown that USC transplantation is beneficial to degenerated intervertebral discs [12,13]. Exosomes are exocrine vesicles of stem cells that play a paracrine role and deliver a variety of biological effectors of the parent cells. Exosomes that can be absorbed by recipient cells are involved in cellular communication, signaling pathway activation, and metabolism modulation and play an important role in MSC-based therapy. Exosomes are widely distributed and readily available and have no immunogenicity; therefore, they are ideal agents for the treatment of tissue repair and regeneration [19].
In a previous study [20], our team examined the effect of USC-exos on intervertebral disc degeneration through inhibition of NPC apoptosis. Here, we further explored the cell proliferation-promoting and ECM-modulatory effects of human urine-derived stem cell (USC) exosomes (USC-exos) on degenerated intervertebral discs in cell and rat models and investigated the underlying mechanism.
Materials and Methods
2.1. Isolation and Culture of NPCs. The experimental scheme was approved by the Ethics Committee of the Affiliated Hospital of Qingdao University (approval number: QDFY-19-012-03). Nucleus pulposus cells were obtained from 6 patients with lumbar disc herniation, with an average age of 30 ± 4.8 years. The clinical symptoms and physical examinations were consistent with the surgical indications. Before surgery, three experienced chief physicians of spine surgery and one chief physician of the imaging department evaluated their MRI, and the modified Pfirrmann grade of the lumbar intervertebral disc was 5, indicating that the nucleus pulposus and the inner annulus fibrosus showed low signal while the intervertebral disc height was normal. After obtaining informed consent from the patients and their relatives and signing a donation agreement for the
Flow Cytometric Analysis of Surface Markers of USCs.
After trypsin digestion, P3 USCs in a good growth state were collected, centrifuged, and washed with PBS 3 times. A cell suspension with a final concentration of 1 × 10^6/ml was prepared. Then, 100 μl of cell suspension was added to 10 μl of monoclonal antibody working solution for CD29, CD34, CD44, CD45, and CD73 (Santa Cruz Biotechnology, USA) and incubated for 1 hour at room temperature in the dark. The cells were washed another 3 times and analyzed by flow cytometry.
Determination of the Multidirectional Differentiation Potential of USCs. To evaluate the differentiation potential of human urine-derived stem cells, osteogenic, adipogenic, and chondrogenic differentiation was performed according to the associated kit instructions. USCs were inoculated into 6-well plates, and differentiation was induced when the cell fusion rate reached approximately 80%. Osteogenic induction differentiation medium (Cyagen, China) was added and replaced every 3 days. After 21 days of induction, the USCs were fixed with 4% paraformaldehyde and stained with Alizarin Red for observation. The kit contained 175 ml basal medium, 20 ml serum, 2 ml penicillin-streptomycin, 2 ml glutamine, 2 ml β-sodium glycerophosphate, 400 μl ascorbic acid, and 20 μl dexamethasone. For adipogenic induction differentiation, adipogenic induction differentiation medium A (Cyagen, China) was first added and then replaced with adipogenic induction differentiation medium B (Cyagen, China) 3 days later; after 24 h, it was replaced with medium A again, with this cycle alternated 3 times. Finally, 4% paraformaldehyde was used for fixation, and oil red O staining was used for observation. Adipogenic induction differentiation medium A contained 175 ml basal medium, 20 ml fetal bovine serum, 2 ml penicillin-streptomycin, 2 ml glutamine, 400 μl insulin, 200 μl 3-isobutyl-1-methylxanthine (IBMX), 200 μl dexamethasone, and 200 μl rosiglitazone. Adipogenic induction differentiation medium B contained 175 ml basal medium, 20 ml fetal bovine serum, 2 ml penicillin-streptomycin, 2 ml glutamine, and 400 μl insulin. During chondrogenic induction, the cells were first counted, and then 2.5 × 10^5 USCs were collected. The supernatant was discarded after centrifugation in a 15 ml centrifuge tube at 150 g for 5 min, and then 0.5 ml chondrogenic induction medium was added and changed every 3 days.
Approximately 21 days later, 4% paraformaldehyde was used for fixation, paraffin embedding was followed by sectioning, and alcian blue staining was used for observation. The chondrogenic induction differentiation medium kit contained 194 ml basal medium, 600 μl ascorbic acid, 20 μl dexamethasone, 2 ml ITS+Supplement, 200 μl sodium pyruvate, 200 μl proline, and 2 ml transforming growth factor-β3 (TGF-β3).
Exosome Extraction and Identification.
When the cells grew to 70-75% confluence, the culture medium was removed and the cells were washed 3 times with PBS. Serum-free medium was added, and the culture was continued for 48 h. The culture medium was then collected, centrifuged at 4°C and 500 g for 10 min to remove residual cells, and centrifuged at 4°C and 2000 g for 20 min to remove cell debris. The impurities were further removed at 4°C and 10,000 g for 30 min, and the supernatant was retained and then filtered with a 0.22 μm filter membrane to remove excess particles. The supernatant was centrifuged at 4°C and 100,000 g for 2 h, and the resulting precipitate was resuspended in PBS. The exosome morphology was observed by transmission electron microscopy (TEM) (JEM-1200EX, Japan). The number and size distribution of exosomes were analyzed using a NanoSight detector (Malvern, England) and NTA detection and analysis software. USCs were lysed to obtain whole-cell lysates, which were used as a control alongside USC-exos in WB. Western blotting was used to detect the exosome markers CD63 and TSG101 and the negative marker calnexin.
2.6. Exosome Uptake by NPCs. P3 NPCs with good growth status were inoculated into 24-well plates for subsequent experiments after the cells adhered to the wall. First, GFP-lentivirus was transfected to visualize the outline of the NPCs. GFP virus and Lipofectamine 2000 (Thermo Fisher, Massachusetts, USA) were diluted with equal amounts of serum-free culture medium. The diluted GFP virus was mixed with Lipofectamine 2000 and kept at room temperature for 20 min. The mixture was added to the cell culture medium and transfected for approximately 3 h. Then, the exosomes were labeled with PKH26 (Sigma-Aldrich) fluorescent dye according to the operating instructions of the PKH26 fluorescent dye kit. The excess dye was neutralized with an equal volume of PBS containing 5% BSA. Finally, the supernatant was removed by centrifugation at 4°C for 70 min at 100,000 g, and the pellet was resuspended in 50 μl PBS. The prepared PKH26-labeled USC-exos were added to GFP-transfected NPCs and incubated in the dark for 12 h. After fixation with 4% paraformaldehyde for 20 min, the nuclei were stained with DAPI. Slides were mounted in glycerin, and uptake was observed by laser confocal microscopy. The Leica Application Suite Advanced Fluorescence software was used to analyze the images.
CCK-8 (Cell Counting Kit-8) Detects NPC Proliferation.
NPCs were prepared into cell suspensions, counted, and then seeded into 96-well plates, with 5 × 10^3 cells in each well and three replicate wells per group. PBS was added to the control group, and USC-exos, USC-shMATN3-exos, or USC-conshRNA-exos were added to the other groups. Cell-free wells were used as blank controls. After intervention, 10 μl CCK-8 solution (Meilunbio, Dalian, China) was added on days 1, 3, 5, and 7, and the plates were then cultured in a cell incubator for approximately 3 h. A microplate reader (Molecular Devices, USA) was used to detect the absorbance at 450 nm, and the proliferation of NPCs was calculated based on the change in absorbance.
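The CCK-8 readout described above reduces to simple arithmetic on the OD450 values: subtract the cell-free blank, average the replicate wells, and compare each time point to day 1. A minimal sketch, using made-up absorbance values rather than data from this study:

```python
# Hypothetical CCK-8 analysis: blank-correct OD450 readings and express
# proliferation as fold change relative to day 1.
# All absorbance values below are illustrative, not the study's data.

def mean_corrected_od(od_wells, od_blank):
    """Mean blank-corrected absorbance across replicate wells."""
    corrected = [od - od_blank for od in od_wells]
    return sum(corrected) / len(corrected)

# OD450 triplicates per time point (days 1, 3, 5, 7) for one group
readings = {1: [0.35, 0.36, 0.34], 3: [0.61, 0.63, 0.60],
            5: [0.95, 0.97, 0.93], 7: [1.30, 1.28, 1.33]}
blank = 0.10  # cell-free (blank control) well

means = {day: mean_corrected_od(ods, blank) for day, ods in readings.items()}
fold = {day: round(means[day] / means[1], 2) for day in means}
# fold maps each day to proliferation relative to day 1
```

In practice the same calculation would be applied per group (PBS control vs. each exosome treatment) so the curves can be compared directly.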
β-Galactosidase Staining to Detect Cell Senescence. P3 NPCs in a good growth state were inoculated into a 6-well plate and treated according to the experimental groups after the cells adhered to the wall. After the intervention, the instructions of the β-galactosidase staining kit (Beyotime, China) were followed for cell senescence detection. The specific steps were as follows. Staining fixative solution was added to fix the cells at room temperature for 15 minutes. After washing 3 times with PBS, 1 ml of staining working solution, prepared from 930 μl β-galactosidase staining solution C, 10 μl solution A, 10 μl solution B, and 50 μl X-Gal solution, was added to each well. The cells were incubated overnight at 37°C, and the senescence of NPCs was observed under an inverted phase contrast microscope.
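The working-solution volumes above (930 μl solution C + 10 μl solution A + 10 μl solution B + 50 μl X-Gal per 1 ml) can be scaled for any number of wells. A small sketch, assuming 1 ml of working solution per well as in the protocol:

```python
# Scale the β-galactosidase staining working solution for a plate.
# Per-ml recipe taken from the kit volumes quoted in the methods text.
RATIO_UL = {"solution C": 930, "solution A": 10, "solution B": 10, "X-Gal": 50}

def working_solution(n_wells, ul_per_well=1000):
    """Volumes (µl) of each component needed for n_wells wells."""
    total_ul = n_wells * ul_per_well
    return {name: vol * total_ul / 1000 for name, vol in RATIO_UL.items()}

mix = working_solution(6)  # one 6-well plate at 1 ml per well
# e.g. mix["X-Gal"] is 300.0 µl for six wells
```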
2.9. Western Blot (WB) Analysis. After the intervention, the samples were collected and lysed in RIPA lysis buffer (Solarbio, Beijing, China) containing 1 mM phenylmethylsulfonyl fluoride (PMSF) and protease inhibitors to extract proteins. The protein concentration was determined using a BCA kit (Solarbio, Beijing, China). Then, the protein and loading buffer were mixed at a ratio of 4 : 1 (V/V) and boiled for 10 minutes. The proteins were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to a polyvinylidene fluoride (PVDF) membrane. The PVDF membrane was blocked with 5% skimmed milk powder at room temperature and then incubated overnight at 4°C with primary antibodies against CD63, TSG101, calnexin, TGF-β3, p-SMAD3, SMAD3, AKT, p-AKT, and β-actin (all purchased from Santa Cruz Biotechnology, USA), as well as MATN3 (Bioss, Beijing, China), COL2 (Bioss, Beijing, China), and ACAN (Millipore, Massachusetts, USA). The membrane was incubated with a horseradish peroxidase- (HRP-) labeled secondary antibody (ABclonal, Wuhan, China) for 1 hour, and then an ECL kit (Thermo Fisher Scientific, Rockville, MD, USA) was used for luminescence observation. The Image Lab software (Bio-Rad, Hercules, CA, USA) was used to capture and analyze the images.
2.10. Transfection of MATN3 Lentivirus-shRNA. In the functional mechanism investigation, lentiviral shRNAs targeting MATN3 (shMATN3, sc106205V) (Santa Cruz Biotechnology, USA) and control shRNAs (Con shRNAs) were transfected before USC-exos treatment. In the feedback mechanism investigation, the MATN3-overexpressing lentivirus (LV-MATN3) and the control lentiviral vectors (con-LV) were transfected. The lentivirus vectors were packaged by GeneChem (Shanghai, China). Transfection was conducted according to the manufacturer's instructions. Briefly, NPCs were plated into dishes 1 day before transfection. The next day, the NPCs were transfected with the lentivirus vectors at an MOI of 100 supplemented with 10 μg/ml polybrene (Cyagen) for 24 h. The culture medium was then replaced with fresh complete medium, and the cells were selected with 2.5 μg/ml puromycin (Sigma) 72 h after transfection. Forty-eight hours after transfection, total RNA was harvested and subjected to qPCR analysis to assess knockdown efficiency. Seventy-two hours after transfection, total proteins were harvested and subjected to western blot analysis.
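The virus volume implied by an MOI of 100 depends on the cell number and the viral titer. A minimal sketch of this standard calculation (the cell number and titer below are hypothetical, for illustration only):

```python
def virus_volume_ul(n_cells, moi, titer_tu_per_ml):
    """Lentivirus volume (ul) needed to transduce n_cells at a given MOI.

    MOI (multiplicity of infection) = transducing units (TU) per cell,
    so required TU = n_cells * MOI, converted to volume via the titer.
    """
    tu_needed = n_cells * moi
    return tu_needed * 1000.0 / titer_tu_per_ml  # TU / (TU/ml) -> ml -> ul

# Hypothetical example: 2e5 NPCs per dish, MOI 100, titer 1e8 TU/ml
print(virus_volume_ul(2e5, 100, 1e8))  # 200.0
```
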
2.11. Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR). Total RNA was extracted from the samples using TRIzol reagent (Invitrogen, Carlsbad, CA, USA). Then, the instructions of the reverse transcription kit for qRT-PCR were followed to reverse transcribe and amplify the related genes. GAPDH was used as an internal reference, and each sample was run in three replicate wells. The primer sequences are shown in Table 1. The data obtained were analyzed using the 2^-ΔΔCt algorithm.
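The 2^-ΔΔCt calculation referenced above can be made concrete. A minimal Python sketch (the Ct values are hypothetical; GAPDH serves as the reference gene, as in the protocol):

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of a target gene by the 2^-ddCt method.

    Ct values are PCR cycle-threshold numbers; the reference gene
    (GAPDH here) normalizes input amounts between samples.
    """
    d_ct_sample = ct_target_sample - ct_ref_sample      # dCt of treated sample
    d_ct_control = ct_target_control - ct_ref_control   # dCt of control sample
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: target vs GAPDH in treated and control NPCs
fold = relative_expression(22.0, 18.0, 24.0, 18.0)
print(fold)  # 4.0 -> the target gene is 4-fold up in the treated sample
```
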
2.12. NPC Immunofluorescence Test (IFT). P3 NPCs were counted, the cell concentration was adjusted to 1 × 10^5, and the cells were seeded onto coverslips after they adhered to the wall. PBS was added to the control group, 100 μg/ml USC-exos was added to the USC-exos group, 100 μg/ml USC conshRNA-exos was added to the USC conshRNA-exos group, and 100 μg/ml USC shMATN3-exos was added to the USC shMATN3-exos group. After 7 days of intervention, the cells were fixed with 4% paraformaldehyde for 20 minutes at room temperature and blocked with 5% BSA for 30 minutes. Primary antibodies against COL2 (Bioss, Beijing, China) and ACAN (Santa Cruz Biotechnology, USA) were added and incubated overnight at 4°C. The next day, anti-mouse (Abcam, USA) and anti-rabbit (ABclonal, USA) fluorescent secondary antibodies were added separately under dark conditions. After incubation for 1 hour at room temperature, DAPI was added. After mounting in glycerol, the cells were observed under a laser confocal microscope (Nikon, Japan) to evaluate the expression of COL2 and ACAN.
2.13. Rat IDD Model Establishment and Intradisc Injection.
[Figure 2 caption (displaced): The multifield random counting method showed that the proportion of SA-β-Gal-positive NPCs in the USC-exos group was 13.8 ± 1.4%, significantly lower than that in the control group (19.6 ± 2.4%). (d) CCK-8 assay showing NPC proliferation in response to USC-exos. The absorbance at 450 nm of the USC-exos group was markedly higher than that of the control group at 3, 5, and 7 d (n = 3, *P < 0.05). (e-g) WB analysis of NPC ECM synthesis. The expression of ACAN and COL2 was significantly increased when NPCs were stimulated with USC-exos (each group n = 3, *P < 0.05).]
Our group purchased 20 3-month-old Sprague Dawley (SD) rats for in vivo experiments. Among them, 5 rats were
regarded as the normal group without any treatment, and the remaining 15 rats were regarded as the experimental group. Before the operation, the rats were anesthetized with 2% pentobarbital, and three IVDs of each rat, namely, Co4/5, Co5/6, and Co6/7, were identified on the tail vertebrae by palpation with the aid of X-ray fluoroscopy [22]. A 20G fine needle (Hamilton, USA) was used to puncture the above three intervertebral discs to induce degeneration. Then, a 33G needle (Hamilton, USA) was used to inject USC conshRNA-exos (100 μg/ml) into Co4/5, USC shMATN3-exos (100 μg/ml) into Co5/6, and PBS into Co6/7 (2 μl each). The first injection was performed 2 weeks after the puncture and repeated 4 weeks thereafter [23], and CT (GE, USA) and 3.0 T MRI (GE, USA) were performed to observe the morphological appearance of the intervertebral discs at the 4th and 8th weeks. After 8 weeks, the rats were sacrificed, and intervertebral disc samples were collected for paraffin embedding and subsequent experiments. This experimental protocol was approved by the Animal Experiment Committee of Qingdao University.

2.16. Tissue Immunofluorescence (IF) Staining. The samples were cut into frozen sections in advance for later use. Before staining, the frozen sections were rewarmed at room temperature and then washed with TBST to remove residual optimal cutting temperature compound (OCT). After blocking with 3% BSA at room temperature for 1 hour, the MATN3 primary antibody (Bioss, Beijing, China) was added and incubated overnight at 4°C. After rewarming at room temperature for 30 minutes the next day, fluorescent secondary antibody was added, and the samples were incubated for 1 hour under dark conditions at room temperature. After DAPI was added, the sections were mounted in glycerol and the expression of MATN3 was observed under a laser confocal microscope (Nikon, Japan), or they were stored at -20°C in the dark for subsequent observation.
2.17. Statistical Analysis. Each group of experiments was repeated at least three times. Continuous data are expressed as the mean ± standard deviation (SD), and nonparametric data are expressed as the median and interquartile range.
One-way analysis of variance (ANOVA) was used to compare data among groups, and pairwise comparisons between two groups were performed with t-tests. P < 0.05 indicates that the difference is statistically significant. All data were statistically analyzed using SPSS 20.0 software (SPSS, Chicago, IL, USA), and statistical graphs were drawn using GraphPad Prism 8 (GraphPad Software, USA).
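As a sanity check on the analysis described above, the one-way ANOVA F statistic can be computed by hand. A pure-Python sketch of the textbook formula (in practice SPSS supplies the p-value from the F(k-1, N-k) distribution; the data below are hypothetical triplicates):

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA over k groups of measurements.

    F = (between-group mean square) / (within-group mean square).
    """
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares: group sizes times squared mean offsets
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # Within-group sum of squares: residuals around each group mean
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Three hypothetical triplicate groups (e.g. absorbance readings):
print(round(one_way_anova_f([1, 2, 3], [2, 3, 4], [10, 11, 12]), 2))  # 73.0
```
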
3. Results

3.1. USC and USC-Exos Identification. USC colonies generally appeared 7 to 10 days after the primary cell culture and exhibited a cobblestone-like morphology under a light microscope. USCs had a relatively strong proliferation capacity and reached 90% confluence after 2-3 weeks of culture (Figure 1(a)). The characteristics of USCs were consistent with those described in a previous study [24]. Flow cytometry analysis showed that USCs were positive for the surface markers CD29, CD44, and CD73 but negative for CD34 and CD45 (Figure 1(c)). USCs could differentiate into osteocytes, adipocytes, and chondrocytes when cultured in osteogenic, adipogenic, and chondrogenic conditioned culture media, as previously reported [25] (Figure 1(b)). Therefore, the characteristics of USCs meet the criteria of multipotential differentiation defined for MSCs.
USC-exos were obtained by ultrahigh-speed centrifugation. USC-exos were observed under a transmission electron microscope and showed a cup-shaped morphology with a diameter of approximately 50 nm (Figure 1(d)). The results of the particle size analysis showed that the diameter of USC-exos was 49.7 ± 7.3 nm (Figure 1(e)). WB showed that USC-exos were positive for CD63 and TSG101 but negative for calnexin (Figure 1(f)).
3.2. USC-Exos Resist Senescence and Promote NPC Proliferation and ECM Synthesis.
To assess the effects of USC-exos on NPCs function, we first determined the NPC uptake of USC-exos. As shown in Figure 2(a), red fluorescent dye-(PKH26) labeled USC-exos were internalized into the perinuclear region of NPCs after 3 h of incubation. To determine the functional effects, USC-exos or an equal volume of PBS was added to the conditioned medium to culture NPCs for the indicated time.
A SA-β-Gal staining assay was utilized to examine the antisenescence effect of USC-exos, and the results showed that significantly fewer SA-β-Gal staining-positive NPCs were observed in the USC-exos group than in the control group (Figure 2(b)). A CCK-8 analysis was performed to evaluate the effect of USC-exos on the proliferation of NPCs. The results revealed that the proliferation of NPCs was markedly promoted in response to USC-exos stimulation (Figure 2(d)).
To investigate the ECM modulation effect of USC-exos, NPCs were treated with USC-exos and PBS for 72 h. The results of western blot and immunofluorescent staining assays showed that NPCs of the USC-exos group had significantly elevated expression of COL2 and ACAN compared to the control (Figures 2(e), 2(f), and 3).
3.3. MATN3 Was Significantly Decreased in the Nucleus Pulposus Tissue of the Intervertebral Disc. To investigate the potentially key proteins that lead to disc degeneration, a proteome analysis was applied in our previous study [26]. Further data mining was performed to compare the protein variation of normal and degenerated intervertebral discs, and we found that matrilin family proteins were significantly decreased in human degenerated intervertebral discs. The matrilin family has 4 members (MATN1, MATN2, MATN3, and MATN4), which are noncollagenous extracellular matrix proteins. Among them, MATN3 was the most differentially expressed (Figure 4(a)). The results of WB analysis and IF staining further confirmed the decrease in MATN3 in the degenerated human nucleus pulposus (Figures 4(b)-4(d)).
To reveal the variation of MATN3 throughout the degenerated intervertebral discs, normal and degenerated SD rat IVDs were detected for further IF staining. In normal rat IVDs, the nucleus pulposus (NP) and annulus fibrosus (AF) had a good morphological structure. MATN3 was widely distributed in the nucleus pulposus region and vertebral body and moderately distributed in the annulus fibrosus and endplate (EP). The number of MATN3-positive cells predominated in the NP and AF. However, in degenerated IVDs, the NP was unclear, and the structure of the AF was disordered. MATN3 was significantly reduced in both the nucleus pulposus and annulus fibrosus regions but not in the endplate region. Because of the partial ossification of the endplate in aged rats, there was a remarkable increase in MATN3 in endplate bone substances. However, there was no excessive expression of MATN3 in the endplates of young rats (Figure 4(e)). Immunofluorescence images of rat intervertebral discs treated with exosomes showed a significant increase in MATN3 content in the intervertebral discs, which in turn promoted extracellular matrix synthesis (Figure 4(f)).
3.4. Exosomal MATN3 in USCs Mediated the Antisenescence Activity and Promoted Proliferation and ECM Synthesis in NPCs. To investigate whether exosomal MATN3 of USCs mediates these effects, data mining was applied to previous proteomic analyses of protein expression profiles in USC-exos and their parent USCs [25]. MATN3 was found to be enriched in USC-exos, and our WB results confirmed this enrichment (Figures 5(a) and 5(b)).
MATN3 shRNA was used to knock down the expression of MATN3 in USCs, and the inhibitory efficiency of shMATN3 was examined by qRT-PCR (Figure 5(c)). USCs transfected with shMATN3 or control shRNA (Con shRNA) were used as parental cells to generate exosomes for downstream assays. The WB results confirmed the downregulation of MATN3 in exosomes from MATN3-knockdown USCs (USC shMATN3-exos) compared to the control exosomes from USCs transfected with Con shRNA (USC conshRNA-exos) (Figures 5(d) and 5(e)).
Evidence has revealed that MATN3 can directly bind to a specific integrin, which promotes the dissociation and activation of TGF-β by changing the conformation of the TGF-β precursor complex [27]. Therefore, the activation of TGF-β, its downstream SMAD protein, and the proliferation-related AKT protein [28] was further investigated.
In the USC conshRNA-exos group, the level of TGF-β and the extent of p-SMAD3, COL2, and ACAN expression were significantly increased. However, in the USC shMATN3-exos group, the promotive ability of USC-exos was markedly compromised when MATN3 expression in USC-exos was inhibited (Figure 6(a)). A SA-β-Gal staining assay showed that the antisenescence effect of USC-exos was mitigated: SA-β-Gal staining-positive NPCs in
the USC shMATN3-exos group increased compared to those in the USC conshRNA-exos group but were still fewer than in the control group (Figure 7(a)). The CCK-8 assay also showed a decreased ability of USC-exos to promote the proliferation of NPCs when MATN3 was knocked down in their parent USCs (Figure 7(b)). IF staining (Figure 8) indicated that the promotive effect of USC-exos on ECM synthesis was also suppressed once the MATN3 content was reduced in USC-exos.
Studies have reported that senescence and proliferation are associated with activation of the PI3K-Akt pathway [29,30]. Thus, we performed western blotting to detect the levels of Akt and p-Akt in NPCs following treatment with USC shMATN3-exos, USC conshRNA-exos, or an equal volume of PBS for 72 h. As shown in Figure 6(a), the ability of USC-exos to induce Akt phosphorylation was markedly compromised when MATN3 expression in USC-exos was inhibited. Collectively, our findings suggest that MATN3 is required for USC-exos-induced promotion of NPC proliferation and ECM synthesis.
3.5. Exosomal MATN3 Alleviates Intervertebral Disc Degeneration in the IDD Rat Model.
To further verify the therapeutic effects of exosomal MATN3 of USC-exos, we applied USC conshRNA-exos, USC shMATN3-exos, and an equal volume of PBS to IDD rats. The degeneration grades of rat intervertebral discs were evaluated by CT and MRI examination at 4 and 8 weeks after the intradiscal intervention (Figures 9(a) and 9(d)-9(f)). Typical disc tissue could be seen in normal, undegenerated rat intervertebral discs (Figure 9(f)). The percent disc height index (%DHI) was measured from the sagittal CT reconstruction images (Figures 9(b) and 9(c)). A low %DHI indicates collapse or narrowing of the intervertebral space, which reflects the extent of degenerative changes. At 4 weeks and 8 weeks, the %DHI of the USC conshRNA-exos group was higher than that of the USC shMATN3-exos and PBS groups, and the %DHI of the USC shMATN3-exos group was higher than that of the PBS group. Histological grade was analyzed according to HE and Safranin O-fast green staining (Figure 9(g)).
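The %DHI metric can be stated explicitly. A sketch under a common convention (the exact landmark definition is an assumption, since the paper does not spell it out): disc height is normalized by the adjacent vertebral body heights, and the post-intervention index is expressed as a percentage of baseline.

```python
def dhi(disc_height, upper_vertebra_height, lower_vertebra_height):
    """Disc height index: disc height normalized by the mean height
    of the two adjacent vertebral bodies (assumed convention)."""
    return 2.0 * disc_height / (upper_vertebra_height + lower_vertebra_height)

def percent_dhi(post_dhi, pre_dhi):
    """%DHI: post-intervention DHI as a percentage of the baseline DHI."""
    return 100.0 * post_dhi / pre_dhi

# Hypothetical measurements (mm) from sagittal CT reconstructions
pre = dhi(1.0, 5.0, 5.0)    # baseline DHI = 0.2
post = dhi(0.7, 5.0, 5.0)   # degenerated disc
print(percent_dhi(post, pre))  # ~70: disc space narrowed to ~70% of baseline
```
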
Meanwhile, the Pfirrmann grade was based on morphological changes of the intervertebral discs on MRI, with greater degeneration corresponding to a higher grade. The Pfirrmann grade of the USC conshRNA-exos group was lower than that of the USC shMATN3-exos and PBS groups, and the Pfirrmann grade of the USC shMATN3-exos group was lower than that of the PBS group. In normal rats, however, there was no significant disc degeneration (Figure 9(a)).
To verify the radiographic results, further histological staining and immunohistochemical analyses were applied. As shown in Figure 9, the IVDs in the USC conshRNA-exos group had higher disc heights, more ECM components, and more organized NP tissues than those in the USC shMATN3-exos group and the PBS group. The IVDs in the USC shMATN3-exos group had a lower disc height, fewer ECM components, and more disorganized NP tissues than those in the USC conshRNA-exos group but had a better morphological score than the PBS group. That is, compared to the PBS group, the intervertebral discs of the USC conshRNA-exos group exhibited alleviated degeneration. However, in the USC shMATN3-exos group, the ability to mitigate degeneration of the intervertebral disc was compromised when MATN3 was inhibited. Collectively, the radiographic results and morphological analyses indicated that full-ingredient USC-exos with MATN3 could significantly ameliorate intervertebral disc degeneration, while the beneficial effect was attenuated when MATN3 was knocked down in USC-exos.

[Figure 6 caption (displaced): Immunofluorescence of NPC ECM synthesis for COL2 and ACAN. Blue indicates DAPI, green indicates COL2, and red indicates ACAN. The expression of COL2 and ACAN was significantly increased when NPCs were induced by USC conshRNA-exos; however, the promotive effects were compromised when MATN3 was knocked down in USC shMATN3-exos.]
4. Discussion
The main causes of IDD have not been clarified. However, a consensus has been reached that a continuous decrease in NPCs and degradation of the ECM are the pathological basis of IDD [7,31]. Therefore, finding methods to maintain the number of NPCs and promote the synthesis of the ECM is the key to alleviating or even reversing IDD.
In this study, we demonstrated that the content of MATN3 was significantly reduced in degenerated intervertebral discs. MATN3 is a member of the matrilin family and a noncollagenous ECM protein that shares a common structure, including the von Willebrand factor A (WFA) domain, epidermal growth factor (EGF) domain, and C-terminal coiled-coil oligomerization domain [32]. MATN3 is a cartilage-specific protein that can assemble the chondrocyte ECM. As an ECM protein, matrilin-3 can cross-link with collagen fibrils and multiple proteoglycans, playing a critical role in forming a fibrous matrix network [27]. In the past, MATN3 was found to be required for cartilage homeostasis [9]. Mutations in matrilin-3 in humans can cause many kinds of skeletal diseases, such as multiple epiphyseal dysplasia and early-onset osteoarthrosis [33]. The polymorphisms in the MATN3 gene were previously tested, and they indicated a genetic association with IDD. Mutation of the MATN3 region leads to susceptibility to spinal disc degeneration [34]. Here, for the first time, we revealed the spatial and temporal variation in MATN3 in normal and degenerated intervertebral discs. The change in MATN3 was most significant in the NP tissue and moderate in the AF. The decrease in MATN3 in the IVD could be considered a characteristic of IDD.
With the development of exosome research, an increasing number of researchers are studying exosomes as a potential treatment for intervertebral disc degeneration. Stem cell-derived exosomes may offer cell-free therapies as an alternative to traditional stem cell therapies [35]. Intervertebral disc degeneration is usually accompanied by the apoptosis of nucleus pulposus cells and the loss of extracellular matrix. The accumulation of inflammatory factors and matrix-degrading enzymes in intervertebral discs is an important reason for this phenomenon [36,37]. In a study of degenerated and normal nucleus pulposus, Xia et al. [38] found that a variety of proteins related to the inflammatory response were present in intervertebral discs, most of which were expressed in degenerated intervertebral discs. The results showed that IL-1β, iNOS, COX-2, IL-6, MMP3, MMP13, and other inflammatory cytokines and extracellular matrix-degrading enzymes were significantly reduced after the addition of stem cell-derived exosomes. These results suggest that stem cell-derived exosomes can reduce the inflammatory response of intervertebral discs and the degradation of the extracellular matrix. At the present stage, most experiments have used MSCs; however, this research group uniquely chose USCs. USCs not only have MSC-related characteristics but also a number of unique advantages. USCs are a population of cells isolated from urine that have the biological properties and differentiation potential of stem cells. Although limited research has been performed on USCs, studies have confirmed that USCs can be induced to differentiate into osteoblasts, chondrocytes, smooth muscle cells, cardiomyocytes, urothelial cells, neural precursor cells, skeletal muscle cells, and adipocytes; moreover, after several generations of culture, the karyotype remains stable without tumorigenicity, the acquisition pathway is noninvasive and simple, and the culture system is stable [39][40][41][42][43].
Previous studies have found that the proliferation ability of stem cells is closely related to telomerase activity and telomere length. Compared with MSCs, USCs have higher telomerase activity and longer telomere sequences; therefore, they have a stronger proliferation ability [44]. The above characteristics of USCs make them a better source for exosome extraction, and their application has very broad prospects. In previous experiments, Lu et al. [45] used MSC-derived exosomes to intervene in NPCs, and the results showed that exosomes could stimulate the phenotypes of degenerated NPCs to restore those of undegenerated NPCs, increase the synthesis of extracellular matrix, and achieve self-repair. Their studies suggest that exosomes may play a pivotal role in the endogenous repair of IVDs. In another study, Lu et al. [46] showed that MATN3 promoted the synthesis of COL2 and ACAN by promoting IL-1ra expression and inhibited the production of IL-1β-induced catabolic matrix proteinases, thereby delaying intervertebral disc degeneration by reducing extracellular matrix degradation. In this study, our experiments showed that intervention with USC-exos could reduce the degradation of the extracellular matrix and promote its synthesis, thereby delaying intervertebral disc degeneration. The presence of MATN3 in USC-exos was verified in subsequent experiments, suggesting that USC-exos may inhibit intervertebral disc degeneration through MATN3.
Studies have shown that exosomes can promote the proliferation of NPCs and the synthesis of the extracellular matrix. Moreover, exosomes are complex and contain a large number of substances, among which MATN3 may be released. Our results demonstrated that USC-exos could markedly promote NPC proliferation and ECM synthesis, and this promotion was significantly reduced after MATN3 shRNA knockdown. At the same time, WB was used to characterize USC-exos, and the presence of MATN3 was confirmed. By mining the data of previous proteome analyses, we found that MATN3 was enriched in USC-exos, which means that USC-exos could act as a vehicle to transfer the MATN3 protein. Due to the advantages of urine-derived stem cells, the source of exosomes was optimized in this study. Moreover, the role and mechanism of MATN3 in the treatment of intervertebral disc degeneration by USC-exos were verified, with the therapeutic effect achieved by regulating the TGF-β content. This study provides insights for investigating exosome-based treatments for intervertebral disc degeneration. Evidence has revealed that MATN3 can directly bind to specific integrins, which promotes the dissociation and activation of TGF-β by changing the conformation of the TGF-β precursor complex, thereby further affecting downstream gene activation [32]. Our results strongly suggested that MATN3 in USC-exos achieved its promotive function by activating TGF-β. TGF-β is a multifunctional cytokine that modulates cell fate and plasticity in a variety of tissues. The multiple cellular responses induced by TGF-β are mediated via the canonical SMAD pathway and noncanonical pathways, including the phosphatidylinositol 3′-kinase- (PI3K-) protein kinase B (AKT) pathway [28,47]. TGF-β/SMAD pathway activation can promote the expression of COL2 and ACAN in the extracellular matrix of the nucleus pulposus, and phosphorylation of AKT can promote antisenescence effects and cell proliferation [29,30].
In these results, we demonstrated that exosomal MATN3 from USCs mediated the promotive effects. It is likely that MATN3 fulfilled its functions by activating the canonical SMAD pathway and noncanonical pathways (PI3K-AKT). We determined that MATN3 promoted the expression of TGF-β3 and increased the phosphorylation levels of SMAD3 and AKT in NPCs. Further loss-of-function assays of MATN3 suggested that exosomal MATN3 of USCs mediated the antisenescence effect and the promotive effects on NPC proliferation and ECM synthesis. Beyond that, we verified that MATN3 in USC-exos could ameliorate intervertebral disc degeneration in the IDD rat model. The ability of USC-exos to alleviate IDD was significantly compromised when MATN3 was knocked down. The radiographic and histological analysis results indicated that the USC conshRNA-exos group exhibited a lower degree of IVD degeneration than the PBS group. However, in the USC shMATN3-exos group, the promotive effects were suppressed when MATN3 was knocked down, which indicated that MATN3 in USC-exos mediated the beneficial effects on IDD.
In addition, no significant difference was observed between the PBS-injected discs and the no-intervention discs, indicating that the puncture with the 33-gauge fine needle had a negligible effect on intervertebral disc degeneration.
5. Conclusions
MATN3 is not only a noncollagenous ECM protein but also a regulator that could resist senescence and modulate NPC proliferation and ECM homeostasis. USC-exos may be a potential therapeutic agent for IDD by transferring the MATN3 protein.
Data Availability
The data analyzed in this research can be obtained from Zhu Guo, Yan Wang, BoHua Chen, and HongFei Xiang on reasonable request.
Ethical Approval
This study was approved by the ethical committee of Affiliated Hospital of Qingdao University. The study participants were required to give a written informed consent, and their data was coded for confidentiality and compliance with the Declaration of Helsinki.
Consent
Written informed consent was obtained from all the patients for publication of this research and any accompanying images.
Mechanical and Electrical Characterization of Piezoelectric Artificial Cochlear Device and Biocompatible Packaging
This paper presents the development of a piezoelectric artificial cochlea (PAC) device capable of analyzing vibratory signal inputs and converting them into electrical signal outputs without an external power source by mimicking the function of the human cochlea within an audible frequency range. The PAC consists of an artificial basilar membrane (ABM) part and an implantable packaged part. The packaged part provides a liquid environment through which incoming vibrations are transmitted to the membrane part. The membrane part responds to the transmitted signal, and the local area of the ABM part vibrates differently depending on its local resonant frequency. The membrane was designed to have a logarithmically varying width from 0.97 mm to 8.0 mm along the 28 mm length. By incorporating a micro-actuator in an experimental platform for the packaged part that mimics the function of the stapes bone in the middle ear, we created an experimental environment similar to the cochlea, where the human basilar membrane vibrates. The mechanical and electrical responses of the fabricated PAC were measured with a laser Doppler vibrometer and a data acquisition system and were compared with simulation results. Finally, the fabricated PAC in a biocompatible package was developed and its mechanical and electrical characteristics were measured. The experimental results show successful frequency separation of the incoming mechanical signal from the micro-actuator into frequency bands within the 0.4 kHz-5 kHz range.
Introduction
The human ear is a miniaturized acoustic transducer with great sensitivity (20 μPa-60 Pa) and a wide dynamic frequency range (20 Hz-20 kHz). The human cochlea, a snail shell-shaped organ in the inner ear, plays an important role in hearing [1]. Patients with a damaged cochlea require artificial cochlear implants to restore their hearing. Conventional artificial cochlear implants suffer from high power consumption and the exposure of their external units [2]. In an effort to overcome the shortcomings of conventional artificial cochlear implants, there have been many studies to mimic the function of the basilar membrane in the human cochlea and develop an artificial cochlea [3][4][5][6][7][8][9]. However, the focus of this research was to mimic the frequency separation of the basilar membrane, and it did not show conversion of the mechanical movements of the basilar membrane into electrical signals. Recently, signal conversion by a basilar membrane upon sound signal input was reported over a narrow frequency bandwidth [10,11]. Previously, our group reported the development and characterization of an artificial basilar membrane made of a piezoelectric polymer capable of frequency separation and electric signal generation within the human hearing range. However, while the frequency range of human voice conversation is below 2 kHz, the frequency separation range of the previously reported device without a liquid chamber was 2.5 kHz-13.5 kHz [12].
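The quoted pressure range maps directly onto the familiar decibel scale. A quick check using the standard sound pressure level definition with the 20 μPa reference:

```python
import math

def db_spl(pressure_pa, p_ref=20e-6):
    """Sound pressure level in dB SPL relative to the 20 uPa hearing threshold."""
    return 20.0 * math.log10(pressure_pa / p_ref)

# The quoted sensitivity range of the human ear, 20 uPa to 60 Pa:
print(db_spl(20e-6))  # 0.0 dB SPL (threshold of hearing)
print(db_spl(60.0))   # ~129.5 dB SPL (near the threshold of pain)
```

So the stated 20 μPa-60 Pa sensitivity corresponds to a dynamic range of roughly 130 dB.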
In this paper, we report the development of a piezoelectric artificial cochlea (PAC) capable of frequency separation over 450 Hz-5000 Hz. The piezoelectric artificial membrane was assembled with a liquid chamber and the frequency separation behaviors and electrical signal measurement were carried out with a micro-actuator attached to the liquid chamber to analyze the function of PAC as a sound frequency analyzer. The experimental results were analyzed and compared with simulation results. Furthermore, an integrated PAC with PCB substrate having multiple electrical connections and implantable packages was developed, and its mechanical and electrical characteristics were measured.
Design and Simulation
The PAC consists of an artificial basilar membrane part and a packaged part. The membrane part has a piezoelectric polymer film as a membrane material, a stainless use steel (SUS) frame defining the shape of the membrane. On the top of the membrane film, there are patterned electrical lines and pads, while on the bottom, there is a common electrical layer ( Figure 1). The packaged part consists of a liquid chamber with an input port and the membrane part assembled with the packaged part is tested on a packaged platform with a micro-actuator attached ( Figure 2).
The membrane part is assembled on the liquid chamber and the liquid chamber is fixed onto the experimental platform, which can be bolted securely to a vibration isolation table. A micrometer assembled with the experimental platform can adjust the position of the micro-actuator precisely such that the tip of the micro-actuator is placed precisely on the surface of the input port. The dimension of liquid chamber is 10 mm × 37 mm × 5 mm and the tip area of micro-actuator on the input port is 3 mm × 2 mm. The length in y axis of an opening in the SUS frame is 28 mm and the width in x axis varies logarithmically from 0.97 mm to 8.0 mm. The piezoelectric polymer film is made of 25.4 μm thick polyvinylidene difluoride (PVDF) film. A thin metal layer covers the entire bottom side of piezoelectric polymer film as a common electrical pad, and 13 line electrodes across the membrane with 13 electrical pads are designed on the top side ( Figure 1). The width (along the y-axis) of each line electrode is 0.5 mm and the length (along the x-axis) is the same as the width of the membrane at each position. The line electrodes are placed over the entire width of the membrane to make the orthotropic membrane like a human basilar membrane in order to improve the membrane response. The spacing between adjacent line electrodes is 1.5 mm.
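Since the membrane width varies logarithmically from 0.97 mm to 8.0 mm over the 28 mm length, the width at any position can be written in closed form. A sketch assuming an exponential (log-linear) taper between the two stated endpoints; the authors' exact taper function is not given, so this is an illustrative model only:

```python
def membrane_width(y_mm, length_mm=28.0, w_base_mm=0.97, w_apex_mm=8.0):
    """Width at position y for an exponentially tapered membrane.

    w(y) = w_base * (w_apex / w_base) ** (y / L), so log w is linear in y,
    mirroring the logarithmic tonotopy of the basilar membrane.
    """
    return w_base_mm * (w_apex_mm / w_base_mm) ** (y_mm / length_mm)

print(round(membrane_width(0.0), 2))   # 0.97 (base, high-frequency end)
print(round(membrane_width(28.0), 2))  # 8.0  (apex, low-frequency end)
print(round(membrane_width(14.0), 2))  # 2.79 (geometric mean of the endpoints)
```
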
Multiphysics finite element analysis (FEA) was carried out using COMSOL Multiphysics® software (COMSOL, Inc., Palo Alto, CA, USA) to simulate the mechanical behaviors and electrical signal generation of the designed PAC. Figure 3a shows the geometry of the finite element model and Figure 3b shows the half-symmetric meshed model. The finite element model was constructed as half of the membrane part and liquid chamber, and symmetric boundary conditions were applied to reconstruct the simulation of the whole design. The mode frequencies and the position of maximum resonant displacement (MRD) at each mode frequency were analyzed first with the finite element model without a liquid chamber. The position of MRD moves gradually from the apex area (wide width) at the lowest mode frequency to the base area (narrow width) at higher mode frequencies. The lowest mode frequency was 1.44 kHz and the position of MRD was 24.4 mm from the base (y = 0 mm), while the position of MRD was 16.5 mm from the base at the mode frequency of 4.42 kHz. The FEA for frequency separation of the PAC included the simulation of input vibration loads of various frequencies applied on the input port of the liquid chamber, fluid-structure interaction (FSI) between the input port and the fluid in the liquid chamber, and FSI between the fluid and the piezoelectric membrane. A sinusoidal displacement load modeling the movement of the micro-actuator tip was applied on the movable area of the input port. The frequencies of the applied loads ranged from 0.1 kHz to 20 kHz. Figure 4 shows the deformation of the membrane of the finite element model with a liquid chamber at different frequencies. Similar to the modal analysis results, the position of maximum displacement (peak with red color) moves gradually from the apex area (wide width) to the base area (narrow width) as the frequency of the applied loads increases. Figure 5 shows the simulation results of the frequency separation properties of the designed PAC with a liquid chamber.
The results demonstrated that the designed PAC responded tonotopically to incoming signals of varying frequencies, from 0.3 kHz near the apex area up to 10 kHz near the base area, in a similar fashion to the human basilar membrane. The dots on the upper right side can be regarded as signal noise attributed to reflected waves near the apex or to a higher mode in the transverse direction. The first five modal frequencies (FM) from the modal analyses without a liquid chamber and the first five local resonant frequencies (FR) from the FEA for frequency separation of the PAC with a liquid chamber are listed in Table 1, together with the positions of MRD (LM for the modal analysis, LR for the frequency separation simulation). The LM and LR values are quite similar, but the local resonant frequencies in liquid were lower than the modal frequencies by a factor of about four due to the mass loading effect on the membrane. From a design perspective, the frequency response range of a PAC can be lowered by enlarging the membrane area and/or by fabricating the membrane part from more flexible materials; however, the dimensions of an implantable PAC are limited. The mass loading effect therefore enables the designed PAC to cover the lower part of the audible frequency range without increasing the overall size of the device.

Table 1. FEA comparison between the modal frequencies (FM) and the position of MRD (LM) at each FM for the membrane part without a liquid chamber, and the local resonant frequencies (FR) and position of MRD (LR) at each FR for the whole PAC with a liquid chamber.
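The roughly four-fold drop from modal to local resonant frequencies is consistent with a standard added-mass (fluid loading) approximation. The sketch below uses the single-degree-of-freedom form f_fluid = f_vacuum/√(1 + β), a common textbook model, not the formulation used in the paper's FEA; the added-mass factor β is inferred from the observed ratio.

```python
import math

def fluid_loaded_frequency(f_vacuum_hz, beta):
    """Resonant frequency under an added-mass (fluid loading) factor beta,
    using the single-degree-of-freedom approximation
    f_fluid = f_vacuum / sqrt(1 + beta)."""
    return f_vacuum_hz / math.sqrt(1.0 + beta)

def beta_from_ratio(f_vacuum_hz, f_fluid_hz):
    """Invert the relation to estimate the added-mass factor."""
    return (f_vacuum_hz / f_fluid_hz) ** 2 - 1.0

# Example: the lowest mode is 1.44 kHz in air; a four-fold reduction in
# liquid (to 0.36 kHz) corresponds to beta = 15 in this simple model.
beta = beta_from_ratio(1440.0, 360.0)
```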
Fabrication and Experiments
The fabrication process involves a microfabrication process to form the patterned line electrodes and electrical pads, a corona poling process to enhance the piezoelectricity of the piezoelectric polymer film, and an assembly process joining the membrane part and the packaged part. The piezoelectric polymer film used was PVDF film of 25.4 μm thickness (Kynar® Film, Professional Plastics, Singapore). Titanium/gold (Ti/Au) layers of 20 nm/200 nm thickness were deposited through a shadow mask on the glossy side of the PVDF film with an e-beam evaporator. The shadow mask was made of stainless steel and had openings for the 13 patterned line electrodes and electrical pads. Thus, with a single deposition process and without photolithography, the electrical patterns were realized on the PVDF film. Additional Ti/Au layers were deposited on the opaque side as a common electrical pad for harvesting the piezoelectric signal.
A corona poling process was conducted on the processed PVDF film to improve its piezoelectricity, at 80 °C for 20 min with 6.5 kV (Figure 6). The piezoelectric constant of the poled film was measured with a PiezoMeter System (PM300, Piezotest, UK). The d33 piezoelectric constant of the corona-poled PVDF film was around 3.5 pC/N.
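To get a feel for what a d33 of 3.5 pC/N implies, one can estimate the open-circuit voltage of an electrode patch from Q = d33·F together with a parallel-plate capacitance. The relative permittivity εr ≈ 12 for PVDF and the 1 mm² patch area are illustrative assumptions; only the d33 value and the film thickness come from the text.

```python
EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def pvdf_open_circuit_voltage(force_n, d33_c_per_n=3.5e-12,
                              area_m2=1.0e-6, thickness_m=25.4e-6,
                              eps_r=12.0):
    """Open-circuit voltage of a piezoelectric patch: V = Q / C, with
    Q = d33 * F (direct piezoelectric effect) and parallel-plate
    capacitance C = eps0 * eps_r * A / t. Defaults: the measured d33
    and film thickness from the text; area and eps_r are illustrative."""
    charge = d33_c_per_n * force_n                    # coulombs
    capacitance = EPS0 * eps_r * area_m2 / thickness_m  # farads
    return charge / capacitance
```

For a 1 N force on such a patch this gives on the order of 0.8 V open-circuit; the sub-millivolt signals reported later are consistent with the far smaller forces involved and the loading of the readout electronics.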
An SUS frame, having an opening that defines the membrane shape, fluid introduction holes, and wings for secure assembly on the packaged part, was prepared by precision line-saw machining. The processed PVDF film was attached to the SUS frame, constituting the membrane part. An experimental platform and a liquid chamber were prepared with conventional machining processes. The micro-actuator used was a PICMA® Stack Multilayer Piezo actuator (P-883.11, Physik Instrumente (PI) Ceramic, Lederhose, Germany), assembled on the experimental platform with a micrometer. The fabricated PAC is shown in Figure 7. The membrane part, liquid chamber, and micro-actuator with micrometer are assembled firmly to the experimental platform with bolts. The overall experimental setup used to characterize the fabricated PAC is shown in Figure 8. The fabricated PAC was characterized mechanically and electrically by measuring its vibratory behaviours and electrical output signals upon vibratory input from the micro-actuator. The membrane vibration, or displacement in the z direction, was measured with a scanning laser Doppler vibrometer (LDV) system (PSV-I-400 LR and OFV-505, Polytec, Waldbronn, Germany), and the electrical output signals from the electrical pads on the membrane were acquired with a high-accuracy data acquisition (DAQ) module (NI PXIe-4497, National Instruments, Austin, TX, USA). The electrical signals from a function generator in the junction box of the LDV system controlled the magnitude and frequency of the vibratory inputs from the micro-actuator. The input signal applied to the actuator amplifier was white noise (20 Hz-20 kHz, 10 Vpp). The frequency response of the micro-actuator operated by the actuator amplifier was nearly flat with a 3 dB cut-off frequency around 6 kHz, apart from a slight fluctuation near 4 kHz.
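This measurement chain (white-noise drive, LDV displacement readout) amounts to estimating a frequency response function between actuator input and membrane response. A minimal sketch of the standard H1 estimator using SciPy is shown below; the signal names, sampling rate, and trivial gain-of-two "system" are illustrative, not values from the experiment.

```python
import numpy as np
from scipy.signal import csd, welch

def h1_frf(drive, response, fs, nperseg=4096):
    """H1 frequency-response estimate: cross-spectral density of drive
    and response divided by the drive auto-spectral density."""
    f, p_xy = csd(drive, response, fs=fs, nperseg=nperseg)
    _, p_xx = welch(drive, fs=fs, nperseg=nperseg)
    return f, p_xy / p_xx

# Synthetic sanity check: a memoryless gain-of-2 "membrane" should
# return a flat response of magnitude 2 at all frequencies.
rng = np.random.default_rng(0)
x = rng.standard_normal(48000)   # white-noise drive (1 s at 48 kHz)
y = 2.0 * x                      # trivial system under test
f, h = h1_frf(x, y, fs=48000)
```

In practice `drive` would be the function-generator voltage and `response` the LDV displacement at one scan point, giving the membrane's displacement-per-volt transfer function after dividing out the actuator response.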
The liquid chamber of the PAC was completely filled with de-ionized water through the fluid introduction holes, without leaving any air bubbles. Trapped air bubbles disrupt the wave propagation from the input port to the membrane and distort the frequency response of the PAC. The prepared PAC was bolted securely to an active vibration isolation table to minimize the effects of any unwanted external vibration. Figure 9 shows the frequency separation properties of the developed PAC over the whole membrane upon mechanical vibration input from the micro-actuator. The position of maximum displacement moved gradually from the apex (wide width) to the base (narrow width) as the frequency of the input signal increased.
To further investigate the vibratory behaviours of the membrane, the positions of maximum displacement at each frequency analysed from the measurement data (red dots) were plotted together with those from the FEA (blue dots). The experimental frequency separation properties of the developed PAC above 5 kHz were not as clear as those from the FEA simulations, partly due to the limited frequency response of the micro-actuator. Nevertheless, the experimental results for the frequency selectivity of the designed PAC matched the simulation results well, as the PAC responded tonotopically to incoming signals of varying frequencies, from 0.4 kHz near the apex area up to 5 kHz near the base area (Figure 10). There were also dots positioned in the upper right corner, above the straight line between (25 mm, 0.4 kHz) and (5 mm, 5 kHz), due to the effect of the reflected wave near the apex or a higher mode in the transverse direction, as expected from the FEA. The electric signals of the PAC generated on each line electrode at various frequencies were measured at each electrical pad, and the pad that generated the largest electric signal output was identified at each frequency. Electrical pad #13 (near the apex) generated the maximum piezoelectric signal of 0.14 mV at the lower frequency band around 0.5 kHz, and electrical pads #10 and #8 responded most at the frequency bands around 1.2 kHz and 1.9 kHz, respectively (Figure 11). Electrical pad #10 generated the maximum piezoelectric signal of 0.19 mV at 1.2 kHz. The magnitude of the micro-actuator vibration at 1.2 kHz was 18 nm, while human tympanic membrane displacements are around 100 nm for a sound input of 94 dB SPL [13]. However, from electrical pads #1-#7, the measured piezoelectric signal outputs were low and could not be analysed further, as the magnitude of membrane vibration is much smaller near the base than near the apex.
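The straight-line trend between (25 mm, 0.4 kHz) and (5 mm, 5 kHz) suggests a log-linear place-frequency map, as in the biological cochlea. The sketch below interpolates position from frequency under that assumption; it is a convenience for reading the map, not a fit performed in the paper.

```python
import math

def resonant_position_mm(freq_hz,
                         apex=(25.0, 400.0), base=(5.0, 5000.0)):
    """Position of maximum displacement (mm from the base) for a given
    frequency, assuming a log-linear place-frequency map between the two
    measured anchor points (25 mm, 0.4 kHz) and (5 mm, 5 kHz)."""
    (y_apex, f_apex), (y_base, f_base) = apex, base
    frac = (math.log(freq_hz) - math.log(f_apex)) / (math.log(f_base) - math.log(f_apex))
    return y_apex + frac * (y_base - y_apex)
```

For example, the 1.2 kHz band that excites pad #10 maps to roughly 16 mm from the base under this interpolation, consistent with a pad about two-thirds of the way toward the apex.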
The signal peak at 180 Hz was the third harmonic of 60 Hz noise picked up during the experiments. The output signal level was relatively low compared with the third harmonic of the 60 Hz noise, but was sufficient as an input signal to operate the current stimulator under development for stimulating the auditory nerve. In order for a PAC to be implantable in the human body, both the packaged part and the membrane part were modified and refabricated to be more compact and biocompatible. The packaged part was made of titanium, and the liquid sealing layers were fabricated from medical-grade silicone rubber. In addition, based on the FEA results, the liquid chamber was redesigned to have an artificial round window, made of thin silicone rubber film, near the apex to reduce the signal noise arising from wave reflection on the chamber wall near the apex. The membrane part was constructed on a printed circuit board (PCB) instead of an SUS frame for a more compact electrical connection. The electrodes on the membrane were electrically connected to the PCB with silver paste, and the PCB was connected to the external electrical wires through an embedded multiple electrical connector. The embedded connector can be plugged into an external multiple-connection socket for simple connection and disconnection of the signal outputs from the membrane part. The fabricated biocompatible PAC device and its test setup are shown in Figure 12. The mechanical and electrical characteristics of the redesigned PAC were measured and demonstrated its frequency separation capability over the human voice frequency range (Figure 13). Below 1 kHz, three distinct local resonant frequencies were observed in both the vibratory and electrical behaviours (Figure 14).
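The 60 Hz mains interference and its harmonics noted above (e.g., the 180 Hz peak) can also be suppressed in post-processing with cascaded notch filters. A minimal SciPy sketch is shown below; the mains frequency, number of harmonics, and quality factor are illustrative parameters, not values from the paper.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_mains_harmonics(x, fs, mains_hz=60.0, n_harmonics=3, q=30.0):
    """Suppress mains interference by cascading IIR notch filters at the
    fundamental and its harmonics (60, 120, 180 Hz by default).
    Zero-phase filtering (filtfilt) avoids phase-distorting the signal."""
    out = np.asarray(x, dtype=float)
    for k in range(1, n_harmonics + 1):
        b, a = iirnotch(k * mains_hz, q, fs=fs)
        out = filtfilt(b, a, out)
    return out
```

With Q = 30 each notch is only a few hertz wide, so in-band membrane signals such as the 0.5-1.9 kHz pad responses pass essentially unattenuated.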
Conclusions
In this study, we have presented the development of a piezoelectric artificial cochlea mimicking the function of the human cochlea. The PAC has a packaged part that provides an artificial stapes, oval window, and cochlear duct for frequency separation. The characteristics of the PAC were simulated by FEA (modal analysis without a liquid chamber and frequency separation analysis with a liquid chamber) and compared with measurements of its vibratory behaviour and electrical signal generation. The human audible hearing range is 20 Hz-20 kHz, but more importantly, the voice band is 0.3 kHz-3.5 kHz. The highest note reproducible by the average female human voice is C6, at 1046.5 Hz.
The FEA of frequency selectivity demonstrated that the designed PAC has frequency selectivity from 0.3 kHz to 10 kHz. The PAC was fabricated with simple microfabrication and precision micromachining processes, along with corona poling of the piezoelectric polymer film. The fabricated PAC was characterized by measuring its vibratory behaviour with a scanning LDV system. The experimental results matched the simulation results well and showed frequency selectivity of the PAC over 0.4 kHz to 5 kHz, which covers most of the voice band. As the frequency increased, the local resonant position moved gradually from the apex area toward the base area. The measured electrical signals showed successful frequency separation by the PAC into five frequency bands below 2 kHz; however, due to experimental setup issues and non-uniform piezoelectricity over the membrane, the electrical signals could not be measured near the base area. The maximum electrical signal was 0.19 mV at 1.2 kHz, from electrical pad #10. The packaged PAC device made of biocompatible materials shows clearer frequency separation characteristics, without higher-mode components, than the unpackaged PAC device.
Compared to the typical piezoelectric constant of poled PVDF film, the measured values were quite low. More elaborate poling processes, such as stretching during corona poling, would enhance the piezoelectricity and, eventually, the output signal levels. While the electrodes were designed to make the membrane orthotropic, this design can reduce the output signal level, since the piezoelectric outputs near the edges and in the central area of the membrane have opposite polarity as the membrane vibrates. Electrode design modifications, such as narrower electrodes near the edges, and fabrication process changes, such as using different materials for the edge and central areas, can further enhance the output signal levels. To improve the performance of the PAC and resolve the issues discussed above, aspects such as electrical noise suppression, membrane tension control, device miniaturization, and implantation techniques will be investigated further. The presented PAC is another step towards the development of totally implantable cochlear systems requiring much less power and exhibiting more natural sound conversion than conventional cochlear implants, thus contributing to an enhancement in the quality of life of those suffering from hearing loss.
The Mammalian Locus Coeruleus Complex—Consistencies and Variances in Nuclear Organization
Descriptions of the nuclear parcellation of the locus coeruleus complex have been provided in approximately 80 mammal species spanning the phylogenetic breadth of this class. Within the mammalian rostral hindbrain, noradrenergic neurons (revealed with tyrosine hydroxylase and dopamine-β-hydroxylase immunohistochemistry) have been observed within the periventricular grey matter (A4 and A6 nuclei) and parvicellular reticular nucleus (A5 and A7 nuclei), with the one exception to date being the tree pangolin, where no A4/A6 neurons are observed. The alphanumeric nomenclature system, developed in laboratory rodent brains, has been adapted to cover the variation observed across species. Cross-species homology is observed regarding the nuclear organization of noradrenergic neurons located in the parvicellular reticular nucleus (A5 and A7). In contrast, significant variations are observed in the organization of the A6 neurons of the locus coeruleus proper. In most mammals, the A6 is comprised of a moderate density of neurons, but in Murid rodents, primates, and megachiropteran bats, the A6 exhibits a very high density of neurons. In primates and megachiropterans, there is an additional moderate density of A6 neurons located rostromedial to the high-density portion. These variations are of importance in understanding the translation of findings in laboratory rodents to humans.
Introduction
The locus coeruleus was first noted in the human brain by Félix Vicq d'Azyr and described in his 1786 treatise "Traité d'Anatomie et de Physiologie" as a pigmented structure in the rostral hindbrain [1]; however, it was not until 1964 that the locus coeruleus neurons were shown to contain monoamines [2]. It is now known that the locus coeruleus is a complex of neuromodulatory nuclei, present in all vertebrates [3], with neurons that primarily produce the neurotransmitter noradrenaline and are the source of diffuse ascending and descending projections [4][5][6][7][8]. The release of noradrenaline in the terminal fields is correlated with a wide range of functional effects on the targeted cells [9][10][11]. Although laboratory rodent brains have been the most commonly used model to study the structure and function of the locus coeruleus complex, at present, the anatomy of this complex has been examined in around 80 mammal species (Table 1) [2,. Until recently, the use of tyrosine hydroxylase immunohistochemistry (the rate-limiting enzyme in the synthesis of catecholamines) has been the gold standard for the identification of noradrenaline-containing neurons, and the use of this antibody has permitted substantive cross-species analyses. Given the broad range of mammals in which this nuclear complex has been examined and the variations that have been noted, it is of importance to contextualize this comparative research with the more often pursued translational research. It is important to elucidate where and in what lineages changes took place in the nuclear organization of this complex, and how these variations may impact the potential for findings in the intensely studied laboratory rodents [62] to aid our understanding of the function and dysfunction of the locus coeruleus complex in humans.

Table 1. The nuclear parcellation of the locus coeruleus complex has been described in approximately 80 mammalian species, representing species across the phylogenetic breadth of this class. This table summarizes the results of the studies where the nuclei have been described, and indicates where data are unavailable (No data). A4-dorsomedial division of locus coeruleus; A5-fifth arcuate nucleus; A6d-diffuse portion of locus coeruleus; A6cr-compact portion of locus coeruleus, rodent-type; A6cp-compact portion of locus coeruleus, primate-type; A6cm-compact portion of locus coeruleus, megachiropteran-type; A6m-medial division of locus coeruleus; A7sc-nucleus subcoeruleus, compact portion; A7d-nucleus subcoeruleus, diffuse portion; P-nucleus present; --nucleus absent; P.O.-personal observation; ?-data deficient.
Location, Nuclear Parcellation, and Nomenclature of the Locus Coeruleus Complex in Laboratory Rodents
In the adult rat (Rattus norvegicus) [2] and mouse (Mus musculus) [7] rostral hindbrain, the neurons forming the locus coeruleus complex are typically found in both the lateral aspect of the periventricular grey matter (or central grey matter, or griseum pontis) and throughout the parvicellular reticular nucleus (Figure 1a). The initial alphanumeric nomenclature applied to the locus coeruleus complex, based on the study of the laboratory rat [2], described two nuclear subdivisions in the periventricular grey matter (A6 and A4 nuclei) and three within the adjacent parvicellular reticular nucleus (A6 ventral continuation, A7, and A5; Figure 1a); however, others have proposed variations of these initial subdivisions and the nomenclature applied (Figure 1).

Figure 1. (a) The initial alphanumeric nomenclature [2] for the rat brain. (b) The combined anatomical and alphanumeric nomenclature applied by Paxinos and colleagues in their rat atlases [63,64]. (c) The anatomical nomenclature applied by Swanson [65] in his rat atlases. (d,e) The mixed anatomical and alphanumeric nomenclature applied to a limited comparative sample by Kitahama and colleagues [61] and Smeets and González [3]. (f) The flexible alphanumeric nomenclature adopted in the current study based on observations made in approximately 80 mammal species from across the phylogenetic breadth of mammals (Table 1). Note how across mammal species in general the same broad organization and distribution of noradrenergic neurons is observed, but the nomenclature based on laboratory rodents is not encompassing and required modification to be applicable across mammalian species. 4V-fourth ventricle; Vmot-trigeminal motor nucleus; A4-dorsomedial division of the locus coeruleus; A5-fifth arcuate nucleus.
Noradrenergic Neurons (A4 and A6 Nuclei) within the Periventricular Grey Matter of the Laboratory Rodent Rostral Hindbrain
The neurons forming the A4 nucleus of the laboratory rat were described by Dahlström and Fuxe ([2], p. 14) as being located "in the lateral part of the roof of the fourth ventricle, just under the ependyma, ventral to the cerebellar nuclei." It has been shown that these neurons project to the cerebellum in the rat [67]. They further described ([2], p. 17) the A6 nucleus as "seems to be identical with the locus coeruleus" as defined from architectonic studies, with "All-or at least practically all-of its closely packed nerve cells belong to the catecholamine-type" ([2], p. 17). The A6 nucleus was observed in the lateral aspect of the periventricular grey matter, extending from the floor of the fourth ventricle to the ventrolateral aspect of the periventricular grey matter. Due to the lack of a distinct anatomical border between the neurons forming the A6 and A4 nuclei of Dahlström and Fuxe [2], others have grouped these as a single nucleus within the periventricular grey matter of the laboratory rodent brain, naming these two nuclei the locus coeruleus (LC) or dorsal locus coeruleus (LCd) (e.g., [7,[63][64][65][66]) (Figure 1).
Noradrenergic Neurons (A5 and A7) within the Parvicellular Reticular Nucleus of the Rostral Hindbrain
Within the reticular region, between one and five noradrenergic neuron clusters have been described (Figure 1). Initially, Dahlström and Fuxe [2] described a column of cells medial to the trigeminal motor nucleus, which was indicated to be a ventral continuation of the A6 nucleus, with cells located lateral to the trigeminal motor nucleus being classified as the A7 group. In addition, they described a cluster of neurons lateral to the superior olivary nuclear complex that was labelled as the A5 nucleus. Dahlström and Fuxe ([2], p. 17) defined the neurons belonging to the A5 nucleus of the rat as being located "among the fibres of the tractus rubro-spinalis mainly at the level of the caudal and middle third of the nuc. olivaris superior". The A5 nucleus has been consistently identified in subsequent studies of laboratory rodents, with no specific nomenclature variations (e.g., [7,[63][64][65][66]) (Figure 1).
The initial subdivision of the noradrenergic neurons in the parvicellular reticular nucleus identified a column of neurons medial to the trigeminal motor nucleus that were described as "A row of . . . cells . . . observed to pass from the ventral part of the rostral portion of the locus coeruleus in an arch medial to the nuc. motorius n. trigemini down to the cells within group A5" ([2], p. 17/18). In addition, cells observed lateral to the trigeminal motor nucleus were named group A7 (Figure 1a).
The reasons to differentiate these parvicellular reticular noradrenergic neurons from those in the periventricular grey matter (A6 neurons) are the clear anatomical differences, with neurons within grey matter vs. reticular matter, and the presence of the fifth mesencephalic tract between the neuronal groups of the grey and reticular matter. This differentiation is supported by connectivity studies showing differential projection patterns between the noradrenergic neurons located in the periventricular grey matter and those located in the parvicellular reticular nucleus [5,[67][68][69][70][71][72], and by developmental studies [7]. In addition, there is distributional continuity of the parvicellular reticular noradrenergic neurons between the ventral continuation of the A6 and the very laterally placed neurons that in some schemes are considered separately as the A7 group [2,63,64] (Figure 1a,b), despite the lack of clear developmental genetic evidence supporting this division [7].
The parcellation schemes of the noradrenergic neurons have been modified since the initial descriptions, with, for example, Aston-Jones [71] combining the A4, A6, and dorsalmost part of the A6 ventral continuation of Dahlström and Fuxe [2] as the locus coeruleus, while separating the A5 and A7 (being the remainder of the A6 ventral continuation and the A7 group of Dahlström and Fuxe [2]). Within the rat brain atlases of Paxinos and colleagues [63,64], and studies of the mouse brain [7], the A7 cells lateral to the trigeminal motor nucleus are consistent with the initial description, but the A6 ventral column has been termed the subcoeruleus and subdivided into three parts: the subcoeruleus nucleus, alpha part (SubCA/SubCα); the subcoeruleus nucleus, dorsal part (SubCD); and the subcoeruleus nucleus, ventral part (SubCV). It should be noted that Robertson and colleagues [7], in their study of the mouse brain, only describe the SubCD and SubCV. The rat brain atlas of Swanson [65] identifies only the most dorsal portion of the A6 ventral column, labelling this the subceruleus nucleus (SLC), while not specifically identifying the remaining noradrenergic neurons in the parvicellular reticular nucleus (Figure 1). Kitahama and colleagues [66] identify the ventral part of the locus coeruleus (A6v), which appears to correspond to the SubCα of Paxinos and colleagues [63,64] and the SLC of Swanson [65], with the remaining cells being ascribed to the locus subcoeruleus (LSC), which appears to correspond to the SubCD, SubCV, and A7 of Paxinos and colleagues [63,64] and Robertson and colleagues [7].
Location, Nuclear Parcellation, and Nomenclature of the Locus Coeruleus Complex in Other Mammals
A locus coeruleus complex has been reported in all mammalian species in which the rostral hindbrain has been investigated. Despite this generality, there are variances in the organization of the constituent nuclei that have required the application of a flexible nomenclature to accommodate them within a framework that can be related to the intensely studied laboratory rodents (Figures 1-11). The comparative literature, as detailed below, has identified A4 and A6 nuclei housed within the periventricular grey matter, and A5 and A7 nuclei housed within the parvicellular reticular nucleus. Below, we detail the variations that have been observed across species and outline the nomenclature that has been applied to these variations in order to assist in the recognition of potentially homologous and potentially novel nuclei in the various mammalian lineages where the locus coeruleus complex has been studied.
The A4 Nucleus (Dorsomedial Division of the Locus Coeruleus)
This nucleus, being located in the periventricular grey matter adjacent to the lateral recess of the fourth ventricle and not populated by a large number of neurons, is not always readily observed in the various mammal species that have been investigated (e.g., [32]). The A4 nucleus was not observed in the Prototheria (monotremes) and Metatheria (marsupials) species examined (Table 1). In the Laurasiatheria radiation of Eutherian mammals (Table 1), the presence of an A4 nucleus is varied, being reported in most species of Afrotheria (Figure 2a), the hedgehog lineage of the Eulipotyphla, megachiropteran bats (Figure 2b), the Felidae lineage of carnivores, and the Perissodactyla (Table 1); however, neurons that could be assigned to the A4 nucleus were not observed in the shrew lineage of the Eulipotyphla, the microchiropteran bats, the Pholidota, the non-Felidae carnivores, and most Cetartiodactyla (Table 1).
In contrast, the A4 nucleus is consistently observed in all species of the Euarchontoglires radiation of Eutherian mammals (Table 1; Figure 2c,d). While often the A4 is considered part of the A6 (see above), the variability of the presence of A4 neurons across species, primarily conforming to phylogenetic lineages, indicates that it is important to distinguish the A4 as a distinct division of the locus coeruleus complex independent of the locus coeruleus (A6). However, in the species in which the A4 is present, the precise delineation of the boundary between the A4 and A6 neurons is not straightforward, as the A4 neurons appear to be a dorsocaudal continuation of the A6 neurons within the periventricular grey matter. While generally quite a small number of neurons are found in the A4 and their distribution is limited, in both the lar gibbon and chimpanzee, A4 neurons were seen to extend considerably caudally in the ventral white matter of the cerebellum adjacent to the roof of the fourth ventricle [58]. Such an extension of A4 neurons is not observed in non-hominoid primates but does appear to be present in humans [61]. Thus, there is variation in the phylogenetic occurrence of the A4 nucleus and a broader distribution of A4 neurons in hominoids than in other mammal species. The reliable delineation of the A4 from the A6 may require connectivity tracing, distinction of specific cell morphologies, or cell-specific molecular labelling [73,74]. The definition of a reliable distinction between A4 and A6 would greatly assist in the interpretation of functional studies in laboratory rodents and primates.

Figure 2. … [24], (c) the springhare (Pedetes capensis) [44], and (d) the lar gibbon (Hylobates lar) [58]. Note that across mammalian species this nucleus has a varied occurrence (Table 1), but when present is consistently located in the same location, that being dorsal to the locus coeruleus proper (A6) within the periventricular grey matter adjacent to the dorsomedial-most part of the superior cerebellar peduncle and cerebellar white matter. Insets in each image show a high-magnification image of the neurons that form the A4 nucleus in each species. In all images, dorsal is to the top and medial to the left. Scale bar in (b) = 250 µm and applies to (a,b). Scale bar in (d) = 500 µm and applies to (c,d). Scale bar in inset (c) = 25 µm and applies to all insets. 4V-fourth ventricle; Cb-cerebellum; GC-periventricular grey matter of the rostral hindbrain; scp-superior cerebellar peduncle.
The A6 Nucleus (Locus Coeruleus)
The locus coeruleus, or A6 nucleus, could be considered the most readily recognizable nucleus of the complex due to its consistent location and substantive density of neurons in mammals; however, the A6 as described in laboratory rodents [2] is atypical in comparison to other mammals [42]. In all mammals studied to date, apart from the tree pangolin [25], noradrenergic neurons are found in the ventrolateral aspect of the periventricular grey matter of the rostral hindbrain, and these are assigned to the A6 nucleus (Table 1). In the majority of mammals studied, the A6 nucleus is reported as a moderate- to low-density cluster of noradrenergic neurons, which in the comparative neuroanatomical literature has been termed the A6 diffuse (A6d) nucleus (Figure 3; Table 1). Despite this consistent appearance in most mammals, the form of this neuronal cluster varies: from being absent in the tree pangolin (Figure 4) [25], to being comprised of relatively few neurons in the rock hyrax [17], having an additional medial nucleus in the African elephant (A6m; Figure 5) [18], being comprised of a single densely packed neuronal cluster (A6c, locus coeruleus, compact portion) in Murid rodents (A6cr, the rodent-type of the A6c; Figure 6a,b) [42], or being comprised of a combination of a high-density cluster bordered by a low-density cluster in primates (A6cp, the primate-type of A6c; Figure 7) and megachiropteran bats (A6cm, the megachiropteran-type of A6c; Figure 8).
To date, the African elephant is the only species examined that shows a distinct topographically separated cluster of noradrenergic neurons in the periventricular grey matter; this cluster, comprised of relatively few neurons, is located medial to the standard A6d nucleus (Figure 5) [18]. Within the order Rodentia, while the majority of species exhibit the typical moderate-density A6d (Figure 6c-f; Table 1), the Murid rodents, the lineage to which the commonly used laboratory rodents belong, show a distinctly different organization. In Murid rodents, the neurons forming the A6 nucleus are observed as a densely packed cluster that spans the dorsoventral extent of the ventrolateral periventricular grey matter (Figure 6a,b). This appearance and organization of the A6 in the Murid rodents appears to be a derived feature of this lineage [42] and indicates that the Murid rodents are unusual when compared to other rodent species, lagomorphs and scandents (Figure 8f).
The appearance of the A6 nucleus in primates is more complex than observed in most other mammals and clearly different from the Murid rodents. In the primate species that have been studied, the rostromedial portion of the A6 region displays a moderate to low density of neurons (akin to the typical mammalian A6d), but the more caudal regions of the A6 in primates show a densely packed cluster of neurons, the A6cp, that does not span the periventricular grey matter to the floor of the fourth ventricle (Figure 7; Table 1). A very similar organization of the A6 region is observed in the megachiropteran bats, the A6cm (Figure 8b,d; Table 1), but this is not seen in the microchiropteran bats (Figure 8a,c; Table 1). Thus, this primate-like organization of the A6 may have evolved convergently in the primate and megachiropteran lineages, or it may be the result of shared ancestry (see below). The A6 portion of the locus coeruleus complex in mammals displays the most variation in terms of its anatomy, with the species that are used as models for translational research (Murid rodents and primates) showing independently evolved high-density nuclei.

Figure 3. The locus coeruleus in (a) [16], and five species of Laurasiatheria, including (b) the desert hedgehog (Paraechinus aethiopicus) [19], (c) the domestic ferret (Mustela putorius) [27], (d) the Arabian oryx (Oryx leucoryx) [31], (e) the minke whale (Balaenoptera acutorostrata) [36], and (f) the domestic horse (Equus caballus) [pers. obs.]. Note how the density of neurons in the locus coeruleus of all these species shows the diffuse-type of organization (A6d) typical of mammals (Table 1). In all images, dorsal is to the top and medial to the left. Scale bar in (c) = 500 µm and applies to (a-c). Scale bars in (d-f) = 1 mm and apply to the respective images. Scale bar in inset (f) = 25 µm and applies to all insets. 4V-fourth ventricle; GC-periventricular grey matter of the rostral hindbrain; me5-fifth mesencephalic tract; scp-superior cerebellar peduncle.
Figure 4. The rostral hindbrain of the tree pangolin [25], revealed with dopamine-β-hydroxylase (DBH, (b)) and tyrosine hydroxylase (TH, (c)) immunostaining, to compare with an adjacent Nissl-stained section (a). Note the presence of a larger-than-usual compact portion of the subcoeruleus (A7sc) and the diffuse portion of the subcoeruleus (A7d). No apparent dorsolateral division of the locus coeruleus (A4?) or locus coeruleus proper (A6?) is observed within the periventricular grey matter of the rostral hindbrain (GC). The tree pangolin is the only species in which the absence of a locus coeruleus, A6, has been observed [25]. Insets in (b,c) show a high-magnification image of the neurons that form the A7d. In all images, dorsal is to the top and medial to the left. Scale bar in (c) = 500 µm and applies to all images. Scale bar in inset (c) = 25 µm and applies to both insets. scp-superior cerebellar peduncle.

Figure 5. The locus coeruleus complex of the African elephant [18], revealed with tyrosine hydroxylase immunostaining. The standard mammalian diffuse portion of the locus coeruleus (A6d) is present in the ventrolateral periventricular grey matter of the rostral hindbrain (GC), but in addition, a medially located cluster of immunopositive neurons, the medial portion of the locus coeruleus (A6m), is observed and appears to be a lineage-specific addition to the locus coeruleus complex (A6). The neurons of the A6m (b,d) appear to have a slightly more arborized dendritic field than those of the A6d (c,e). In all images, dorsal is to the top and medial to the left. Scale bar in (a) = 1 mm. Scale bar in (c) = 500 µm and applies to (b,c). Scale bar in (e) = 50 µm and applies to (d,e). 4V-fourth ventricle; A7d-locus subcoeruleus, diffuse portion; GC-periventricular grey matter of the rostral hindbrain.

Figure 6. The A6 nucleus in six rodent species: (a) [40,41], (b) the African pygmy mouse (Mus minutoides) [42], (c) the highveld mole-rat (Cryptomys hottentotus) [43], (d) Beecroft's scaly-tailed squirrel (Anomalurus beecrofti) [44], (e) the black-rumped agouti (Dasyprocta primnolopha) [pers. obs.], and (f) the crested porcupine (Hystrix africaeaustralis) [45]. Note that in the two Murid rodents depicted, (a,b), the neurons forming the A6 are densely packed and extend dorsally to the floor of the fourth ventricle, forming what we term the compact portion of the locus coeruleus, rodent-type (A6cr). In contrast, the non-Murid rodents (c-f) evince an A6 nucleus that has less densely packed neurons, forming the diffuse portion of the locus coeruleus (A6d) as seen in most mammals. Insets show high-magnification images of the neurons from the A6 in each species. In all images, dorsal is to the top and medial to the left. Scale bar in (b) = 250 µm and applies to (b) only. Scale bar in (f) = 500 µm and applies to (a,c-f). Scale bar in inset (d) = 25 µm and applies to all insets. 4V-fourth ventricle; A7d-locus subcoeruleus, diffuse portion; A7sc-locus subcoeruleus, compact portion; GC-periventricular grey matter of the rostral hindbrain; Me5-fifth mesencephalic nucleus; me5-fifth mesencephalic tract; scp-superior cerebellar peduncle.

Figure 7. The A6 nucleus in primates: (a) [52], (b) the ringtailed lemur (Lemur catta) [52], and (c-f) a rostro-caudal series, with each image being approximately 1 mm apart, through the A6 of the lar gibbon (Hylobates lar) [58]. Note the presence of both diffuse (A6d) and compact (A6cp, compact portion of the locus coeruleus, primate-type) portions of the A6 in primates, with the caudal end of the A6 (d) showing the region of highest density of immunostained neurons. Insets in (a,b,e,f) show a high-magnification image of the neurons from the A6d (a,e) and A6cp (b,d). In all images, dorsal is to the top and medial to the left. Scale bar in (b) = 500 µm and applies to (a,b). Scale bar in (f) = 1 mm and applies to (c-f). Scale bar in inset (f) = 25 µm and applies to all insets. 4V-fourth ventricle; ca-cerebral aqueduct; GC-periventricular grey matter of the rostral hindbrain; me5-fifth mesencephalic tract; scp-superior cerebellar peduncle.

Figure 8. The locus coeruleus in two species of microchiropteran bat, (a) [21] and (c) the African sheath-tailed bat (Coleura afra) [22]; two species of megachiropteran bat, (b) the Egyptian rousette (Rousettus aegyptiacus) [23] and (d) Wahlberg's epauletted fruit bat (Epomophorus wahlbergi) [24]; (e) the greater forest shrew (Sylvisorex ollula) [19]; and (f) the northern tree shrew (Tupaia belangeri) [50]. Note how the density of neurons in the locus coeruleus of the microchiropterans (a,c), the greater forest shrew (e), and the northern tree shrew (f) shows the diffuse-type of organization (A6d) typical of mammals. In contrast, the locus coeruleus in the two species of megachiropterans (b,d) shows a very high density of cells (A6cm, compact portion of the locus coeruleus, megachiropteran-type), as well as peripheral regions of low density. This appearance is very similar to what is observed in primates (see Figure 7). In all images, dorsal is to the top and medial to the left. Scale bar in (e) = 250 µm and applies to (a,c,e). Scale bar in (f) = 500 µm and applies to (b,d,f). Scale bar in inset (f) = 25 µm and applies to all insets. 4V-fourth ventricle; A7d-locus subcoeruleus, diffuse portion; A7sc-locus subcoeruleus, compact portion; GC-periventricular grey matter of the rostral hindbrain; Me5-fifth mesencephalic nucleus; me5-fifth mesencephalic tract; scp-superior cerebellar peduncle.
The A5 Nucleus (Fifth Arcuate Nucleus)
The A5 nucleus has been reported in all mammals studied (Table 1) and is the least variable of all the nuclei of the locus coeruleus complex in terms of its location and its low number of neurons across mammalian species (Figure 9). No specific variations have been noted in this nucleus across species and, as such, it is likely that this nucleus is homologous across mammals, with its functional actions presumably being comparable.
Brain Sci. 2021, 11, x FOR PEER REVIEW 18 of 26

Figure 9. Low-magnification photomicrographs of the fifth arcuate, or A5, nucleus, revealed with immunostaining for tyrosine hydroxylase, in (a) the southern African hedgehog (Atelerix frontalis) [19], (b) Beecroft's scaly-tailed squirrel (Anomalurus beecrofti) [44], (c) the Cape mountain zebra (Equus zebra zebra) [pers. obs.], and (d) the lar gibbon (Hylobates lar) [58]. Note that across mammalian species this columnar nucleus appears to be invariably present and is consistently located in the same region of the brain, that being ventrolateral to the subcoeruleus, diffuse portion (A7d), and dorsolateral to the superior olivary nuclear complex (SON). Insets in each image show a high-magnification image of the neurons that form the A5 nucleus in each species. In all images, dorsal is to the top and medial to the left. Scale bars in (a-d) = 1 mm and apply to each specific image. Scale bar in inset (d) = 25 µm and applies to all insets.
The A7 Nuclei (Subcoeruleus)

In the broader comparative context, all noradrenergic neurons located within the parvicellular reticular nucleus of the rostral hindbrain that are not assigned to the A5 nucleus are combined to form the A7 group, or subcoeruleus. An additional reason to differentiate the A7 neurons from the A6 neurons is the complete absence of periventricular grey matter noradrenergic neurons in some mammalian species, for example the tree pangolin (Figure 4) [25]. Within this definition of the parvicellular reticular noradrenergic neurons, two distinct populations are consistently observed in the mammals that have been investigated: an A7 nucleus subcoeruleus compact portion (A7sc) and an A7 nucleus subcoeruleus diffuse portion (A7d) (Figures 1f, 10 and 11). The A7sc lies immediately adjacent to the fifth mesencephalic tract in the dorsal-most part of the parvicellular reticular nucleus and is characterized by a moderate to high density of noradrenergic neurons. This portion corresponds to the most dorsal part of the ventral continuation of A6 neurons of Dahlström and Fuxe [2], the subcoeruleus nucleus, alpha part, of Paxinos and colleagues [63,64], the subceruleus nucleus (SLC) of Swanson [65], and the ventral part of the locus coeruleus (LVc) of Kitahama and colleagues [66]. The A7d noradrenergic neurons are far lower in density and are spread more broadly across the parvicellular reticular nucleus; they are topographically continuous with the noradrenergic neurons in the parabrachial region, with these parabrachial neurons being named the A7 group [2,63,64]. The extent of these neurons does vary somewhat across species, but there is no compelling evidence to parcellate the parabrachial noradrenergic neurons from those located more medially [7].
Across all mammalian species studied to date, these two portions of the A7 are consistently reported (Figures 10 and 11; Table 1). The most unusual A7 is found in the tree pangolin (Figure 4) [25], where no noradrenergic neurons are observed within the periventricular grey matter, but the extent of the A7sc and the relative number of neurons comprising the A7sc are expanded in comparison to other mammals. This indicates that these A7sc and A7d nuclei are likely to be homologous nuclei shared by all mammalian species studied to date.

Figure 10. Low-magnification photomicrographs of the subcoeruleus, or A7, region revealed with immunostaining for tyrosine hydroxylase in four mammalian species belonging to the Laurasiatheria superorder of Eutherian mammals, including (a) the southern African hedgehog (Atelerix frontalis) [19], (b) the rock hyrax (Procavia capensis) [17], (c) the African wild dog (Lycaon pictus) [pers. obs.], and (d) the river hippopotamus (Hippopotamus amphibius) [37]. Note the presence of the compact portion of the subcoeruleus (A7sc) in the dorsal aspect of the tegmentum, with scattered, more widely distributed neurons throughout the parvicellular reticular nucleus forming the diffuse portion of the subcoeruleus (A7d). Insets in each image show a high-magnification image of the neurons that form the A7d in each species. In all images, dorsal is to the top and medial to the left. Scale bars in (a,b) = 500 µm and apply to the respective images. Scale bars in (c,d) = 1 mm and apply to the respective images. Scale bar in inset (d) = 25 µm and applies to all insets. scp-superior cerebellar peduncle.
Figure 11. Low-magnification photomicrographs of the subcoeruleus, or A7, region revealed with immunostaining for tyrosine hydroxylase in four mammalian species belonging to the Euarchontoglires superorder of Eutherian mammals, including (a) the east African root-rat (Tachyoryctes splendens) [pers. obs.], (b) the Cape hare (Lepus capensis) [50], (c) the ringtailed lemur (Lemur catta) [52], and (d) the chimpanzee (Pan troglodytes) [58]. Note the presence of the compact portion of the subcoeruleus (A7sc) in the dorsal aspect of the parvicellular reticular nucleus, with scattered, more widely distributed neurons throughout the parvicellular reticular nucleus forming the diffuse portion of the subcoeruleus (A7d). Insets in each image show a high-magnification image of the neurons that form the A7d in each species. In all images, dorsal is to the top and medial to the left. Scale bars in (a,b) = 500 µm and apply to the respective images. Scale bars in (c,d) = 1 mm and apply to the respective images. Scale bar in inset (d) = 25 µm and applies to all insets. scp-superior cerebellar peduncle.

Consistencies in the Organization of the Mammalian Locus Coeruleus Complex

Given the phylogenetic range of mammalian species in which the locus coeruleus complex has been described (Table 1), it is reasonable to assume that the locus coeruleus complex of mammals is invariably located in the rostral hindbrain. These noradrenergic neurons have been shown to be derived from rhombomeres 1-5 in the developing mouse brain [7], and this is likely to be a common developmental origin for the neurons of the locus coeruleus complex in all mammals. In addition, it is reasonable to state that in mammals, with one known exception [25], the noradrenergic neurons of the locus coeruleus complex are found within the periventricular grey matter and the parvicellular reticular nucleus of the rostral hindbrain. It is also reasonable to posit that the locus coeruleus complex is comprised of several nuclei. Despite this overall similarity, there are distinct structural variances that have required the application of a flexible nomenclature to accommodate these within a framework that can be related to the intensely studied laboratory rodents.
Of the constituent nuclei of the locus coeruleus, the A5 nucleus has been reported in all mammals studied and can be considered a homologous nucleus across species. In rats, the A5 nucleus projects to the intermediolateral cell column of the interramal region of the spinal cord with a specific focus on the sympathetic preganglionic neurons [72]. These projections and the associated functional actions are likely to be a consistent feature in mammals. In addition, the A7sc and A7d nuclei, despite the differing parcellation schemes that have been proposed (Figure 1), appear to be very consistent, probably homologous, features of the locus coeruleus complex across mammals (Table 1). This general consistency in the organization of the noradrenergic neurons within the parvicellular reticular nucleus may be a reflection of their role in the control of the visceral and motor systems [9,10,71]. In contrast to the consistency in the organization of the noradrenergic neurons within the parvicellular reticular nucleus, the organization of those within the periventricular grey appears to be, in an evolutionary sense, more plastic. This in turn may relate to the observations that these neurons, especially those of the A6 nucleus, project to the forebrain and thus they may undergo organizational changes related to evolution of the forebrain [7].
Variations in the Organization of the Mammalian Locus Coeruleus Complex
The variances noted in the studies undertaken across mammalian species primarily relate to the neurons of the locus coeruleus complex that are found within the periventricular grey matter, specifically the A4 and A6 nuclei as defined by Dahlström and Fuxe [2]. These variances may have important implications for the extrapolation of findings in laboratory rodents to other mammals, particularly humans, where the organization of the A6 is quite different. As several species of the Euarchontoglires superorder have been studied, and the species that occupy phylogenetic positions between the Murid rodents and primates-the non-Murid rodents, lagomorphs and scandents-do not show the compact morphology of the A6, it is clear that the A6cp is a derived feature of the primate (or closely related species, see below) lineage. This increased nuclear complexity in the primate A6 region may also indicate altered projection patterns, functionality, and even internal neurophysiological interactions of the A6 neurons that may be specific to primates. These anatomical (and possibly connectional) variances raise concerns about the direct translatability of the results of functional studies in Murid rodents to the human.
Gaps in Our Comparative Knowledge of the Mammalian Locus Coeruleus Complex
While the locus coeruleus complex has been described in approximately 80 mammalian species (Table 1), this represents only a small proportion (less than 2%) of mammal species. Despite this, considerable understanding of the consistency and variance has already been obtained, but there are significant gaps in our knowledge that are amenable to further investigation and clarification. Of the approximately 350 Metatherian (marsupial) species, the locus coeruleus complex has only been described in full in two species (Table 1). It is likely that there will be potentially informative variations in the organization of the locus coeruleus complex within the Metatheria. Within the Eutheria, there are specific clades and orders that have not been examined. To date, no descriptions of the locus coeruleus complex have been provided for any Xenarthrans (anteaters, sloths, and armadillos), Tubulidentata (aardvark), Sirenia (sea cows), or Dermoptera (colugos or flying lemurs). The Dermoptera are of particular interest as they are the recognized sister group to the primates (e.g., [75]) and as such occupy an important phylogenetic position in terms of understanding the evolution of the specialized A6 region in primates. Indeed, the megachiropteran bats exhibit an A6 organization that is very similar to that observed in primates [23,24] and have been proposed to have evolved from the Dermoptera [76]. If the Dermoptera were shown to have an A6 organization similar to that observed in primates, and the concept that the megachiropterans evolved from the Dermoptera were thereby supported, the megachiropterans could become a very useful model species for understanding what may have changed in locus coeruleus function in the primate lineage compared to the Murid rodents. It could also mean that the megachiropterans may be very useful species in translational research regarding the locus coeruleus complex and possibly other regions of the brain.
How Does the Nuclear Definition of the LC Complex Developed in the Laboratory Rodent Brain Accommodate Variations in Mammalian Species?
It must be openly acknowledged that the laboratory rodents typically studied are but two of several thousand mammal species, and that the commonly used laboratory rodents represent specific strains that have been bred for specific reasons. This may lead to differences in brain structure or function that can undermine the imposition of anatomical nomenclature derived from the rodent brain to other mammals. This is important as there is a growing concern regarding the extrapolation of scientific findings in the laboratory rodents to humans (e.g., [77][78][79]).
When examining species from across the phylogenetic breadth of mammals, the nomenclatures developed in the laboratory rodent brains are to some extent applicable, but in several cases appear to be untenable. The studies undertaken across mammalian species have mostly employed the alphanumeric nomenclature [2], but the variations noted have required minor changes and flexibility within this nomenclature in order to describe the variations observed accurately, without inferring potential homologies that may or may not be correct. Indeed, determining the precise homologies of the different portions of the locus coeruleus complex across mammalian species, using an evolutionary developmental ("evo-devo") approach, is important information that needs to be obtained. Initially, determining the true homologous nuclei of the locus coeruleus in the laboratory rodents and primates is of utmost importance to our understanding of findings made in the laboratory rodents and their relationship to the function and dysfunction of the human locus coeruleus. This determination, and the methods applied, could then be more broadly investigated across mammals, and other vertebrates, in order to improve our understanding of the structure and function of the locus coeruleus complex, leading to an improved understanding of the behaviour and associated neural processes of less commonly studied species. Comparative research has the potential to provide "shortcuts" in developing our understanding of the structure and function of the nervous system (the squid giant axon and the discovery of the action potential is a classic example) that may not be accessible through the more commonly used approach of investigating laboratory rodents. It may also be the conduit through which we improve the success rate of studies aimed at understanding the function, dysfunction, and treatments of dysfunction of the human brain.
Institutional Review Board Statement: Ethical review and approval were waived for this study due to this being a review of previously published work.

Identification of a Group's Physiological Synchronization with Earth's Magnetic Field
A new analysis technique for the evaluation of the degree of synchronization between the physiological state of a group of people and changes in the Earth’s magnetic field based on their cardiac inter-beat intervals was developed and validated. The new analysis method was then used to identify clusters of similar synchronization patterns in a group of 20 individuals over a two-week period. The algorithm for the identification of slow wave dynamics for every person was constructed in order to determine meaningful interrelationships between the participants and the local magnetic field data. The results support the hypothesis that the slow wave rhythms in heart rate variability can synchronize with changes in local magnetic field data, and that the degree of synchronization is affected by the quality of interpersonal relationships.
Introduction
There are numerous investigations examining correlations between human health and the Earth's magnetic field activity, and many of these studies have shown strong influences on a number of human pathologic and behavioral states.
It is well established that geomagnetic field line resonances and the cavity between Earth and the ionosphere generate a number of resonant frequencies that directly overlap those of the human brain, autonomic nervous system, and cardiovascular system. Of all the physiological systems studied thus far, the rhythms of the heart and the brain are most strongly associated with changes in geomagnetic conditions [1][2][3][4][5][6][7][8][9][10][11].
For example, numerous studies have demonstrated significant relationships between magnetic storms and decreased heart rate variability (HRV), the measurement of beat-to-beat changes in heart rate [12], which is suggestive of a potential mechanism linking geomagnetic activity with increased incidents of myocardial infarction and coronary disease [7,[13][14][15][16][17][18][19]. In a review of the health effects of geomagnetic disturbances [20], Palmer et al. suggest these "definite conclusions": (1) geomagnetic disturbances have a greater effect on humans at higher geomagnetic latitudes; (2) unusually high values of geomagnetic activity have an effect on cardiovascular system health; (3) unusually low values of geomagnetic activity appear to have a negative effect on health; (4) approximately 10% to 15% of people in areas studied are negatively affected by disturbed geomagnetic activity; and (5) HRV is negatively correlated with geomagnetic disturbance.
There is a wide range of magnetic waves occurring in the magnetosphere that are excited by various processes within the magnetosphere as well as the solar wind. The most common source of ultra low-frequency wave energy measured on the ground is due to the field line resonances that exhibit the largest magnetic wave amplitudes occurring in the magnetosphere [21]. The frequency of these oscillations depends on the field strength, the number of charged ions spinning around the field lines, and specifically, the length of the magnetic field line. Quasi-sinusoidal oscillations are called "Pc" (pulsations continuous), and irregular waveforms are called "Pi" (pulsations irregular). Each major type is divided into frequency ranges associated with distinct phenomena. Standing field line oscillations are typically in the Pc3 to Pc5 range, which correspond to a frequency range between 1 and 100 mHz. Oscillations classified as Pc1 and 2 are oscillations with frequencies up to 5 hertz, which are typically excited by geomagnetic substorms [22].
The ionosphere is a layer of plasma, a term that describes highly ionized gases threaded by magnetic fields, which surrounds the Earth. The charged particles in the plasma can spiral around the magnetic field lines and travel along it, creating auroras as high-energy particles flow along the field lines to the Earth's magnetic poles. This process was described by Hannes Alfven to explain how low-frequency waves that propagate along magnetic field lines are created [23].
Standing waves in the magnetosphere involve many magnetic field lines, with lengths several times the Earth's radius, which are excited and oscillate at their resonant frequency, similar to a plucked guitar string. Longer field lines have a lower resonant frequency, while shorter ones resonate at a higher frequency. Field lines with more or heavier particles spiraling around them tend to have lower frequencies. Changes in solar wind velocity or the polarity and orientation of the interplanetary magnetic field can have dramatic effects on the waves, as measured on the surface of the Earth [24].
Studies have shown that increased amplitudes of field line resonances can affect the cardiovascular system, most likely because their frequencies are in the same range as the primary rhythms found in the cardiovascular and autonomic nervous systems [25].
The use of HRV has grown rapidly since new devices have made obtaining the electrocardiogram (ECG) and HRV more accessible, and since it has become understood that HRV reflects autonomic nervous system dynamics [12] and provides an index of stress and emotions [26] and social interaction [27].
In a study conducted by Doronin et al. [28], electroencephalogram (EEG) rhythms, blood pressure, heart rate, and reaction times were compared with the low-frequency geomagnetic rhythms. They found that the oscillations in both heart and brain patterns changed simultaneously with changes in geomagnetic activity. Experiments conducted by Zenchenko et al. [29] monitored healthy individuals' heart rates at rest and compared them with low-frequency variations between 0.5 and 3.0 mHz in the geomagnetic field. They found that in two-thirds of the experiments, there was a synchronization between the heart rhythms and the rhythms in the geomagnetic field that occurred between 4 and 30 min-long periods.
A more recent study [30] by McCraty et al. found a surprising degree of synchronization between geomagnetic activity and human nervous system function by continuously monitoring participants' HRV over a 31-day period in a group of individuals who went about their normal day-to-day lives. Overall, the study found evidence suggesting that daily autonomic nervous system activity not only responds to changes in solar and geomagnetic activity, but is also synchronized with the time-varying magnetic fields associated with geomagnetic field line resonances and Schumann resonances. More specifically, it was found that the participants exhibited a previously unidentified slow wave rhythm in their HRV, which was highly synchronized among the study participants and with the time-varying magnetic field data, with a rhythm of approximately 2.5 days.
Following these findings of a significant interconnection between changes in local magnetic field activity and heart rate variability, this study examined potential relationships between human physiology (HRV), the geomagnetic field activity, and the quality of interpersonal relationships.
It has been found that individuals have widely varying levels of sensitivity to changes in the Earth's magnetic field, and can respond in opposite ways to fluctuations in the same environmental variable [31]. In order to improve the assessment of physiological synchronization and also identify different clusters of individuals' response patterns, we first developed and validated a new analysis approach using near-optimal chaotic attractor embedding techniques. This allowed us to identify specific patterns of synchronization between heart rate variability and local magnetic field data, and assess potential relationships between interpersonal dynamics and physiological synchronization in a group of people.
Participants
During the two-week period between 26 February and 12 March 2015, a group of 20 medical students attending the Lithuanian University of Health Sciences continuously wore cardiac monitors (Bodyguard 2, Firstbeat Technologies Ltd., Jyväskylä, Finland) that gathered inter-beat intervals (IBI) from each participant. Consequently, we obtained a total of 20 IBI series (the IBI being the time between consecutive R wave peaks in the electrocardiogram) from the 20 participants.
Ethics Statement
The research met all applicable standards for the ethics of experimentation in accordance with the Declaration of Helsinki. The permit to perform biomedical investigation was granted by the Kaunas Regional Ethics Committee for Biomedical Investigations, No. BE-2-51, 23.12.2015 (copies of documents are enclosed as Supplementary Materials). Participants provided written informed consent prior to the experiment.
Computational Estimation of the Synchronization of a Group's HRV Time Series with Earth's Magnetic Field Data
The main objective of this study was to assess the synchronization between the HRV time series of each participant and the magnetic field data. This information was then used to construct clusters of participants within the group based on the estimated synchronization between their HRV and the magnetic field.
Magnetic Field Data
The local magnetic field intensity was measured using a local magnetometer located in Lithuania (coordinates: latitude 55.634068, longitude 23.704563), which is part of the Global Coherence Monitoring Network [32]. Two magnetic field detectors (Zonge Engineering ANT-4) were positioned along the north-south and east-west axes to detect local time-varying magnetic field strengths (sensitivity 1 pT) over a wide frequency range (0.01-300 Hz) while maintaining a flat frequency response. The data acquisition infrastructure captures the data, time stamps it using the global positioning system, and transmits it to a common server. Each magnetometer in the network is continuously sampled at a rate of 130 Hz. The data used in this study can be obtained from the HeartMath Institute website [33].
Computation of the Power of Local Magnetic Field
Consider the magnetic field intensity {I_t}, t = 0, 1, ..., N−1, where t is a discrete time variable. In order to transform {I_t} to the frequency domain, the discrete Fourier transform (DFT) was used:

F(ω) = ∑_{t=0}^{N−1} I_t e^{−iωt}. (1)

In order to observe changes in spectral density over time, the analysis interval was broken up into smaller sections using the discrete time short-time Fourier transform (STFT):

F(θ, ω) = ∑_{t=0}^{N−1} I_t ξ(t − θ) e^{−iωt}. (2)

This is essentially a partitioned form of Equation (1) using the windowing function ξ(t). A windowing function has a value close to 1 in each of a series of sliding segments of t and a value of 0 elsewhere.

The squared magnitude of the STFT F(θ, ω) results in the spectrogram of I_t, which is utilized in the subsequent analysis since the STFT provides better time resolution:

S(θ, ω) = |F(θ, ω)|². (3)

S(θ, ω) is typically referenced as the power spectral density (PSD). Thus, the value of S(θ, ω) is interpreted as the signal power at the time interval Δθ and at the frequency range Δω.

Consider that it is required to find the local magnetic field power in the frequency range ω ∈ [ω_1; ω_2] at the time interval θ through θ + Δθ. The power of the local magnetic field is computed using Algorithm A:
(1) Compute the spectrogram S(θ, ω) (as described previously).
(2) Crop the spectrogram, S = min{S; S_crop}, in order to eliminate intermittent chaotic outbreaks in the measured data due to manmade noise, lightning, etc.
(3) Compute the signal power P from the cropped spectrogram over the selected time interval and frequency range.

An example of the spectrogram S(θ, ω) of the magnetic field signal depicted in Figure 1 for Δθ = 4 h, ω ∈ [0; 52] Hz is displayed in Figure 2. In order to calculate the local magnetic field power in the frequency range ω ∈ [0; 1] Hz at the time interval t_0 = 2015/02/26 01:00:02 through t_1 = 2015/02/26 01:00:03, the corresponding spectrogram is calculated as described above. The spectrogram is then cropped to S = min{S; 0.25}. The signal power P = 7.7405 (pT)²/Hz is then computed from the filtered spectrogram.
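Algorithm A can be sketched in a few lines of Python. The function below is an illustrative reconstruction, not the authors' implementation: the function name, the non-overlapping windows, and the band-integration step are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

def band_power(signal, fs, f_lo, f_hi, seg_len, s_crop=None):
    """Power of `signal` in the band [f_lo, f_hi] Hz for each STFT window.

    Step 1: spectrogram (squared-magnitude STFT, the PSD S(theta, omega));
    Step 2: crop, S = min(S, s_crop), to suppress spike-like outbreaks;
    Step 3: integrate the cropped PSD over the frequency bins in the band.
    """
    f, t, S = spectrogram(signal, fs=fs, nperseg=seg_len, noverlap=0)
    if s_crop is not None:
        S = np.minimum(S, s_crop)          # eliminate intermittent outbreaks
    band = (f >= f_lo) & (f <= f_hi)       # bins inside [f_lo, f_hi]
    return S[band, :].sum(axis=0) * (f[1] - f[0])
```

For instance, power in ω ∈ [0; 1] Hz over one-second windows of 130 Hz magnetometer data would be obtained with `band_power(x, 130, 0.0, 1.0, seg_len=130)`.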
Algorithm for the Computation of Geometrical Synchronization between HRV and Magnetic Field Data
Computation of the Area of an Attractor in the State Space
Let a signal X = (X_1, ..., X_n) be a scalar time series of size n. It is possible to embed the time series into a 2D delay coordinate space by pairing delayed observations. For τ = 1 the following trajectory matrix is obtained:

X_1 X_2
X_2 X_3
⋮ ⋮
X_{n−1} X_n

where every row of the matrix corresponds to the coordinates of an embedded point in the delay coordinate space. The time lag τ can be different (τ ∈ ℕ). Thus, at τ = k, the trajectory matrix reads:

X_1 X_{1+k}
X_2 X_{2+k}
⋮ ⋮
X_{n−k} X_n

The ordered set of the embedded points is called an attractor, and the 2D plane itself is called the state space [34]. Note that the area occupied by the embedded attractor is one of the attributes characterizing the dynamics of the time series. However, the area of the attractor depends on the time lag τ used for the reconstruction of the state space. The maximal area of the embedded attractor (and the corresponding optimal time lag) is a feature that can be exploited for the description of the underlying model governing the evolution of the time series [35].
We employed a straightforward algorithm for the computation of the area of the embedded attractor based on the direct assessment of the geometric area occupied by the set of points of the trajectory matrix in the state space. The steps of Algorithm B read:
(1) Compute the center of the mass of the points comprising the attractor. Move the origin of the state space to the center of the mass.
(2) Divide the state space into a fixed number of equal angular slices around the origin.
(3) Approximate the area occupied by the attractor in each slice by the circular sector whose radius is the distance from the origin to the farthest embedded point in that slice; the area S_τ of the attractor is the sum of the sector areas over all non-empty slices.

As noted previously, the area S_τ depends on τ. We consider a finite range of time lag values τ = 1, ..., τ_max; S_τ is calculated for each value of τ. The area of the embedded attractor is maximized with respect to τ: τ* = arg max_τ S_τ. The optimal time lag τ* can be used as a scalar identifier representing the geometrical features of the analyzed data series in the corresponding observation window. The computation of the optimal time lag τ* can be considered as an information reduction algorithm where a set of numbers in the original time series is mapped into a single scalar.
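A minimal Python sketch of Algorithm B follows. The sector-based area estimate mirrors the slicing construction described above; the function names and the details of the sector approximation are assumptions made for illustration.

```python
import numpy as np

def attractor_area(x, tau, n_slices=45):
    """Area S_tau of the 2D delay embedding of `x` at lag `tau`."""
    pts = np.column_stack((x[:-tau], x[tau:]))    # rows of the trajectory matrix
    pts = pts - pts.mean(axis=0)                  # step 1: origin -> center of mass
    angles = np.arctan2(pts[:, 1], pts[:, 0])     # step 2: assign points to slices
    radii = np.hypot(pts[:, 0], pts[:, 1])
    idx = ((angles + np.pi) // (2 * np.pi / n_slices)).astype(int)
    idx = np.minimum(idx, n_slices - 1)
    area = 0.0
    for k in range(n_slices):                     # step 3: sum circular-sector areas
        r = radii[idx == k]
        if r.size:
            area += np.pi * r.max() ** 2 / n_slices
    return area

def optimal_lag(x, tau_max=50, n_slices=45):
    """tau* = arg max over tau of S_tau."""
    areas = [attractor_area(x, tau, n_slices) for tau in range(1, tau_max + 1)]
    return 1 + int(np.argmax(areas))
```

As a sanity check, a sinusoid with a period of 100 samples embeds as a circle (the roundest, largest attractor) at a lag of one quarter period, so `optimal_lag` should return a value near 25.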
Example 1: Identification of the Optimal Time Lag
We will consider a nonlinear pendulum model with harmonic excitation as a paradigmatic chaotic oscillator in this example:

x″ + a x′ + sin x = b cos(ωt), (4)

where a is a fixed damping coefficient. Equation (4) exhibits a chaotic solution at b = 2.048; ω = 2/3 [36]. Figure 3 illustrates 500 data points of the chaotic solution X = (X_1, ..., X_500).

The images of embedded attractors for the time series depicted in Figure 3, using several different values of the time lag τ, are presented in Figure 4. According to the algorithm presented above, the first step is shifting the origin to the center of the mass of the embedded attractor. The results of this procedure are illustrated in Figure 5. Note that this transformation does not impact the geometrical shape of the attractor. We select the number of slices to be 45. The execution of steps 2 and 3 of the algorithm with the six attractors (which correspond to six distinct time lag values) shown in Figure 4 results in the corresponding sliced diagrams illustrated in Figure 6.

The maximal value of τ is set to 50, because the total number of data points in the observation window is 500 (higher values of τ would generate too short trajectory matrices). In order to compute τ* in the range [1, 50], the areas of embedded attractors must be computed for each value of τ. A plot representing the relationship between the area of the attractor S_τ and τ is presented in Figure 7. The largest area of the embedded attractor, S_τ = 24.9527, is achieved at τ* = 23.
Construction of the Algorithm for the Estimation of the Geometrical Synchronization between Two Time Series
Let X = (X_1, ..., X_n) and Y = (Y_1, ..., Y_n) be synchronously sampled time series of size n. The following procedure is used for the estimation of the geometrical synchronization between those two time series. The steps of Algorithm C read:
(1) Divide signals X and Y into T observation windows of size m (m should be large enough to enable the reconstruction of a meaningful attractor in the state space): (X_1, ..., X_m), (X_{m+1}, ..., X_{2m}), ..., (X_{n−m+1}, ..., X_n), and analogously for Y.
(2) Compute optimal time lags for each observation window for both time series using Algorithm B. Such computations result in two vectors of optimal time lags: (τ^(X)*_1, ..., τ^(X)*_T) and (τ^(Y)*_1, ..., τ^(Y)*_T). This information reduction algorithm allows the identification of similarities between attractors reconstructed from different time series from the geometrical point of view. The variation of optimal time lags reconstructed for a pair of time series is used for the quantification of the generalized geometrical synchronization between those time series.
(3) Calculate the vector of absolute differences between the obtained optimal time lags for each observation window: Δ_j = |τ^(X)*_j − τ^(Y)*_j|, j = 1, ..., T. The differences between the optimal time lags are used as the metric of geometrical similarity between the analyzed time series.
(4) In order to identify the slow dynamics reflecting averaged changes in the absolute differences between optimal time lags, divide the vector of absolute differences into consecutive segments of h points and average each segment. The number of points h in each segment should be large enough to produce a meaningful averaging. The resulting vector of averaged absolute differences A^(X,Y) = [A_1, ..., A_{T/h}] is defined as a measure representing the geometrical synchronization between the data signals X and Y.
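The four steps of Algorithm C can be sketched compactly in Python. The window size m, averaging length h, and the pluggable lag-extraction function are illustrative assumptions; `lag_fn` stands in for Algorithm B so the sketch stays independent of the embedding details.

```python
import numpy as np

def geometric_sync(x, y, m, h, lag_fn):
    """Vector A^(X,Y): h-point averages of |tau_X* - tau_Y*| over windows of size m."""
    T = min(len(x), len(y)) // m                                     # step 1: windows
    tx = np.array([lag_fn(x[j * m:(j + 1) * m]) for j in range(T)])  # step 2: lags of X
    ty = np.array([lag_fn(y[j * m:(j + 1) * m]) for j in range(T)])  # step 2: lags of Y
    diff = np.abs(tx - ty)                                           # step 3: differences
    n_seg = T // h                                                   # step 4: slow dynamics
    return diff[:n_seg * h].reshape(n_seg, h).mean(axis=1)
```

In the validation below, `lag_fn` would be the attractor-area lag extractor; here any per-window scalar function can be substituted to test the bookkeeping.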
Computational Validation of the Geometrical Synchronization Algorithm
To validate the analysis method, we used a system of two nonlinear pendulum models with harmonic excitation (given in Equation (4)) coupled with diffusive terms ε(x_2 − x_1) and ε(x_1 − x_2) added to the right-hand sides of the respective equations (5), where ω = 2/3; b_1 = 2.048; b_2 = 2.049. The coupling parameter ε ≥ 0 determines the coupling strength: low values of ε correspond to low synchronization between the pendulum models, while high values lead to highly synchronized oscillations [36].
Two time series X = (X_1, ..., X_n) and Y = (Y_1, ..., Y_n) of size n = 48,000 are illustrated in Figure 8. The time series X and Y are constructed as follows: firstly, ε is set to zero and the equations are integrated until transient processes die down. Then, the first 12,000 data points are sampled (first quarter of Figure 8). After sampling, ε is set to 0.03 (weak diffusive coupling) and another 12,000 data points are sampled (second quarter of Figure 8). The process is repeated two more times, for ε = 0.1 in the third quarter and ε = 0 in the last quarter, as shown in Figure 8.

The time series are divided into T = 160 segments of size m = 300 according to the first step of Algorithm C (300 is a sufficient number of points to reconstruct a meaningful attractor). Next, the optimal time lags for each segment of both signals were computed using Algorithm B. The set of optimal time lags τ^(X)*_j, τ^(Y)*_j, j = 1, ..., 160, is presented in Figure 9a. The signals are considered poorly synchronized in a given segment when the averaged absolute difference of the optimal time lags exceeds a threshold value.
The chaotic oscillators given in Equation (5) are nonsynchronized in the first quarter of data points (ε = 0), which results in averaged absolute differences of optimal time lags ranging from 6 to 21. When the coupling parameter is set to ε = 0.03, the chaotic oscillators are weakly synchronized, which is reflected by the decreased values of A^(X,Y). Note that this effect is not obvious, and cannot be observed by simply considering the difference X − Y (Figure 8c). Further, the coupling parameter ε = 0.1 results in an almost complete synchronization, as seen from both Figure 8c and the near-zero values of A^(X,Y) in Figure 9c. In the last quarter of data points, the oscillators are allowed to evolve in the uncoupled regime at ε = 0, which results in uncoupled chaotic oscillations.
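The validation setup can be reproduced along the following lines. Since the damping used in Equation (4) is not shown above, the sketch assumes a damped driven pendulum with a nominal damping coefficient of 0.5; the function name, initial conditions, and solver settings are likewise illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def coupled_pendulums(eps, b1=2.048, b2=2.049, omega=2.0 / 3.0,
                      damping=0.5, t_end=100.0):
    """Two harmonically driven pendulums with diffusive coupling eps*(x_j - x_i)."""
    def rhs(t, s):
        x1, v1, x2, v2 = s
        drive = np.cos(omega * t)
        a1 = -damping * v1 - np.sin(x1) + b1 * drive + eps * (x2 - x1)
        a2 = -damping * v2 - np.sin(x2) + b2 * drive + eps * (x1 - x2)
        return [v1, a1, v2, a2]
    sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0, 1.0, 0.0],
                    max_step=0.05, rtol=1e-8, atol=1e-8)
    return sol.t, sol.y[0], sol.y[2]   # time, x1(t), x2(t)
```

Sampling such runs under a schedule of ε values (0, 0.03, 0.1, 0) reproduces the structure of the test series X and Y; strong diffusive coupling visibly pulls the two trajectories together.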
Clusterization of Multivariate Time Series Based on Their Synchronization with a Master Time Series
Suppose a set of time series X^(k) = (X^(k)_1, ..., X^(k)_n), k = 1, ..., K, and a master time series M = (M_1, ..., M_n). The steps of Algorithm D read:
(1) Compute the vector of averaged absolute differences A^(X^(k),M) = [A_1, ..., A_{T/h}], describing the relationship between X^(k) and M as described in Algorithm C, for each X^(k), k = 1, ..., K.
(2) Calculate the Euclidean distance (the measure used to estimate the geometrical similarity of two data vectors), which represents the similarity between all K data signals, using the following formula:

d_{kl} = ‖A^(X^(k),M) − A^(X^(l),M)‖, k, l = 1, ..., K.

The above equation yields a symmetric matrix of Euclidean distances.
(3) Construct a dendrogram plot (UPGMA) [37] using the obtained matrix. The main goal of the dendrogram is to identify the clusters of similar time series, i.e., the clustering process involves grouping the analyzed time series based on the similarity of the slower rhythm dynamics of their synchronization with the master time series M.
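Steps 2 and 3 map directly onto SciPy's hierarchical-clustering tools (UPGMA corresponds to "average" linkage). The flat-cluster cut at the end is an illustrative addition, not part of the algorithm as stated.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_by_sync(A, n_clusters):
    """Cluster K participants from a K x (T/h) matrix `A` whose k-th row is
    A^(X^(k),M), the averaged lag differences against the master series M."""
    d = pdist(np.asarray(A), metric="euclidean")  # step 2: pairwise distances
    Z = linkage(d, method="average")              # step 3: UPGMA tree
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```

A dendrogram like those in Figures 11 and 14 can then be drawn from the same tree with `scipy.cluster.hierarchy.dendrogram(Z)`.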
The procedure described above was utilized in subsequent analysis to identify the clusterization of a group of 20 people based on the synchronization of their HRV with the fluctuations in the Earth's local magnetic field. These fluctuations are reflected by the power of the local magnetic field data.
Obtaining the Power of Local Magnetic Field during the Experiment
The local magnetic field power data was computed using the magnetic field intensity values (see Algorithm A) during the experiment (see Section 2.3.1). It is also important to note one specific feature of the acquisition process of the magnetometer data. The magnetometer values are uploaded to the central server at the end of each hour, and the time required for the upload is about one minute. Therefore, the magnetometer data contains one minute-long periods of missing data that occur at the end of each hour.
The local magnetic field power was calculated in the frequency range ω ∈ [0; 1] Hz, since the low-frequency fluctuations of the magnetic field have the most significant impact on human physiology, especially heart and brain activity [29].
The normal heart rate for healthy adults is approximately 60 beats per minute, which implies that the standard IBI is approximately one second. Thus, the power of the magnetic field was computed in one-second intervals, in order to match the time scales of HRV and the local magnetic field variability.
During the computation of the magnetic field power, the spectrogram was cropped using the cropping level S crop = 0.25, since empirical observations indicated that this cropping level was most effective at removing the spike type noise from the spectrograms.
Identification of Clusters in the Groups Based on the Similarity/Synchronization between Participants' HRV and Magnetic Field Activity
Algorithm D was applied to the experimental data (Sections 2.1 and 3.1.1) in order to identify clusters of participants based on the slow dynamics of the synchronization between the participants' HRV and the power of the local magnetic field.
According to Algorithm D, the time series X^(k), k = 1, ..., 20, represent the participants' HRV data collected during the experiment. The master time series M corresponds to the time series of the power of the local magnetic field measured during the time of the experiment.
Since Algorithm D employs Algorithms B and C, the corresponding parameters for both of those algorithms had to be selected: (1) One of the steps of Algorithm C is splitting the participants' HRV and local magnetic field power time series into segments. The standard length of analysis for HRV is five minutes [38]. Thus, inter-beat (RR) interval and magnetometer data were split into five-minute segments for analysis. Note that since HRV data consist of time intervals between each pair of heartbeats, the number of samples in the data vectors corresponding to each five-minute segment varies due to changes in the participants' heart rates and other factors that influence HRV, such as stress and emotional states [39]. Since the power of the local magnetic field was computed for one-second time intervals, the resulting five-minute segments consisted of the same number of elements (300 data points). However, the difference in the size of the segments of the HRV and local magnetic field power time series did not impact the overall result of the study, since all of the segments represented the same concurrent five-minute time intervals. (2) We selected the number of slices in Algorithm B to be 60, because it was empirically observed that a higher number would result in some empty slices. (3) The maximal value of τ in Algorithm B was set to 50. Higher values of τ would generate too short trajectory matrices, because the five-minute segments consisted of approximately 300 elements. (4) The value of the parameter h in Algorithm C, used for the identification of the slow dynamics of the synchronization between the two time series, was set to 48. This corresponded to a four-hour averaging of the differences of the optimal time lags. It was observed that this value of h produced the most meaningful averaging. (5) As noted in Section 3.2, the magnetometer data contained one minute-long periods of missing data at the end of each hour.
Since these periods in the time series did not contain any information, it was necessary to remove those periods in such a way that would not disrupt the timing between the HRV and magnetic field time series. The solution we implemented was to remove the missing data segments from both the five-minute magnetometer data and from the five-minute RR interval series. Since the cropped series obtained after this procedure fully defined the five-minute series, they were used in the data reduction step.
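The concurrent cropping of the gap periods can be sketched as follows; the helper and its arguments are hypothetical, and timestamps are assumed to be expressed on a common clock for both series.

```python
import numpy as np

def drop_gap_samples(times, values, gaps):
    """Remove samples whose timestamps fall inside any (start, end) gap.

    Applying the same `gaps` to both the RR-interval series and the
    magnetometer power series keeps the two series time-aligned.
    """
    times = np.asarray(times)
    keep = np.ones(times.shape, dtype=bool)
    for t0, t1 in gaps:
        keep &= ~((times >= t0) & (times < t1))
    return times[keep], np.asarray(values)[keep]
```

Calling this once per five-minute window, with the window's known upload gap, yields the cropped series used in the data reduction step.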
We applied the clusterization technique on two-day and two-week data sets collected during the experiment (see Section 3.2) in order to determine how the time span of the data set impacts the quality of the clusterization.
According to the first step of Algorithm D, the vector of mean absolute differences A^(X^(k),M) was computed as described in Algorithm C for each X^(k), k = 1, ..., 20. The execution of this step is demonstrated in Figure 10. The dendrogram depicted in Figure 11 is a visual representation of the geometrical synchronization between HRV and the magnetic field for all 20 participants. Numbers on the X axis represent participants. The height of the branches of the dendrogram is proportional to the Euclidean distance between the HRV/magnetic field synchronization vectors for the corresponding participants.
It can be seen in Figure 11 that participants 7 and 20 are the closest (or most similar) in the sense of synchronization between their HRV and local magnetic field power time series. The Euclidean distance between the HRV and magnetic field synchronization for the pair of participants (7,20) is equal to 5.15. At the opposite end of the spectrum, the participant 15's synchronization with the magnetic field is least similar to any of the remaining participants.
The variation of the slow dynamics of the synchronization (Algorithm C) for two pairs of participants, (7,20) and (7,15), is also illustrated in Figures 12 and 13, respectively. It can be seen that there is a strong visible similarity between the synchronization dynamics for participants 7 and 20, meaning that they are similarly synchronized with the local magnetic field, and form a cluster in the dendrogram (Figure 11). On the other hand, there is no visible similarity in the synchronization dynamics of individuals (7,15), indicating that the relationship between HRV and magnetic field activity for those participants is unlikely ( Figure 13). The Euclidean distance between the HRV and magnetic field synchronization for the pair of participants (7,15) is equal to 30.09.
Int. J. Environ. Res. Public Health 2017, 14, 998 15 of 22 The dendrogram depicted in the Figure 11 is a visual representation of the geometrical synchronization between HRV and the magnetic field for all 20 participants. Numbers on the X axis represent participants. The height of the branches of the dendrogram is proportional to the Euclidean distance between HRV/Magnetic field synchronization vectors for corresponding participants.
Next, the dendrogram plot (Figure 14) for the entire two weeks (T = 4032) of the experiment was obtained in an identical manner.
The comparison of the two-day (Figure 11) and two-week (Figure 14) clusterization results shows that the use of data with the longer time span provides better quality of clusterization, since the distances between the identified clusters for the two-week data (Figure 14) are greater.
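As an illustration of the distance computation underlying the dendrograms, the sketch below computes pairwise Euclidean distances between participants' synchronization vectors and reports the closest pair. The vectors and function names are ours and purely illustrative; they are not taken from the study's data.

```python
import math
from itertools import combinations

def euclidean(u, v):
    """Euclidean distance between two synchronization vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def closest_pair(sync_vectors):
    """Return the pair of participants whose HRV/magnetic-field
    synchronization vectors are closest in Euclidean distance."""
    return min(
        combinations(sync_vectors, 2),
        key=lambda pair: euclidean(sync_vectors[pair[0]], sync_vectors[pair[1]]),
    )

# Toy example: participants 7 and 20 share nearly identical dynamics,
# while participant 15 is far from both (values are made up).
vectors = {
    7:  [0.1, 0.2, 0.1, 0.3],
    20: [0.1, 0.2, 0.2, 0.3],
    15: [5.0, 4.0, 6.0, 5.5],
}
pair = closest_pair(vectors)
```

In the full analysis these pairwise distances would feed a hierarchical clustering routine (e.g. average linkage) to produce the dendrogram; the sketch only shows the distance step.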
Psychological Survey Data
In addition to the HRV data presented in Section 2.1, each person's physical and mental condition, as well as the quality of interactions between the individuals during the two-week experiment, were assessed. Each participant completed a questionnaire twice each day throughout the two-week period. The questionnaire consisted of questions concerning their physical, emotional, social, and general states (rating scale between 0 and 10). At the end of each day, each participant was also asked to make a list of the other participating individuals who they had interacted with that day (if any) and rate whether the interaction had positively (+1) or negatively (−1) affected them and their survey responses. The quality-of-interaction data is shown in Table 1. The first column as well as the first row of the table show the participant number for each of the 20 volunteers. The number at the intersection of a row and a column equals the sum of the row person's ratings of interactions with the column person over the 14 days. If, for example, the row person specified four positive and two negative interactions with the column person during the two-week experiment, the overall interaction value will equal 2. It can be seen that the matrix is nonsymmetric, which means that if the column person positively or negatively affected the row person, this does not necessarily imply that the opposite is true. The matrix is also sparse, since participants did not complete this part of the survey if interactions did not occur. In order to illustrate the interaction data, the questionnaire matrix was visualized using a directed weighted graph visualization technique (Figure 15). A line with an arrow pointing from person a to person b (the pair (a, b)) represents that person a felt positive about person b. The width of the line is proportional to the number of times such an interaction occurred. The graph gives a clearer picture of the "mutual affection" between the participants.
Participant pairs (7,20), (2,16), (4,11), (2,10), and (1,12) can be clearly identified. However, it is important to note that the "mutual affection" for pairs (4,11) and (2,10) was not "balanced", since the thickness of lines (4,11) and (11,4), as well as of lines (2,10) and (10,2), is substantially different. Consequently, only the pairs (7,20), (2,16), and (1,12) show bilateral "mutual positive interactions".
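The scoring rule described above (summing daily +1/−1 ratings into a nonsymmetric sparse matrix and then looking for bilaterally positive pairs) can be sketched as follows. The record format and function names are illustrative assumptions, not the study's actual data structures.

```python
from collections import defaultdict

def build_interaction_matrix(records):
    """Sum the daily +1/-1 ratings into a sparse (rater, rated) -> score map.

    `records` is an iterable of (rater, rated, rating) tuples, one per
    reported interaction; the resulting matrix need not be symmetric.
    """
    matrix = defaultdict(int)
    for rater, rated, rating in records:
        matrix[(rater, rated)] += rating
    return matrix

def mutual_positive_pairs(matrix):
    """Unordered pairs where BOTH directions sum to a positive score."""
    return sorted(
        {tuple(sorted((a, b))) for (a, b), s in matrix.items()
         if s > 0 and matrix.get((b, a), 0) > 0}
    )

# Illustrative data: four positive and two negative reports net to +2,
# mirroring the example scoring rule given in the text.
records = ([(7, 20, +1)] * 4 + [(7, 20, -1)] * 2
           + [(20, 7, +1)] * 3 + [(15, 7, -1)])
matrix = build_interaction_matrix(records)
```

Only pairs positive in both directions survive the `mutual_positive_pairs` filter, which is the "bilateral mutual positive interaction" criterion used above.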
The data from the questionnaires (physical, emotional, social, and general state) was also analyzed and examined for the occurrences of changes in participants' conditions in each domain. Changes were most clearly evident in participant 15's survey data (Figure 16). The figure shows that after feeling good for the first two days of the experimental period, there was a change in his physical, emotional, social, and general condition that drastically worsened and then recovered on the eighth day of the experiment.
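A minimal way to locate such a dip programmatically, assuming daily ratings on the 0-10 scale described above, is to find the day with the lowest mean rating across the four surveyed domains. The series below is synthetic, shaped only loosely like the pattern described; it is not participant 15's actual data.

```python
def worst_day(daily_ratings):
    """0-based index of the day with the lowest mean rating across the
    four surveyed domains (physical, emotional, social, general)."""
    means = [sum(day) / len(day) for day in daily_ratings]
    return min(range(len(means)), key=means.__getitem__)

# Synthetic 10-day series: good start, mid-week dip, recovery by day 8.
ratings = [
    (8, 8, 9, 8), (8, 9, 8, 8), (5, 4, 5, 5), (3, 2, 3, 3),
    (2, 2, 2, 2), (3, 3, 2, 3), (4, 5, 4, 4), (7, 8, 7, 7),
    (8, 8, 8, 8), (9, 8, 9, 9),
]
```

For noisy real survey data one would typically smooth the daily means or use a change-point detector rather than a raw minimum, but the idea is the same.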
Comparison of Survey Data and the HRV/Magnetic Field Synchronization Results
The results of the HRV geometrical synchronization with the magnetic field data (represented by cluster diagrams in Section 3.1.2) were compared with the survey data (Section 3.2) in order to determine if the two separate data sets (sociological and physiological) revealed similar trends.
The dendrogram in Figure 14 shows that the synchronization between HRV and local magnetic field power for participants 7 and 20 is the most similar. On the other hand, the synchronization for participant 15 is the most different when compared with all other participants.
It is interesting to observe that the pair of participants (7,20) is the most mutually positively oriented pair according to the questionnaire data (Figure 15). Remarkably, participant 15 self-assessed his condition as the worst (out of all 20 participants) during the analyzed period of time.
It appears that the computational technique based on the identification of slow dynamics of the synchronization between HRV and local magnetic field can also reflect interpersonal relationships. Participants reporting more positive states and interactions were more similarly synchronized with the magnetic field. Note that the questionnaire data was not used in the proposed algorithm, and served only as a tool to assess psychological relationships within the group.
Discussion
This study developed and validated a novel computational approach using near-optimal chaotic attractor embedding techniques for the identification of physiological synchronization among individual group members' slow wave rhythms in heart rate variability and the degree of synchronization with changes in the local geomagnetic field. This approach allowed us to identify and quantify the degree of geometrical synchronization in time. This new analysis method was utilized to determine the degree of synchronization between locally obtained geomagnetic fields and to identify clusters of similar synchronization patterns in a group of 20 people whose HRV was continuously monitored over a two-week period as they went about their normal day-to-day lives.
Through comparing the two-day and two-week clusterization results, it can be seen that the two-week data provided better separation of the clusters of participants, i.e., the distances between the constructed clusters are greater. This demonstrates that a longer duration of the experiment positively impacts the ability to identify meaningful clusters of individuals. However, the comparison of the two-day dendrogram with the survey data showed that a shorter time span of data provides clearer detection of changes in the participants' condition. This is because changes in a participant's condition can average out over a long period of time. Thus, such investigations should be performed over both short and long time periods in order to obtain more complete results.
To the best of our knowledge, this is the first study to incorporate psychological data gathered throughout the experiment in the context of physiological synchronization to other group members and with the Earth's time-varying magnetic fields.
Interestingly, the synchronization between the groups' slow wave dynamics of RR intervals and the variation of the local magnetic field were consistent with the psychological data gathered throughout the experiment. When individual pairs reported more stress in their interpersonal relationships, they were less synchronized. This could imply that both the physiological and psychological variables were influenced by the time-varying magnetic fields in the environment. On the other hand, it may indicate that one's level of stress and emotional state modulates the capacity to synchronize to other group members and the Earth's magnetic field. Either way, this finding suggests that psychological states may be a factor in mediating the level of physiological synchronization between people and with the rhythms in the Earth's magnetic field.
Although the specific details of how geomagnetic fields influence human psychophysiology are not yet fully understood, a potential explanation is a resonant coupling between the nervous system and field line resonances (Alfvén waves), or standing waves in the Earth-ionosphere resonant cavity (Schumann resonances), that overlap with physiological rhythms [30]. However, a growing body of research strongly suggests that solar and magnetic influences affect a wide range of human health and behavioral processes, with the cardiovascular and nervous systems being the most clearly affected.
Overall, the study demonstrated that the slow wave rhythms in heart rate variability can synchronize with local magnetic field data, and that the degree of synchronization is affected by the quality of interpersonal relationships. When two or more persons respond to some changing environmental factor in a similar way and are emotionally close as measured by an independent metric (such as the survey or a direct comparison of their HRV attractors over time), then their response patterns to the environmental factor are less likely to result from chance.
Conclusions
The results of this study are consistent with other studies showing that daily autonomic nervous system activity responds to changes in geomagnetic activity. It also confirms these findings in a larger group and by using a different analysis approach; i.e., the observation of slow wave dynamics occurring in people's heart rhythms over many hours to days. In addition, it also confirms the surprising degree of synchronized activity between the slow wave dynamics in heart rhythms and changes in the Earth's time-varying magnetic field in a frequency range that includes both Schumann resonances and geomagnetic field line resonances, which have similar frequencies as the rhythms produced by human brains and hearts.
A reinvestigation of the synthesis of 1-aminoarylmethylphosphonates on the surface of alumina and novel method for the synthesis of bis[1-diethoxyphosphorylarylmethyl] amines
In 1997 we published a simple and efficient method for the synthesis of 1-aminoarylmethylphosphonates from the one-pot reaction of aromatic aldehydes, hexamethyldisilazane and diethyl phosphite. 1 In 2003 Soroka and Kolodziejczyk 2 published comments on this work; they believed that aromatic aldehydes react with diethyl phosphite and hexamethyldisilazane to give 1-(trimethylsilyloxy)-1-arylmethylphosphonates instead.
Introduction
Organophosphorus compounds have found a wide range of applications in the areas of industrial, agricultural, and medicinal chemistry owing to their biological and physical properties as well as their utility as synthetic intermediates. 3 α-Functionalized phosphonic acids are valuable intermediates for the preparation of medicinal compounds and synthetic intermediates. 4 Among α-functionalized phosphonic acids, 1-aminophosphonic acids are an important class of compounds that exhibit a variety of interesting and useful properties. The 1-aminophosphonic acids are the most important substitutes for the corresponding amino acids in biological systems. 5,6 Indeed, a number of potent antibiotics, 7 enzyme inhibitors, 8 and pharmacological agents 9 contain 1-aminophosphonic acids as well as their derivatives, notably peptides. These important compounds have been synthesized by various routes: (a) addition of a P-H function to imines and enamines, 10 (b) addition of a P-H function to nitriles, 11 (c) Arbuzov and Michaelis-Becker reactions, 12 (d) condensation of X-NH2 with acyl phosphorus species, 13 (e) Curtius and Hofmann rearrangement of substituted phosphonoacetic esters, 14 and (f) alkylation of nucleophilic precursors such as Schiff bases. 15

Surface-mediated solid phase reactions are of growing interest 16 because of their ease of set-up and work-up, mild reaction conditions, rate of reaction, selectivity, high yields, lack of solvent and the low cost of the reactions in comparison with their homogeneous counterparts. In 1997 we published unexpected results on the synthesis of 1-aminoarylmethylphosphonates in the reaction of aromatic aldehydes, hexamethyldisilazane (HMDS) and diethyl phosphite, via diethyl N-arylidene-1-amino-1-arylmethylphosphonate, on the alumina surface. In 2003 Soroka and Kolodziejczyk published comments on this reaction. They found that aromatic aldehydes react with diethyl phosphite and HMDS to give 1-(trimethylsilyloxy)-1-arylmethylphosphonates instead of 1-amino-1-arylmethylphosphonates (Scheme 1). They believed that HMDS does not react with carbonyl compounds and that dialkyl phosphites react with aldehydes (or ketones) to give 1-hydroxyalkylphosphonates. 2 As part of our efforts to explore the utility of solid phase reactions for the synthesis of organophosphorus compounds, 17 we decided to analyze and reinvestigate our reaction and the comments on it.
Results and Discussion
In contrast to Soroka and Kolodziejczyk's report 2 that HMDS does not react with carbonyl compounds, we found that the reaction of benzaldehyde (1a), as a model compound, with HMDS under solvent-free conditions in the absence of alumina leads to the long-known substance N,N'-bis(phenylmethylidene)phenylmethanediamine (2a) as the sole product (according to Scheme 2). 18 As shown in the experimental section of the article published in 1997, 1 when HMDS and aromatic aldehydes were stirred for 15 min in the presence of acidic alumina, an exothermic reaction took place. The publication of Toru et al. 19 has shown that the products must be compounds 2. Consequently, the same products 2, N,N'-bis(arylmethylidene)arylmethanediamines, are
ARKAT USA, Inc.
We examined usage of various types of alumina (acidic, basic and neutral) and also magnesia for the synthesis of 1-aminophosphonates.We found that the reaction of HMDS with benzaldehyde in the presence of diethyl phosphite using of acidic alumina gave diethyl N-(phenylmethylene)-1-amino-1-phenylmethylphosphonate (3a) as the major product.The diethyl 1-hydroxy-1-phenylmethylphosphonate was obtained as the major product in the presence of magnesia.17a A 1:1 ratio of 3a and diethyl 1-hydroxy-1-phenylmethylphosphonate obtained by using of neutral or basic alumina.
Scheme 2
It was found that the reaction of imine 3a with diethyl phosphite in the presence of a catalytic amount of acetyl chloride gives bis[1-diethoxyphosphorylphenylmethyl] amine 6a as the sole product in good yield and diastereomeric excess (Table 1). The product has been used as a chelating agent for polyvalent metal ions, particularly alkaline earth metal ions. 20 This process was successfully applied to other imines 3, as summarized in Table 1.
According to Scheme 2, diethyl N-arylmethylene-1-amino-1-arylmethylphosphonates (3) react with diethyl phosphite and a catalytic amount of acetyl chloride to afford the desired products in good yields (6b-6g in Table 1). It was suggested that in situ generation of HCl catalyzes this reaction.
In summary, for the preparation of diethyl 1-amino-1-arylmethylphosphonates we recommend the reaction of aromatic aldehydes with HMDS, followed by reaction with diethyl phosphite in the presence of alumina, to give diethyl N-arylmethylene-1-amino-1-arylmethylphosphonates (3), which can be easily hydrolyzed to diethyl 1-aminoarylmethylphosphonates. Further hydrophosphonylation of imines 3 with diethyl phosphite catalyzed by acetyl chloride affords bis[1-diethoxyphosphorylarylmethyl] amines as sole products. A simple work-up, low consumption of solvent, fast reaction rates, mild reaction conditions, good yields, and relatively clean reactions with no tar formation make our method an attractive and useful contribution to present methodologies.

General procedure for the synthesis of compounds 5a-g and 6a-g. Acidic alumina (1 g) and HMDS (1.93 g, 10 mmol) were mixed at room temperature. The aromatic aldehyde (10 mmol) was added dropwise to the mixture with stirring. After completion of the aldehyde addition, acidic alumina (2 g) was added while the resultant mixture was stirred. An exothermic reaction took place at this step, so stirring of the mixture was continued for 15 min until its temperature returned to room temperature. Diethyl phosphite (1.38 g, 10 mmol) was added to the reaction vessel and the mixture was stirred for 2 h. The reaction mixture was extracted with ether (100 ml).

Synthesis of 1-aminophosphonic acid esters (5a-g). p-TsOH.H2O (1.9 g, 10 mmol) was added to the ethereal solution, and the mixture was stirred for 3 h. The solid salt was filtered off and neutralized with NH4OH (10%). Extraction with ether (3 × 50 ml), evaporation of the solvent and chromatography on a plug of silica gel with EtOAc/n-hexane (9:1) gave the pure products as oils in 42-65% yields. All products are known and gave satisfactory spectral data in accord with the assigned structures and literature reports. 21

Synthesis of bis[1-diethoxyphosphorylarylmethyl] amines (6a-g). The ethereal extract was concentrated and the residue was chromatographed on a plug of silica gel with EtOAc/n-hexane (1:1) to give the pure product 3. The product 3 (3 mmol) was added to a mixture of diethyl phosphite (5 mmol) in dichloromethane (10 ml). Acetyl chloride (1 mmol) was added dropwise to the reaction mixture. The reaction mixture was stirred for 3 h at room temperature.
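As a quick arithmetic check of the reagent quantities in the general procedure, the quoted masses can be converted to millimoles. The molecular weights below are standard literature values, assumed by us and not quoted in the article.

```python
def mmol(mass_g, mw_g_per_mol):
    """Convert a reagent mass in grams to millimoles."""
    return 1000.0 * mass_g / mw_g_per_mol

# Approximate molecular weights (g/mol); assumed standard values.
MW_DIETHYL_PHOSPHITE = 138.10
MW_PTSOH_MONOHYDRATE = 190.22  # p-TsOH.H2O
MW_HMDS = 161.39               # hexamethyldisilazane

diethyl_phosphite = mmol(1.38, MW_DIETHYL_PHOSPHITE)  # ~10.0 mmol, matching the procedure
ptsoh = mmol(1.9, MW_PTSOH_MONOHYDRATE)               # ~10.0 mmol, matching the procedure
hmds = mmol(1.93, MW_HMDS)                            # ~12 mmol for the nominal "10 mmol" mass
```

Note that, at the assumed molecular weight, 1.93 g of HMDS corresponds to roughly 12 mmol rather than 10; if the reported mass is accurate, a modest excess of the silazane appears to be implied.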
Evaporation of the solvent and chromatography on a plug of silica gel with ethyl acetate-methanol (9:1), followed by evaporation of the solvent under reduced pressure, gave the pure products as colorless oils in 57-73% yields. All products are known and gave satisfactory spectral data in accord with the assigned structures and literature reports. 22

Diethyl {[(diethylphosphoryl)(phenyl)methyl]amino}(phenyl)methylphosphonate (6a). 22 Colorless oil (65%);

Scheme 1
Table 1 .
Reaction of imine 3 with diethyl phosphite in the presence of a catalytic amount of acetyl chloride.

All chemicals were commercial products and were distilled or recrystallized before use. NMR spectra were taken with a 250 MHz Bruker Avance instrument, with the chemical shifts being reported as δ ppm and couplings expressed in Hertz. The chemical shift data for each signal in the 1H NMR spectra are given in units of δ relative to CHCl3 (δ = 7.26) for CDCl3 solution. For 13C NMR spectra, the chemical shifts in CDCl3 and DMSO are recorded relative to the CDCl3 resonance (δ = 77.0). The chemical shifts of 31P are recorded relative to external 85% H3PO4 (δ = 0) with broad-band 1H decoupling. Silica gel column chromatography was carried out with Silica gel 100 (Merck No. 10184). Merck Silica gel 60 F254 plates (No. 5744) were used for preparative TLC.
Methicillin-Resistant Staphylococcus Aureus in Saudi Arabia: Genotypes Distribution Review
Methicillin-resistant Staphylococcus aureus (MRSA) infections in hospitals have imposed a significant burden of morbidity and mortality, and a strain on healthcare resources. Here, we review the genotype distribution of these pathogens in the Kingdom of Saudi Arabia (KSA). A PubMed literature search (until May 2014) identified 12 articles that characterized MRSA clones in KSA. Only two regions (Riyadh and Dammam) were represented in ten of the articles. Data from these articles showed that the pandemic Vienna/Hungarian/Brazilian clone (CC8/ST239-III) is the most frequent in the Saudi regions studied (Riyadh and Dammam). Several other clones, such as Barnim/UK-EMRSA-15 (CC22-IV), the Southwest Pacific clone (ST30-IV) and the European community-associated MRSA clone (CC80-IV), have been detected in the Riyadh region. A variety of MRSA clones is beginning to circulate in Saudi hospitals. Continued collection and molecular characterization of MRSA is crucial for effective prevention and treatment.
INTRODUCTION
Methicillin-resistant Staphylococcus aureus (MRSA) infections represent a major challenge to hospital microbiologists because of the emergence and spread of clones that have decreased susceptibility to many antibiotic classes. [1,2] Methicillin resistance in staphylococci is due to the acquisition of the mecA gene, which encodes an altered penicillin-binding protein (PBP2a) with low affinity for β-lactam antibiotics. In Saudi studies, the prevalence of MRSA among S. aureus and temporal increases have been shown to vary widely among the regions. [16,17] Recently, the overall estimate of the prevalence of MRSA in Saudi Arabia (KSA) was 35.6%, from a pooled estimation of 22,793 S. aureus strains from 2002 to 2012. [18] The aim of the present review was to provide an overview of the genotypes present throughout Saudi Arabia to support the use of effective treatments and to guide strategies for the control of the spread of MRSA.
GEOGRAPHY OF KSA AND DATA ACQUISITION
KSA is one of the most populous countries and the largest in the Arabian Peninsula (2 × 10 6 Km 2 ). The population of KSA is estimated around 28 million, about 20% of whom are expatriates mainly from the Indian subcontinent and Southeast Asia. Moreover, KSA hosts more than 4 million Muslim pilgrims from across the globe during the Hajj and Umra seasons. [16,19] Mass gathering of millions of Muslims for the Hajj from all over the world in the same region increases the possibility of infectious pathogens, especially MRSA strains. The above-mentioned conditions make KSA a hot spot for the collection of MRSA and its global spread.
For this review, we performed a literature search to evaluate the history and distribution of MRSA clones within KSA. We searched the PubMed/MEDLINE database of all published articles related to genotyping MRSA in Saudi Arabia until May 2014. Search terms (and combinations thereof) were: "MRSA clones," "'Methicillin resistant Staphylococcus aureus clones," "MRSA," "genetic," "molecular characterization," "typing," "genotypes," and "Saudi". Hand search of references listed in relevant articles was also carried out.
Most of the pertinent published data on genotyping MRSA came from only three regions out of a total of 13 regions in KSA. Only 12 relevant published articles reporting collected data were included: nine from Riyadh (1998-2011), one from Dammam (2001-2003), one from Jeddah (2009-2011), and one from different cities in KSA (1992-1995). A variety of genotyping methods had been used to identify MRSA clones. Some studies applied only one technique (PFGE, three articles; SCCmec, three articles). Three other articles used two techniques (SCCmec and MLST). One article used PFGE and SCCmec; another study applied PFGE and MLST techniques; and one used three techniques (SCCmec, MLST, and spa) [Table 1].
DISTRIBUTION OF METHICILLIN-RESISTANT STAPHYLOCOCCUS AUREUS GENOTYPES IN KSA
The data collected in Table 1 show that the earliest publication from KSA described the inter-hospital spread of a single MRSA clone between 1992 and 1995. Ninety-four strains of MRSA, originating from inpatients across geographically diverse regions of KSA, were genetically typed by randomly amplified polymorphic DNA, and a representative subset of the strains was analyzed by PFGE as well. However, 93% (87/94) of the isolates belonged to a single clonally related lineage of MRSA, and the seven other isolates differed only slightly from the clonal type. [20] Eight years later, in contrast to the Romanian strains, MRSA strains collected from institutions in Dammam (n = 60) and Dhahran (n = 8) between April 2001 and May 2003 shared a common PFGE pattern and MLST type (ST239). This indicated that a single epidemic clone was spreading in Saudi Arabia. The MRSA strains were differentiated using PFGE of SmaI DNA macrorestriction fragments. A comparison of PFGE fingerprints identified clusters of strains with clear segregation branches according to the hospital they had originated from. However, MLST revealed that all strains except two (ST5 and ST254) shared the ST239 genotype. Even the relatively close PFGE-based dendrogram indicated a diversity of ST239. [21] Data based on evolutionary patterns and genotypic characteristics again indicated that a single epidemic clone of MRSA was widespread in Saudi Arabia compared with other Asian countries. In that study, five MRSA strains (from King Khalid University Hospital, Riyadh, during 1998-2003) were analyzed by MLST and SCCmec typing. The data indicated that all five MRSA strains belonged to a single epidemic clone, ST241 (a single-locus variant of ST239), of CC239 with SCCmec IIIA. [22] A description of SCCmec elements carried by 19 MRSA strains isolated in King Khalid University Hospital, Riyadh, from 1998 to 1999 was reported.
Eighteen MRSA strains were classified as SCCmec 3A (III) type, and the frequency of type SCCmec 2B (IV) was very low (1/19). Subsequently, the genotypes of five representative isolates were investigated by MLST. The four dominant MRSA strains belonged to CC8 (ST239), and one minor strain of CC5 (ST5) was reported. [23]

In the same way, three major clusters were revealed from the 30 MRSA strains collected from major hospital laboratories and public health centers in Riyadh in 2009 and typed using PFGE. The first, which included 17 strains, was subdivided into four groups, and the second consisted of 12 strains, while the third cluster contained only one strain. [25] Other MRSA strains have been described with genes encoding Panton-Valentine leukocidin (PVL) and harboring SCCmec type IV, traits that are usually associated with community-associated methicillin-resistant Staphylococcus aureus (CA-MRSA). MRSA genotyping data showed a much lower PVL prevalence of only 8% (three out of 37) in SCCmec IV strains from outpatient clinics in King Khalid University Hospital, Riyadh, in 2007. [26] In another report, Moussa and Hessan found that only 18 strains (13.33%) recovered from skin and soft tissue infections were positive for PVL and SCCmec type IV. [27] Furthermore, in an investigation of a CA-MRSA outbreak among healthy neonates at a tertiary care teaching hospital in Riyadh between October and November 2009, 10 MRSA isolates (nine infants and one mother) were characterized using PFGE and SCCmec typing. Although all 10 MRSA isolates harbored the SCCmec IV type, nine MRSA isolates shared the same PFGE pattern while the remaining one was different. [28] Recently, 101 clinical isolates of MRSA, taken from a Jeddah hospital and health centers from August 2009 to May 2011, were investigated genetically by SCCmec typing and detection of genes encoding PVL.
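The proportions quoted in the studies above can be reproduced with one line of arithmetic each; the short check below makes the rounding explicit (counts are taken from the text, the helper name is ours).

```python
def percent(part, whole):
    """Proportion expressed as a whole-number percentage."""
    return round(100.0 * part / whole)

single_lineage = percent(87, 94)   # 93, matching the reported "93% (87/94)"
pvl_in_sccmec_iv = percent(3, 37)  # 8, matching the reported "8% (three out of 37)"
```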
SCCmec V (43 strains) and SCCmec III (39 strains) were the most predominant types, and only 38 strains (37.6%) harbored the PVL gene. Some minor strains belonged to SCCmec IVa and IVc (16 and 3 strains, respectively), but no SCCmec types I, II, IVb or IVd were detected. [29] The first MRSA typing data from Saudi Arabia were . Surprisingly, the prevalence of PVL genes was significantly higher (54.21%). [30] Alreshidi et al. provided the first data on MRSA genotypes and virulence gene profiles in cancer patients. A total of 120 MRSA isolates from cancer and noncancer patients in the Armed Forces Hospital in Riyadh, in February and August 2010, were investigated using SCCmec, MLST, spa typing, and virulence gene detection. SCCmec type III was detected in all MRSA isolates, but no PVL gene was detected. According to spa typing, the MRSA strains were clustered into six groups, including cluster 1 spa CC 037 (66 strains), cluster 3 spa CC 376 (25 strains), and cluster 2 spa CC 790 (9 strains), as well as some minor clusters. Four novel spa types (local spa types) were detected and identified as t7604, t8506, t8507, and t8855. The MRSA strains were classified into two different clonal clusters and three singletons. Group 1, including ST239 (70 strains), ST08 (7 strains), and ST241 (3 strains), was the most prevalent, followed by group 2 (ST22 and ST217) and five minor singleton groups (ST182, ST71, ST88, ST30, and ST80). [31]

PREDOMINANT METHICILLIN-RESISTANT STAPHYLOCOCCUS AUREUS CLONES IN KSA
Although few published data on MRSA genotyping in KSA are available (12 published papers), what is presented above gives an overview of the MRSA genotypes currently circulating in KSA and allows a comparison to be made with other countries. In this review, we found that the pandemic Vienna/Hungarian/Brazilian clone (CC8/ST239-III) and its variants continue to circulate in Dammam and Riyadh. ST239-III, mainly hospital-associated, was the largest group detected in most studies from 2001 to 2013. Another clone, the pediatric clone (CC5), has been found in KSA hospitals, though cases are rare. In recent years (2012-2013), many diverse strains of MRSA have been identified in the
Spectral Lags of Gamma-Ray Bursts from Primordial Black Hole (PBH) Evaporations
Primordial Black Holes (PBHs), which may have been created in the early Universe, are predicted to be detectable by their Hawking radiation. PBHs with an initial mass of 5.0 × 10^14 g should be expiring today with a burst of high energy particles. Evaporating PBHs in the solar neighborhood are candidate Gamma-Ray Bursts (GRBs) progenitors. We propose spectral lag, which is the temporal delay between the high energy photon pulse and the low energy photon pulse, as a possible method to detect PBH evaporation events with the Fermi Gamma-ray Space Telescope Observatory.
INTRODUCTION
In the present era black holes with mass less than a few solar masses are not expected to be created by any known process. Early in the Universe, however, primordial black holes (PBHs) could have been created with masses ranging from the Planck mass (10^−5 g) to as large as 10^5 M_⊙, or larger. PBH formation scenarios include the collapse of overdense regions due to primordial inhomogeneities, especially those generated by inflation; a softening of the equation of state or bubble collisions at cosmological phase transitions; and the collapse of oscillating cosmic string loops or domain walls. (For a recent review see [1].)
Hawking discovered that, due to thermodynamical requirements and quantum-gravitational effects, all black holes continually radiate particles thermally [2,3]. The Hawking temperature T is inversely proportional to the black hole mass M. Thus, as the black hole radiates and loses mass, its temperature increases. It can be shown that PBHs with an initial mass of ∼5.0 × 10^14 g should be expiring today in a burst of high energy particles [4]. In this context evaporating PBHs in the solar neighborhood are candidate Gamma-Ray Bursts (GRBs) progenitors.
HAWKING RADIATION
According to quantum theory, virtual particles are continuously created and destroyed in the vacuum. Heuristically, one way to interpret the Hawking radiation process invokes the strong gravitational field gradient near the event horizon of the black hole. This field gradient can separate particle-antiparticle pairs. In some cases, one particle falls with apparent negative energy into the black hole, while the remaining one has sufficient positive energy to escape to infinity. As a result, some particles can come out of the vacuum as real particles by obtaining energy from the black hole.
Black holes will predominantly radiate particles whose de Broglie wavelength (λ) is roughly of the order of the Schwarzschild radius of the black hole (R_s ≡ 2GM/c^2), so the energy of the radiated particles, and hence the Hawking temperature, can be estimated from this wavelength [5]. From the thermodynamic Stefan-Boltzmann relation, the Hawking luminosity can be estimated as the area of the black hole horizon times the radiation intensity (with a coefficient proportional to the number of degrees of freedom of the radiated particles). The time remaining until the black hole expires by radiating away its mass is then roughly the remaining rest-mass energy divided by the luminosity. The latter two relations are modified by the contributions of various particle species producing Hawking radiation and subsequent decay processes [6].
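The estimates just described can be written out explicitly. The following is a reconstruction of the standard order-of-magnitude Hawking relations (coefficients of order unity suppressed, except in the exact temperature formula), not necessarily the exact expressions of the original paper:

```latex
E \sim \frac{\hbar c}{\lambda} \sim \frac{\hbar c}{R_s} = \frac{\hbar c^{3}}{2GM},
\qquad
k_B T = \frac{\hbar c^{3}}{8\pi G M} \;\propto\; M^{-1},
\qquad
L \sim 4\pi R_s^{2}\,\sigma T^{4} \;\propto\; M^{-2},
\qquad
\tau \sim \frac{M c^{2}}{L} \;\propto\; M^{3}.
```

Setting τ equal to the present age of the Universe in the last relation is what singles out the initial mass of ∼5.0 × 10^14 g quoted above.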
PHOTON SPECTRA FROM PBHS
A black hole should directly emit those particles which appear non-composite compared to the wavelength of the radiated energy (or equivalently the black hole size) at a given temperature. When the temperature of the PBH exceeds the Quantum Chromodynamics (QCD) confinement scale (250-300 MeV), quarks and gluons will be emitted [7]. These particles should fragment and hadronize, analogous to the jets seen in accelerators, as they stream away from the black hole [4]. The jets will decay on astrophysical timescales into photons, neutrinos, electrons, positrons, protons and anti-protons. The particle spectra of the direct as well as the decay products are shown in Fig. 1. The time (t) left until the PBH completes its evaporation is calculated in Table 1 for a range of black hole temperatures using the method of reference [6].
A PBH burst should produce n photons in a detector of effective area A_eff and angular resolution Ω if it is closer than a corresponding maximum distance, and should be detectable above the extragalactic gamma-ray background at energy E if it is closer than a second limiting distance. If PBHs are clustered in our galaxy with local density enhancement factor f_local, then the number presently expiring (i.e. the PBH evaporation rate R) is bounded by [8] R ≤ 10^−7 f_local pc^−3 yr^−1.
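The M³ scaling of the lifetime allows quick estimates. The sketch below is illustrative only: it assumes τ ∝ M³ normalized so that the reference mass 5.0 × 10^14 g evaporates in roughly 13.8 Gyr (the approximate present age of the Universe), ignores the particle-physics corrections of reference [6], and uses function names of our own.

```python
# Hedged sketch: scaling estimates for PBH Hawking evaporation.
# Assumes tau ∝ M^3, normalized so that M0 = 5.0e14 g expires in
# T0 ≈ 13.8 Gyr (approximate present age of the Universe), as stated
# in the text; species-dependent corrections [6] are ignored.

M0_G = 5.0e14      # reference initial PBH mass [g]
T0_GYR = 13.8      # approximate age of the Universe [Gyr]

def evaporation_time_gyr(mass_g: float) -> float:
    """Time for a black hole of the given initial mass to evaporate [Gyr]."""
    return T0_GYR * (mass_g / M0_G) ** 3

def hawking_temperature_ratio(mass_g: float) -> float:
    """Hawking temperature relative to the reference PBH (T ∝ 1/M)."""
    return M0_G / mass_g

if __name__ == "__main__":
    print(evaporation_time_gyr(5.0e14))   # reference mass: expires "today"
    print(evaporation_time_gyr(1.0e15))   # 2x the mass -> 8x the lifetime
```

A PBH twice as massive as the reference mass thus outlives the present Universe by nearly an order of magnitude, which is why only a narrow initial-mass window contributes to bursts observable now.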
DISCUSSION
With the launch of the Fermi Gamma-ray Space Telescope observatory on June 11, 2008, we have a new opportunity to examine the high energy band pass which is suited to detect PBH evaporation events. The discovery of a PBH would provide a unique probe of many areas of physics including the early Universe, gravitational collapse mechanisms, dark matter, and quantum gravity. Positive PBH burst detection should also elucidate extensions of the Standard Model, e.g. the existence of the Higgs boson or of supersymmetry.
One approach to identify a PBH evaporation event directly is to look at the spectrum spanning from MeV to GeV energy scales. However, getting enough photons from these fast transient events to make a spectrum can be difficult. In contrast, spectral lag measurements merely require a light curve in two energy bands and do not need many counts. Therefore we can measure the spectral lag even for weak events that last for very short time scales. Hence measuring spectral lag is a possible method to identify PBHs. Qualitative analysis of the spectral lag of PBHs shows positive to negative evolution with increasing energy. Work is in progress to calculate quantitative values for the PBH spectral lags, including the low energy inner bremsstrahlung contribution [9].
"year": 2009,
"sha1": "260a1adcd6b0c924b85e241c655945729c205207",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0901.0542",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "81a465974e9d1df47acf9555481e8deeff377757",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Soils recovered from disaster debris
Utilization of the soils recovered from the disaster debris is one of the geoenvironmental challenges related to the 2011 East Japan earthquake and tsunami. The disaster debris contained a significant amount of soil fractions, and these soils are expected to be separated from the disaster debris and utilized in re-construction works in the areas affected by the disaster. In this paper, the generation and treatment of disaster debris are briefly summarized. Mechanical properties of recovered soil were discussed from the viewpoint of composition, compaction, and compressibility in order to utilize it in geotechnical applications. A lower amount of admixed combustible matter can lead to a higher quality of recovered soil as a geo-material. The difficulty of characterizing the recovered soil and feasible strategies for utilization are also briefly discussed based on the experimental results.
INTRODUCTION
The earthquake of magnitude 9.0 and subsequent tsunami affected coastal areas in eastern Japan on March 11, 2011, and caused several serious geoenvironmental issues, mainly in the coastal area of eastern Japan (Inui et al. 2012). These geoenvironmental issues include: (1) generation of disaster debris and tsunami deposits; (2) salt damage to agricultural lands; (3) land subsidence; and (4) subsurface contamination with radionuclides caused by the Fukushima Daiichi nuclear disaster.
Treatment of the disaster debris in the affected areas was conducted and completed in most sites by the end of March 2014. Since the disaster debris and tsunami deposits included a significant amount of soil fractions, proper treatment to recover these soils and utilization of such recovered soils in geotechnical applications in re-construction works have strongly been encouraged.
The Japanese Ministry of Environment (MOE) is currently preparing a dedicated principle that enables proper disaster waste treatment in forthcoming huge disasters such as a Nankai Trough earthquake or a Tokyo Inland earthquake. According to an estimation in an interim report of the "Grand Design for Countermeasures to Disaster Debris in Disasters" released by MOE in March 2014, the amount of disaster debris in a Nankai Trough earthquake may reach a total of 322 million tons, approximately 11 times that of the East Japan earthquake and tsunami, and 27 million tons of soil including tsunami deposits will be generated (MOE 2014a). To treat such a considerable amount of disaster debris, careful simulation of disaster damage, plans for a proper waste treatment system, and the initial response for smooth treatment have to be considered in advance. Even in such catastrophes, utilization of soil fractions after proper treatment is required by the above-mentioned grand design of MOE. This paper discusses strategic utilization of recovered soils through an overview of disaster waste treatment in this earthquake and several laboratory tests, in anticipation of forthcoming huge disasters.
GENERATION AND TREATMENT OF DISASTER DEBRIS
Generation and treatment of such a huge amount of disaster debris were an unprecedented experience. Immediately after the earthquake and subsequent tsunami in 2011, the Japanese government estimated that approximately 20 million tons of disaster debris and 10 million tons of tsunami deposits had been generated through this disaster, mostly in Iwate, Miyagi, and Fukushima Prefectures. Tsunami deposits are mainly soil transported by the tsunami and, similarly to other disaster debris, also require proper treatment. It was geographically and economically unrealistic to construct new landfill facilities with sufficient capacities to accept these wastes, since their amounts were several times larger than the annual generation of municipal solid waste in each local municipality. For example, disaster debris generation in Iwate prefecture corresponded to 12 years of municipal solid waste (MSW) generation in the whole prefecture and, in a more extreme case, in Rikuzentakata city, Iwate prefecture, to 280 years of local MSW generation. Utilization and minimization of the disaster debris were, therefore, required. The national government decided that proper treatment and utilization of debris and tsunami deposits should be completed by March 2014 (MOE 2011a, 2011b). A significant fraction of these wastes corresponds to tsunami deposit soils. Fig. 1 shows the fractions of generated waste materials in Iwate and Miyagi prefectures (MOE 2014b). Although combustible and incombustible mixtures must be disposed of as waste, inorganic fractions, such as tsunami deposits, concrete, and asphalt, constituted approximately 65% of the disaster waste.
A common system of disaster waste treatment can be illustrated as in Fig. 2, while the detailed processes of treatment vary by municipality. First, the debris was removed from the affected areas and transported to primary storage sites. Only rough separation, such as separation using operation vehicles and manual separation, was conducted at the primary storage sites. Then, advanced treatment was conducted at the secondary storage and treatment sites, which were set up at one or two sites per municipality (approximately 30 sites in Miyagi and Iwate prefectures). The system of advanced treatment varies from site to site, depending on the given conditions of the disaster debris to be treated (generated amount of disaster debris, primary separation and storage, type of original soil, etc.), the site environment (area limitation, air pollution risks, the number of primary storage sites, etc.), and local resources (waste incinerators, cement plants, etc.). Accordingly, the properties of recovered soils obtained through these advanced treatments are also variable. In order to accelerate the utilization of recovered soil, the Japanese Ministry of Land, Infrastructure, Transport and Tourism (MLIT) established two technical guidelines for constructing (1) parks and green spaces as redundancy zones against tsunamis, in which embankments are constructed using disaster waste, and (2) fill embankments in areas where significant ground subsidence occurred due to the earthquake (MLIT 2011a, 2011b).
Composition of recovered soil
Eight different recovered soils and residues after treatment (soil-like fine fractions obtained after the advanced treatment) were collected at actual treatment sites in Iwate prefecture in September 2012. Fig. 3 illustrates the compositions of the recovered soils in raw condition, re-sieved through a 9.5-mm screen, and re-sieved through a 4.75-mm screen. In this graph, "recovered soils" indicates the soils after rough treatment, while "residues" or "residue after treatment" indicates the soil-like fine fractions after advanced treatment. Since the disaster debris was crushed into smaller fractions by the advanced treatment, the residues after advanced treatment basically included more wood fractions than the recovered soils. All recovered soils and residues were generated after sieving through a 20-mm screen, except in Noda town (15 mm). The compositions of the recovered soils are similar to each other, while those of the residues vary by site. Every recovered soil consists of 80 to 90% soil, as the sum of soil fractions both larger and smaller than 2 mm. For example, residues from Yamada town contained low fractions of combustible and incombustible matter larger than 2 mm, which may indicate that concise separation was achieved by the system installed in this town.
The combustible matter content, which may affect the material properties as a soil, decreased with smaller maximum diameter regardless of the type and area of the generated material; that is to say, the quality of the recovered soil can become increasingly similar to that of real soil by re-sieving with smaller screens, from the viewpoint of composition.
Compaction characteristics
Proctor tests were conducted on actual recovered soils and residues in raw condition, under 9.5 mm, and under 4.75 mm, respectively, to evaluate their compaction characteristics, as shown in Fig. 4. The recovered soils can be expected to compact well, because their compaction curves are located relatively higher than those of the residues after advanced treatment and have a clear peak, owing to fewer combustible matters such as wood.
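The quantities behind such compaction curves can be sketched as follows. This is a minimal illustration assuming the standard relation between dry density, wet (bulk) density and water content, ρ_d = ρ_wet/(1 + w); the function names and sample points are our own and are not measurements from this study.

```python
# Hedged sketch of reducing Proctor compaction data: each compacted
# specimen's dry density follows rho_d = rho_wet / (1 + w), and the peak
# of the compaction curve gives the maximum dry density and the optimum
# water content. Sample points below are illustrative, not study data.

def dry_density(rho_wet: float, w: float) -> float:
    """Dry density [g/cm^3] from wet (bulk) density and water content w (decimal)."""
    return rho_wet / (1.0 + w)

def compaction_peak(points):
    """Return (optimum water content, maximum dry density) from (w, rho_wet) pairs."""
    curve = [(w, dry_density(rho_wet, w)) for w, rho_wet in points]
    return max(curve, key=lambda p: p[1])

if __name__ == "__main__":
    trial = [(0.10, 1.90), (0.14, 2.05), (0.18, 2.10), (0.22, 2.08)]
    w_opt, rho_d_max = compaction_peak(trial)
    print(w_opt, round(rho_d_max, 3))
```

A curve with a clear peak, as reported for the recovered soils, corresponds to a well-defined maximum of this reduced data.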
By re-sieving with a smaller mesh, the compaction characteristics of the residues can be improved. For example, the maximum dry density of the residue collected from Otsuchi town was improved from approximately 0.75 g/cm3 to 1.00 g/cm3 by re-sieving with a 4.75-mm mesh. The compaction characteristics of the recovered soils were almost equal with or without re-sieving, because the improvement effect was relatively small due to the limited amount of coarse wood chips in the original material. Fig. 5 illustrates unconfined compression strength versus ignition loss at 330˚C (IL330) after the experiment. As detailed later, since organic matter as well as crystalline and adsorbed water in soil can be volatilized at 750±50˚C, materials were heated at 330˚C to evaluate the combustible matter content in this study. The simulated recovered soil used in this series of tests was prepared by mixing commercial granite soil with wood chips separated from actual recovered soil collected from Iwate prefecture, in order to control the wood content of the mixture from 0% to 15.0% on a dry mass basis. The simulated recovered soils were prepared with two different wood sizes: one mixture with wood chips smaller than 2 mm, and another with wood chips smaller than 4.75 mm. As indicated in this figure, higher IL330, i.e. wood content, led to smaller unconfined compression strength regardless of wood size, while the actual recovered soil at Yamada and the residue at Otsuchi showed rather high strength, probably because of cementation by calcium carbonate generated by seawater inundation. From the viewpoint of wood size, the simulated recovered soil containing smaller wood chips showed higher strength because the soil particles can contact and interlock with each other. The compression index increased with increasing IL330, as shown in Fig. 6, due to the softness of the wood fractions.
Comparing results with and without water immersion during the consolidation steps, specimens with water immersion showed higher compression indexes than those without, because the wood fractions became more pliant by absorbing water.
MEASUREMENT OF WOOD CONTENT IN RECOVERED SOILS
Since the wood content affects the material properties of the recovered soil as discussed above, its measurement is significant. In this research, the composition of actual recovered soils and residues was evaluated by hand only for fractions over 2 mm. As summarized in Table 1, the definition of ignition loss varies by area, although the Japan Society of Material Cycles and Waste Management suggested an ignition loss at 600±25˚C of 5% as a criterion for utilization of tsunami deposits. Since the ignition loss at 600±25˚C is intended to quantify the amount of residue after incineration of waste, it is not reasonable to judge the quality of recovered soil and tsunami deposits as geo-materials by this value. Also, as described above, not only organic matter but also crystalline and bound water affect the ignition loss at 750±50˚C commonly used in the geotechnics area. As a previous study verified that ignition losses at 600±25˚C and 750±50˚C overestimate the combustible matter content, it is essential to establish a method to quantify it. In this study, materials were heated at 330˚C because the combustion point of organic matter was observed at around 330˚C in the previous study.
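As a minimal sketch of the quantity used throughout this section, ignition loss is simply the relative mass lost on heating. The function name and the masses below are our own illustrations, not data from the study; only the 330˚C temperature and the 5% criterion follow the text.

```python
# Hedged sketch: ignition loss, used in the text as a proxy for combustible
# (wood) content -- mass lost on ignition divided by the initial dry mass.
# Heating temperature (330 C here) must be stated with the result, since the
# measured loss depends on it, as discussed in the text.

def ignition_loss_percent(mass_before_g: float, mass_after_g: float) -> float:
    """Ignition loss [%] = mass lost on ignition / initial dry mass * 100."""
    return (mass_before_g - mass_after_g) / mass_before_g * 100.0

if __name__ == "__main__":
    # A specimen losing 2.5 g out of 50 g sits right at the suggested 5% criterion.
    print(ignition_loss_percent(50.0, 47.5))
```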
UTILIZATION STRATEGY FOR FORTHCOMING DISASTERS
In this paper, the mechanical properties of recovered soils were discussed based on experimental results from the viewpoint of the effects of re-sieving, wood content, and wood size. Since the combustible matter content can be reduced by re-sieving with a smaller screen size, the recovered soil can be utilized in various applications by re-sieving to a proper maximum diameter.
The Japanese Geotechnical Society established the Technical Committee on Recovered Geo-Materials in 2013 with support from the National Institute for Environmental Studies and the Mud Recycling Association, and released the "Guideline for Utilization of Geo-Materials Recovered from Disaster Debris" to tie different institutions and sections together to promote the effective use of soils, whether recovered, excavated, or new, as shown in Fig. 7. In this guideline, three important concepts are presented: (1) construction of resilient infrastructures; (2) promotion of utilization of the recovered geo-materials; and (3) optimization of combined projects, not single projects, in the area. For strategic utilization, the necessary volume and quality of geo-material in each application should be determined as quickly as possible after a disaster in terms of material balance.
"year": 2016,
"sha1": "a85a48f6bd099a7b6bfdcf628dd4a99554937ffe",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jgssp/2/54/2_JPN-097/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "af9903bf0162ac23bede386ed754130ea8b9133e",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Geology"
]
} |
The Edinburgh Postpartum Depression Scale: Stable structure but subscale of limited value to detect anxiety
Purpose The Edinburgh Postnatal Depression Scale (EPDS) aims at detecting postpartum depression. It has been hypothesized that a subscale (items 3, 4, 5) may detect anxiety. The aim of this study is to assess whether this EPDS anxiety subscale is present in a community-based dataset, and if so, to assess its validity and stability during the first six months postpartum. Methods We obtained EPDS data of a community sample of 1612 women at 1 month, with follow-up at 3 and 6 months, postpartum (Post-Up study). We performed an exploratory factor analysis on the EPDS forcing two- and three-factor solutions. We assessed the correlations of the extracted factor subscales and the total EPDS with the short form of the STAI (STAI-6). We examined the stability of the identified factors by means of a confirmatory factor analysis (CFA), using the EPDS data collected at 3 and 6 months postpartum. Results Both the two- and three-factor solutions contained a hypothesized anxiety subscale of items 3, 4, 5 and 10, and fitted well with the 3- and 6-month EPDS data, with CFI and TLI values > .99 and RMSEA and SRMR values < .035 and < .045, respectively. The subscale's Pearson correlation with the STAI-6 was moderate: .516, compared to .643 for the total EPDS. Conclusions The factor structure of the EPDS is stable across the first six months postpartum, and includes the subscale assumed to represent anxiety. However, this subscale as well as the total EPDS correlate only moderately with anxiety criteria. Using the EPDS thus does not imply adequate screening for anxiety.
Introduction
In the postpartum period, both depression and anxiety frequently occur, with reported meta-analysis period prevalence rates of 19.2% for major and minor depression (0-3 months postpartum) [1], and 13.2% for anxiety (0-24 weeks postpartum) [2]. The co-occurrence of depression and anxiety seems to be high; Fallah-Hassani et al. reported meta-analysis prevalence rates of 3.5 to 9.2% in the first 24 weeks postpartum [3]. Comorbidity of depression and anxiety is associated with more persistent depression [4,5], which increases the risk of negative consequences for the offspring [6,7]. Therefore, adequate recognition and treatment of both depression and anxiety are essential. However, until now interventions focusing on postpartum maternal mental wellbeing have mainly addressed postpartum depression (PPD) [8].
A key step in addressing maternal mental disorders in the postpartum period is early detection. Primary care settings usually make use of the Edinburgh Postnatal Depression Scale (EPDS) [9] to screen for PPD [10]. Though the EPDS was developed to detect PPD, many studies of its structure detect two or three factors, as recently summarized in overviews by Coates et al. and Kozinsky et al. [11,12]. Interestingly, the majority of the factor solutions found contained a subscale formed by three items (3, 4 and 5), interpreted as being an anxiety subscale, even though evidence on the total number of factors and item allocation is inconclusive. This hypothesized anxiety subscale, named the EPDS-3A by Matthey [13], might be of clinical interest when considering screening for anxiety along with PPD. However, evidence for the validity of the EPDS-3A to detect anxiety is limited, provided by studies with small or selected populations [13][14][15]. The same limited evidence applies to the postpartum stability of the subscale, with only one study in a community-based sample [12] finding a stable structure at two postpartum intervals, making conclusions on clinical use rather premature. Therefore, the aim of this study is to assess whether the hypothesized EPDS anxiety subscale is present in EPDS data of a large community-based sample, and if so, to assess whether this subscale enables measurement of anxiety in addition to depression, and is stable across the first six months postpartum.
Procedures and sample
We used data of the Post-up study, a study on the effectiveness of repeated screening for PPD with the EPDS, compared to care-as-usual in well-child care. The current study was limited to data on the intervention region. Procedures, including details on enrollment and exclusion criteria and on data collection, are fully described elsewhere [16]. In the intervention region, 4275 women with a newborn child visiting the participating well-child care centers in the inclusion period were eligible for enrollment. Informed consent was obtained from 2265 mothers, of whom 1843 completed the baseline assessment (3 weeks postpartum). Prior to their visit to the well-child care center at 1, 3 and 6 months, intervention mothers were asked to fill in a hardcopy version of the EPDS. During their consultations, well-child care professionals used the EPDS results, and afterwards returned the anonymized EPDS forms to the research team for further analysis. Data of mothers with a completed baseline assessment and at least one EPDS returned were used in this study, resulting in a sample of 1612 women, i.e. a retention of 71.1%.
Measures
The Edinburgh Postnatal Depression Scale is a 10-item self-report measure, developed specifically for use in community samples of postpartum mothers [9]. By choosing one of four responses (scored 0 to 3), women can indicate the extent to which each statement corresponds to their mood over the past 7 days. The sum of the item scores forms the total score, with higher scores implying more depressive symptoms. The Dutch version was validated in 1992 [17], showing adequate concurrent validity and a standardized Cronbach's alpha of .82.
Anxiety level was measured at baseline assessment at 3 weeks postpartum with the 6-item short form of the state scale of the Spielberger State-Trait Anxiety Inventory (STAI-6) [18]. For each item (calm, tense, upset, relaxed, content and worried) the experienced current status is indicated on a 4-point scale. The Dutch version has been shown to have good reliability (Cronbach's alpha .83) and validity (correlation with the STAI full version: .95) [19].
Background characteristics, measured at 3 weeks post-partum, concerned demographic characteristics of the mother (age, native country, living in an urban area, educational level, employment, single mother); pregnancy characteristics (complications, preterm birth, firstborn); history of depression; and breastfeeding of the child.
Statistical analysis
First, we described the sample. Second, we examined the suitability of our data for structure detection by performing the Kaiser-Meyer-Olkin Measure of Sampling Adequacy test (KMO) and Bartlett's test of sphericity. Third, we assessed the factor structure of the EPDS and whether in mothers one month postpartum we could indeed identify an anxiety subscale, in addition to a depression subscale. We did so by assessing the factor structure of the EPDS, using an Exploratory Factor Analysis (EFA) with maximum likelihood extraction and oblique rotation (direct oblimin) [20,21], based on a polychoric correlation matrix. In this EFA we forced two- and three-factor solutions as parallel analyses. We used a polychoric correlation matrix because of the skewness of the distribution of answer categories of the EPDS items. We evaluated the EFAs based on eigenvalues, total amount of variance explained, factor loadings and Cronbach's alpha.
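Cronbach's alpha, used above to judge subscale reliability, can be sketched in pure Python. This is an illustrative implementation of the standard formula α = k/(k−1) · (1 − Σσᵢ²/σ²_total), not the authors' SPSS/R pipeline, and the toy responses are not study data.

```python
# Hedged sketch: Cronbach's alpha for a set of items.
# `rows` is a list of respondents, each a list of k item scores.
# Population variances are used throughout, which cancels consistently.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])
    item_vars = [variance([row[j] for row in rows]) for j in range(k)]
    total_var = variance([sum(row) for row in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

if __name__ == "__main__":
    toy = [[0, 0, 1], [1, 1, 1], [2, 2, 2], [3, 2, 3], [1, 1, 0]]
    print(round(cronbach_alpha(toy), 3))
```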
Fourth, we assessed whether one of the extracted factor subscales indeed measured anxiety, by calculating the Pearson correlations of the subscale scores and of the total EPDS score with the STAI-6. In addition, we computed the area under the receiver operating characteristic (ROC) curve (AUC) for both the anxiety-subscale and the total EPDS scores, against the STAI-6 (cut-off ≥ 42, prorated score) [22,23].
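The two validity statistics used in this step can be sketched in pure Python: the Pearson correlation, and the AUC computed as the normalized Mann-Whitney statistic (equivalent to the area under the ROC curve). The respondents and scores below are illustrative, not study data; only the ≥ 42 cut-off follows the text.

```python
# Hedged sketch of the validity statistics described above.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def auc(scores, labels):
    """AUC of `scores` for predicting binary `labels` (ties count 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0 for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

if __name__ == "__main__":
    subscale = [1, 2, 2, 4, 5, 7, 8, 9]          # hypothetical subscale scores
    stai6    = [30, 34, 33, 40, 41, 45, 44, 50]  # hypothetical prorated STAI-6
    anxious  = [s >= 42 for s in stai6]          # cut-off from the text
    print(round(pearson_r(subscale, stai6), 3))
    print(auc(subscale, anxious))
```

Dichotomizing the criterion, as in the AUC, discards information relative to the Pearson correlation, which is one reason both statistics are reported.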
Finally, we assessed the stability of the EPDS structure, i.e. its measuring of both depression and anxiety, across the first six months postpartum. We did so by determining whether the structure of the EPDS at 3 and 6 months differed from that at 1 month, using CFA. Items were fixed on the factor with the highest loading. The fit indices reported are Chi-square (with df and p), the comparative fit index (CFI), the Tucker-Lewis fit index (TLI), the root mean square error of approximation (RMSEA) (with 90% confidence interval (CI) and p) and the Standardized Root Mean Square Residual (SRMR). CFI and TLI values greater than .95, RMSEA < .06 and SRMR < .08 were considered indicative of good fit, preferably in combination [24,25]. We performed data analyses using SPSS 24 and R with the lavaan package [26].
Background characteristics
Background characteristics of the sample are presented in Table 1. National demographic data of the Dutch population of 2013 show comparable characteristics for mean age at giving birth (31.0 years), first-born child (46%) and medium-high education (84.7% for all women aged 25 to 45 years) [27].
Of the total sample of 1612 mothers, from whom at least one EPDS had been returned to the research team, 1339 mothers filled in an EPDS at 1 month (SD 1.1 weeks), 1272 at 3 months (SD 1.7 weeks) and 1040 at 6 months (SD 1.5 weeks). Mean EPDS scores were 3.7 at 1 month, 2.8 at 3 months and 2.7 at 6 months.
Factor structure of EPDS at one month post-partum
The EPDS data at one month postpartum were found suitable for factor analysis with a KMO statistic of .91 and a significant Bartlett's test of sphericity (p< 0.001). Table 2 shows the outcomes of the EFA with forced two -and three-factor solutions. Both the two-and three-factor solutions resulted in a factor formed by items 3, 4, 5 and 10, labeled as 'anxiety subscale'. In the two-factor solution the other factor was formed by the remaining items 1, 2, 6, 7, 8, 9, labeled 'two-factor depression subscale'. In the three-factor solution these items were split up in a subscale formed by items 1 and 2, labeled the 'three-factor anhedonia subscale', and a subscale formed by items 6,7, 8, 9, labeled the 'three-factor depression subscale'. In both solutions item 10 presented with low loadings and minimal cross loadings. This was also the case for item 6 in the three-factor solution. Eigenvalues ranged from 1.85 to 3.90, and resulted in a total variance explained of 60.7% for the two-factor solution and 64.4% for the three-factor solution. Cronbach's alphas for the two-and three-factor solutions varied from .61 to .79, implying acceptable reliability. Correlations between the factors in the factor models can be found in S1 and S2 Figs.
Correlations of total EPDS and subscales with STAI-6
The correlation with the STAI-6 (maximum administration interval of 7 days (N = 550)) was strongest for the total EPDS (Pearson correlation .643). Moreover, the correlation of the STAI-6 with the two-factor depression subscale was stronger (.605) than that with the anxiety subscale (.516). The three-factor subscales yielded correlations with the STAI-6 of .520 (anhedonia subscale) and .565 (depression subscale). Similar correlations resulted from including more mothers by enlarging the maximum administration interval between EPDS and STAI-6 to 7 weeks (N = 1256), and from leaving item 10 out of the anxiety subscale. The AUC for the anxiety subscale was .729, versus .811 for the total EPDS. Table 3 shows the extent to which the two- and three-factor models fit the EPDS data collected at three and six months postpartum. CFI and TLI values > .99 and RMSEA and SRMR values < .035 and < .045, respectively, indicate good fit for both models. The three-factor model found in the EFA performed best. Omitting item 10 from the CFA resulted in comparable outcomes (S1 Table).
Discussion
During our factor structure analysis of the EPDS data, collected in a large community sample of postpartum women, we found the EPDS to have a subscale formed by items 3, 4, 5 and 10, in both the two-and three-factor solutions. This hypothesized anxiety subscale was stable across the first six months postpartum. We further found only a moderate correlation of this subscale with the STAI-6 as criterion for anxiety, at one month postpartum. Correlations with the STAI-6 were stronger, though still moderate, for the total EPDS, and also for the depression subscale from both the two-and three-factor solutions.
Findings compared to current evidence
The presence of a subscale containing EPDS items 3, 4 and 5 in our EPDS factor structure analysis confirms previous findings from comparable studies with a large community sample and timing of the EPDS within 4-6 weeks postpartum [12,[29][30][31][32]. Our finding also confirms findings from studies with broader or different postpartum timeframes or more specific populations [13,14,33,34]. Our study results differ from these studies regarding the position of item 10 (the item asking about suicidal ideation), as in most studies item 10 loads more on the depression factor. Our inclusion of item 10 in the anxiety subscale may have been caused by our use of a polychoric matrix, which may better suit the data concerned. However, as in previous studies, loadings of item 10 were low, i.e. the item was rather undetermined. This may align with the view that item 10 serves the specific function of detecting potential suicidal risk. Regarding the stability of the EPDS in the postpartum period, our findings correspond to the outcomes of Coates et al. [12], who found a stable structure with the hypothesized anxiety subscale from 8 weeks to 8 months postpartum. In sum, the hypothesized anxiety subscale appears to be present and stable in large community samples. Our findings on the correlation of the hypothesized anxiety subscale are in line with the study of Brouwers et al. [35], who also found moderate correlations during pregnancy between the anxiety subscale and the STAI, and somewhat stronger correlations for the total EPDS as well as the depression subscale. Other studies, assessing only correlations between the STAI (full form) and the total EPDS, reported substantially stronger correlations [36][37][38]. Two studies with positive conclusions on the value of using the anxiety subscale to detect anxiety did not validate the subscale [39,40].
The only study providing evidence in favor of the validity of a 3, 4, 5 item anxiety subscale was that of Matthey [13] (N = 238; 7.6% met the anxiety disorder criteria), with a subscale sensitivity of 67% and a specificity of 82% at 6 weeks postpartum (criterion: Diagnostic Interview Schedule).
The limited evidence for the hypothesized subscale's representation of anxiety might imply that this subscale actually does not represent anxiety. Brouwers et al. [35] noted the subjective, negative judgement incorporated in items 3, 4 and 5 (e.g. "for no good reason"), which may relate to another construct such as low self-esteem. The correlations of the total EPDS and the other subscales with anxiety indicate that anxiety is measured at least as much by the other EPDS items. This implies that the total EPDS does to some extent detect anxiety symptoms in addition to depression symptoms, but that its subscales have no added value for this.
Strengths and limitations
Strengths of our study are its community-based sample and its large sample size. Another strength is our use of a polychoric matrix in the analyses, a more adequate statistical method when performing a factor analysis with ordinal data [41], but as yet rarely used in factor analyses of the EPDS.
A limitation of our study might be the use of the STAI-6 as the anxiety criterion, as it probably measures depression in addition to anxiety, similar to the STAI full form [42,43]. Further, the non-simultaneous administration of the EPDS and STAI-6 may have deflated the correlations, though in our analyses we minimized this effect by limiting the maximum interval to 7 days.
Implications
Our study provides clear evidence for an EPDS subscale of items 3, 4, 5 and 10 which is stable across the first six months postpartum, but could not ascertain that this subscale adequately detects anxiety symptoms. The total EPDS performed better than our hypothesized anxiety subscale, but still correlates only moderately with our anxiety measure. This implies that using the EPDS in routine care enables the professional neither to detect most cases of both depression and anxiety, nor to discriminate between the two. Research findings based on the EPDS subscales should therefore be interpreted with caution.
Further research is needed to assess the maximum potential of the EPDS in the detection of anxiety, and whether additional efforts should be made to detect both depression and anxiety reliably and efficiently in an early stage. This may add to screening policies for both depression and anxiety regarding women during pregnancy and the postpartum period [44,45], and thus promote maternal mental health.
Conclusion
Our large community-based study shows that the factor structure of the EPDS is stable across the first six months postpartum and includes a subscale generally assumed to represent anxiety. This subscale correlates only moderately with our anxiety criterion, though, with the total EPDS performing slightly better. Adequate screening for anxiety may require an additional effort on top of the current EPDS.
Ethical approval
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. | 2019-09-11T13:06:31.450Z | 2019-09-09T00:00:00.000 | {
"year": 2019,
"sha1": "fb7d2b227859f9733206e9a4cf48f27fda5e4e2c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0221894&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d009f9b76690ce796957d17d1a883deb57162e66",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233426928 | pes2o/s2orc | v3-fos-license | MiR-181a Promotes Spermatogenesis by Targeting the S6K1 Pathway
Approximately 15% of couples suffer from infertility worldwide, and male factors contribute to about 30% of total sterility cases. However, there is little progress in treatments due to the limited understanding of the underlying mechanisms. Recently, microRNAs have emerged as key players in the process of spermatogenesis. Expression profiling of miR-181a was carried out in murine testes and a spermatocyte culture system. In vitro cellular and biochemical assays were used to examine the effect of miR-181a and identify its target S6K1, as well as to elucidate its function with a chemical inhibitor of S6K1. Human testis sample analysis was employed to validate the findings. The miR-181a level was upregulated during mouse spermatogenesis; knockdown of miR-181a attenuated cell proliferation, caused G1/S arrest and increased the level of S6K1, which was identified as a downstream target of miR-181a. Overexpression of S6K1 also led to growth arrest of spermatocytes, while an inhibitor of S6K1 rescued the miR-181a knockdown-mediated cell proliferation defect. In human testis samples of azoospermia patients, low levels of miR-181a were correlated with defects in the spermatogenic process. miR-181a is identified as a new regulator, and a high level of miR-181a contributes to spermatogenesis via targeting of S6K1.
Introduction
Infertility besets about 15% of couples at childbearing age over the world, which equals about 48∼72 million couples according to the World Health Organization.
Female factors typically account for 50% of the cases, while male factors are responsible for about 20∼30% of the cases. It is estimated that 2∼7% of all men exhibit different degrees of sub-optimal sperm characteristics, such as alterations in sperm motility, morphology and sperm concentration (1,2). There has been a deteriorating trend in the last few decades, as a continued decrease in sperm count has been reported from different regions of the world (3,4). The causes behind male infertility are complex, including genetic mutations such as Klinefelter syndrome with two X chromosomes and one Y chromosome, reproductive duct obstruction, immune disorders and inflammation, genital disorders, environmental exposure, and malfunction in the production of sperm (5,6). Azoospermia is one of the severe types of male infertility; it is a medical condition in which no sperm is contained in semen due to various defects in the process of spermatogenesis or in the ductal system of the testes (7,8). Of note, in the special case called obstructive azoospermia, spermatogenesis is normal but no spermatozoa can be found in the ejaculate. Obstructive azoospermia is a common urologic condition, accounting for about 6.1∼13.6% of total male infertility patients (9,10). Treatment for male infertility is very limited and huge unmet needs exist (11)(12)(13)(14).
Germ cells undergo a fundamental and complicated process called spermatogenesis within the seminiferous tubule epithelium. Spermatogenesis includes one cycle of DNA replication and two consecutive cell divisions, during which dramatic changes occur in the cell cycle, growth, differentiation and morphology (15). Spermatogenesis is briefly divided into three successive phases: mitosis, meiosis and spermiogenesis. Spermatogonia first replicate the whole set of genes and divide into primary spermatocytes in mitosis like normal somatic cell division. Besides the self-renewal of spermatogonia, some offspring cells detach from the basement membrane and enter meiosis. Primary spermatocytes then undergo the second cell division without DNA replication and give rise to the haploid spermatids in the stage of meiosis. Finally, the spermatids further differentiate and undergo morphogenesis into spermatozoa in spermiogenesis. The spermatozoa are fully activated in the process called capacitation and are competent to fertilize the oocyte (16). During the process there is complex interplay between germ cells and supporting somatic cells, which requires cell and time-specific expression of a large set of genes and post-transcriptional regulation of genes and their products governing the diverse biological processes, including cell cycle and proliferation, DNA replication and chromatin remodeling, energy metabolism, mobility and morphology control (17). Many fundamental pathways are involved, such as MAPK (mitogen-activated protein kinase) (18), PI3K (phosphatidylinositol-4,5-bisphosphate 3-kinase) (19), TGF-β (transforming growth factor beta) (20) and mTOR (mammalian target of rapamycin) pathways (21). However, the detailed mechanism of gene regulation is still elusive.
Recently, microRNAs (miRNAs, miRs) have been implicated in the complex network of gene regulation in the process of spermatogenesis (22,23). MiRNAs are small non-coding RNAs of 19-23 nucleotides in length which can bind to target mRNAs through Watson-Crick base pairing and eventually cause the cleavage and degradation of the mRNAs through the RISC (RNA-induced silencing complex) machinery (24). There have been multiple reports on miRNAs' roles in spermatogenesis, such as miR-21, let-7, miR-17-92, miR-34c and miR-146 (25)(26)(27)(28). Some miRNAs are considered potential biomarkers for male infertility; for instance, miR-141 is often upregulated in oligoasthenozoospermia (29). Genetic knockout studies suggest that loss of RISC-related components such as Dicer is linked with different defects of spermatogenesis (30), suggesting the importance of the miRNA pathway in male germ cell development.
To shed more light on the role of miRNA in spermatogenesis, we studied the expression profile of miR-181a and established the connection between miR-181a and S6K1 in mouse spermatogenesis. miR-181a was highly expressed in testis tissues, especially in spermatogonia but not in spermatids. MiR-181a could downregulate the S6K1 level and promote the proliferation of spermatogonia. Inhibition of miR-181a suppressed cell proliferation, which could be reversed by S6K1 inhibitor treatment. Interestingly, low levels of miR-181a expression were observed in human male infertile patients with maturation arrest or hypospermatogenesis, compared to obstructive azoospermia patients who had normal spermatogenesis, suggesting the essential role of miR-181a in promoting spermatogenesis.
Animals
C57BL/6 mice were from Shanghai SLAC Laboratory Animal Co. Ltd. Animals were housed in fully ventilated cages in temperature-controlled (20∼24℃) certified SPF grade facilities. All animals had free access to food and water unless indicated otherwise. All animal protocols were approved by the Institutional Animal Care and Use Committee of Qingdao Women and Children's Hospital, Qingdao University, in accordance with the guidelines of the National Institutes of Health Guide for the Care and Use of Laboratory Animals. Testis tissues were collected from male mice at different stages after birth, and various tissues from adult male mice were analysed for the expression of miR-181a, after euthanasia of the animals with CO2 inhalation and cervical dislocation.
RT-qPCR assay
Total RNA was extracted by RNeasy kit (Qiagen) and 25 ng total RNA was used for reverse transcription reaction. miRNA quantification was performed with TaqMan miRNA assay according to the manufacturer's instruction. Nucleolar small RNA RNU44 was used as internal control.
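TaqMan miRNA quantification against an internal control (RNU44 here) is conventionally reported as relative expression by the 2^-ΔΔCt (Livak) method; the methods section does not state this explicitly, so the following is an assumed, minimal sketch with made-up Ct values.

```python
def delta_delta_ct(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative expression by the 2^-ddCt (Livak) method.
    Assumes ~100% amplification efficiency for both assays."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

# Made-up example: target Ct 22 vs RNU44 Ct 18 in a sample,
# against a calibrator sample with Ct 24 and 18 respectively.
fold = delta_delta_ct(22.0, 18.0, 24.0, 18.0)  # -> 4.0 (4-fold higher)
```

The normalization to RNU44 cancels differences in RNA input between samples, so a lower ΔCt directly translates into a higher fold expression.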
Western blotting
Total cell lysates were extracted with RIPA buffer and equal number of lysates were resolved on SDS-PAGE followed by standard immunoblotting procedures with indicated antibodies. The data were representative of three independent experiments.
MTT proliferation assay
GC-1 or GC-2 cells were plated at a density of 2,000 cells/well in 96-well plates and subjected to different treatments/transfections. After 48 h, the cell medium was removed and, after a wash with PBS, fresh medium supplemented with MTT reagent (0.3 mg/ml) was added to each well. The plate was read on a plate reader for absorbance at 570 nm. Data were analysed with GraphPad software.
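A minimal sketch of how the 570 nm absorbance readings translate into relative viability; blank correction and normalization to the control mean are standard practice, and the readings below are illustrative, not the study's data.

```python
import numpy as np

def percent_viability(a570_treated, a570_control, a570_blank):
    """Background-corrected A570 expressed as % of the mean control signal."""
    t = np.asarray(a570_treated, float) - a570_blank
    c = np.mean(np.asarray(a570_control, float) - a570_blank)
    return 100.0 * t / c

# Illustrative readings: two treated wells, two control wells, one blank well.
v = percent_viability([0.55, 0.46], [1.0, 0.8], 0.1)  # -> [56.25, 45.0]
```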
Luciferase reporter assay
The 3'UTR of S6K1 (300∼500 nt) was inserted into the pMirGlo vector, and a mutation at the seed region was introduced as designed. 48 h after transfection into GC-1 spg cells and the various treatments, cell lysates were prepared and measured according to the manufacturer's instructions (Promega dual luciferase kit). Results were analysed with GraphPad software.
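pMirGlo is a dual-luciferase vector (firefly reporter, Renilla transfection control), so reporter activity is conventionally computed as the firefly/Renilla ratio normalized to the scramble-control ratio; the sketch below assumes that convention, with made-up luminescence counts.

```python
def relative_activity(firefly, renilla, firefly_scr, renilla_scr):
    """Firefly/Renilla ratio normalized to the scramble-control ratio."""
    return (firefly / renilla) / (firefly_scr / renilla_scr)

# Made-up luminescence counts: 50% suppression relative to scramble control.
suppression = relative_activity(120.0, 60.0, 200.0, 50.0)  # -> 0.5
```

Normalizing to Renilla corrects for transfection efficiency before the mimic's effect on the 3'UTR reporter is compared across wells.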
Cell cycle profiling
Cells under the different treatment conditions were washed twice briefly with PBS, stained with propidium iodide (PI)/RNase solution and incubated at room temperature for 30 minutes. Samples were analyzed on a FACScan flow cytometer (Becton Dickinson, San Jose, CA, USA). At least 20,000 events were collected for each sample. Cell cycle analysis of DNA histograms was performed with FlowJo software (Tree Star Inc., Ashland, OR, USA).
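A crude sketch of how G1/S/G2-M fractions can be gated from a PI (DNA-content) histogram; the ±15% gate widths and the toy event list are illustrative assumptions, not the FlowJo model actually used in the study.

```python
import numpy as np

def cycle_fractions(dna_content, g1_peak, g2_peak):
    """Crude G1/S/G2-M fractions gated around the 2N and 4N peaks
    (gates at +/-15% of the peak positions; an illustrative choice)."""
    x = np.asarray(dna_content, float)
    lo, hi = 1.15 * g1_peak, 0.85 * g2_peak
    total = len(x)
    g1 = np.sum(x < lo) / total
    g2m = np.sum(x > hi) / total
    return {"G1": g1, "S": 1.0 - g1 - g2m, "G2M": g2m}

# Toy histogram: 6 events at the 2N peak, 2 in S phase, 2 at the 4N peak.
fractions = cycle_fractions([100] * 6 + [150] * 2 + [200] * 2, 100, 200)
```

A G1/S arrest of the kind reported in the Results shows up as a larger G1 fraction and a depleted S fraction relative to control.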
Human testicular samples analysis
Biopsy samples of 20 patients (aged 25∼40 years) diagnosed with non-obstructive azoospermia (NOA) were obtained with approval of the IRB committee of Qingdao Women and Children's Hospital, Qingdao University. Written informed consent was collected from patients before their enrolment into this study.
Statistics
All data are presented as means±S.D. Data were analysed by one- or two-way ANOVA followed by a post hoc test. p<0.05 was considered statistically significant.
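A one-way ANOVA of the kind described can be run with scipy; the three groups below are illustrative readings, not the study's data.

```python
from scipy import stats

# Illustrative readings for three treatment groups (not the study's data).
control = [1.00, 1.05, 0.98, 1.02, 0.97]
mimic = [1.45, 1.50, 1.41, 1.48, 1.52]
inhibitor = [0.60, 0.58, 0.65, 0.62, 0.59]

# One-way ANOVA across the three groups; a post hoc test (e.g. Tukey HSD)
# would then identify which specific pairs of groups differ.
f_stat, p_value = stats.f_oneway(control, mimic, inhibitor)
```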
miR-181a was upregulated during spermatogenesis
To understand the role of miR-181a in mouse physiology, we profiled the expression level of miR-181a in different tissues of adult mice. As shown in Fig. 1A, the relative expression level of miR-181a was the highest in the testis of adult male mice, followed by high levels in the kidney and thymus, compared to other tissues such as the brain, heart, spleen, muscle and skin. This indicated that miR-181a was relevant to the function of the testis. We next examined the time course of miR-181a expression at different stages of postnatal mouse testes. From day 7 to 21, the miR-181a level showed a marginal increase from 100 to 130, with day 7 set at a relative level of 100. From day 28 to 42, the level increased dramatically to more than 300 (Fig. 1B). The maximum level, more than 450, was seen in the day 42 sample. This pattern matched the progress of spermatogenesis in the mouse, in which mature spermatozoa are first produced around 5 weeks after birth. These results demonstrated that miR-181a was enriched in the male reproductive organ and upregulated in testis tissue during spermatogenesis.
miR-181a promoted spermatogonia cell proliferation
To assess the role of miR-181a in spermatogenesis, we used the in vitro spermatogenic murine cell culture models GC-1 and GC-2. The GC-1 spg cell line resembles the transition between spermatogonia and primary spermatocytes, and the GC-2 spd (ts) line is a spermatocyte line arrested at the premeiotic stage. First, the miR-181a expression level was determined in these cells. Compared to the fibroblast NIH3T3 and myoblast C2C12 cell lines, GC-1 spg cells showed the highest level of miR-181a, about 3-6 times higher than that in NIH3T3 and C2C12. The miR-181a level in GC-2 spd cells was about 40% of that in GC-1 spg cells, as shown in Fig. 2A. Next, we knocked down the level of miR-181a with a specific inhibitor in GC-1 spg cells to monitor cell proliferation and the cell cycle profile.
MTT cell proliferation assay results demonstrated significantly reduced cell growth in miR-181a inhibitor-transfected cells compared to scramble control (Fig. 2B). There was an obvious G1/S arrest in miR-181a inhibitor-transfected cells compared to control (Fig. 2C), as significantly fewer cells in S phase were detected in the cells transfected with the miR-181a inhibitor. These results suggested that miR-181a was highly expressed at the spermatogonia stage and was required for cell proliferation.

(Fig. 3 legend: Luciferase reporter assay of the interaction between miR-181a and the 3'UTR of S6K1. GC-1 spg cells were transfected with luciferase reporter genes carrying either the wildtype (WT) or mutant (MUT) 3'UTR of S6K1; 24 hours later, the cells were further transfected with either miR-181a mimic or scramble control, cultured for 48 hours and then harvested for measurement of luciferase activities in a plate reader. (C) Western blotting analysis of mTOR, S6K1 and p-S6K1 in cells transfected with either miR-181a mimic or scramble control. Representative images from at least three independent experiments are shown; data are presented as means±SD, *p<0.05, **p<0.01, ***p<0.001 vs. control.)
S6K1 was a downstream target of miR-181a in GC-1 cells
To find the downstream target of miR-181a, we used bioinformatic tools to identify potential matches between 3'UTR sequences and the seed region of miR-181a. One of the hits was S6K1, an important protein kinase in mTOR signaling. mTOR is essential for cell survival, protein synthesis and energy metabolism, and has been reported to play a key role in spermatogenesis. To validate the in silico finding, we designed a mutant 3'UTR region of S6K1 and inserted it into a luciferase reporter construct. The mutation at the seed region of the 3'UTR was introduced to disrupt recognition by miR-181a, as shown in Fig. 3A. Each of the luciferase reporters was transfected into GC-1 spg cells first, and then the miR-181a mimic or scramble control was transfected into the cells. The luciferase assay results showed that the miR-181a mimic significantly suppressed the reporter activity of the wildtype S6K1 3'UTR. This suppression was abolished on the mutant 3'UTR reporter construct, as shown in Fig. 3B. To further validate the luciferase assay result, we examined the endogenous S6K1 protein level when miR-181a was transfected into GC-1 spg cells. Strikingly, S6K1 and phosphorylated S6K1 proteins were completely undetectable in miR-181a mimic-transfected cells compared to scramble control (Fig. 3C), consistent with the luciferase assay result. Collectively, these results confirmed that S6K1 was a downstream target of miR-181a.
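The bioinformatic step, finding a 3'UTR site complementary to the miR-181a seed, can be sketched as a simple 7mer scan. The miR-181a-5p sequence below is as annotated in miRBase, the example UTR string is made up (not the actual S6K1 3'UTR), and real prediction tools such as TargetScan apply considerably more elaborate rules (site context, conservation, supplementary pairing).

```python
def seed_match_sites(mirna, utr, seed=(2, 8)):
    """Return (positions, site): 0-based positions in a DNA 3'UTR (5'->3')
    that are reverse-complementary to miRNA seed nucleotides seed[0]..seed[1]."""
    comp = {"A": "T", "U": "A", "G": "C", "C": "G"}
    seed_seq = mirna[seed[0] - 1 : seed[1]]
    site = "".join(comp[b] for b in reversed(seed_seq))
    hits = [i for i in range(len(utr) - len(site) + 1) if utr[i : i + len(site)] == site]
    return hits, site

MIR181A = "AACAUUCAACGCUGUCGGUGAGU"  # hsa-miR-181a-5p, as annotated in miRBase
sites, site = seed_match_sites(MIR181A, "CCTGAATGTCC")  # toy UTR, not the S6K1 3'UTR
```

Mutating the bases of such a site, as done for the MUT reporter in Fig. 3A, destroys the seed match and thereby the miRNA's grip on the 3'UTR.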
S6K1 over-expression attenuated GC-1 spg cell proliferation
Next, we examined the function of S6K1 in GC-1 cell culture. In accordance with the previous results, the S6K1 protein level was much lower in GC-1 spg cells, which had a higher level of miR-181a, than in GC-2 spd cells, as shown in Fig. 4A. The mTOR protein level was unchanged between these two cell lines, suggesting that the effect of miR-181a was highly specific. To further assess the role of S6K1 in cell growth and differentiation, the wildtype or kinase-dead mutant form of S6K1 was overexpressed in GC-1 spg cells. As shown by MTT assay (Fig. 4B), cells overexpressing wildtype S6K1 protein showed significantly reduced proliferation compared to cells with GFP control or the S6K1 kinase-dead mutant protein. Similarly, cells with wildtype S6K1 overexpression showed a marked G1/S arrest compared to cells with GFP control or kinase-dead S6K1 overexpression (Fig. 4C). This is in line with the previous result in Fig. 2C, where inhibition of miR-181a led to G1/S arrest. These results demonstrated that S6K1 suppressed spermatocyte proliferation in a manner dependent on its kinase activity.
Inhibiting S6K1 activity rescued the antagonizing effects of miR-181a inhibitor on GC-1 spg cell growth

(Fig. 5 legend, in part: In parallel, the miR-181a inhibitor-transfected cells were also treated with S6K1 kinase inhibitor or DMSO, and the growth rate of cells was monitored by MTT for 72 h. (C) Representative histogram data of the cell-cycle analysis of GC-1 spg cells under the different treatment conditions described above. Representative images from at least three independent experiments are shown; data are presented as means±SD, *p<0.05, **p<0.01, ***p<0.001 vs. control.)

To further explore the relation between miR-181a and S6K1, we introduced miR-181a inhibitor into GC-1 spg cells and observed a dramatic increase of the S6K1 protein level compared to scramble control (Fig. 5A), clearly suggesting that S6K1 is directly downregulated by miR-181a. We then examined whether an S6K1 kinase inhibitor could affect miR-181a function.

(Fig. 6 legend: miR-181a was downregulated in the testis tissues of sterile men with non-obstructive azoospermia. miR-181a expression levels in the testes of patients with maturation arrest, hypo-spermatogenesis or obstructive azoospermia were quantified by real-time PCR (n=10 per group; **p<0.01 vs. the obstructive azoospermia patient group). Expression was also quantified in normal testis tissues from post-mortem donors.)

GC-1 spg cells were transfected with miR-181a inhibitor or scramble control first, then S6K1 inhibitor or DMSO control vehicle was added to the miR-181a inhibitor-transfected cells. As shown in Fig. 5B, cells with miR-181a inhibitor showed decreased proliferation compared to cells with scramble control, and this suppressive effect on cell growth was reverted by S6K1 inhibitor treatment. Likewise, the miR-181a inhibitor caused a significant G1/S arrest, as shown by the cell-cycle profiling in Fig. 5C, and S6K1 inhibitor treatment recovered the cell population in S phase to the level of the scramble control. These results indicated that S6K1 is the downstream factor of miR-181a in regulating cell growth and that inhibition of S6K1 effectively reverts the suppressive effect of low miR-181a levels.
miR-181a was downregulated in the testis tissues of sterile men with non-obstructive azoospermia

To extend our findings from the mouse cellular model of spermatogenesis, we assessed the level of miR-181a in human testis samples by RT-qPCR. In non-obstructive azoospermia patients, i.e. the maturation arrest and hypo-spermatogenesis types of male infertility, miR-181a was found at a relatively low level, as shown in Fig. 6. However, the miR-181a level was significantly higher in patients with obstructive azoospermia, in whom spermatogenesis is normal. In addition, we also included the expression levels of the miRNA in normal testis tissues as a comparison and observed a slightly lower expression than in the sterile patients of the maturation arrest and hypo-spermatogenesis types. This result indicated that a high level of miR-181a was well correlated with normal spermatogenesis, in accordance with the previous results from the in vitro cell model. Future studies will be performed to establish the role of S6K1 in these patients and to test the therapeutic potential of S6K1 kinase inhibitors in non-obstructive azoospermia patients with low levels of miR-181a.
Discussion
Male infertility is a common cause of difficulty for couples trying to bear a child. Azoospermia is a severe type of male infertility with elusive mechanisms and limited available therapy. MiRNAs have been implicated in spermatogenesis. Here we report a new mechanism of miR-181a in this process. MiR-181a is highly enriched in the male reproductive organ, and increased expression was observed at the maturation peak of spermatogenesis. Using a murine spermatocyte culture system, we demonstrate that miR-181a is required for cell proliferation and cell cycle progression. Mechanistically, miR-181a targets the S6K1 protein kinase of the mTOR pathway, as shown by 3'UTR reporter assay and protein immunoblotting. Further, we found that S6K1 overexpression in spermatocytes suppressed cell growth and that kinase activity was essential for this suppression. On the other hand, an S6K1 kinase inhibitor could revert the cell growth arrest caused by knocking down the miR-181a level; in other words, the S6K1 kinase inhibitor compensated for the effect of a low level of miR-181a. Lastly, human testis samples from male infertility patients revealed a correlation of high miR-181a levels with functional spermatogenesis. Our study established the role of the miR-181a-S6K1 axis in spermatogenesis and points to the potential therapeutic application of miR-181a mimics and S6K1 protein kinase inhibitors in non-obstructive azoospermia.
Spermatogenesis involves highly orchestrated waves of transcriptional and translational regulation of gene networks in different cell types, such as gametes and somatic cells. Notably, only a small fraction of the human genome is transcribed into coding mRNAs. Recently, non-coding RNAs have attracted increased attention and play important functions in multiple biological processes. Among them, microRNAs are small non-coding RNAs which constitute a fine-tuned control of a huge number of coding genes as a supplement to conventional gene regulation. Many miRNAs have been reported to play important roles in spermatogenesis, for example miR-15a (31) and miR-18 (32), among others as reviewed (22,33). A microarray profiling of miRNAs in mouse testes has mentioned the potential role of miR-181 family members (34). However, our study provides a thorough mechanistic insight into miR-181a in spermatogenesis and identifies the downstream target S6K1, a key serine/threonine kinase activated by mTOR. S6K1 is considered a master regulator of cellular metabolism and survival together with mTOR and its downstream effector ribosomal protein S6 (35)(36)(37). mTOR has been reported to be important for Sertoli cell polarity and spermatogenesis in human (38). A recent study of mTOR and S6K1 in rat spermatogenesis reported findings seemingly contradictory to our results (21). Their conclusion that mTOR/p70S6K1 promotes spermatogonia was based on observations from 8-week, 38-week and 80-week old rats. The effect of the decrease in mTOR signaling is marginal in their results, and the significant change in 80-week rats is more about aging than spermatogenesis, which should be active at a much earlier stage of life. In addition, our result is supported by another miRNA study in mouse spermatogenesis showing that activation of the mTOR pathway is detrimental to cell survival (39). That study, using miR-17-92 knockout, offered solid proof of its role in spermatogenesis.
What's more, gene disruption of S6K1 in mice does not cause male infertility (35,36), supporting our finding with the S6K1 inhibitor (Fig. 5B and 5C). We think the discrepancy between our results and the report of Xu et al. (21) may originate from the different species used and the different stages of sperm development under observation. The difference in methods is also noticeable. Xu et al. (21) observed that rapamycin treatment of 8-week-old males led to a decreased sperm count, which was consistent with human studies with rapamycin. However, whole-body administration of the mTOR inhibitor rapamycin may have profound effects. It is reported that mTOR is essential for the polarity maintenance of the Sertoli nursing cells by controlling the tight gap junction, in addition to the regulation of cell metabolism (40). Our finding is based on the spermatocyte itself rather than on the Sertoli cell and germ cell interaction. Loss of all members of the miR-181 clusters leads to decreased survival, retarded growth and disrupted lymphocyte homeostasis in mice (41). This report is largely in line with our finding on cell proliferation, although there might be different functions for miR-181 in various types of tissues. Therefore, further tissue-specific knockout of the miR-181a subtype in the male reproductive organ is required to elucidate the mechanism of miR-181a in spermatogenesis under more physiologically relevant conditions.
Activation of the mTOR/S6K1 protein kinase pathway is frequently observed in multiple cancers, and thus targeting mTOR in cancer is under development (42). Inhibitors of S6K1 protein kinase have been evaluated in many different cancers (43)(44)(45). Inhibition of S6K1 caused cell growth arrest and apoptosis in cancer cell lines. However, in our study we found that the S6K1 inhibitor actually promoted spermatocyte proliferation in the background of miR-181a inhibition. We speculate that there might be a cell type-specific effect and that miR-181a likely has other targets besides the S6K1 gene; the effect we observed was the compound consequence of inhibiting both miR-181a and S6K1. Further study is needed to dissect the detailed regulatory loop between miR-181a and S6K1, and other potential genes involved, in the unique setting of spermatogenesis.
In summary, we reported the function of miR-181a in mouse germ cell differentiation and proliferation, by profiling miR-181a expression in different tissues and developmental stages of male spermatogenesis and further unraveling the novel connection between miR-181a and S6K1 protein kinase as the mechanism. The arrest of cell growth caused by knockdown of miR-181a could be rescued by S6K1 kinase inhibition. Interestingly, low levels of miR-181a were found in sterile male testis samples compared to ones with normal spermatogenesis. This finding opens a new avenue for future studies investigating the inhibition of S6K1 in non-obstructive azoospermia patients with low levels of miR-181a. | 2021-04-29T06:17:18.882Z | 2021-04-30T00:00:00.000 | {
"year": 2021,
"sha1": "e232158efd6dafe9816518ad647574e620ccb325",
"oa_license": "CCBYNC",
"oa_url": "https://www.ijstemcell.com/journal/download_pdf.php?doi=10.15283/ijsc21001",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "75e163d0aad674fcd1a6e7ed3970ebf7b85854ee",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235280211 | pes2o/s2orc | v3-fos-license | On the formation of environmental consciousness among students of an agricultural university
The paper analyzes the phenomenon of environmental awareness and the educational technology of its formation among students of an agricultural university. Ecological consciousness is defined as the ability to build a relationship with the surrounding natural world in accordance with the law of harmonious and sustainable development of modern technogenic society. At the same time, the work puts forward an understanding of environmental consciousness as a factor that shapes a person's relationship with the social world and determines the worldview maturity of the person himself. The latter makes environmental awareness not only an important component of environmental safety, but a guarantor of the existence of culture as a whole, in connection with which there is an urgent need to develop a strategy for the formation of environmental awareness among young people. The special role of educational institutions in the formation of environmental awareness as an important component of the educational process is indicated. The experience of the research, teaching and educational work of the teachers of the Department of Botany and Meadow Ecosystems and the Department of Humanities of the Tver State Agricultural Academy on the formation of environmental awareness among students is presented. One of the effective forms of extracurricular and research work is analyzed - project activity, an innovative method that contributes to the development of the ecological consciousness and ecological lifestyle of students. The main stages of a pilot project demonstrating the benefits of growing environmentally friendly vegetable products are reconstructed. An educational project, "The World of a Russian Estate", which forms the basic values necessary to preserve the ecological consciousness of young people, is also presented.
Introduction
Among the most pressing global problems challenging modern society, the leading place belongs to the environmental problem. Some scientists identify the very essence of the global crisis that has engulfed all spheres of human life with an ecological catastrophe, thereby expanding the original meaning of the concept of "ecological" to include not only the attitude to the natural environment but also the socio-cultural one. The emergence of an ecological consciousness capable of harmonizing a person's relationship with his biosocial environment is now a condition for both the natural and the social survival of society, and its formation is one of the priority goals of the modern education system. An urgent task of the modern teacher is therefore the search for methods and ways of forming environmental awareness among young people, ensuring the environmental safety of the country as a whole.
Materials and methods
The material for this article was the pedagogical experience of the teachers of the Tver State Agricultural Academy in forming environmental consciousness among students of the Faculty of Technology. Traditionally, this problem is addressed by "Ecology", a subject belonging to transdisciplinary areas of knowledge [1], as well as by other academic disciplines related to environmental issues ("Agricultural Ecology", "Vegetable Growing", "Floriculture", etc.). At the same time, the pedagogical experience of the teachers of the Tver State Agricultural Academy has shown that the formation of ecological consciousness becomes most effective precisely when it goes beyond the classroom [2]. Here we mean extracurricular work in its various forms, among which a special place belongs to students' project activities.
The material for this study was the organization, conduct and evaluation of educational-scientific work (an experimental project on growing ecologically clean vegetable products) and educational work (the project "The World of a Russian Estate"), in which teachers of a specialized department (the Department of Botany and Meadow Ecosystems) and the general humanitarian department of the Tver State Agricultural Academy took part.
The research methods are determined by the stated topic. When conducting the experiment to identify the factors for obtaining environmentally friendly vegetable products, special methods of the biological sciences (laboratory and experimental) were used. The study of the phenomenon of environmental consciousness and its place in the modern education system relied on general scientific methods of the humanities (comparative-historical and systemic). Project activity was taken as an effective teaching and educational method for forming and preserving ecological consciousness.
Results
In the modern education system, a stable attitude has developed toward the project method as an innovative technology based on developing students' cognitive skills, creative initiative, the ability to think independently, to find and solve problems, to navigate the information space, and to predict and evaluate the results of their own activities [3]. The advantage of the project method over other pedagogical technologies is its focus, first of all, on the independent activity of students, which creates the conditions for a natural and organic course of the educational process and, with it, for the personal maturation of young people. Given the popularity of the project method in the modern education system, the need has been raised to form project thinking, a special ability to think creatively and to organize effective work toward a goal [4]. We believe that design thinking inevitably correlates with environmental consciousness, which acts as a kind of censor in relation to it; neglecting this censor will not allow the desired result to be achieved.
An example demonstrating the combination of design thinking and environmental awareness is an experimental project to identify the factors for obtaining environmentally friendly vegetable products [5]. The main stages of the project were:
- analysis of scientific and practical literature on the latest technologies for growing root crops [6];
- development of a scheme for the pilot project and study of research methods;
- preparation of the experimental site, its marking, laying out of the field experiment, and carrying out the planned activities;
- selection of samples for analysis in accordance with the research methodology;
- processing, systematization and analysis of the information received.
During the experiment on the cultivation of table root crops, students were asked to use various complex mineral fertilizers ("Sudarushka", "Agricola", "Omu"), tracking the rate and quality of nutrient accumulation in the root crops and comparing the harvests [7]. The experiment showed that not using the declared fertilizers yields a lower weight but preserves the ecological purity of the final product, whereas the use of mineral fertilizers that do not affect the final mass of the product can significantly reduce its environmental safety. The students were asked to choose between using and not using the declared fertilizers and to argue their choice in a subsequent discussion. The result of the pilot project was that 7 of the 10 participants refused to use mineral fertilizers in growing root crops. Such a choice in favor of a smaller volume of an environmentally friendly final product, which does not fit the usual parameters of a market economy, nevertheless indicates the ecological maturity of students oriented toward an environmentally healthy lifestyle.
The project method also formed the basis of educational activities aimed at forming environmental consciousness. One of these was the project "The World of a Russian Estate (Based on the Material of the Sakharovo Estate in Tver)". The study of estate culture by students of an agricultural university was not only of cultural and historical interest. Reconstructing the economic side of the Tver estate, above all the organization of its agrarian economy, an important component of which was the cultivation of vegetable plants, including root crops, is useful experience that students of an agricultural university need in order to understand modern agriculture. A special place in the organization and implementation of the project was given to environmental issues, in particular obtaining environmentally friendly agricultural products.
The main material for studying the regional estate was the territory of the former estate of Field Marshal Iosif Vladimirovich Gurko, today the village of Sakharovo, the location of the Tver State Agricultural Academy, which belongs to an ecologically safe zone of the city of Tver. In the course of their project activities, students used archival materials to reconstruct the ecological environment of the estate of the late 19th to early 20th centuries and compared it with the current state of the former estate territory, in particular its park area. A comparative analysis showed a significant loss not only of the external aesthetic appearance of the park, created in the style of an "English landscape park", but also of valuable species of plants and trees that ensured the ecological purity of the estate. Among the latter are single specimens of Siberian fir and common ash. On the verge of extinction are perennial plantings of trees aged 140 to 160 years: pedunculate oak, smooth elm, common spruce, small-leaved linden, Norway maple, and warty birch. The results of the students' project activities were presented at a regional scientific student conference and at the final round table "The World of the Russian Estate". In the future, work within this project is planned to continue in the direction of restoring the lost appearance of the Tver estate, including the restoration of Sakharovsky Park.
Discussion
Although the term "ecological" has a long tradition, its content remains open, and the practice of recent years has shown that it is capable of acquiring new meanings. In modern science the term is applied to culture, society and man ("ecological culture", "ecological society", "ecological man") [8]. All these interrelated terms can in essence be regarded as derivatives of ecological consciousness. We share the position of scientists who consider ecological consciousness a person's ability to build relationships with the natural world and an understanding of one's inextricable connection with nature. At the same time, we share the view that under the conditions of the modern ecological crisis it is necessary to define ecological consciousness as an important factor that also shapes a person's relations with the social world: a guarantor of ecological safety and of human survival both as a biological and as a social being [9]. The basic values that determine a person's worldview maturity serve as a condition for the emergence and preservation of ecological consciousness itself.
Such an understanding of ecological consciousness makes its formation a priority task, uniting all teachers, both biologists and humanists. Moreover, work on forming environmental consciousness should not take the form of moralizing or of merely popularizing a healthy lifestyle. It should be organic and natural, present in the classroom and extracurricular life of students as an important component of the educational space of the university. We believe that such work can be effective within a personality-oriented model of education [10]. A student focused on personal development will, in practical situations related to his professional agricultural sphere, choose the means of achieving a result with regard to an ecologically healthy lifestyle and the ecological safety of the country.
Conclusion
Thus, the strategic goal of education is the formation of students' spiritual and moral culture, an important component of which should be environmental awareness. The efforts of teachers alone are not enough to solve this problem; it should be solved with the active participation of the state and of institutions implementing a unified strategy for developing the ecological consciousness of young people. If we define the formation of ecological consciousness as building a person's relationship with the surrounding world, both natural and social, according to the laws of harmonious and sustainable development of society, then we must acknowledge that this process is long and its results distant. In any case, however, its effectiveness will depend, among other things, on the choice of educational technologies for forming and preserving the ecological consciousness of young people, which guarantees the ecological safety of the country.
Ordinary Cannulated Compression Screws or Headless Cannulated Compression Screws? A Synthetic Bone Biomechanical Research in the Internal Fixation of Vertical Femoral Neck Fracture
Purpose. The purpose of this study is to verify whether the headless cannulated compression screw (HCCS) has higher biomechanical stability than the ordinary cannulated compression screw (OCCS) in the treatment of vertical femoral neck fractures. Materials and Methods. 30 synthetic femur models were equally divided into 2 groups, with femoral neck fractures at Pauwels angles of 50°, 60°, and 70° created under 3D-printed guiding plates and C-arm fluoroscopic guidance. The femur models were fixed with three parallel OCCSs (OCCS group) or three parallel HCCSs (HCCS group). All specimens were tested for compressive strength and maximum load to failure at a loading rate of 2 mm/min. Results. There was no significant difference in compressive strength at Pauwels angles of 50° and 60°. However, the maximum load to failure at Pauwels angles of 50°, 60°, and 70°, and the compressive strength at 70°, were better in the HCCS group than in the OCCS group. Conclusion. HCCS provides better biomechanical stability than OCCS in the treatment of vertical femoral neck fracture, especially at a Pauwels angle of 70°.
Introduction
Femoral neck fracture in young adults is usually the result of polytrauma and high-energy injury, for which accurate reduction and stable fixation are necessary [1,2]. Orthopedic surgeons should choose the most effective and affordable implant for this kind of fracture. Compared with other internal fixation implants, cannulated compression screws (CCSs) have been reported to be particularly advantageous in the treatment of femoral neck fracture, causing less soft-tissue damage and less blood loss and being easy to use, which has made them one of the most common fixation devices [3]. However, with increasing Pauwels angle, high rates of fixation failure with CCSs have been reported, including femoral neck shortening, loose fixation, varus deformity, and fracture displacement.
Several factors might affect the stability between fracture fragments when using CCSs to treat femoral neck fracture, such as the direction, number, position, configuration, and type of the screws. It has been shown that screw placement angle does not affect compressive strength [4]. Many studies have tried to determine the optimal number of screws, but this remains controversial. Several studies have provided evidence on the relationship between the position of cannulated screws and their effect [5][6][7], and the triangular or inverted-triangular configuration showed better strength and stability of fixation [8]. Does the type of CCS, then, affect the stability of femoral neck fracture fixation? Only one publication could be found, which focused on thread length, and no biomechanical research was conducted [9]. The HCCS has been introduced for the treatment of femoral neck fractures in recent years; it functions through a full-length thread and continuous compression from the proximal lateral femoral cortex to the femoral head.
The purpose of this study is to verify the assumption that the HCCS has better biomechanical stability.
Specimen Preparation.
30 identically shaped left-side synthetic femur models (ENOVO, China) were equally divided into two groups, and each group was further equally divided among three Pauwels angles (50°, 60°, and 70°). To ensure that each screw was placed at exactly the same position, the first screw was driven up to the subchondral bone just beneath the articular surface of the femoral head; the second screw was placed beneath the first near the anterior cortex and the third near the posterior cortex, the three screws forming a standard triangular configuration. We designed a guiding plate and made it with 3D printing technology. The fracture line was made with a medical pendulum saw from the upper side of the lesser trochanter proximally to the superior femoral neck with the assistance of the 3D-printed guide plates, to avoid operational error (Figure 1). First, under C-arm fluoroscopic guidance and the guiding plate, 3 parallel guide pins were placed into the models. We then removed the guiding plate and predrilled 3 insertion holes with a 3.0 mm drill bit over the guide pins, and after accurate reduction of the fracture, three 6.5 mm cannulated screws were inserted to fix the fracture. Group A (OCCS group): three parallel 6.5 mm OCCSs (Stryker Co.) placed in a triangular configuration; Group B (HCCS group): three parallel HCCSs (Acumed Co.) placed in a triangular configuration (Figure 2).
Biomechanical Testing.
All tests were performed under axial compressive loading with an Instron test system (Instron, Norwood, MA, USA), which included a base, a pressure applicator, and a data analyzer. The distal femur was fixed with a shaft adduction angle of 7° using dental powder to imitate the femur's orientation in normal walking. A vertical force was applied to the top of the femoral head at a loading rate of 2 mm/min. The failure load was defined as a marked decrease in load following the maximum, or displacement of the fragments by 5 mm [10]. Two magnets were placed on the proximal and distal fragments, respectively, to record the displacement between the fragments (Figure 3).
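The failure criterion above (a marked load drop after the peak, or 5 mm of fragment displacement, whichever occurs first) can be sketched as a small post-processing routine over the recorded load-displacement curve. This is only an illustration: the paper does not specify how the records were processed, and the function name and the `drop_frac` threshold defining a "marked decrease" are assumptions.

```python
def failure_load(displacement_mm, load_n, max_gap_mm=5.0, drop_frac=0.8):
    """Return the failure load from a load-displacement record.

    Failure is declared when fragment displacement reaches max_gap_mm,
    or when the load falls below drop_frac of the running peak (an
    assumed threshold for a 'marked decrease'); the peak load seen so
    far is then reported as the failure load.
    """
    peak = 0.0
    for d, f in zip(displacement_mm, load_n):
        peak = max(peak, f)
        if d >= max_gap_mm or (peak > 0 and f < drop_frac * peak):
            return peak
    return peak
```

For example, a curve that rises to 300 N and then reaches 5 mm of displacement reports 300 N, while a curve that drops sharply after a 200 N peak reports 200 N.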
Statistical Analysis.
The analysis was performed using SPSS software (SPSS Version 20; SPSS Inc., Chicago, IL, USA). Differences in compressive strength and failure load between the two groups were assessed with a t-test, with significance set at p < 0.05.
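As a sanity check, a two-sample t-test can be computed directly from group summary statistics. The sketch below uses Welch's formulation in plain Python; note the per-cell sample size of n = 5 (30 models / 2 groups / 3 angles) is inferred from the study design, and the paper does not state whether SPSS ran a pooled or a Welch t-test, so the result may differ slightly from the reported p-values.

```python
import math

def welch_t_from_stats(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's two-sample t statistic and its approximate degrees of
    freedom (Welch-Satterthwaite), computed from group summaries."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# 70-degree compressive strength (N/mm): HCCS 128.58 +/- 12.24 vs
# OCCS 109.03 +/- 7.89; n = 5 per cell is an assumption.
t, df = welch_t_from_stats(128.58, 12.24, 5, 109.03, 7.89, 5)
# t is about 3.0 on roughly 6.8 degrees of freedom, a significant
# difference at the 0.05 level, consistent with the reported p = 0.019.
```

The design choice of Welch's test (rather than the pooled-variance Student test) is the conservative default when group standard deviations differ, as they do between the two screw types here.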
Results
The compressive strength results are shown in Table 1. In our vertical fracture models, the compressive strength of the HCCS group was better than that of the OCCS group at a Pauwels angle of 70° (109.03 ± 7.89 versus 128.58 ± 12.24, p = 0.019), but there was no statistically significant difference between the two groups at 50° (177.58 ± 25.74 versus 214.08 ± 18.62, p = 0.133) or 60° (137.54 ± 32.57 versus 135.96 ± 43.52, p = 0.721). The maximum failure loads of the two groups are presented in Table 2. The results clearly indicate that the maximum load to failure was significantly better in the HCCS group than in the OCCS group (Figure 4).
Discussion
The incidence of femoral neck fracture has increased rapidly in recent years with the ageing of the population. Unlike the elderly, most young patients suffer high-energy injury directly to the femoral neck, for whom choosing an appropriate device is critical. In recent years, many new methods of treating femoral neck fracture have been reported. For example, Samsami et al. [11], studying vertical femoral neck fracture in young people, compared CCSs with a proximal femoral locking plate (PFLP) and with the combination of a DHS and an antirotation screw; the latter showed better resistance to rotational shear force. Further, a new type of femoral neck locking plate (FNLP), consisting of a locking plate and five screws, showed satisfactory biomechanical results in Pauwels type III femoral neck fracture. Compared with the DHS, the new FNLP performed better in biomechanical stability and could also effectively reduce the incidence of nonunion by distributing force to the femoral neck through its multiaperture screw system [10]. In a follow-up study by Zhu et al. [12] including 74 patients with femoral neck fracture, a new treatment with a percutaneous compression plate was introduced; 98.5% of patients had a good prognosis and could walk unaided again, and only two developed avascular necrosis of the femoral head and delayed union, respectively. Although there are many treatment options, none has been proven overwhelmingly superior. The use of OCCSs in the treatment of femoral neck fracture has achieved great success [13]. A 17-item survey showed that for undisplaced fractures nearly 80% of surgeons prefer OCCSs [14]. In a retrospective single-institution study of 59 patients, 4/5 of those treated with OCCSs showed good results [15].
However, for more vertical fractures, the rates of failure and complications rise markedly because of the increased shear force [16,17]. In our study, the compressive strength of the OCCS group at Pauwels angles of 50°, 60°, and 70° was 177.58 ± 25.74 N/mm, 137.54 ± 32.57 N/mm, and 109.03 ± 7.89 N/mm, respectively, and the maximum load to failure was 691.56 ± 72.02 N, 437.05 ± 55.97 N, and 312.06 ± 89.64 N. With increasing Pauwels angle, both axial stiffness and maximum load to failure decreased markedly.
The HCCS has been introduced as a reliable choice of internal fixation in recent years. The diameter of the HCCS increases from tip to tail while the pitch decreases, so that the tip advances faster than the tail as the screw enters the bone, creating compression across the fracture. The thread design increases contact between screw and bone, and the conical shape provides greater holding force, pullout strength, and shear strength. It can thereby increase interfragmentary compression and create immediate stability that allows early mobilization. Dodds et al. [18] found that long-threaded screws provided optimal fixation for scaphoid fracture. In a retrospective study of 41 distal ulna fractures in patients with rheumatoid arthritis, those treated with HCCSs showed better stability and a higher rate of bone union than those treated with OCCSs [19]. In the treatment of metacarpal neck fractures, the HCCS also allows earlier mobilization than fixation with K-wires [20]. Moreover, Borse et al. reported a method using two HCCSs for the treatment of Hoffa fracture with satisfying results [21]. The method was affirmed and improved in the prospective study of Li et al. [22], in which eight Chinese patients with isolated Hoffa fractures were treated with HCCSs combined with a posterior buttress plate; all fractures healed clinically and knee range of motion improved. With all this successful use, the HCCS has become an effective internal fixation device. In addition, the compressive strength at a Pauwels angle of 70° (128.58 ± 12.24 N/mm) also showed satisfactory results in our biomechanical study.
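The interfragmentary compression of a variable-pitch headless screw comes from its pitch differential: per revolution, the leading (tip) thread advances farther than the trailing (tail) thread, so the fragments are drawn together by the difference. A minimal illustration of this geometry follows; the pitch values are invented for illustration and are not taken from the Acumed implant.

```python
def compression_per_turn(lead_pitch_mm, trail_pitch_mm):
    """Gap closure per revolution of a variable-pitch headless screw:
    the tip thread advances by its pitch while the tail thread advances
    by a smaller pitch, shortening the fracture gap by the difference."""
    return lead_pitch_mm - trail_pitch_mm

# Hypothetical pitches: 1.25 mm at the tip, 1.0 mm at the tail.
# Ten full turns with both threads engaged across the fracture
# would then close the gap by 2.5 mm.
gap_closure = 10 * compression_per_turn(1.25, 1.0)
```

This is why such screws can compress without a head bearing on the lateral cortex, unlike the OCCS, whose compression depends on the head and the unthreaded shaft spanning the fracture line.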
The load to failure also showed good results: at Pauwels angles of 50°, 60°, and 70°, the loads were 1001.80 ± 151.88 N, 660.05 ± 104.16 N, and 468.83 ± 82.02 N, respectively.
These results strongly support the advantages of the HCCS in the treatment of vertical femoral neck fracture. Especially at a Pauwels angle of 70°, in both axial stiffness (128.58 ± 12.24 versus 109.03 ± 7.89, p = 0.019) and maximum load to failure (468.83 ± 82.02 N versus 312.06 ± 89.64 N, p = 0.018), the superior biomechanical stability of the HCCS is obvious.
However, this experiment has several limitations. First, the bones were synthetic femur models rather than cadaveric femurs. Second, limited by implant supply, we failed to consider the thread length and screw shaft of the OCCS, which might affect the results.
Conclusion
In summary, for vertical femoral neck fracture, especially at a Pauwels angle of 70°, the HCCS provides much better biomechanical stability than the OCCS and can be considered a better implant for treating vertical femoral neck fracture. However, further clinical investigation is needed in the future.
A new species of Hibiscadelphus Rock (Malvaceae, Hibisceae) from Maui, Hawaiian Islands
Abstract Hibiscadelphus stellatus H. Oppenheimer, Bustamente, & Perlman, sp. nov., a new, narrowly endemic species from West Maui, Hawaiian Islands is described, illustrated and its affinities and conservation status are discussed. It is currently known from three populations totaling 99 plants in Kaua`ula valley on leeward western Maui. It differs from H. wilderianus, its nearest congener, in its denser white or tan stellate pubescence on most parts; larger externally purple colored corollas that are 5–6.5 cm long; linear-subulate to lanceolate, acute to acuminate involucral bracts; globose-cuboid to ovoid capsules; and endocarp with scattered hairs.
Introduction
Joseph Rock described the endemic Hawaiian genus Hibiscadelphus Rock in 1911 based on H. giffardianus Rock (Radlkofer and Rock 1911). The genus is extremely rare, with seven previously described species from the main Hawaiian Islands, four of which are now extinct, two only persisting in cultivation (including restoration plantings), and a single species remaining in its natural habitat. The genus belongs to the tribe Hibisceae (Malvaceae), and it appears to form a distinct monophyletic group based on its curved and narrowly zygomorphic corollas forming a tubular structure with the petals unequal in length (the lower two shorter than the upper three). In contrast, in Hibiscus the corollas are actinomorphic with spreading petals of equal length (Lorence and Wagner 1995). In most species of Hibiscus the calyx is not circumscissile in fruit but persists, splitting along one side.
In addition to establishing the genus, Rock described three species: Hibiscadelphus giffardianus Rock from Mauna Kea, H. hualalaiensis Rock from Hualalai, both on Hawai`i Island, and H. wilderianus Rock from Auwahi on the island of Maui (Rock 1913). After Rock's initial treatment, Forbes (1920) described a fourth species (H. bombycinus C.N. Forbes) based on a specimen collected in the mid-1800s by Hillebrand and Lydgate at Kawaihae in the Kohala Mountains of Hawai`i Island. Over the next 75 years three additional species were discovered and described: H. distans L.E. Bishop & D.R. Herbst on Kaua`i (Bishop and Herbst 1973), H. crucibracteatus Hobdy on Lana`i (Hobdy 1984), and H. woodii Lorence & W.L. Wagner on Kaua`i (Lorence and Wagner 1995). The last authors published a key to the seven taxa known at that time. Presently, six species are extinct in the wild, but two of these persist in cultivation (including restoration outplantings), and two others, including this new species, occur as natural populations (Table 1). The eight described species are all mostly single-volcano endemics. The two Kaua`i species are separated by a distance of 8 km: Hibiscadelphus woodii was known from Kalalau Valley on the island's northern coast, and H. distans is known from Koaie Stream in Waimea Canyon, whose outlet is along the southern shore. During the course of field work on west Maui in 2012 the authors discovered two populations (25 and 51 plants), over 400 m apart, of a previously unknown Hibiscadelphus species on the steep slopes of Kaua`ula Valley on leeward, western Maui. A year later a third colony of 23 plants was found between the first two locations. Hibiscadelphus had not been observed, reported or documented previously on west Maui. Study of the collected specimens, comparison with collections of other known species at the BISH and PTBG herbaria, and images on JSTOR Global Plants revealed that they represent an undescribed species.
Description
Small trees 3-6 m tall, many branched, trunks to 30 cm dbh, bark smooth, light tan to gray, young branchlets densely white to tan pubescent with 8-12-rayed stellate trichomes 0.3-0.4 mm in diam., surface scurfy-waxy, glabrescent with age; petiole scars prominent, subcircular, 2.5-4 mm in diam. Leaves chartaceous, new growth densely stellate-pubescent, mature leaves with blades broadly ovate to suborbicular or subreniform in outline, occasionally shallowly 3-lobed, 7.5-16(-18) cm long, (8)9.5-13.5(-18) cm wide, veins prominulous, primary veins 7-9 radiate from base, midvein with 3-4 pairs of secondary veins arising along midrib, light green to occasionally red-tinged when fresh, higher order venation prominulous on both surfaces, margins irregularly broadly crenate, base cordate, with a wide to narrow but usually open sinus, apex acute to obtuse or rounded, green when fresh with scattered tan stellate pubescence on both surfaces, densely so along veins and adaxial surface, trichomes 0.2-0.4 mm in diam. with (2-)8-16 rays, abaxial surface with principal vein axils domatiate with dense tufts of tan to white trichomes 0.2-0.3 mm long; petioles 3.5-6 cm long, green or sometimes red-tinged, pubescent with dense white to tan stellate trichomes as on branchlets; stipules lanceolate to subulate, 2-3.5 mm long, apex acute, green, sparsely to densely tan or white stellate pubescent, soon caducous. Flowers solitary, axillary, erect to spreading, pedicels 22-30 mm long, green or sometimes red-tinged, densely white to tan stellate pubescent as in petioles; involucral bracts 5-6(-7), linear-subulate to lanceolate (rarely spathulate), acute to acuminate apically, connate only at base, 9-22 mm long, 1-2 mm wide at base, erect, appressed or spreading perpendicular to the floral axis in anthesis, green, densely tan or white stellate pubescent with trichomes 0.2-0.3 mm in diam.
Calyx tubular-saccate, mostly 5-lobed, tube 22-30 mm long, 19-20 mm wide, the lobes triangular, acute to short-acuminate, 5-10 mm long, 7-8 mm wide, green, surface obscured by dense tan stellate pubescence as in bracts, in mature fruit splitting along one side but persistent. Corolla zygomorphic, adaxially curved, 5-6.5 cm long, lobed nearly to base, lobes coalescent, 6-6.5 cm long, 3.5-4 cm wide, obovate-spathulate, apex obtuse, tips and outer margins slightly reflexing with age, outer exposed portion purple, purple-green or purple-yellow, inner concealed portion yellow, conspicuously veined, densely covered with gray or tan stellate trichomes especially along veins, internally yellow or purple-tinged distally, purple toward base, corolla usually becoming purplish with age, staminal column and apex of the style exserted for 1.5-2.5 cm; staminal column 8-8.5 cm long, maroon-purple, antheriferous in distal 3.5 cm, stamens c. 100, anthers reniform-curved, 0.8-1.5 mm long, purple, filaments 6-12 mm long, purple, pollen grains purple turning golden yellow after anther dehiscence; style 8.5-9 cm long, style branches 3-5 mm long, villose, stigmas rounded, c. 1 mm long, yellow, ovary dome-shaped, 8 mm long and wide. Fruit a woody capsule, globose-cuboid to ovoid, 5-locular, 5-valved, 2.5-3.5(-4) cm long, 2.2-3.3 cm in diameter, surface yellowish brown, rough, densely covered with tan stellate hair clusters, appearing tuberculate; mericarps 10, mesocarp well developed, reticulate, endocarp chartaceous, loose, with scattered long hairs, testa brown. Seeds 1-2 per mericarp, reniform, 8-10 mm long, 6-8 mm wide including the dense, lanate, yellowish-tan hairs 0.4-1 mm long.
Habitat and ecology. Hibiscadelphus stellatus occurs on very steep, rocky slopes between 800 and 900 m elevation. These sites have a windward aspect and are situated mid-slope between the upper rim of a deep valley and a perennial stream below. Soils at these sites are of typical volcanic, basalt origin, from the Wailuku Series of original shield-building flows. The vegetation where H. stellatus grows forms a mosaic of trees and shrublands with an open canopy, best characterized as Lowland Mesic Forest (Wagner et al. 1999). Rainfall averages from 12 to 1400 mm annually and the substrate is well-drained.
Etymology. Stellatus: Latin, star-shaped, alluding to the stellate pubescence that characterizes the Malvaceae in general, including Hibiscadelphus. The name also refers to the "star-shaped" pattern formed by the five involucral bracts, which contrasts with the cruciform pattern formed by the four bracts in H. crucibracteatus. Additionally, stellatus acknowledges the beautiful and stellar (outstanding) flowers of this species. The Hawaiian name hau kuahiwi has been applied to other species of the genus (Rock 1913): hau (Hibiscus tiliaceus L.), a lowland tree; kuahiwi, literally mountain or high hill (Pukui and Elbert 1986). Hawaiians recognized the similarities of the taxa while observing that Hibiscadelphus grows at higher elevations.
Conservation efforts. The conservation status of Hibiscadelphus is precarious at best. Three species (H. crucibracteatus, H. giffardianus, and H. wilderianus) were each only known from a single naturally occurring tree (Hobdy 1984; Rock 1913). However, H. giffardianus survives in cultivation and is planted within the type locality at Kipuka Puaulu in what is now Hawai`i Volcanoes National Park. Hillebrand provided no information on the abundance or scarcity of H. bombycinus when he first collected it but the species is presumed extinct. Hibiscadelphus crucibracteatus is presumed extinct in the wild since the single known tree died a few years after its discovery from damage by introduced axis deer (Axis axis) despite it being fenced; there is no ex situ material although there were several attempts at propagation (R. Hobdy, pers. comm.). Hibiscadelphus woodii was known from four individuals, but evidently has recently gone extinct (Wood 2012). There are no plants in cultivation despite attempts to propagate it. Hibiscadelphus hualalaiensis is considered extinct in the wild as of 1992 but is in cultivation. Hibiscadelphus wilderianus is also presumed extinct. Although Rock mentioned that Wilder (who discovered the species with Rock, later returning and making several additional collections from the only known tree) had succeeded in raising a single seedling (Rock 1913), no surviving material is known. Hibiscadelphus distans is known from two wild populations of approximately 15-20 individuals total on Kaua`i, and over 100 ex situ collections at the McBryde and Limahuli gardens of the National Tropical Botanical Garden (NTBG). With 99 known plants, H. stellatus has the largest known wild populations plus the only known naturally occurring seedlings of any species in the genus.
Seeds were collected from 12 individuals of H. stellatus representing the three known subpopulations. The subpopulations were mapped with GPS and each individual plant numbered and tagged. Cuttings from three plants were also made although these failed to take root. Material is being propagated at the Olinda Rare Plant Facility on Maui, NTBG on Kaua`i and the Lyon Arboretum on O`ahu. The first seeds germinated in conventional propagation approximately 50 days after sowing and under three weeks in tissue culture. As of May 2013 four parent trees from two sites are represented ex situ, with seeds from four additional trees in the third site now in propagation at Olinda and Lyon.
Threats to the existence of Hibiscadelphus stellatus include habitat erosion, fire, weeds, drought, probably rats (Rattus rattus, R. exulans) (Baker and Allen 1978) and mice (Mus domesticus), slugs such as Deroceras and Limax or other invertebrates such as seed weevils (Giffard 1920) and caterpillars (Lorence and Wagner 1995), and potentially feral goats (Capra hircus) and/or pigs (Sus scrofa). Small populations of feral goats and pigs are encroaching in surrounding areas, although the West Maui Mountains Watershed Partnership is constructing strategic fencing. In 2007 a large wild fire burned within 180 m of the plants; succession of its habitat presently includes non-native fire-adapted grasses that were absent before the fire. Erosion is a natural process but is exacerbated by invasion by weeds and ungulates and the destruction of vegetation by fire. Woody non-native plants are currently low in diversity and number, but are represented by known aggressive, habitat-modifying species such as Grevillea robusta A. Cunn. ex R. Br., Lantana camara L., Psidium guajava L., and Schinus terebinthifolius Raddi. Herbaceous understory weeds are similarly low in number of taxa but include serious habitat modifiers such as Adiantum hispidulum Sw., Beauv., all of which may hinder establishment of seedlings.
Conservation status. When evaluated using the IUCN Red List criteria (IUCN 2013) Hibiscadelphus stellatus falls into the Endangered (EN) category, a designation for taxa facing a very high risk for extinction in the wild. The species merits this designation by meeting the following criteria: B2(a)(biii, v) + D, where the area of occupancy (AOO) is less than 500 km² (B2), with severely fragmented or number of locations <5 (a), and a continuing decline observed, estimated, inferred or projected in (biii) quality of habitat and (bv) number of mature individuals; and D: <250 mature individuals. Although there is some reproduction observed, there is not a sufficient population structure that will allow enough immature plants to replace mature individuals as they perish, therefore a decline is almost a certainty under current conditions. The AOO is 2.28 hectares (5.63 acres), much less than the threshold. The habitat is inferred to be in decline due to the effects of introduced taxa such as invasive plants and rats, as well as the effects of introduced rats and diseases on pollinators. Continued monitoring over the next five years will possibly lead to an updated assessment to CR. Discussion. This new species clearly belongs to Hibiscadelphus based on its flowers that have their corolla lobes coalescent into a curved, tubular zygomorphic structure. Hibiscadelphus stellatus differs from its congeners in the following combination of characters: moderate to dense stellate pubescence on all parts; involucral bracts 5 (-7) in number that are linear-subulate to lanceolate, 9-22 mm long, and acute to acuminate apically; 5-lobed calyx with tube 22-25 mm long and lobes 5-8 mm x 7-8 mm; externally purplish-colored corolla 5-6.5 cm long; and globose-cuboid to ovoid capsules with scattered hairs on the endocarp. The species of Hibiscadelphus can be separated by the following key. | 2018-04-03T03:24:14.697Z | 2014-07-25T00:00:00.000 | {
"year": 2014,
"sha1": "8039f0edc2cc2b7b31dcb808a2e27af57507feda",
"oa_license": "CCBY",
"oa_url": "https://phytokeys.pensoft.net/article/1536/download/pdf/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8039f0edc2cc2b7b31dcb808a2e27af57507feda",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
119264000 | pes2o/s2orc | v3-fos-license | Comparative simulations of Fresnel holography methods for atomic waveguides
We have simulated the optical properties of micro-fabricated Fresnel zone plates (FZPs) as an alternative to spatial light modulators (SLMs) for producing non-trivial light potentials to trap atoms within a lensless Fresnel arrangement. We show that binary (1-bit) FZPs with wavelength (1 µm) spatial resolution consistently outperform kinoforms of spatial and phase resolution comparable to commercial SLMs in root mean square error comparisons, with FZP kinoforms demonstrating increasing improvement for complex target intensity distributions. Moreover, as sub-wavelength resolution microfabrication is possible, FZPs provide an exciting possibility for the creation of static cold-atom trapping potentials useful to atomtronics, interferometry, and the study of fundamental physics.
Introduction
Atom interferometry is a powerful tool for precise measurements and metrological technologies. It can be used for a wide range of applications, from the determination of fundamental constants and cosmological phenomena [1,2] to navigation applications such as accelerometers and gyroscopes [3][4][5]. Developments in laser cooling, trapping and atom manipulation have allowed a wide range of atom interferometers to be developed [1][2][3][4][5][6][7][8], and for the exploration of light based atom traps [9][10][11][12][13][14][15]. Optical traps can offer a method of production for much more complex micrometer scale traps such as atomtronic optical circuits [16].
Toroidal trapping of cold atoms for use as atom circuits has many applications beyond interferometry [17,18], such as the study of persistent currents in superfluids [19][20][21], and low-dimensional atomic systems [22,23]. However, trapping ultra-cold atoms requires a very smooth trap, as the presence of very small perturbations in a potential can result in heating of a cold atom cloud or fragmentation of a trapped Bose-Einstein condensate [24]. Within previous demonstrations of all-optical ring trapped BECs, the azimuthal variation of the ring minimum was far below the chemical potential of the BEC, with these rings produced through a variety of methods such as painted potentials [25] or combinations of confining light sheets with shaped light, for instance, Laguerre-Gaussian beams [19][20][21]26], co-axial focused beams [18], or conical refraction based beams [27]. To successfully produce trapping potentials for BEC, we must aim to match or surpass the above limit on azimuthal variation, thus aiming to produce traps of µK depth with a roughness of below 1%.
There are many methods which can be used to produce tailored optical potentials, ranging from acousto-optic beam deflection techniques [13,28] to holographic phase manipulation using a phase adjustable spatial light modulator (SLM) [12,14,15,31] or digital micromirror device (DMD) [32,33]. To date, the holographic method has proved to be very adaptable, paving the way for the production of novel optical lattices for quantum simulation [34], dark spontaneous-force optical traps [35] and exotic Laguerre-Gauss modes [36][37][38]. Despite these successes, SLM holography for atom trapping still remains an imperfect and computationally intensive technique, notwithstanding significant improvement in the iterative algorithms used [14,15,39]. This is due to a combination of system aberrations, low spatial resolution, dead space between pixels, and the difficulty of creating an algorithm that converges on a solution suitable for atom trapping (i.e. smooth and without background light which could cause low loading rates or tunnelling out of the trap [14]) without lowering light usage efficiency.
Fresnel Zone Plates (FZPs) work by spatially modulating either the amplitude or phase of a light beam, resulting in interference of the optical field after propagation; by design of the modulated region one can in principle then produce an arbitrary optical pattern, or trapping potential for atomtronics. The prototypical FZP is one that acts as a lens, resulting in a focused spot in the selected focal plane (z = f). While the operation of such an FZP is standard in the teaching literature of diffraction, we find it intuitive to briefly consider the FZP required to generate a single focus, shown diagrammatically in figure 1 a). We make use of the time/direction symmetry of linear optics by starting from the desired result and finding the full electric field pattern at a defined plane. Our goal is now to create an optical element, the FZP, that matches an input beam, for example an idealised plane wave, to the field pattern that we produced in the plane. The FZP can then be considered the hologram generated by a plane wave and the backward-propagating field from the focus. For a binary FZP, we obtain a two-level map of the phases of the electric field in the plane of the FZP required to generate the desired focus. In the next section we discuss in detail the theory and numerical methods to implement this. This type of plate (Fig. 1) consists of alternating Fresnel zones forming concentric rings that alternate between the chosen binary states at radii r_j = √(jλf + (jλ/2)²), where j can take any integer value, f is the focal length, and λ is the wavelength of the incident light. Successive rings can be blocked, allowing only those that constructively interfere at the target plane to propagate. Alternatively, a phase shift of π can be added to otherwise 'destructive' zones, increasing the useful power at the focal plane.
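As a concreteness check on the zone construction above, the following sketch (our own Python/NumPy illustration, not the authors' code; the function names and the 1 mm focal length are assumptions) computes the zone-boundary radii and builds the corresponding binary 0/π phase map:

```python
import numpy as np

# Zone-boundary radii of a focusing zone plate: the j-th boundary sits where the
# extra path to the focus equals j*lam/2, giving r_j^2 = j*lam*f + (j*lam/2)^2.
def zone_radii(j, lam, f):
    return np.sqrt(j * lam * f + (j * lam / 2) ** 2)

def binary_fzp(n_pix, pixel, lam, f):
    """Binary (0 or pi) phase map of a focusing FZP, centred on an n_pix grid."""
    x = (np.arange(n_pix) - n_pix / 2 + 0.5) * pixel
    X, Y = np.meshgrid(x, x)
    r = np.hypot(X, Y)
    # Zone index: how many half-wavelengths of extra path each radius adds.
    j = np.floor((np.sqrt(f ** 2 + r ** 2) - f) / (lam / 2)).astype(int)
    return np.where(j % 2 == 0, 0.0, np.pi)  # even zones pass, odd zones shifted

lam = 1.064e-6                        # 1064 nm, as in the simulations
plate = binary_fzp(512, 1e-6, lam, 1e-3)
```

Blocking the odd zones instead of phase-shifting them would recover the amplitude plate described above, at the cost of roughly half the transmitted power.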
Figures 1 b), c) demonstrate an envisaged transmissive binary FZP etched into a substrate, with consecutive zones that would be completely out of phase experiencing an increased optical path length. A similar approach can be used to make straight waveguides with a linear symmetric FZP pattern, or to create arbitrary FZP-like patterns by recording the phase of a near-field diffraction pattern. In this work we will calculate and simulate phase plate patterns, or kinoforms, for single foci, rings, and beam splitters, as shown in figure 2. These target intensity distributions have been chosen due to their applicability to cold-atom trapping and atomtronics. The single focus allows both the calculation and propagation methods to be evaluated and compared to the simplest FZP model, whereas the ring allows for comparison of this method to existing toroidal traps, which are the simplest nontrivial closed-loop circuits. In order to extend the simulations to consider complex elements for atom optics we finally consider a beam splitter, as such an element is essential as a building block to create a circuit type interferometer.
We anticipate that microfabricated FZPs will overcome many of the limitations posed by the use of SLMs in atom trapping experiments. The higher spatial resolution and sharper edges between pixels presents the ability to reach higher spatial frequency and thus produce a wider range of more accurate holograms. Additionally, due to their size and transmissive operation, we expect that FZPs should be placed inside a vacuum chamber (as with the grating MOTs shown in [9,11]), thus immediately addressing the major system aberration of propagation through a vacuum chamber window. Further information will be discussed pertaining to the nature of FZPs in future sections.
Simulation methods
The phase patterns required to produce the optical traps shown in figure 2 are calculated using a Fourier-optics method of modelling the propagation of an initial electric field E⁽⁰⁾ = E(x, y, z = 0) to a distance z. This uses the angular spectrum of the field, A⁽⁰⁾, and the Helmholtz propagator, H = exp(ik_z z), such that E(x, y, z) = F⁻¹[A⁽⁰⁾H], where the z-component of the wave vector is k_z = √(k² − k_x² − k_y²) for an electric field with wavenumber k = 2π/λ [29,30].

Figure 2. The target intensity distributions used to simulate a range of potentials useful to atomtronics and interferometry; a), b), and c) show a focused spot, a ring and a beam-splitter, respectively. These simulation distributions are formed of Gaussians with 1/e² widths of 2 µm (or 5 µm for the focus) and ring radii of 200 µm; however, for visibility, the distributions shown above have a larger width and are cropped to show only the 600 µm×600 µm area around the non-zero intensity.

Figure 3. The target (T) electric field distribution is propagated backwards a distance f using Fourier techniques and maximum spatial resolution (4096×4096). The electric field is spatially averaged over a variable-size (larger) grid of pixels, then separated into phase (φ_-f) and amplitude (I_-f) components, with the phase rounded to 1-8 bit resolution. The kinoform is then illuminated to create an image.

We use this method, following the details in [29], and references therein, to complete the design algorithm shown in figure 3; firstly, a
target intensity is calculated and then propagated backwards, using equation 2, by the focal length. The phase of the resulting electric field in this plane is rounded to the desired bit depth, as discussed later in the text. This routine acts to calculate the required kinoform, and the performance of the result is tested numerically by simulating a desired input beam (either a plane wave or a Gaussian beam with defined width) that is then propagated forward by the focal length. Our method of simulation means that the pixel sizes of the kinoform and simulation (the electric field) are independent. Although we set the input beam and target plane to have flat phase fronts, we allow for phase freedom in the resultant distribution. As we are not utilising a feedback algorithm, our method intrinsically avoids the presence of optical vortices, which can be confirmed through observations of simulation results. We consider the case in which the kinoform acts as a transmissive element and the incident light only illuminates the patterned area. It should also be noted that no optimisation is used to improve the kinoform. This full Helmholtz propagation method is computationally efficient and accurate, reducing the possibility of fringing artifacts in comparison to the paraxial approximation utilised in many hologram calculations as also highlighted in Ref. [14].
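The propagation and kinoform-design steps described above can be sketched as follows; this is a minimal NumPy illustration of the angular-spectrum method and the backpropagate-and-round-phase algorithm, with hypothetical function names rather than the authors' implementation (evanescent components are simply discarded here):

```python
import numpy as np

def propagate(E0, pixel, lam, z):
    """Angular-spectrum propagation of a square field E0 over a distance z."""
    n = E0.shape[0]
    k = 2 * np.pi / lam
    kx = 2 * np.pi * np.fft.fftfreq(n, d=pixel)
    KX, KY = np.meshgrid(kx, kx)
    kz_sq = k ** 2 - KX ** 2 - KY ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    H = np.exp(1j * kz * z) * (kz_sq > 0)   # Helmholtz propagator; drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(E0) * H)

def kinoform_phase(target, pixel, lam, f, levels=2):
    """Back-propagate the target field by f and keep only the rounded phase."""
    E_back = propagate(target, pixel, lam, -f)
    phi = np.angle(E_back)                  # in (-pi, pi]
    step = 2 * np.pi / levels
    return np.round(phi / step) * step      # levels=2 gives a binary FZP-type plate
```

Illuminating this phase map with the chosen input beam and calling `propagate` with +f then reproduces the forward-simulation step of the algorithm.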
To evaluate the success of each kinoform, we calculate the root mean squared (RMS) error for the normalised two-dimensional final and target intensities, RMS = √[(1/N) Σ (Ĩ − T̃)²], where N is the number of pixels (in the simulation), Ĩ is the final intensity, and T̃ is the target intensity distribution; both intensity distributions are normalised by the mean of the pixels in T that are brighter than 50% of the maximum value [31].
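Under the normalisation just described, the metric might be computed as below (a sketch under the assumption, one reading of the text, that the mask of bright target pixels is used to normalise both distributions):

```python
import numpy as np

def rms_error(image, target):
    """RMS error between normalised image and target intensity distributions."""
    bright = target > 0.5 * target.max()     # mask of bright target pixels
    I = image / image[bright].mean()         # normalise both over the same mask
    T = target / target[bright].mean()
    return np.sqrt(np.mean((I - T) ** 2))
```

With this normalisation a perfect reproduction scores zero regardless of overall intensity scale.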
The target distributions we have chosen to simulate are shown in figure 2: a) a focus with Gaussian waist (e⁻² radius) w₀ = 5 µm; b) a ring of radius r = 200 µm and radial Gaussian waist w_r = 2 µm; c) a beam splitter formed from straight segments and radii as given in b), again with waist w_b = 2 µm.
Laser parameters of 2 mW (30 mW) power at a wavelength of 1064 nm were used for the focus (ring and beam splitter) simulations as these parameters give trap depths of a few µK. Moreover, trap frequencies are 2 kHz in the direction of tightest confinement, which is higher than existing ring shaped dipole potentials [18,20,21,25,26,42,43] and permits access to lower dimensional regimes. The ring radius is larger than these previous demonstrations to increase its applicability to interferometry applications, where sensitivity scales with the enclosed area.
Within the simulations, we run the calculations for a wide range of kinoform pixel sizes and phase resolutions (or bit depths), allowing the comparison of binary FZP-type kinoforms with a simulated pixel size of 1 µm to 8-bit SLM-type kinoforms with simulated pixel sizes of 12 µm or more. The 12 µm pixel size corresponds to the state of the art for SLMs, which have an effective area of approximately 2 cm², whereas FZPs can be manufactured with pixel sizes as small as 10 nm and with large total areas of up to 25 cm² [9]. Despite these evident spatial advantages for FZPs, one must remember that SLMs typically operate with 8-bit precision and are updatable, whereas FZPs, by their very nature, are static, with only two levels of phase control. Both technologies are already being utilised for trapping, in the form of optical tweezers [10,40].
Throughout the simulation process, the electric field propagation is calculated to a resolution of a wavelength with a simulation area of 4.38×4.38 mm² (2¹²λ × 2¹²λ), limited solely by the reverse propagation technique and computation memory requirements. For illumination by Gaussian beams, the choice of the input beam e⁻² radius, w(z), is determined by the desired focal length and the Gaussian width w₀ of the desired features through w(z) = w₀√(1 + (z/z_R)²), with Rayleigh length z_R = πw₀²/λ. We do note that these computation limitations mean that the active area is smaller than, if comparable to, typical SLM active areas.
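For illustration, a small helper evaluating the Gaussian-beam radius used to choose the illumination width (the function name is our own; the 1064 nm default follows the laser parameters above):

```python
import math

def beam_radius(w0, z, lam=1.064e-6):
    """e^-2 radius of a Gaussian beam a distance z from a waist of radius w0."""
    z_R = math.pi * w0 ** 2 / lam      # Rayleigh length
    return w0 * math.sqrt(1 + (z / z_R) ** 2)
```

At one Rayleigh length from the waist the radius has grown by a factor of √2, so a tight focus at a distant kinoform plane demands a correspondingly wide illumination beam.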
Results & Discussion
Maps of RMS error, calculated in the simulations using equation 3, are shown in figures 4 and 5. For all three target patterns and illumination beams (except the plane focus), there is a clear increase in RMS error with increasing pixel size and decreasing bit depth. The simulations also show that a two level FZP consistently has an RMS error lower than that of a kinoform comparable to an SLM. In addition, we can note that, at low pixel size, increasing the bit depth from 2 to 4 level phase resolution significantly reduces the RMS error, thus improvements in microfabrication techniques would significantly increase the accuracy of the FZP kinoforms by allowing for non-binary phase. Examples of the calculated kinoforms for FZPs illuminated by a Gaussian and producing a ring and beam splitter are shown in figure 6.
The RMS error map, shown in figure 4, for a focus kinoform illuminated by a plane wave clearly shows an unexpected increase in RMS error at high phase (high bit depth) and spatial resolution (small pixel size). In this area of higher RMS error, we observe that the optical power is concentrated in a tighter focus than the target 5 µm e⁻² radius. This can be understood because each pixel of the kinoform is illuminated equally, unlike Gaussian optics where a concomitant Gaussian illumination of the optical element is required. We can explore the consequences of this by considering three of the contributors to the RMS error: phase resolution error, spatial resolution error, and illumination error. In our algorithm all spatial intensity information from backpropagation is lost and replaced by the intensity information of the illumination beam, whereas the phase information loss is only limited by the pixel size and phase resolution. At large pixel size and low phase resolution, these sources of error dominate over the intensity error, but at high resolutions, the lost intensity information becomes dominant. It is a standard result in Gaussian optics that a smaller focus diverges more rapidly than a larger focus, meaning that the tighter the focus desired, the larger a kinoform or lens should be used, such that the numerical aperture can be increased. Conversely, this means that the size of the illuminated area of the kinoform, rather than the phase across it, affects the size of the focus produced. So, for the plane wave case, the illumination is more similar to that required for a smaller focus than 5 µm.

Figure 5. Plot of RMS error for kinoforms of varying spatial and phase resolution, illuminated by Gaussian beams of optimised widths, shown logarithmically. The obtained intensity distributions for the lowest RMS error, typical FZP, and typical SLM are labelled by the triangle, 5-point star and 7-point star respectively. Line graphs of intensity versus radial position are shown below the full intensity plots. For the focus and ring, the area around the (symmetrical) brightest region is shown at an appropriate scale. The equivalent for the beam splitter shows the intensity distribution along the vertical line of symmetry, with the peak offset from the distribution centre indicating the position of split.

Figure 6. Fresnel Zone Plates calculated for producing a ring and a beam splitter using Gaussian beam illumination (as highlighted by the 5-point star in figure 5). The inset shows the central section of the kinoform, magnified to allow the zone plate features to be easily seen. Note that the outer regions of the zone plates appear grey due to pixel dithering where the Fresnel zones would be smaller than a pixel. The pure black area denotes the masked area, where the plate is non-transmissive or light is blocked. The off-centre appearance of rings in the ring kinoform is an artefact of the finite simulation pixel size.

We do not see this in
the Gaussian illumination simulations, figure 5, due to the Gaussian weighting of the intensity at the kinoform.
As one can see from the RMS errors shown in figures 4 and 5, accuracy of intensity reproduction is reduced with pattern complexity and for distributions with less obvious symmetries: reproduction of the beam splitter is much less accurate than for either the focus or the ring. Both the ring and the focus have been masked to form circularly symmetric kinoforms, meaning that artefacts caused by the square shape of the active area are reduced; however, the reduced symmetry of the beam splitter makes this process more complex. The masking makes pixels outside of a desired area completely dark, thus creating an active area of illuminated pixels and excluding pixels which cause aberrations. In the beam-splitter case, we were able to use the symmetry properties of a straight waveguide Fresnel zone plate to shape the active area appropriately, thus blocking light incident more than a certain distance from the centre of the intensity lines. This technique greatly improved accuracy, but requires further fine-tuning to allow the approach to be applied to an arbitrary intensity pattern. However, we note that if the appropriate spatial distribution of the incident field, with a flat phase front, can be produced at the kinoform, then the errors would rapidly tend to zero, as for the single focus in the upper plots of figure 5. Indeed, producing such a large scale pattern is well suited to the coarser resolution of an SLM, suggesting that SLMs and FZPs can be used together synergistically.
In all the error maps, particularly for the Gaussian illumination, we see nonmonotonic variations in the errors between consecutive pixel sizes. This is due to aliasing between the three length scales involved in the kinoform design calculations: the length scale of phase change, the simulation pixel size (λ), and the kinoform pixel size. Due to the involvement of three length scales we were not able to reduce this roughness with suitable choice of any of these values. The roughness in RMS error is less pronounced for plane wave illumination as the overall RMS error is higher and so this aliasing is less prominent.
We can also note that the discontinuous nature of the example beam splitter has also increased the error in its production; this led us to use a target that reached the edges of the simulation area to avoid such issues. In a useful intensity distribution for atomtronics, one would want to produce a target intensity with no discontinuities (i.e. a closed-loop circuit), such as a ring with a beam splitter at either end for use in interferometry; hence, the discontinuity based artifacts and errors are not critical to the success of these simulations.
In order to demonstrate the applications of the hologram method of optical trap generation (particularly the potential for three dimensional trapping), we have demonstrated propagation through the focus of the ring distribution in figure 7. This is shown both for the best kinoform and for an FZP, with the average intensity of the ring minimum at each distance shown as a scatter plot alongside the full intensity distributions. Both cases demonstrate a full-width-half-maximum (FWHM) in the propagation direction of 20 µm, similar to that expected for a focussed Gaussian beam. Azimuthal plots of intensity are shown as line graphs in figure 5, allowing for intensity noise to be seen. We can note that the intensity distributions in the case of the focus and ring are too narrow to show any noise due to the pixel size of the simulation; however, we can see significant noise along the vertical waveguide section of the beam splitter. The beam splitter noise is largely due to beating between the vertical and horizontal sections of the waveguides and could be minimised with more careful target distribution design.
In the simulations of RMS errors in figures 4 and 5 we adopted a compromise position whereby we compared both target and image distributions across the whole grid size. This means that even the background wings (i.e. non-target zone) of the intensity distribution - which could affect the atomtronic circuit loading efficiency - contribute to the error. However, for a given application one may be mainly interested in a subset of the image and target, e.g. the pixel region containing the top 50% of the target intensity distribution. This region is where the coldest atoms would be trapped and in this case it makes sense to modify equation 3 to only consider pixels in this zone. Moreover, one should then adapt Ĩ, the final intensity, and T̃, the target intensity distribution, so that the intensity distributions are independently normalised by their maximum value over the pixels in T which are brighter than 50% of the maximum value. This gives a more realistic estimate of the in-situ trap roughness, which can be seen in figure 8. The lowest RMS error, typical FZP, and typical SLM have corresponding errors of 0.0%, 3.7% and 32.7%, respectively, for a ring shaped target. In this situation, rather than the plane wave/Gaussian illumination considered in figures 4 and 5, the hologram is illuminated by its ideal spatial intensity distribution - a realistic assumption we elucidate on in our conclusions.
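A sketch of this modified, in-trap error metric (our own naming; it restricts the sum to pixels where the target exceeds half its maximum and normalises each distribution by its own maximum on that mask, while the constant-rescaling variant mentioned for figure 8 is omitted for simplicity):

```python
import numpy as np

def in_trap_rms(image, target):
    """RMS error over only the pixels where the target exceeds half its maximum,
    each distribution normalised by its own maximum on that mask."""
    mask = target > 0.5 * target.max()
    I = image[mask] / image[mask].max()
    T = target[mask] / target[mask].max()
    return np.sqrt(np.mean((I - T) ** 2))
```

Because the background is excluded, this metric reports the roughness the coldest trapped atoms would actually experience, rather than being diluted by dark pixels.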
Outlook and Conclusion
By calculating and simulating kinoforms for foci, rings and beam splitters, we have shown that under most circumstances spatial resolution is much more critical than the bit depth of the hologram. Specifically, we demonstrated that, in the lensless Fresnel regime, FZPs with wavelength spatial resolution consistently show improved root mean square error over kinoforms of spatial and phase resolution comparable to commercial SLMs, which are typically 8-bit with 12 µm pixels. FZP kinoforms become increasingly superior for complex target intensity distributions, indicating their suitability for use to produce static atomtronic circuits for trapping ultracold atoms. This is accompanied by the illustration of 3D trapping capabilities through propagation of a ring shaped potential through its focus. By extension of a FZP from a binary kinoform to a 4-level kinoform, the fidelity of intensity distributions can be greatly increased, showing the potential of these kinoforms to improve with increasing micro-fabrication capabilities [44].
Despite the success of these simulations, they are limited to wavelength resolution due to the Helmholtz propagation method used and to a size of 4.38×4.38 mm² by the memory requirements of the simulation. The calculation process explicitly does not include an algorithm for iterative error correction of the kinoform, meaning that both FZP and SLM RMS errors may find improvements with the use of algorithms similar to those used in [14,31].
Future work will extend this method of kinoform calculation to include an optimisation algorithm, whilst the manufacture of potentially useful FZPs will allow for predictions to be tested experimentally. There is also great potential for combining the strengths of different techniques: a laser incident on a DMD pattern could be re-imaged onto the FZP in order to provide flat-phase-front, spatially-tailored intensity illumination (assumed in figure 8), with RMS errors substantially reduced beyond those from plane wave/Gaussian illumination. Whilst only red-detuned (bright) dipole potentials were considered in this paper, extension to blue-detuned (dark) traps should be straightforward; however, such patterns rely more heavily on destructive interference, which is likely to impinge on the smoothness of the final patterns. Additionally, FZPs should lend themselves to future extension work involving multi-wavelength hologram production following a similar approach to that shown in [45].

Figure 8. For the ring-shaped target, image (a) is a demonstration of how the RMS error is modified if one considers only the grid points in which the target is within 50% of the maximum intensity. The target is normalised to its maximum value within this pixel range, and the image is scaled by a constant which minimises the RMS error. Note the much higher overall error, as the large background content of the image can give a false impression of pattern smoothness. The lowest RMS error, typical FZP, and typical SLM are labelled by a triangle, 5-point star and 7-point star, with corresponding errors of 0.0%, 3.7% and 32.7%, respectively. In image (b), for benchmarking, we consider a complex target 'OR' gate (150 × 280 µm²) similar to that used in Refs. [14,15]. Note that in this case the lowest RMS error, typical FZP, and typical SLM are labelled by a triangle, 5-point star and 7-point star, with corresponding errors of 0.0%, 17.4% and 19.0%, respectively. Such values appear high; however, it is important to consider the small target size, and that there is no additional hologram optimisation. The phase profile across the target is flat in all cases, with no observable vortices.
For the dataset associated with this paper please see [46].
Management Share Proprietory and Firm Value: Are There Peculiar Determinants?
The study examined the specific factors that influence management shareholdings and how these affect firm value in Ghana. Specifically, it sought to establish the relationship between managerial ownership and firm value, whether that relationship is non-linear, and how it is affected by firm size and firm growth opportunities. Using secondary data from the Ghana Stock Exchange covering 2015 to 2019, ten financial institutions (banks) and ten non-financial institutions (non-banks) were purposively selected for the study, considering data availability and accessibility. Inferential statistics were adopted in analysing the data collected. The study found a weak positive linear relationship between firm value and management shareholdings in the organization. The findings on the second research question revealed that the bond between management shareholdings and firm value is non-linear, though with no clearly describable shape, fluctuating instead. The study further found a moderately weak positive association between firm value and management shareholdings in the presence of firm size. The findings on the final research question revealed that the connection between firm value and management shareholding is influenced by firm growth opportunities. The study recommends the development of a more robust and parsimonious model for examining the association between management ownership share and firm value, to improve on the strength and nature of the relationship revealed by the weak correlation and coefficient-of-determination values.
Background to the Study
Several studies hypothesize a connection between management ownership and firm characteristics, particularly when assessing firm value. Share ownership refers to holding shares in an organization. Firm value has remained a central concern of companies, particularly in meeting objectives such as expanding the value of the firm or shareholders' wealth, maximizing profits, and minimizing costs; it is therefore necessary for each organization to evaluate its financial performance or value and make it accessible to the clients, owners, shareholders, or partners of the organization. Firm value helps in determining the overall health of a business over a specified period and can be used to compare firms in a similar industry or to analyse entire ventures or sectors. Firm value can be assessed in a variety of ways, one of the most common being the use of financial ratios. These are useful indicators of a business's performance and financial situation, revealing information about an organization's operations, for example the ratio of current assets to current liabilities or the ratio of debtors to turnover.
However, a firm's performance, which is a measure of its value, depends largely on the management of the firm. If these managers are also owners or shareholders of the firm, then potential conflicts of interest between managers and investors remind us of the need for a suitable degree of managerial ownership that aligns board decisions with shareholders' interests. Earlier research has demonstrated a relationship between managerial ownership, financial decisions, and firm value in developed markets. For instance, an analysis of the impact of managerial ownership on capital structure decisions, using listed civilian-run Chinese firms over 2002-2007, found that managerial ownership drives capital structure into a non-linear shape, but in the opposite direction to the shape of the managerial-ownership effect on firm value. The results of simultaneous regressions suggest that managerial ownership influences capital structure, which in turn influences firm value. The impact of managers being owners on firm value therefore warrants specific study in corporate finance (Denis & McConnell, 2003). It is commonly recognized that the preferences of firms and owners are not fully aligned and are likely to give rise to agency problems that reduce firm value.
A series of papers has examined corporate governance in emerging (or transition) markets, with an emphasis on the relationship between company value and ownership structure (see Lemmon & Lins, 2003; Wei, 2005). Most of these prior studies discovered a non-linear relationship between managerial ownership and firm value across a wide range of developing markets, indicating that organizations and insiders can participate in reallocating the wealth of other investors. Corporate insiders in China's listed organizations usually control the business by swaying votes in their favour, but also via non-monetary means, for example employee utilization and the creation of organizational slack. Numerous other studies demonstrate that management ownership influences corporate value, because equity held by the board may motivate managers to make financial decisions to their own advantage or to the advantage of investors, resulting in a decline or an increase in firm value (Morck et al., 1988; Morck & Steier, 2007; McConnell & Servaes, 1990; Short & Keasey, 1999; De Miguel et al., 2004). Hence, the viability of businesses may be related to the degree to which managers are also owners of the company.
This study is motivated by the fact that the board of directors is seen as a powerful internal corporate governance instrument. The board's powers range from directing managerial activities and deciding the level and structure of top-management remuneration to replacing poorly performing directors. Studies consistently find that poor firm performance increases the probability of a change in the top management team. Weisbach (1988) reports that this outcome holds for firms with a board dominated by outside executives and attributes it to successful monitoring by external directors. Nonetheless, the connection between firm performance and board turnover is fairly weak, which has led several authors, including Jensen and Murphy (1990) and Hermalin and Weisbach (1988), to question its economic significance.
These investigations reflected widespread dissatisfaction with purely rational, technical approaches to executive decision-making, planning, and execution. Today, many issues confront board share ownership and firm value, among them differing management styles, personal propensities, financial considerations, commitment levels, and disparities in skills and roles. These challenges and attributes on the part of the board and management can work for or against the company's firm value. According to Assibey-Yeboah (2017), in the Ghanaian banking sector crisis some owners lent to themselves, relatives, or family and friends, among others, leaving most firms insolvent. This, therefore, is the issue this study attempts to address. The paper is divided into five parts. It begins with an introduction that gives context for the research subject and the study's limitations. Section 2 examines the literature on management share proprietory and firm value and its peculiar determinants. Section 3 covers the research methodology, whereas Section 4 presents the data analysis results. Finally, Section 5 discusses the paper's findings, implications, and future research issues, completing the paper's goal.
Significance of the Study
This part of the paper outlines the extent to which the study will be beneficial to society and the country at large. The study helps to resolve the concerns managers have about shareholders and to correct wrong perceptions about shareholders in society; it helps to investigate and establish the effect of shareholders on the firm; it aids managers in making informed decisions about with whom to share ownership and firm value; it contributes to improving future studies on management share ownership and firm value; and it assists society in becoming aware of the kinds of firms whose shares are worth buying.
Managerial Ownership
Managerial ownership of a portion of the company's capital would tend to align managers' interests with shareholders' returns while lowering agency costs. Accordingly, Bekiris (2013) and Cziraki, Renneboog and Szilagyi (2010) emphasize that when managerial ownership reaches a high level, the agency problem is largely mitigated owing to the alignment between managers and shareholders, with the result that the higher managerial ownership is, the less shareholder activism there is.
Firm Size/Worth and Ownership
The association between corporate governance and institutional investors has garnered considerable attention in the literature. Aggarwal, Erel, Ferreira and Matos (2011) found that institutional ownership significantly contributes to upholding good corporate governance. On the other hand, Bushee, Carter and Gerakos (2010) showed that ownership by governance-responsive institutions in the United States is associated with potential enhancements in shareholder rights. According to Chung and Zhang (2011), the percentage of a business's shares held by institutions grows significantly in lockstep with governance quality.
Likewise, McCahery et al. (2016) discovered that institutional investors place a premium on corporate governance and that many are ready to participate in shareholder activism. Empirical research in this field shows that activism contributes to creating additional value when the target is big. Cai and Walkling (2011) and Renneboog and Szilagyi (2011) have hypothesized that activist shareholders are more inclined to target big companies; indeed, funds often believe that activism valuation is more straightforward in big businesses.
Firm Growth and Performance
According to Sahut and Othmani Gharbi (2011), this beneficial effect on company performance is consistent with institutional investors' active behaviour. On the other hand, many other writers have shown a detrimental impact of institutional investors on company performance. Huynh (2010) and Gantchev, Gredil and Jotikasthira (2015) expanded on the lack of a connection between company performance and shareholder interest. Hadani et al. (2011) and Goranova et al. (2017) investigated the connection between shareholders' proposals, earnings management, and institutional growth as measured by the book-to-market ratio, and found that low corporate growth encourages shareholders to put pressure on leaders in such situations.
Linear Interaction of Firm Worth and Managerial Proprietorship
Empirically, Ruan, Tian and Ma (2011) examine this question for Chinese civilian-run firms; their metrics are distinct from those for Korean companies in Baek et al. (2004) or Joh (2003). Rather than confounding the two opposing effects, their article independently examines the convergence-of-interest and entrenchment hypotheses. Empirical findings indicate that, given control rights, there is no clear relationship between firm value and inside management ownership for the majority of firms with less than 42 percent inside management ownership, whereas a positive relationship exists between firm value and inside management ownership for those firms with more than 42 percent inside management ownership.
Non-Linear Interaction of Firm Worth and Managerial Proprietary
Ekadjaja et al. (2019) examine the ownership structure of a company, including managerial ownership, institutional ownership, foreign ownership, and concentrated ownership, as a predictor of firm value. Managerial ownership is identified and studied for its potential to create an inverted U-shaped relationship, allowing the parabolic impact of managerial ownership to be tested using Tobin's Q; this kind of test cannot be performed on the other three independent variables. The test was conducted on non-financial companies whose shares were listed on the Indonesian Stock Exchange (IDX) from 2000 to 2017. The panel-data regression results indicate that managerial ownership can predict company value, while institutional and foreign ownership cannot.
Indrarini, Chandrarin and Subiyantoro (2019) assess investors' perceptions of a company's degree of performance, which is often correlated with market prices. A rise in share prices indicates an increase in shareholder wealth. Specifically, the study examines the direct and indirect impacts of management ownership on the predictability of profits and company value. Between 2011 and 2016, the population consisted of all manufacturing firms registered on the Indonesian Stock Exchange. The SEM-PLS model was used to evaluate the data. Managerial ownership has a significant impact on earnings predictability, which is a proxy for profit quality. The predictability of earnings has a significant impact on company value. Managerial ownership significantly affects company value, both directly and indirectly, via the predictability of profits. Mandacı and Gumus (2011) investigate the impact of ownership concentration and managerial ownership on the profitability and valuation of non-financial companies listed on the Istanbul Stock Exchange (ISE) in an emerging-market environment. Firm performance was evaluated using the Return on Assets (ROA) and Tobin's Q ratios, the former evaluating profitability and the latter assessing firm value. Additionally, they provide comprehensive information on the primary features of the ownership structures of the businesses in their sample and discover that Turkish firms are highly concentrated in ownership. Unlisted holding companies have the most significant average proportion of shares, corroborating the notion that individuals or families create holding companies to manage their publicly traded businesses. After adjusting for investment intensity, leverage, growth, and size, they show that ownership concentration has a statistically significant positive impact on company value and profitability. In contrast, managerial ownership has a statistically significant negative effect on firm value.
Effect of Firm Size on Management Share Holdings and Firm Value
According to Fahlenbrach, Prilmeier and Stulz (2018), from 1988 to 2003 the average change in managerial ownership was significantly negative for American firms, with managers more likely to decrease ownership when their firms are doing well and to increase ownership when their firms are struggling financially. Increasing management ownership raises Tobin's Q when previous stock returns are controlled for. This finding is driven by increases in officers' shareholdings, whereas increases in director shareholdings seem unrelated to changes in company value. There is little evidence that substantial ownership reductions harm company value, and the authors use the dynamics of the managerial ownership and firm value relation to address endogeneity issues. According to Mandacı and Gumus (2011), the profitability and value of non-financial companies listed on the Istanbul Stock Exchange (ISE) are affected by ownership concentration and management ownership. Return on Assets (ROA) and Tobin's Q ratios are used to assess firm performance. The study also provides comprehensive information on the ownership arrangements of the sample companies, revealing a highly concentrated ownership structure in Turkey; it is also believed that individuals or families set up unlisted holding corporations to control their listed businesses. After adjusting for investment intensity, leverage, growth, and size, it was found that ownership concentration increases firm value and profitability, whereas managerial ownership decreases both.
Does Firm Growth Opportunity Influence Management Share Holding and Firm Value?
Sakawa and Watanabel (2020) examine the role of institutional investors in a stakeholder-centred economy using data from 2924 Japanese firms from 2010-2016. The monitoring role of institutional shareholders, or foreign shareholders, functions well in Japanese corporations, and these monitoring roles are aimed at strengthening firms via higher growth opportunities. In a stakeholder-oriented structure, institutional shareholders help improve company performance and build sustainable corporate governance systems. Shan (2019) reports a negative connection between management ownership and performance; a study using two-stage least squares (2SLS) finds that management ownership does not affect company performance. Also, Sualehkhattak and Hussain (2017) investigate the relationship between leverage, dividend payout, ownership structure, and firm value using correlation analysis and ordinary least squares (OLS) panel-data regressions on 148 non-financial companies listed on the Karachi Stock Exchange (KSE) over five years (2011-2015). The study found a significant positive relationship between leverage and firm worth and a significant negative relationship between dividend payout and firm value, while the interaction between leverage and growth opportunities was insignificant in the connection between leverage, dividend payment, and ownership structure and business growth possibilities. Based upon this extensive review of literature from a variety of scholars on management share ownership and firm value, the premise has been established for the present study, focusing on a developing country such as Ghana, where few studies have been conducted. Hence the limitations, significance, and gaps identified necessitated the study.
Materials and Methods
This section fundamentally covers the type and source of data, the econometric model or method of data analysis, and issues of validity and reliability addressed through model adequacy checking.
Data Collection
The study utilized secondary data, that is, data originally collected for a purpose other than the present research. The data came from organizations listed on the Ghana Stock Exchange (GSE). The market-to-book value of equity and the log of market capitalization were gathered over five years, from 2015 to 2019. This period was chosen because data for more recent years were unavailable; a gap is likely to occur between the time accounts are published and the time the information in those accounts is deemed to have surfaced on the market (Bokpin, 2013). Ten banks and ten non-banks were chosen from 100 business entities based on data availability. Some studies exclude closely regulated, highly geared financial firms from such evaluations because these traits have been shown to affect governance processes (Ntim et al., 2012).
Econometric Modelling of Data
Descriptive and inferential statistics were used in analysing the data gathered for the study. The descriptive analysis involved frequency tables and charts to help identify trends and patterns in the main variables relating to firm value and managerial ownership over the period; summary measures such as means, standard deviations, maximum and minimum values, and the range were computed. The inferential analysis adopted regression and correlation, with analysis of variance for hypothesis testing, to further corroborate the results from the descriptive statistics. The key variable is management share ownership. Control variables are added to the model to account for variation in firm value not attributable to management share ownership and to address endogeneity caused by omitted variables. Following previous research, the study considers government ownership, the age of the firm's listing, its size, debt, and return on equity. Numerous arguments and empirical data support the assertion that these factors exhibit both linear and non-linear correlations with company value (Fiador, 2013). Since firm value is assumed to differ across industries and financial years, industry dummies (INDUST) for the two sectors, namely mining and pharmaceuticals, and year dummies (YD) for the financial years 2015 to 2019 are included. These variables capture unobservable effects such as trend effects and industry-specific effects.
The model is specified as:
y_it = β0 + β1·MANSHARE_it + β′·X_it + λ_i + ε_it
where i = 1, 2, 3, …, N indexes the cross-sectional dimension of companies, t = 1, 2, 3, …, T indexes the financial years, management share ownership (MANSHARE) is the independent variable, X_it is the set of control variables, λ_i represents the unobserved firm-specific fixed effect, ε_it is the error term, and y_it denotes the dependent variable.
Where FV = firm value, assessed in five ways: year-end share price, 3-month and 6-month share prices, market-to-book value of equity, and market capitalization; MANSHARE = percentage of ordinary shares owned by directors at year-end; GOVSHARES = percentage of ordinary shares owned by the government and its institutions; LISTINGAGE = number of years a company has been listed on the GSE; FSIZE = natural log of the book value of total assets at year-end; LEV = total liabilities/total assets at year-end; ROE = earnings after tax and any preference dividends divided by the year-end book value of equity. STATA, together with EXCEL worksheets, was used in the data analysis.
Model Adequacy Checking
In each study, the proper modelling practice is to verify the model by analysing the residuals using different diagnostic tests. These diagnostics cover multicollinearity, autocorrelation, heteroscedasticity, normality, and linearity. To test for multicollinearity, the research employs both Pearson pairwise and Spearman rank correlation coefficients. Additionally, the research examined endogeneity. The existence of endogeneity is determined using the Durbin-Wu-Hausman test, which determines whether the coefficient of board share ownership is statistically significant for all firm value proxies.
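A minimal sketch of the multicollinearity screen described here, using hypothetical stand-in columns for the regressors; scipy's pearsonr and spearmanr compute the two coefficients the study reports.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical stand-in columns for the study's regressors.
manshare = rng.uniform(0, 0.5, 100)                 # MANSHARE
fsize = 8 - 0.4 * manshare + rng.normal(0, 1, 100)  # SIZE, mildly related
fgo = rng.uniform(0.5, 3.0, 100)                    # growth opportunities

def corr_pair(a, b):
    """Return (Pearson r, Spearman rho) for two columns."""
    return stats.pearsonr(a, b)[0], stats.spearmanr(a, b)[0]

for name, col in [("SIZE", fsize), ("FGO", fgo)]:
    r_p, r_s = corr_pair(manshare, col)
    print(f"MANSHARE vs {name}: Pearson={r_p:+.3f}, Spearman={r_s:+.3f}")

# A common rule of thumb flags |r| above about 0.8 as problematic
# multicollinearity; the study's largest observed value was -0.187.
```

Comparing the two coefficients is useful because Spearman's rho is robust to monotone non-linear relationships that Pearson's r can understate.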
Results and Discussions
This part of the paper presents and discusses the results of the data analysis, which is crucial to arriving at the conclusions of the study based on the methodology adopted. The presentation in this section follows the objectives of the study: to investigate the relationship between managerial ownership and firm value; to examine whether the relationship between managerial ownership and firm value is non-linear; to examine whether the relationship between managerial ownership and firm value is influenced by the size of the firm; and to investigate whether the relationship between managerial ownership and firm value is dependent on firm growth opportunities. Finally, it is essential to emphasize, as per the data analysis method contained in the methodology, that both descriptive and inferential analyses were adopted in the analysis, presentation, and discussion of the results.
Descriptive Statistics of Data
Four main variables were involved in the study: FV, MANSHARE, FGO, and SIZE, where the acronyms carry their usual meanings. Various descriptive statistics were computed for these variables: measures of location such as the mean, median, and mode; measures of variation such as the standard deviation, sample variance, and standard errors; and counts. This gives a brief description of the variables and, in particular, supports the identification and detection of missing values or influential observations that could affect the analysis of the data. The results are summarized in Table 1.
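The summary measures listed above can be reproduced on hypothetical data with pandas; the variable names follow Table 1, but all figures below are synthetic.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Hypothetical observations for the four study variables.
df = pd.DataFrame({
    "FV": rng.lognormal(2.5, 0.6, 100),
    "MANSHARE": rng.uniform(0, 0.5, 100),
    "FGO": rng.uniform(0.5, 3.0, 100),
    "SIZE": rng.normal(8, 1, 100),
})

# Measures of location and variation, plus the range, for each variable.
summary = df.describe().T          # count, mean, std, min, quartiles, max
summary["range"] = summary["max"] - summary["min"]
print(summary[["count", "mean", "std", "min", "max", "range"]])

# Missing-value detection, flagged in this section as a pre-analysis check.
print(df.isna().sum())
```

The per-column missing-value counts are the quickest way to spot the gaps that the section warns could distort later regressions.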
Empirical Results of Key Findings
To test for multicollinearity, the research employs both Pearson pairwise and Spearman rank correlation coefficients. The exact results for the multi-collinearity test are contained in Table 2 and discussed as follows.
The essence of the Pearson pairwise correlations is to find out whether the variables are correlated, in other words whether there exists a significant relationship among the independent variables, including the control variables, that would violate the assumption of no multicollinearity. The diagonal entries have a correlation coefficient of one, indicating a perfect relationship, because each represents the correlation of a variable with itself. Since the dependent variable is firm value (FV), its correlations with the remaining variables, that is, managerial ownership share (MANSHARE) as the independent variable and the control variables firm growth opportunities (FGO) and firm size (SIZE), are not at issue here. The results show that the correlations among the independent and control variables were very small, though not zero, the largest being -0.187 between SIZE and MANSHARE. This implies the absence of multicollinearity in the data as far as the independent and control variables are concerned, and further indicates that the data are suitable for this analysis under the adopted methodological approach.
The results for this objective are presented in Table 3 and Table 4 using regression and correlation techniques and are discussed as follows.
The test of the relationship between managerial ownership shares and firm value indicated the existence of a relationship and confirmed its statistical significance, so it is appropriate to model the relationship with an equation capable of generating estimates and forecasts. The model is thus given as Y = 17.453 - 3.9343X, where Y is the firm value and X is the management ownership share. The constant 17.453 is the firm value when management ownership share is set to zero, whilst -3.9343 is the magnitude of change in firm value resulting from a unit change in management ownership share. The p-value of 0.0008 indicates statistical significance at the 0.05 level, further corroborating the earlier conclusion of overall statistical significance from the analysis-of-variance and parameter tests. This result is consistent with the proposition that increasing management share affects the interest of minority shareholders, and supports past studies reporting a negative coefficient of board share ownership on firm value (Ruan, Tian, & Ma, 2011; Sahut & Gharbi, 2011), while being inconsistent with past studies reporting a positive coefficient between ownership share and firm value. This implies that the model obtained can be used to predict firm value, especially in the absence of a better model in this context. The model summary is used to determine the type, strength, and margin of variability of the estimate in terms of the connection between managerial share ownership and company value.
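The quantities reported in this section (intercept, slope, R, R-squared, and the p-value) all come from a single simple-regression fit. A sketch on synthetic data generated around the reported line shows how they are obtained; the sample size and noise level are our assumptions, so the fitted numbers are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic data generated around the reported line Y = 17.453 - 3.9343 X.
x = rng.uniform(0, 0.5, 100)                       # management ownership share
y = 17.453 - 3.9343 * x + rng.normal(0, 0.5, 100)  # firm value with noise

res = stats.linregress(x, y)
print(f"intercept={res.intercept:.3f}, slope={res.slope:.3f}")
print(f"R={res.rvalue:.3f}, R^2={res.rvalue ** 2:.4f}, p={res.pvalue:.4g}")
```

Note that in a simple regression R necessarily carries the sign of the slope, and R-squared is simply the square of R, which is worth bearing in mind when reconciling the positive correlation and negative regression coefficient discussed in this section.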
Since the correlation coefficient is not zero, there is a connection between company value and management ownership share. The correlation coefficient (R) of 0.206 indicates a weakly favourable relationship between company value and management share of ownership. Its positive sign implies that when management owns a stake in the business, it weakly increases the firm's value, suggesting that management share ownership as a governance mechanism has been a boon to the emerging capital market: as the level of management share ownership increases, the market value of the firm increases. This finding contradicts the notion that growing management share ownership has a detrimental effect on minority shareholders' interests, and it runs counter to previous studies reporting a negative effect of board share ownership on firm value, such as Arshad and Javid (2014). The consequence of this finding for the Ghana stock market is that if management owns a portion of the company, it will have a somewhat favourable impact on the firm's valuation. Furthermore, the coefficient of determination (R-squared) of 0.0425 (4.25 percent), which measures the proportion of variation in the dependent variable explained by the independent variable, indicates that management share of ownership explains only about 4.25 percent of the change in firm value. This also implies that about 95.75 percent of the change in company value is due not to changes in management ownership but to other variables not considered in this study. This finding is consistent with the findings of Aggarwal et al.
(2011), who found that institutional ownership significantly aids in the maintenance of good corporate governance, which results in increased company value. Table 4 uses a variance analysis to examine the connection between management share ownership and company value. The table shows that the connection is statistically significant, since the p-value of 0.0008 is less than the significance threshold of 0.05. In the absence of a better model for evaluating the connection between management share ownership and company value, the model should be kept even though the correlation and determination coefficients are low. This conclusion is in line with previous research showing a strong link between board shareholding and company value (Connelly et al., 2012). According to this research and earlier studies, allowing managers to be among the firm's shareholders increases the firm's worth by a statistically significant amount; management must therefore hold stock in the company to increase the firm's value, which is the main reason for its existence. A previous study used a simultaneous-equation model to examine the effect of managerial ownership on firm performance and financial policies (debt and dividend) for 140 listed manufacturing firms in Pakistan (Arshad & Javid, 2014).
Another objective was to examine whether the link between managerial ownership and firm worth is non-linear. A graphical approach, as depicted in Figure 1, is used to show the relationship between the two variables, with firm value as the dependent variable and MANSHARE as the independent variable. From the results in Figure 1, the association between managerial shareholding and firm worth was weakly positive and statistically significant using the correlation coefficient, but negatively related and statistically significant using the regression coefficients. This result supports examining whether the nature of the relationship is non-linear. Based on the frequency curve used in assessing the nature of the bond, it can be seen clearly that the connection is non-linear. This is consistent with past studies which reported a significant curvilinear association between board share ownership and firm value (Connelly et al., 2012). It can therefore be concluded that the link between management share ownership and firm value is non-linear.
However, it is also important to emphasise that, whereas previous studies reported a non-linear association with a U shape, the relationship between management ownership share and firm value found here displays no clearly describable shape. Based on both the current and previous studies, the non-linear link between management share of ownership and firm value may or may not have a describable form.
Next, the study examines whether the bond between managerial ownership and firm value is influenced by the size of the firm. This is explored to ascertain whether firm size could be a useful control variable, since the correlation coefficient and coefficient of determination for the link between management ownership share and firm worth were only 0.206 and 4.6% respectively. This is further assessed using the same regression approach with a control variable, as summarised in Table 5 and Table 6 (Source: Author's Computation, 2021; *p < 0.05 significant). The probability values are less than the level of significance, which further corroborates the earlier conclusion of the statistical significance of the relationship in overall terms, using both the analysis of variance tests and the parameter tests.
Again, the coefficient of determination (R square) of 0.074 indicates that 7.4% of the variation in firm value is accounted for by management ownership shares and firm size together. This further indicates that about 92.6% of the changes in firm value are not accounted for by management share of ownership and firm size, but rather by other factors not included in this analysis. This result is consistent with Aggarwal et al. (2011), who discovered that institutional ownership helps greatly in maintaining effective corporate governance, which then leads to higher firm value. It follows from the result above that the introduction of firm size as a control variable did not necessarily produce an upward adjustment in firm value based on managerial ownership; rather, managerial ownership alone as an explanatory variable had a great influence on firm value. Al-Gharaibeh et al. (2013) examined the impact of ownership structure on corporate dividend policy using a sample of 35 Jordanian corporations listed on the Amman Stock Exchange between 2005 and 2010. The complete adjustment model explained 61.57 percent of the variance in dividends, whereas the partial adjustment model explained just 20.65 percent. For management ownership, the partial adjustment model yielded a negative but significant coefficient. Thus, evaluating managerial ownership share on every element of the business led to an upward rise, highlighting managerial ownership share's significant contribution to the firm.
Based on the test of significance of the relationship as contained in the ANOVA, Table 6 shows that the relationship between management share ownership and firm value, as influenced by firm size, is significant at the 0.05 level. This implies that the model should be maintained in the absence of a better model for examining the relationship between managerial share ownership and firm value as influenced by firm size.
The study further investigates if the relationship between managerial ownership and firm value is dependent on firm growth opportunities. In other words, if firm growth opportunities (FGO) influence the relationship between firm value and management ownership share. Table 7 and Table 8 summarise the results of the various relationships as discussed as follows.
The model of the association between managerial ownership share and firm value, with firm growth opportunities as the control variable, is given as Y = 17.330 − 3.7416X1 + 0.3823X2, where Y is the firm value, and X1 and X2 represent management ownership share and firm growth opportunities respectively. Regarding the coefficients in the model, the constant value of 17.330 represents the initial firm value when management ownership shares and firm growth opportunities are absent, all other factors held constant. The values of −3.7416 and 0.3823 represent the magnitude of change in firm value resulting from a unit change in management ownership share and firm growth opportunities respectively. The probability (p) values again indicate statistical significance at the 0.05 level, since the p-values of 0.001 and 0.023 are both less than the level of significance. This further corroborates the earlier conclusion of the statistical significance of the relationship in overall terms, using both the analysis of variance tests and the parameter tests. From the model summary in Table 7, the correlation coefficient (R) of 0.248 indicates a moderately weak positive association between firm value and management share of ownership in the presence of firm growth opportunities; the correlation coefficient increases from 0.206 to 0.248. This implies that the relationship between firm value and management ownership share is influenced by firm growth opportunities, and hence that an assessment of this relationship should include firm growth opportunities.
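The fitted model can be turned into a small prediction sketch (function and variable names are illustrative, not from the paper; the coefficients are those reported above):

```python
# Sketch of the reported two-predictor model:
#   Y = 17.330 - 3.7416*X1 + 0.3823*X2,
# where X1 is management ownership share (MANSHARE) and X2 is firm growth
# opportunities (FGO).

def predicted_firm_value(manshare: float, fgo: float) -> float:
    """Predicted firm value from the reported regression coefficients."""
    intercept = 17.330
    b_manshare = -3.7416   # coefficient on management ownership share
    b_fgo = 0.3823         # coefficient on firm growth opportunities
    return intercept + b_manshare * manshare + b_fgo * fgo

# With both predictors at zero, the prediction equals the intercept,
# matching the interpretation of the constant given in the text.
print(predicted_firm_value(0.0, 0.0))
```

The negative sign on MANSHARE and the positive sign on FGO reproduce the directions of the effects discussed in the text.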
Because the R square is 0.061, management ownership shares and firm growth opportunities together account for 6.1% of the variance in firm value, implying that 93.9 percent of the variation in company value is not jointly accounted for. However, the modest influence of managerial ownership on firm value is consistent with earlier findings that expansion has a favourable impact on managerial ownership. Theoretically, expansion should increase management ownership; it shows that managers favour investing in fast-growing companies (Din & Javid, 2011). Using another variable as a control in assessing the impact of management share of ownership on firm value yielded a moderately weak positive influence on firm value while yielding a negative coefficient. As previously reported, board share ownership has a negative impact on company value, while management share ownership has a favourable impact.
Based on the test of significance of the relationship as contained in the ANOVA, Table 8 shows that the relationship between management share ownership and firm value, as influenced by firm growth opportunities, is significant at the 0.05 level. As a result, the model can be maintained in the absence of a better model for examining the relationship between managerial share ownership and firm value as influenced by firm growth opportunities.
Conclusion
To begin, all four (4) variables included in the study, namely firm value (FV), management ownership share (MANSHARE), firm growth opportunities (FGO), and firm size (size), were relevant because they aided in measuring what the study intended to measure in terms of the assessment of emerging market capital markets via descriptive statistics such as mean, median, and mode. Additionally, regarding the study's first aim, which was to examine the connection between managerial ownership and company value, the findings indicated the presence of a relationship between firm value and managerial ownership share. Thus, the study concludes that the relationship was weak regardless of sign and could only account for 4.26 percent of the changes in the dependent variable, with approximately 95.74 percent of the change in firm value as per the data and analysis performed being accounted for by factors other than management share of ownership. Additionally, the research finds that when the connection between managerial ownership and firm value is examined as a function of company size, a substantial association exists between management ownership share and firm value as a function of firm size. Thus, about 92.6 percent of the changes in company value were accounted for by variables other than management ownership share and firm size. The conclusion related to the final objective, which was to determine whether the relationship between managerial ownership and firm value is dependent on firm growth opportunities, is that in the presence of firm growth opportunities, a moderately weak positive association between firm value and management share of ownership was also observed.
Recommendations
The following suggestions are offered in light of the study's results and conclusions. The research suggests developing a more robust and sparser model to examine the connection between management ownership share and company size, to enhance the strength and nature of the association indicated by the low correlation and coefficient of determination values. Second, the research proposes strengthening the connection by expanding the data or changing the variables, to clearly demonstrate whether the link between management ownership and company value is non-linear. Additionally, it is recommended that additional control variables be included in the model, as only firm size as a control variable revealed a significant relationship between management ownership share and firm value; this would yield a model that more fully explains the dynamics of the relationship for prudent decision making. Finally, and in line with the conclusion for the final objective, it is suggested that the connection between management ownership and company value be remodelled to include other control factors in addition to firm growth opportunities. This is because the two control variables added to the model improved the values of the model coefficients, resulting in a more accurate model than the models without control variables.
"year": 2021,
"sha1": "b30d0fc9634985c62e29a175d79a325bcab1a5fe",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=112621",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "cd6f399e0cc18930e246073a4b196960c49e3c7c",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
Finsler-type modification of the Coulomb law
Finsler geometry is a natural generalization of pseudo-Riemannian geometry. It can be motivated e.g. by a modified version of the Ehlers-Pirani-Schild axiomatic approach to space-time theory. Also, some scenarios of quantum gravity suggest a modified dispersion relation which could be phrased in terms of Finsler geometry. On a Finslerian spacetime, the Universality of Free Fall is still satisfied but Local Lorentz Invariance is violated in a way not covered by standard Lorentz Invariance Violation schemes. In this paper we consider a Finslerian modification of Maxwell's equations. The corrections to the Coulomb potential and to the hydrogen energy levels are computed. We find that the Finsler metric corrections yield a splitting of the energy levels. Experimental data provide bounds for the Finsler parameters.
I. INTRODUCTION
A widely expected consequence of a (still-to-be-found) theory of quantum gravity is a small modification of General Relativity. Such a modification may be encoded in a scalar-tensor theory, as emerges from the low-energy limit of string theory, leading e.g. to a violation of the Universality of Free Fall [1,2]. Other consequences might be that, in addition to the metric, there could be a further geometric field like torsion, leading to an effective Riemann-Cartan geometry.
Another modification of the usual pseudo-Riemannian geometry is Finsler geometry, which has already been discussed as an effective geometry describing quantum gravity effects, see e.g. [3]. The idea of Very Special Relativity [4] can also be described in terms of a Finslerian geometry [5].
Finsler geometry is a framework which still respects the Universality of Free Fall but violates Local Lorentz Invariance. The way in which Local Lorentz Invariance is violated is beyond usual Lorentz Invariance Violation schemes like the χ−g formalism [6], the THεμ framework [7] or the Standard Model Extension [8]. Furthermore, though the Universality of Free Fall is valid in a Finslerian setting, gravity cannot be transformed away locally [9], that is, there is no Einstein elevator. On a more basic level, a Finslerian geometry may result from a relaxed version of the Ehlers-Pirani-Schild axiomatics [10] by not requiring the world-function to be twice differentiable.
Therefore, in view of considering all possible deviations from standard Riemannian geometry reflecting effects from quantum gravity, and in view of more fundamental issues, it might be of general interest to study further consequences of Finsler geometry. Since electromagnetic phenomena provide very precise tools for exploring the geometry of space-time, in this paper we will set up a generalization of Maxwell's equations in a Finslerian space-time and derive possible consequences for atomic physics which can be compared with experiments.
A. Positive definite Finsler structures
The central idea of Finsler geometry was already proposed by Riemann in his famous habilitation lecture devoted to the geometry of curved manifolds [11]. In parallel to the (Riemannian) geometry based on a second-rank symmetric non-degenerate metrical tensor $g_{\alpha\beta}(x)$ with the line element $ds^2 = g_{\alpha\beta}(x)\,dx^\alpha dx^\beta$, Riemann briefly discussed a geometry based on a fourth-rank totally symmetric tensor $g_{\alpha\beta\gamma\delta}(x)$ with the line element

$$ds^4 = g_{\alpha\beta\gamma\delta}(x)\,dx^\alpha dx^\beta dx^\gamma dx^\delta. \qquad (2.1)$$

An intensive study and a further generalization of this type of geometry was given by Finsler [12] in 1918 in his dissertation. Finsler geometry is based on a Finsler function F(x, y) that assigns a length to each curve. One requires that F(x, y) is positively homogeneous of degree one, to make sure that the length of a curve is independent of its parametrization, and that the Finsler metric (2.4) is positive definite for all $y \neq 0$.
The unparametrized geodesics of a Finsler geometry are the extremals of the length functional

$$\int F\Big(x(s), \frac{dx(s)}{ds}\Big)\, ds, \qquad (2.2)$$

where the endpoints are kept fixed. The affinely parametrized geodesics are the extremals of the "energy functional"

$$\frac{1}{2}\int F\Big(x(s), \frac{dx(s)}{ds}\Big)^2\, ds, \qquad (2.3)$$

where the endpoints and the parameter interval are kept fixed. The Finsler metric is defined as

$$g_{\mu\nu}(x,y) = \frac{1}{2}\,\frac{\partial^2 F(x,y)^2}{\partial y^\mu\, \partial y^\nu}. \qquad (2.4)$$

Riemannian geometry is, of course, a special case of Finsler geometry, characterized by the additional property that the metric $g_{\alpha\beta}$ is independent of y.
The theory of positive definite Finsler metrics, which is detailed e.g. in [13] and [14], has several applications to physics, where the underlying manifold is to be interpreted as three-dimensional space, so the Greek indices take values 1, 2, 3. E.g., the Lagrangian of a charged particle in a magnetostatic field is given by a Finsler function of the Randers form

$$F(x,y) = \sqrt{h_{\mu\nu}(x)\, y^\mu y^\nu} + A_\mu(x)\, y^\mu,$$

where $h_{\mu\nu}(x)$ is a Riemannian metric (i.e., positive definite) and $A_\mu(x)$ is a one-form. It can be shown that the corresponding Finsler metric (2.4) is, indeed, positive definite for all $y \neq 0$ provided that F(x, y) > 0 for all $y \neq 0$, see [14], Section 11.1. To mention another example, light propagation in a time-independent anisotropic medium is characterized by two positive definite spatial Finsler metrics [15,16]. If these two metrics coincide (i.e., if there is no birefringence), they are necessarily Riemannian [17,18]. Positive definite Finsler metrics have also been used for describing the propagation of seismic waves, see e.g. [19].
B. Finsler structures of Lorentzian signature
In applications to space-time physics, the Euclidean signature of the metric must be replaced by a Lorentzian signature. Following Beem [20], this can be done by considering, instead of the function $F(x,y)^2$, a Lagrangian L(x, y) that may take positive, zero and negative values. (Notice that it is the square of the Finsler function that enters into the definition of the metric tensor (2.4).) More precisely, a Finsler structure of Lorentzian signature is a function L(x, y) that is positively homogeneous of degree two,

$$L(x, \lambda y) = \lambda^2 L(x,y) \quad \text{for all } \lambda > 0, \qquad (2.7)$$

and for which the Finsler metric

$$g_{ij}(x,y) = \frac{1}{2}\,\frac{\partial^2 L(x,y)}{\partial y^i\, \partial y^j}$$

is non-degenerate and of Lorentzian signature for all $y \neq 0$. (Actually, it is recommendable to relax the latter condition by requiring the conditions on the Finsler metric to hold only for almost all $y \neq 0$, see [21].) In applications to physics, the underlying manifold is to be interpreted as space-time, so the Latin indices take values 0, 1, 2, 3. The homogeneity condition (2.7) implies that

$$L(x,y) = \frac{1}{2}\, g_{ij}(x,y)\, y^i y^j. \qquad (2.8)$$

The affinely parametrized geodesics of such a Finsler structure are, by definition, the extremals of the "energy functional"

$$\int L\Big(x(s), \frac{dx(s)}{ds}\Big)\, ds.$$

The homogeneity condition assures that L is a constant of motion, so the geodesics can be classified as timelike (L < 0), lightlike (L = 0) and spacelike (L > 0).
III. MAXWELL'S EQUATIONS ON A FLAT FINSLER SPACE-TIME
In this section we discuss how Maxwell's equations must be modified if the underlying space-time is Finslerian. We mention that there are different views on this issue, see e.g. Pfeifer and Wohlfarth [22] for an alternative approach. We follow a line of thought that was sketched already in the appendix of [21]. Our guiding principles are that the electromagnetic field strength should be a field on space-time (and not on the tangent bundle, as in [22]), and that the lightlike Finsler geodesics should be the bicharacteristics (i.e., the "rays") of Maxwell's equations.
A. Flat Finsler space-times

As in this paper we are interested in laboratory experiments, where space-time curvature plays no role, we assume that the underlying Finsler structure is flat. We prescribe this Finsler structure in terms of a Lagrangian, following Beem's definition. The flatness assumption means that we can choose the coordinates such that the Lagrangian is independent of x,

$$L(x,y) = L(y). \qquad (3.1)$$

Here and in the following, Latin indices take values 0, 1, 2, 3 and Greek indices take values 1, 2, 3. As a consequence of (2.7) and (2.8), the Finsler metric is homogeneous of degree zero,

$$g_{ij}(\lambda y) = g_{ij}(y) \quad \text{for all } \lambda > 0, \qquad (3.2)$$

and its derivative is totally symmetric,

$$\frac{\partial g_{ij}(y)}{\partial y^k} = \frac{\partial g_{ik}(y)}{\partial y^j} = \frac{\partial g_{kj}(y)}{\partial y^i}. \qquad (3.3)$$

We will later assume that $g_{ij}(y)$ is a small perturbation of the Minkowski metric, but in this section we will not need this specification.
B. Hamiltonian vs Lagrangian formalism
Recall that the lightlike geodesics of our Finsler structure are the extremals of the functional (2.5) with L(x, y) = 0. In the case at hand, where L is assumed to be independent of x, the lightlike geodesics are the straight lines $x^i(s) = a^i + y^i s$ with L(y) = 0. To characterize these curves in terms of a Hamiltonian, rather than in terms of a Lagrangian, we introduce the canonical momenta

$$p_i = \frac{\partial L(y)}{\partial y^i} \qquad (3.4)$$

and the Hamiltonian

$$H(p) = p_i y^i - L(y). \qquad (3.5)$$

In (3.5), the $y^i$ must be expressed in terms of the $p_j$ with the help of (3.4). The non-degeneracy of the Finsler metric guarantees that this can be done for all $y \neq 0$. With (3.1), (3.3) and (3.2) we see that (3.4) can be written more explicitly as

$$p_i = g_{ij}(y)\, y^j. \qquad (3.6)$$

Thereupon, the Hamiltonian (3.5) reads

$$H(p) = \frac{1}{2}\, g^{jk}(p)\, p_j p_k,$$

where $g^{jk}(p)$ is the inverse of $g_{jk}(y)$, with the $y^i$ expressed in terms of the $p_i$ by (3.4). In accordance with (3.2) and (3.3), $g^{jk}(p)$ is homogeneous of degree zero and its derivative with respect to the momenta is totally symmetric. The Hamiltonian H is homogeneous of degree two with respect to p, i.e.

$$p_k H^k(p) = 2\, H(p), \qquad (3.11)$$

where we have introduced, as an abbreviation,

$$H^k(p) = \frac{\partial H(p)}{\partial p_k} = g^{kl}(p)\, p_l. \qquad (3.12)$$

The lightlike Finsler geodesics (i.e., the lightlike straight lines in the case at hand) are the solutions to Hamilton's equations with H(p) = 0.
where we have introduced, as an abbreviation, The lightlike Finsler geodesics (i.e., the lightlike straight lines in the case at hand) are the solutions to Hamilton's equations with H(p) = 0.
C. Modified Maxwell's equations
If the space-time metric is the unperturbed Minkowski metric, $g_{jk} = \eta_{jk}$ where $\eta_{jk} = \mathrm{diag}(-1,1,1,1)$, Maxwell's equations read

$$\partial_j F_{km} + \partial_k F_{mj} + \partial_m F_{jk} = 0, \qquad (3.13)$$

$$\eta^{kl}\, \partial_l F_{kj} = \mu_0\, J_j. \qquad (3.14)$$

Here the two-form $F_{kj}$ is the electromagnetic field strength, $J_j$ is the current density and $\mu_0$ is the permeability of the vacuum. If the current is given, (3.13) and (3.14) give a system of first-order partial differential equations for the electromagnetic field strength.
If we replace the Minkowski metric η kl with our flat Finsler metric g lk (p), we see that there is no reason to modify (3.13) because it does not involve the metric. As to (3.14), it is most natural to replace where i is the imaginary unit and g kl (−i∂) stands for the expression that results if in g kl (p) the p j are replaced with −i∂ j = −i∂/∂x j . As g kl (p) is not in general a polynomial in the momentum coordinates, g kl (−i∂)∂ l is not in general a differential operator but rather a pseudo-differential operator. (For background material on pseudo-differential operators see e.g. [23].) With the replacement (3.15), the Maxwell equation (3.14) becomes a pseudo-differential equation, By (3.12), this equation can be equivalently rewritten as As the current and the field strength are both real, the operator iH k (−i∂) should map real functions to real functions. This is the case if the Hamiltonian is even, H(−p) = H(p), i.e., if the homogeneity property (2.7) is true also for negative λ. If this condition is satisfied, (3.13) and (3.17) determine a perfectly reasonable dynamical system for the field strength if the current is given. Note that if H satisfies the property we may write and (3.17) is manifestly real. The Hamiltonians (4.2) and (4.9) to be considered below both satisfy (3.18), where in the case of (4.2) the correct branch of the square-root, i 4/2 = −1, has to be chosen.
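The defining property of such a pseudo-differential operator is that it acts in Fourier space as multiplication by its symbol: $g^{kl}(-i\partial)$ multiplies the Fourier transform of a field by $g^{kl}(k)$. A minimal one-dimensional numerical sketch of this principle, using the hypothetical symbol $a(k) = |k|$ (illustrative only, not the paper's operator; like $g^{kl}(p)$, it is not a polynomial in the momenta, so it does not correspond to any differential operator):

```python
import numpy as np

# A pseudo-differential operator a(-i d/dx) acts on f by multiplying the
# Fourier transform of f by the symbol a(k) and transforming back.

n, L = 256, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # integer wave numbers on [0, 2*pi)

def apply_symbol(f, symbol):
    """Apply the pseudo-differential operator with symbol a(k) to samples f."""
    return np.fft.ifft(symbol(k) * np.fft.fft(f))

# On a plane wave exp(i*k0*x) the operator must act as multiplication by a(k0).
f = np.exp(1j * 3 * x)          # plane wave with k0 = 3
g = apply_symbol(f, np.abs)     # symbol a(k) = |k|
print(np.allclose(g, 3 * f))    # True
```

The same Fourier-multiplier picture is what makes the plane-wave analysis of the modified Maxwell equations below well defined even though the operator is non-local in position space.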
To support our claim that (3.13) and (3.17) are the correct Finsler versions of Maxwell's equations, we apply the operator ∂ m to (3.17) for the case that J j = 0, (3.20) By (3.13), this can be rewritten as The second term vanishes because of J m = 0. Using (3.11) we find that F jm satisfies a generalized wave equation, If we solve this equation with a plane-wave ansatz for the electromagnetic field, we find that the wave covector k l has to satisfy the equation To give further support to this claim, we now demonstrate that (3.17) can be brought into a form which is adapted to the formalism of premetric electrodynamics, cf. [24]. To that end we have to show that (3.17) can be rewritten as where the excitation H ml is related to the field strength F kj by a certain constitutive law. We write (3.17) in the equivalent form of (3.16) and we apply the pseudodifferential operator g mj (−i∂). Then we obtain Since g kl is independent of the x i , this can be rewritten as with a constitutive operator This form is equivalent to the original equation (3.17).
In particular, for g ij = η ij we return to the standard Maxwell vacuum electrodynamics on Minkowski spacetime. We have, thus, put our modified Maxwell equations in the premetric form, where the constitutive law involves the pseudo-differential operator (3.28). An important advantage of the premetric formulation is that, quite generally, (3.25) together with the antisymmetry of H kl immediately implies charge conservation, ∂ m J m = 0. The homogeneous part of Maxwell's equations (3.13) is automatically satisfied if we express the electromagnetic field in terms of a potential, We mention in passing that then the inhomogeneous part (3.27) can be derived from the action (3.31) where one has to take into account that the operator κ klij (∂) commutes with the variational derivative.
In the following we will be interested in static fields.
We denote the four components of the potential by $(A_0 = -V/c, A_1, A_2, A_3)$ and the four components of the current density by $(J_0 = -c\rho, J_1, J_2, J_3)$. Then (3.32) can be rewritten, with the help of (3.11), as

$$2\, H(-i\partial)\, V = \frac{\rho}{\varepsilon_0}, \qquad (3.33)$$

where $\varepsilon_0$ is the permittivity of the vacuum and we have used that $c^{-2} = \varepsilon_0 \mu_0$. If the metric is the unperturbed Minkowski metric, we have of course $2H(-i\partial)V = -\triangle V$ where $\triangle$ is the ordinary Laplacian. (3.33) is the Finslerian modification of the Poisson equation that determines the electrostatic potential V of a static charge density ρ. This is the only equation from Finslerian electrodynamics that we will need in the following.
A. A Finsler perturbation of Minkowski space-time
We further specify our Finsler structure by assuming that the Hamiltonian (3.5) is a small perturbation of the standard Hamiltonian on Minkowski space-time. The latter reads We restrict to the case that the Finsler perturbation affects the spatial part only. The simplest non-trivial ansatz for such a perturbation is a square-root of a fourth-order term, where φ µνρσ is totally symmetric. (A similar perturbation of Minkowski spacetime was considered in [25].) We assume that the Finsler perturbation is so small that we can linearize all equations with respect to the φ µνρσ . Then the Hamiltonian simplifies to (4.3) We will now demonstrate that the trace part of φ µνρσ can be eliminated with the help of a coordinate transformation. To that end, we decompose φ µνρσ in the form whereφ µνρσ is totally symmetric and trace-free. Then (4.3) can be rewritten as After a linear coordinate transformation, with φ µνρσ totally symmetric and trace-free. A totally symmetric fourth-rank tensor in three dimensions has 15 independent components. The trace-free condition allows to express 6 of them in terms of the other ones, e.g.
so we are left with 9 independent Finsler perturbation coefficients.
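The component counting above can be checked mechanically with standard combinatorics (a quick illustrative sketch, not code from the paper):

```python
from itertools import combinations_with_replacement

# A totally symmetric fourth-rank tensor in three dimensions is determined by
# one value per non-decreasing index combination:
symmetric = list(combinations_with_replacement(range(3), 4))
print(len(symmetric))          # 15 components before the trace condition

# The trace delta^{mu nu} phi_{mu nu rho sigma} is a symmetric second-rank
# tensor in (rho, sigma), so the trace-free condition imposes one constraint
# per independent index pair:
trace_conditions = list(combinations_with_replacement(range(3), 2))
print(len(trace_conditions))   # 6 conditions

print(len(symmetric) - len(trace_conditions))  # 9 independent coefficients
```

This reproduces the counting in the text: 15 symmetric components minus 6 trace conditions leaves 9 independent Finsler perturbation coefficients.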
B. The modified Coulomb field
With the Hamiltonian (4.9) inserted into (3.33), we want to find the solution where the source is a point charge at rest. The equation we have to solve reads Here and in the following we write (4.12) We look for a solution to (4.11) in the form where the first term on the right-hand side is the standard Coulomb solution of the unperturbed problem. As we agreed to linearize all equations with respect to the Finsler coefficients φ αβµν , it is sufficient to determine ψ to within this approximation. Then ψ must satisfy the equation Applying the Laplacian to this equation gives a linear fourth order PDE, The right-hand side of this equation is easily calculated, Under these circumstances we can guess the solution of (4.16) to be of the form Note that we cannot add terms proportional to φ αβγδ δ αβ x γ x δ or φ αβγδ δ αβ δ γδ because these terms vanish. The biharmonic operator applied to (4.17) gives By comparing (4.18) with (4.16) we obtain C = −3q(16πε 0 ) −1 . Thus the solution of (4.15) is Consequently, we have the scalar potential of the point source in the form In spherical coordinates this expression reads where φ αβγδ f αβγδ (θ, ϕ) = φ 1111 sin 4 θ cos 4 ϕ +φ 1112 sin 4 θ cos 3 ϕ sin ϕ + φ 1113 sin 3 θ cos θ cos 3 ϕ +φ 1122 sin 4 θ cos 2 ϕ sin 2 ϕ + φ 1123 sin 3 θ cos θ cos 2 ϕ sin ϕ +φ 1133 sin 2 θ cos 2 θ cos 2 ϕ + φ 1222 sin 4 θ cos ϕ sin 3 ϕ +φ 1223 sin 3 θ cos θ cos ϕ sin 2 ϕ +φ 1233 sin 2 θ cos 2 θ cos ϕ sin ϕ + φ 1333 sin θ cos 3 θ cos ϕ +φ 2222 sin 4 θ sin 4 ϕ + φ 2223 sin 3 θ cos θ sin 3 ϕ +φ 2233 sin 2 θ cos 2 θ sin 2 ϕ + φ 2333 sin θ cos 3 θ sin ϕ +φ 3333 cos 4 θ .
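To get a feel for the angular dependence of the correction, one can evaluate the quartic form along the radial unit vector n(θ, ϕ) numerically. The sketch below (illustrative, not from the paper) contracts a perturbation tensor with four copies of n; note that the full contraction sums over all index permutations, whereas the expansion printed above lists each independent component once, so the two conventions differ by multiplicity factors for mixed-index entries. For "diagonal" entries such as φ3333 they agree, which is the case checked here:

```python
import numpy as np
from itertools import product

def unit_vector(theta, phi):
    """Radial unit vector in spherical coordinates."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def angular_factor(phi_tensor, theta, phi):
    """Full contraction phi_{abcd} n^a n^b n^c n^d for direction (theta, phi)."""
    n = unit_vector(theta, phi)
    return sum(phi_tensor[idx] * n[idx[0]] * n[idx[1]] * n[idx[2]] * n[idx[3]]
               for idx in product(range(3), repeat=4))

# A perturbation with only phi_3333 nonzero yields cos^4(theta), matching the
# last term of the expansion quoted in the text (indices here are 0-based).
phi_tensor = {idx: 0.0 for idx in product(range(3), repeat=4)}
phi_tensor[(2, 2, 2, 2)] = 1.0
theta = 0.7
print(np.isclose(angular_factor(phi_tensor, theta, 0.3), np.cos(theta)**4))  # True
```

Such a numerical evaluation is a convenient cross-check when assembling the full anisotropic correction from a given set of coefficients.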
Here we have added to the potential term a Finsler correction according to our results from the preceding section, and we have added to the Laplacian the same correction as in the electrodynamic equations, cf. (4.11). The latter assumption is based on the idea that the Finsler perturbation modifies the underlying geometry such that particles and light are affected in the same way. As an alternative, one might speculate that there are two different Finsler modifications of the space-time structure, one for particles and one for light. This would amount to a Finslerian bimetric theory. We will not investigate such a more complicated theory here but rather stick with (5.1). However, we mention that the order-of-magnitude estimates of the following calculations remain true for the more general (bimetric) theories as long as the perturbation of the Laplacian term does not exceed the corresponding term in (5.1) by several orders of magnitude.
To give further support to our Schrödinger equation (5.1), we demonstrate that it comes about as the nonrelativistic limit of a modified Klein-Gordon equation. The free Klein-Gordon equation in a Finsler space-time is naturally given by where H is the 4-dimensional Hamiltonian. This can also be derived from an action principle. In our model, We want to derive the non-relativistic limit of this Finslerian Klein-Gordon equation. For that we use the formalism described in [26]. We make an ansatz where the wave function is given by an exponential function of a sum of terms of different orders of c −2 , As the Finsler coefficients are small, this implies ∂ µ S 0 = 0, i.e., S 0 can only be a function of time, S 0 (x) = S 0 (t). The next order, c 2 , yields the equation which possesses the solutions where, for physical reasons, we do not consider the plus sign. The equation of next order, c 0 , gives for the function Φ 1 (x) = e i S1(x) the equation of motion This represents the free Schrödinger equation in our Finsler space-time. Coupling to an electrostatic potential V will be performed through which gives us the time-dependent Schrödinger equation with coupling to an electrostatic potential, Upon inserting for V our expression for the perturbed Coulomb potential, the time-independent Schrödinger equation (5.1) results from a separation ansatz Φ 1 (x) = Ψ r e −iEt/ . Note that in (5.1) the radial variable r can be separated from the angular variables θ and ϕ exactly as in the ordinary theory. The two angular variables, however, cannot be separated from each other.
B. Finsler modified energy levels
We want to determine the bound states and the energy levels by the perturbation method to within linear order in the Finsler coefficients φ αβγδ . This will give us the splitting of the hydrogen spectral lines as produced by the Finsler perturbation. Of course, as we are considering the simple Kepler problem as the unperturbed situation, this splitting is to be viewed on top of all the other (fine-structure and hyperfine-structure) splittings of the hydrogen spectral lines which are well understood.
We denote the unperturbed bound states of the Coulomb potential by $\Psi_{nlm}(\mathbf{r})$, where $a_0$ is the Bohr radius, the $L_p^q$ are the generalized Laguerre polynomials and the $Y_l^m$ are the spherical harmonics. The quantum numbers n, l and m take the values

$$n = 1, 2, \dots; \quad l = 0, \dots, n-1; \quad m = 0, \dots, \pm l. \qquad (5.13)$$

The corresponding unperturbed eigenvalues are

$$E_n = -\frac{\hbar^2}{2\, m_e\, a_0^2\, n^2}. \qquad (5.14)$$

The first-order corrections to the eigenvalues are determined by the matrix elements of the Finsler perturbation between these states. The first scalar product on the right-hand side can be calculated more easily in the momentum representation, where $\tilde\Psi_{nlm}(\mathbf{p})$ is the Fourier transform of $\Psi_{nlm}(\mathbf{r})$, which is given in [27] in terms of the Gegenbauer polynomials $C_s^k$ (5.17). We now calculate the necessary matrix elements one by one to determine the perturbations of the lowest energy levels.
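For orientation, the unperturbed Bohr levels and their degeneracies, which set the scale on which the Finsler splitting sits, can be tabulated in a few lines. This is a standard textbook computation, not specific to the paper; the Rydberg value used is the usual ≈ 13.6057 eV:

```python
# Unperturbed hydrogen energy levels: E_n = -R_y / n^2, with the n-th level
# n^2-fold degenerate (l = 0..n-1, m = -l..l), before any perturbation.

RYDBERG_EV = 13.605693  # Rydberg energy in eV (approximate)

def energy_level(n: int) -> float:
    """Bohr energy of the n-th level, in eV."""
    return -RYDBERG_EV / n**2

def degeneracy(n: int) -> int:
    """Number of (l, m) states sharing the energy E_n."""
    return sum(2 * l + 1 for l in range(n))

print(round(energy_level(1), 3))                    # -13.606 (ground state)
print(degeneracy(2))                                # 4 (fourfold degenerate n=2)
print(round(energy_level(2) - energy_level(1), 3))  # 10.204 eV (Lyman-alpha)
```

The fourfold degeneracy of n = 2 is exactly what allows the Finsler perturbation to split that level into up to four sublevels, as computed below.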
The ground state, n = 1, is non-degenerate. Under the Finsler perturbation, its energy value is shifted in first-order perturbation theory by the corresponding diagonal matrix element; calculation of this matrix element, using the trace-free condition, yields the shift of $E_1$. The next level, n = 2, is fourfold degenerate in the unperturbed situation. Under the Finsler perturbation, it will in general split into four levels, where, in first-order perturbation theory, the $\Delta E_2^A$ are the eigenvalues of the perturbation matrix $M_{2lm,2l'm'}$. The entries of this $(4 \times 4)$-matrix can be calculated; using again the trace-free condition, we find the matrix entries (where overlining means complex conjugation). The perturbation matrix consists of a 1×1 block and a 3×3 block. Therefore, calculating the eigenvalues requires solving a third-order equation. This can be done explicitly, but the resulting expressions are rather awkward and will not be given here.
The transition from the E₂ level to the E₁ level is known as the Lyman-α line, and the transition from the E₃ level to the E₁ level as the Lyman-β line. If neither the Lyman-α nor the Lyman-β line splits, both (5.28) and (5.31) have to hold, so in this case all Finsler coefficients must be zero. This demonstrates that we get bounds on all Finsler coefficients if we observe, with a certain measuring accuracy, that neither the Lyman-α line nor the Lyman-β line splits. As a special case, we consider a Finsler perturbation that respects the symmetry about the z-axis. This simplifying assumption seems reasonable in a laboratory on Earth if one believes that the Finsler anisotropy has a gravitational origin. Then the expression φ_{αβγδ} f^{αβγδ}(θ, ϕ) in (4.21) must be independent of ϕ.
VI. CONCLUSIONS
We have calculated the Finsler perturbation of atomic spectra for the simplest possible case, using the Schrödinger equation with the standard Coulomb potential for the unperturbed situation and a linearized metric perturbation that derives from the square-root of a fourth-order term. We emphasize again that, if the results are to be compared with measurements of the hydrogen spectrum, the Finslerian splitting of the spectral lines has, of course, to be viewed as coming on top of all the other fine-structure and hyperfine-structure splittings that are well understood. Also, more complicated atomic spectra and more complicated Finslerian metric perturbations can be considered. What we wanted to estimate was the order of magnitude for the bounds on the Finsler perturbations that can be achieved by atomic spectroscopy. We see from (5.38) that these bounds are quite tight. Given the fact that, nowadays, frequencies can be measured in the optical and in the ultraviolet with an accuracy of up to δω ≈ 10 −7 Hz, with this kind of measurements it should be possible to get an upper bound on the dimensionless Finsler coefficients of about 10 −24 . This bound is by several orders of magnitude smaller than the bounds from Solar system tests, cf. [21].
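A back-of-the-envelope check of this estimate can be sketched as follows; treating δω as an angular frequency and the hydrogen (Rydberg) binding energy as the relevant scale is our reading here, not an equation from the paper.

```python
# Order-of-magnitude check of the quoted bound: a line-splitting resolution of
# delta_omega ~ 1e-7 Hz, compared with the Rydberg energy scale, limits the
# dimensionless Finsler coefficients to roughly |phi| < hbar*delta_omega / E_Ryd.
HBAR = 1.0545718e-34     # J*s
E_RYD = 2.179872e-18     # J  (hydrogen ground-state binding energy, 13.6 eV)

delta_omega = 1e-7       # Hz, the measuring accuracy quoted in the text
phi_bound = HBAR * delta_omega / E_RYD
print(f"|phi| < {phi_bound:.1e}")   # ~ 5e-24, i.e. of order 1e-24 as stated
```

The result, about 5 × 10⁻²⁴, is indeed of the order 10⁻²⁴ quoted in the text.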
Using nuclear spectroscopy, rather than atomic spectroscopy, it might be possible to get even better bounds. The Hughes-Drever experiment (see, e.g., Will [28]) comes to mind, which gives the best bounds on anisotropic mass terms to date. It is based on magnetic resonance measurements of a Li-7 nucleus whose ground state of spin 3/2 splits into four levels when a magnetic field is applied. Anisotropic mass terms would lead to an unequal spacing between these levels. It was also shown that the Hughes-Drever experiment gives very restrictive bounds on torsion, see [29]. The Finsler perturbations discussed in this paper are not exactly of the same mathematical form as anisotropic mass terms or torsion terms, but they also introduce some kind of spatial anisotropy. For this reason, it seems likely that a careful re-analysis of the Hughes-Drever experiment would also give some strong bounds on possible Finsler perturbations, probably even stronger than the bounds from atomic spectroscopy. However, there are two difficulties with the Hughes-Drever experiment, one from the theoretical side and one from the experimental side. Theoretically, the analysis of the experiment would have to be based on a wave equation for a particle with spin, i.e., on a Finsler generalisation of a Dirac-type equation or on a non-relativistic approximation thereof. The basic idea of how such a Dirac-type equation could be found in a Finsler setting is rather straightforward: one would have to linearize the corresponding Klein-Gordon equation with respect to the derivative operators, see e.g. [30]. However, the procedure is considerably more complicated than in the spinless case, and the details have not yet been worked out for the kind of Finsler perturbation discussed in this paper.
Experimentally, a Hughes-Drever experiment in its standard setting is performed by keeping the magnetic field fixed in the laboratory and waiting for 24 hours so that the Earth makes a full rotation with respect to the spacetime background geometry. In this way, one can detect "cosmological" anisotropies, i.e., anisotropies in the background geometry, but not "gravitational" anisotropies which would rotate with the Earth. If one thinks of a Finsler perturbation as having a gravitational origin, it would be of a type that could not be detected with a Hughes-Drever experiment in its usual setting. One would have to rotate the magnetic field with respect to the laboratory, which is technically more difficult.
For these two reasons, we have restricted ourselves in this paper to a test with atomic spectroscopy, rather than with nuclear spectroscopy of the Hughes-Drever type. It should be noted that such an atomic spectroscopy test applies not only to laboratory experiments on Earth, but to any situation where (hydrogen) spectral lines are observed. So it can also be used for estimating Finsler perturbations in the neighborhood of distant stars or gas clouds.
DESIGN WITH USE OF 3D PRINTING TECHNOLOGY
Dynamic development of 3D printing technology contributes to its wide applicability. FDM (Fused Deposition Modeling) is the best known and most popular 3D printing method due to its availability and affordability. It is also usable in the design of technical objects, to verify design concepts with use of 3D printed prototypes. The prototypes are produced at lower cost and in shorter time compared to other manufacturing methods, and might be used for a number of purposes depending on which of the designed object's features they reflect. In this article, the usability of the FDM 3D printing method for the design of technical objects is verified based on sample functional prototypes. The methodology applied to develop these prototypes and their stand tests are covered. The general conclusion is that 3D printed prototypes manufactured with the FDM method proved to be useful for verifying new concepts within design processes carried out at KOMAG.
INTRODUCTION
Development and analysis of design concepts is an integral activity within engineering design. Among the methods that enable verification of a design concept is the fabrication and testing of prototypes. A prototype is a pre-production representation of some aspect of a design concept. It approximates a feature (or a number of features) of a product, service, or system. In the case of machine parts, fabrication of a prototype might be time-consuming and very expensive, depending on the properties that have to be included in the prototype. This applies in particular to the situation when the prototype is to be subjected to stand tests. The time and cost factors limit the possibilities of verification of design concepts. The problem indicated above concerns the situation when traditional manufacturing methods are used, e.g. machining (subtractive methods) or moulding. There is also a wide spectrum of additive manufacturing methods, commonly referred to as '3D printing', that enable fabrication of final assembly parts. Depending on the properties of these parts and the materials used, the manufacturing process can still be quite expensive, but at the same time much cheaper and faster than the traditional manufacturing process that would be applied otherwise. In particular, functional prototypes of machine parts for verification of design concepts can be produced.
General model showing integration of 3D printing in product development process is presented in the Fig. 1.
Fig. 1 Technical object development -general model
Among 3D printing methods there is FDM (Fused Deposition Modeling), which enables fabrication, in a fast and cheap way, of a physical model of a machine part that reflects its shape. Therefore, functional prototypes of machine parts reflecting their geometrical features can be produced. KOMAG has for years specialized in the development of machines, devices and systems for the mining industry. In many cases these solutions are also applicable to other industries. Among these solutions are nozzles for spraying systems and machine parts like magnetic clutches [6,13,17]. Both nozzles and clutches are components whose manufacturing with use of traditional methods is time and cost consuming. Manufacturing of one or several prototype items involves significantly higher investment cost compared to serial production. This limits the possibilities of using their functional prototypes for design verification. In the case of nozzles, performance is strongly affected by the internal shape, and in the case of magnetic clutches, performance is strongly affected by the arrangement of magnets. Therefore, these features are among the subjects of conceptualization, and both are strictly related to the geometrical features of the components. The question that arose was whether it is possible to verify new design concepts of nozzles and clutches via stand tests of their functional prototypes fabricated on an FDM 3D printer. Research carried out to find the answer is presented in this article. In particular, the methodology followed to manufacture 3D printed prototypes of technical objects and the methodology followed to verify their usability for design concept evaluation are described.
LITERATURE REVIEW
3D printing embraces a wide spectrum of very diverse manufacturing methods [12,18], whose common feature is building an object by adding material layer by layer. The materials used, the way in which subsequent layers are built, the 3D printers applied, and the features of the objects produced depend on the 3D printing method. 3D printing has proved its advantages, compared to traditional manufacturing methods, in a number of branches and applications [3,4,7,9,11,14,15,20]. In the case of the design process, these advantages are related to the possibility of producing and testing object prototypes, the time and money spent for that purpose, overcoming limitations in design concepts, and customization. FDM is the most popular and affordable 3D printing method, which results among others from the high availability and affordability of desktop FDM 3D printers and materials (filaments). The required properties of a prototype depend on the purpose for which it will be used. Properties of objects manufactured with FDM result, among others, from: the filament used, the object shape, the object orientation, and the parameters (e.g. layer thickness, infill percentage, infill pattern) established for the 3D printing process [1]. There is a huge number of combinations of these elements affecting a 3D printout, e.g. its strength, surface roughness, and dimensional accuracy. The relations between the settings applied for the 3D printing process and the final object obtained are a subject of research, the results of which are presented in a number of publications (e.g. [5,8,10,11,16,19,21]). At the same time, there are no ready-to-use procedures for achieving the required properties of a 3D printed prototype, but the research results presented in the articles can be transferred to other research as guidelines to follow.
DEVELOPMENT OF 3D PRINTED FUNCTIONAL PROTOTYPES - METHODOLOGY

Development of the functional prototype of the new nozzle
Based on the literature research, the material and object orientation were taken into account as the factors that affect the 3D printed nozzle's strength, surface smoothness and accuracy in reflecting its 3D model's shape. To establish proper settings for the 3D printing of the new nozzle's functional prototype, a 3D printed model of an already marketed nozzle was first manufactured and tested. A ZORTRAX M200 printer was used. On the basis of the printer manufacturer's catalogue cards, five types of filaments were determined as applicable for 3D printing of the prototype nozzle; Z-ULTRAT is one of them. To decide on the nozzle's orientation, two items of a sample nozzle were 3D printed from Z-ULTRAT material: one oriented vertically and one horizontally. The STK-ZZ-2 nozzle, designed by KOMAG, was chosen for that purpose (Fig. 2).
Fig. 2 Nozzle 3D printed: vertically (left), horizontally (right)
Vertical orientation brought definitely better results. The cylindrical surfaces of the nozzle remained smooth and maintained the circular shape of the cross-section. A much clearer thread outline can also be observed. In the nozzle printed horizontally, the cylindrical side surfaces are uneven and elliptical in cross-section. The indicated nozzle features are of great importance for the correct embedment of the sealing O-ring and of the complete nozzle in the matching hole of the feeding body. These conclusions regard the surface features of a nozzle, not the internal ones that particularly affect the generated stream. Next, it was established whether all the filaments selected before as applicable for manufacturing the prototype nozzles can indeed be used for that purpose. In addition to the nozzles made of Z-ULTRAT, items of STK-ZZ-2 were manufactured vertically from the following materials: Z-HIPS, Z-ABS, Z-PETG, Z-TRANSPARENT. Based on organoleptic assessment, each material was accepted for the tests to determine which of these materials is best for manufacturing the prototype of the newly designed nozzle, taking into account the quality of the generated stream parameters (fractional distribution and range of the drops). STK-ZZ-2 nozzles, namely 6 items made on the 3D printer and 1 item made of metal, purchased and used for comparative purposes (Fig. 3), were tested.
Fig. 3 STK-ZZ-2 nozzles: 1-5 printed vertically from Z-ULTRAT, Z-HIPS, Z-ABS, Z-PETG, Z-TRANSPARENT; 6 - commercial, made of metal; 7 - printed horizontally from Z-ULTRAT
The printed nozzles, cleaned out of residual material, were equipped with a set of O-ring seals. The air inlet and outlet of the nozzle were drilled to a nominal size of Φ2 mm, while the water inlet was corrected to Φ1 mm. In addition, the external thread was corrected with an M12x1.5 die. This was necessary due to the difficulties when screwing into the feeding body.
The following quantities were measured and recorded during the stand tests:
− particle diameter distribution in the spraying stream,
− supply pressure and volumetric air flow rate in the air mains feeding the nozzle,
− supply pressure and volumetric water flow rate in the water mains feeding the nozzle.
Photographic footage was also made to obtain information about the shape of the spraying stream. During the tests of the nozzle operational parameters (Fig. 4), operation was tested at the same feeding pressures of water and compressed air, i.e. 0.3, 0.4, 0.5 and 0.6 MPa. The drop sizes Dv(10), Dv(50) and Dv(90) were measured. The last step was a comparison of the test results obtained for the 3D printed nozzles with those obtained for the nozzle bought on the market. The Dv(50) indicator, showing the maximum drop diameter of half of the sprayed liquid, was assumed to be the most representative and most frequently used in the assessment of the quality of the spray stream.
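The Dv(q) indicators named above can be computed from a measured droplet-size histogram as volume-weighted percentiles. The sketch below uses illustrative data, not the paper's measurements; `dv` interpolates the diameter below which q % of the sprayed liquid volume is contained.

```python
import numpy as np


def dv(diameters_um, counts, q):
    """Volume-weighted percentile Dv(q): the diameter below which q% of the
    total sprayed liquid volume is contained (interpolated from a histogram)."""
    d = np.asarray(diameters_um, dtype=float)
    volume = np.asarray(counts, dtype=float) * d ** 3  # droplet volume ~ d^3
    order = np.argsort(d)
    d, volume = d[order], volume[order]
    cum = np.cumsum(volume) / volume.sum() * 100.0     # cumulative volume percent
    return float(np.interp(q, cum, d))


# Illustrative droplet histogram (diameter in micrometres, droplet count per bin)
diameters = [10, 20, 40, 60, 80, 120]
counts = [500, 400, 250, 120, 40, 5]

dv10, dv50, dv90 = (dv(diameters, counts, q) for q in (10, 50, 90))
assert dv10 < dv50 < dv90
```

Because volume scales with d³, a few large droplets dominate Dv(50) even when small droplets dominate the count — which is why Dv(50) is the representative spray-quality indicator rather than a count-based median.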
Fig. 4 Test stand for testing quality of the spray stream
Source: [2].
Development of functional prototypes of the new clutches
KOMAG developed two designs of clutches (Fig. 5), in which the resistance force, which allows the transmission of torque, is generated by magnetic pairs.
There were 3 prototypes in total: 1 for the face-to-face variant and 2 for the cylindrical variant. To produce prototypes of the newly designed clutches, 3D printed elements and commercially available elements were assembled. 3D printed functional prototypes of these new solutions were manufactured for analysis of the impact of the number of magnets and their mutual configuration on the clutch parameters. The prototypes were manufactured at a reduced scale. The assumption was that the results obtained would then be used for the development of full-scale functional prototypes of the clutches under design. The designers had no possibility of establishing the 3D printing parameters and making other adjustments based on prior 3D printing of existing, already marketed solutions. Therefore, tailoring of the 3D model for 3D printing purposes, as well as selection of settings for the 3D printing process, had to be carried out in an iterative way during the 'fabrication and tests' cycles.
Due to shrinkage of the material during cooling down, for each prototype several test prints were carried out to select the appropriate model deviations and obtain the required fits (magnets in the pockets, diameters for bearings, etc.) - Fig. 6a. Then the clutch components were printed and assembled together with the magnets and other elements (Fig. 6b). Fig. 7 shows the change in the quality parameter Dv(50) of the stream depending on the feeding media pressure, for each STK-ZZ-2 nozzle, i.e. the steel one and the 3D printed ones.
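The selection of model deviations described above can be sketched as simple pre-scaling arithmetic. The shrinkage fraction and fit clearance below are illustrative assumptions, not KOMAG's calibrated values.

```python
def compensated_dimension(nominal_mm, shrinkage, clearance_mm=0.0):
    """Scale a nominal CAD dimension so that, after the material shrinks by the
    given fraction on cooling, the printed feature ends up at nominal + clearance.
    A hole that must receive a part gets a positive clearance, a shaft a negative one."""
    target = nominal_mm + clearance_mm
    return target / (1.0 - shrinkage)


# Example: a 10.0 mm magnet pocket, assumed 0.5 % shrinkage, 0.15 mm fit clearance
pocket = compensated_dimension(10.0, 0.005, clearance_mm=0.15)
print(f"model the pocket as {pocket:.3f} mm")  # -> 10.201 mm
```

In practice the effective shrinkage depends on filament, infill and geometry, which is why test prints limited to a fragment of the component are still needed to tune the deviations.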
Fig. 7 Diameter Dv (50) for various STK-ZZ-2 nozzle materials
The nozzle made of metal in a subtractive manufacturing technology had the best atomization of drops in the spraying stream. A nozzle made of Z-ULTRAT material, printed horizontally, had similar parameters. The other nozzles, printed in the horizontal orientation, showed slightly worse parameters. However, the trend of improving stream quality with increasing feeding pressure was maintained. The worse quality of the spray stream of the nozzles printed in the vertical orientation was most likely caused by the fact that the material layers in the nozzle were arranged perpendicularly to the direction of the stream outlet, which could disturb the process of breaking up the water film.
To determine the possibility of longer use of the spraying nozzles, they were subjected to a 15-hour work cycle. In the nozzle made of Z-PETG, the connecting thread was damaged during the test, so no useful results were obtained. A comparison of the parameters of the printed nozzles before and after the 15-hour cycle is shown in Table 1. The tests and measurements of the spraying parameters of the tested nozzles proved that Z-ULTRAT is the best material for printing the nozzle prototypes, in a vertical orientation. The STK-ZZ-2 nozzle printed from this material had the best parameters of the spraying stream and proved to be the most durable during the fatigue tests. Therefore, Z-ULTRAT material and vertical orientation were applied for 3D printing of the functional prototype of the newly designed nozzle.
Fabrication and tests of functional prototypes of the new nozzle
Poor mechanical strength is the main disadvantage of 3D printed nozzles. They are susceptible to damage when screwed into the feeding body. That is why an air-water nozzle plugged into a socket and secured with a STECKO-6 pin was designed. There were 3 variants of the nozzle (the internal shape being the difference). Both the nozzles and the feeding body were 3D printed. Fig. 8 shows the nozzle prototypes and the feeding body equipped with G¼" connectors and a STECKO-6 pin. Places for the metal components (thread for connectors, pin holes) had to be included in the 3D model of the body. It was important to maintain proper manufacturing tolerance, foreseeing the material shrinkage, so that the nozzles plugged into the body could be tightly fitted.
Fig. 8 Plug-in nozzle -3 variants + complete feeding body
The spray stream generated by the plug-in nozzles was tested on the testing stand (Fig. 9). The operational parameters and the operational quality of the three nozzle variants are summarized in the Table 2.
Fig. 9 Test of the spraying stream of the plug-in nozzle
The newly designed nozzle and its connection socket were significantly simplified, while maintaining the functionality and parameters of the spraying stream. The tests revealed that the plug-in nozzle (all 3 variants) is a very good replacement for the STK-ZZ-2 nozzle. All 3 nozzle designs produced the correct spray pattern. The lowest water consumption was found in the nozzle with a cylindrical outlet and a water inlet tangent to the chamber. This nozzle also had the best quality of spray stream atomization. Fig. 10 shows the change in the basic quality parameter Dv(50) of the stream depending on the feeding media pressure for each nozzle variant. As the pressure of the spraying media increased, the quality of the spray stream generated by each nozzle slightly increased. The tests confirmed that the FDM method is suitable for manufacturing nozzle prototypes for testing their new designs and brings reliable results.
Fabrication and tests of functional prototypes of the new clutches
Face-to-face clutch consists of two discs with magnet pockets that interact with each other to transmit torque. They were designed so that the diameter of the magnet arrangement could be changed (Fig. 11). The change in the magnet arrangement diameter is possible by using the sockets of diameters Φ46.5, Φ79.1 and Φ111.7 mm.
The coaxial clutch was designed in two variants: single-row and double-row. For each of them a physical model (functional prototype) consisting of the following components was created:
− printed components: rotor, stator and cover; the cover is identical in both variants, the other components have an individual design,
− bearings, fixing screws and MPŁ 10x10x10 N42 type magnets.
The sketch, spatial model and printed components of the single-row cylindrical clutch are presented in Fig. 12 and Fig. 13.
The assembled physical model of the clutch is shown in Fig. 14. The physical model was subjected to static tests to determine the limits of the transmitted torque. During the tests, the structure of the clutch shaft was damaged because the applied loads exceeded the structural strength (Fig. 15).
Insufficient rotor shaft strength resulted from the fact that the infill level was too low when the core was printed. Another rotor was printed, in which the maximum core infill obtainable on the printer was used. This allowed for the successful completion of subsequent tests. The basic dimensions of the double-row cylindrical clutch and the 3D model are shown in Fig. 16. Printed components of the clutch are shown in Fig. 17. Fixing eyes were added to the stator for subsequent stand tests.
To avoid the problem of insufficient rotor strength observed in the single-row clutch, the rotor was printed from a material with higher strength and the infill degree of the model was increased. The material used to print the stator remained unchanged. Due to the expected higher resistance torque of the double-row clutch variant, the connecting part was made as a hexagon (wrench size 17). The physical model of the clutch that was tested is shown in Fig. 18.
After the clutch models were made, preliminary tests were carried out to determine the rolling resistance generated by each variant of the clutch. Sample results are presented in Fig. 19. Details regarding the tests and the obtained results are presented in [6]. Prototypes of clutches created with use of parts manufactured by 3D printing proved to be sufficient for testing and for obtaining useful results, on the basis of which selected aspects of the proposed designs could be verified. It should be emphasized that with the FDM method it is possible to create (and test) prototypes for a number of design variants with relatively low investment cost and time, which increases the probability of obtaining the optimum solution.
CONCLUSIONS
Application of FDM in the design process can serve in particular to verify a design concept of a technical object via stand tests of its prototype, carried out e.g. to establish important working parameters or other properties. Capabilities and limitations of the 3D printer used, the material, and the 3D printout orientation are examples of factors that affect surface and internal properties of a 3D printed prototype and, consequently, the reliability of the prototype test results.

If the designed component is a modification of an already existing one, it is worth making the decisions concerning the above mentioned factors on the basis of stand tests carried out for an object of known, already implemented design: tests of the object (items bought on the market) and tests of its 3D printed model. This was done in the case of printing and testing prototypes of new nozzle design solutions. Comparison of the test results of a nozzle available on the market and its 3D printed models allowed selection of materials and object orientation for 3D printing of prototypes of the newly designed nozzle.

In the case of 3D printed prototypes of clutches, it was not possible to carry out stand tests for existing structures (clutches) that would provide useful information. On the other hand, in the case of coaxial clutches, experience from previous tests of the single-row variant allowed a decision about the change of parameters (infill density) for 3D printing of the next item of the rotor, as well as for 3D printing of the first item of the rotor for the clutch model in the double-row variant.

When designing machine parts, systems, etc., commercial parts are used alongside absolutely new designed parts. The use of the FDM method allows creation of prototypes by combining printed and commercial components, and their use in tests to verify parameters for which such prototypes are sufficient (i.e., allow obtaining reliable information). The discussed physical models of clutches are an example.
If a prototype is assembled from 3D printed parts and ready-to-use components (not being a subject of the design process), it is important to make test prints, limited to a fragment of the component, to select the deviations in the 3D model and guarantee a proper fit of the prototype's components. This approach saves time, avoids wasting material, and was used when manufacturing the clutch prototypes. The examples discussed in this article support the following statements regarding 3D printing:
Variable demand model for periodically reviewing with allowing refunding parts of the orders
Depending on a field study of one of the largest iron and paints warehouses in Egypt, this paper presents a new multi-item periodic review inventory model considering the refunding quantity cost. Through this field study, we found that the inventory level is monitored periodically at equal time intervals. Returning a part of the goods that were previously ordered is permitted. Also, a shortage is permitted to occur despite having orders, and it is a combination of backorders and lost sales. This model has been applied in both crisp and fuzzy environments, since the fuzzy case is more suitable for real life than the crisp one. The Lagrange multiplier technique is used for solving the constrained mathematical model. Here, the demand is a random variable that follows the normal distribution with zero lead-time. Finally, the model is followed by a real application to clarify the model and prove its efficiency.
lead-time. Fergany [8] introduced a periodic review inventory model with zero lead-time and varying ordering cost.
The cost parameters in real inventory systems and other parameters are uncertain in nature, such as prices, marketing, production, and inventory. In recent years, many researchers have contributed many articles by applying the fuzzy sets theory as a mathematical way to deal with these uncertainties. For example, Dey and Chakraborty [9] developed a fuzzy random periodic review model with variable lead-time. Rong et al. [10] studied a multi-objective inventory model with controllable lead-time and triangular fuzzy numbers. Jauhari et al. [11] introduced a fuzzy periodic review model involving stochastic demand. Xiaobin [12] developed a continuous review inventory model with variable lead time in a fuzzy environment. Sadjadi et al. [13] studied the fuzzy pricing and marketing planning model using a geometric programming approach. Biswajit and Amalendu [14] introduced the periodic review inventory model with variable lead time and fuzzy demand. Priyan and Uthayakumar [15] studied a multi-echelon inventory model under a service level constraint in a fuzzy cost environment. Khurdi et al. [16] introduced a fuzzy collaborative supply chain model for imperfect items and a service level constraint.
Based on a realistic study of one of the biggest iron and paints warehouses in Egypt, some adjustments have been made to the multi-item periodic review inventory model. In this model, the inventory level is reviewed periodically at equal time cycles. The warehouse allows the customers to return part of the goods they previously ordered; therefore, an extra cost is paid and added to the expected total cost (the refunding quantity cost). Shortage can occur despite having orders; a part of these orders is then fulfilled in the next cycle at the same price as at the request time (backorder), while the other part is lost forever and a penalty is paid. There is a constraint on the expected varying lost sales cost, for if this cost exceeds a certain limit, it may lead to loss or increase the expected total cost. The constrained problem is solved by using the Lagrange multiplier technique. The demand is a random variable that follows the normal distribution with zero lead-time. This model has been applied in both crisp and fuzzy environments, since the fuzzy environment is closer to real life than the crisp one. The main goal is to find the minimum expected annual total cost by finding the optimal maximum inventory level and the optimal time between reviews. The results in this paper have been derived with the Mathematica program V. 12.0. Figure 1 shows the multi-item periodic review model with zero lead-time.
The following assumptions are made for developing the mathematical model:
• The warehouse allows refunding a part of the goods that were previously ordered.
• The demand is a random variable, and the replenishment is instantaneous (lead-time is zero).
• The stock level decreases at a uniform rate over the cycle.
• f(x_r) is the density function of the demand x_r.
• A fraction γ_r of the unsatisfied demand will be backordered, while the remaining fraction (1 − γ_r) is completely lost.
The mathematical model for crisp environment
• The expected order cost for the cycle is given by Eq. (1).
• The expected varying holding cost for the cycle is given by Eq. (2), where the expected average amount in inventory enters.
• The expected varying backorder cost for the cycle is given by Eq. (3), where S(Q_mr) represents the expected shortage quantity.
• The expected varying lost sales cost for the cycle is given by Eq. (4).
• The expected varying refunding quantity cost for the cycle is given by Eq. (5).
The expected annual total cost is the sum of the expected order cost, the expected varying holding cost, the expected varying backorder cost, the expected varying lost sales cost, and the expected varying refunding quantity cost. Then, from Eqs. (1)-(5), the expected annual total cost E(TC) is given by Eq. (6).
Note: Obviously, the expected order cost Σ^n_{r=1} C_or is fixed, so it can be temporarily neglected when minimizing the expected annual total cost and eventually added back. Now, the main objective is to determine the optimal values Q*_mr and N*_r that minimize the expected annual total cost, min E(TC). This paper puts a constraint on the varying lost sales cost. The Karush-Kuhn-Tucker (KKT) conditions (Kuhn and Tucker [17]) are first-order necessary conditions for a solution of a nonlinear program to be optimal, provided some regularity conditions are satisfied. The Lagrange multiplier method is suitable for solving this constrained problem.
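For normally distributed demand, the expected shortage quantity S(Q_m) that enters the backorder and lost-sales terms has a standard closed form, S(Q) = σ[φ(z) − z(1 − Φ(z))] with z = (Q − μ)/σ. The sketch below implements this textbook loss function as a hedged stand-in, not the paper's exact Eq. (3).

```python
from scipy.stats import norm


def expected_shortage(q_m, mu, sigma):
    """E[max(X - Q_m, 0)] for X ~ Normal(mu, sigma): the expected shortage
    quantity per cycle, S(Q_m) = sigma * (pdf(z) - z * (1 - cdf(z)))."""
    z = (q_m - mu) / sigma
    return sigma * (norm.pdf(z) - z * (1.0 - norm.cdf(z)))


# The shortage shrinks as the maximum inventory level Q_m rises above mean demand
mu, sigma = 100.0, 15.0
s_at_mean = expected_shortage(mu, mu, sigma)           # = sigma / sqrt(2*pi)
s_high = expected_shortage(mu + 2 * sigma, mu, sigma)  # two sigma above the mean
assert s_high < s_at_mean
```

At Q_m = μ the shortage equals σ/√(2π), a useful sanity check when calibrating such models.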
Consider a limitation on the expected varying lost sales cost, i.e., the constraint stated in Eq. (7).
Since this primal problem is a convex programming problem, Eqs. (6) and (7) can be rewritten as the objective function in Eq. (8) subject to the constraint in Eq. (9). To find the optimal values Q*mr and N*r that minimize Eq. (8) under the constraint of Eq. (9), the Lagrangian function with the Kuhn-Tucker conditions is given by Eq. (10), where λ_r is a Lagrange multiplier.
The optimal values Q*mr and N*r can be calculated by setting the corresponding first partial derivatives of Eq. (10) equal to zero, which yields one stationarity condition for each decision variable. The minimum expected annual total cost, min E(TC), is then determined by substituting the optimal values Q*mr and N*r into Eq. (8) and adding back the fixed order cost (the sum of C_or over the n items).
The mathematical model in the fuzzy environment
The inventory cost coefficients and other parameters are fuzzy in nature; therefore, the decision variables and the objective function should be treated as fuzzy as well. The model is resolved when the cost parameters are triangular fuzzy numbers (TFN): the right and left shape functions of the objective function and its decision variables are found by computing the upper and lower bounds of the optimal objective function, i.e., L_U(α) and L_V(α) (the left and right α-cuts of L(α)). For example, the approximated TFN of C̃_or is shown in Fig. 2. Consider the model when all parameters are TFNs as given below, where a_ir, i = 1, 2, ..., 10, are arbitrary positive numbers subject to the stated restrictions, and the left and right limits of the α-cuts of C̃_or, C̃_hr, C̃_br, C̃_Lr and D̃_r follow accordingly. Using the signed distance method, C̃_or = (C_or − a_1r, C_or, C_or + a_2r), C̃_hr = (C_hr − a_3r, C_hr, C_hr + a_4r), C̃_br = (C_br − a_5r, C_br, C_br + a_6r), C̃_Lr = (C_Lr − a_7r, C_Lr, C_Lr + a_8r) and D̃_r = (D_r − a_9r, D_r, D_r + a_10r). The optimal values Q*mr and N*r for the fuzzy case are determined as in the crisp case, except that the crisp parameters C_or, C_hr, C_br, C_Lr and D_r are replaced by their fuzzy counterparts.
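As an illustration, the signed-distance defuzzification of a triangular fuzzy number used above can be sketched in a few lines of Python. The numeric values below are hypothetical and not taken from the application tables; the formula d = (l + 2m + u)/4 reduces to C + (a2 − a1)/4 for a TFN of the form (C − a1, C, C + a2), as used in the application section:

```python
def signed_distance(left, mid, right):
    """Signed-distance defuzzification of a triangular fuzzy number (l, m, u):
    d = (l + 2m + u) / 4."""
    return (left + 2.0 * mid + right) / 4.0

# Hypothetical fuzzy order cost C~or = (Cor - a1, Cor, Cor + a2)
C_or, a1, a2 = 100.0, 8.0, 12.0
d = signed_distance(C_or - a1, C_or, C_or + a2)
# Collapses to Cor + (a2 - a1)/4, the crisp value substituted into the model
assert abs(d - (C_or + (a2 - a1) / 4.0)) < 1e-12
```

With this defuzzified value in hand, the fuzzy model reduces to the crisp optimization with adjusted cost coefficients.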
The demand follows the normal distribution
Suppose the demand follows a normal distribution with mean μ_r (continuous location parameter) and standard deviation σ_r > 0 (continuous scale parameter), where φ(z) is the probability density function and Φ(z) is the cumulative distribution function of the standard normal distribution. The optimal values Q*mr and N*r for the crisp case can then be calculated from the stationarity conditions, and the expected number of shortages incurred per cycle is the solution of the corresponding equation. The decision variables and the minimum expected annual total cost for the fuzzy case are determined in the same way, except that the crisp costs are replaced by the fuzzy costs.
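The expected-shortage and cost computations above can be sketched numerically. The snippet below is a minimal single-item illustration, assuming the standard normal loss function S(Q_m) = σ[φ(z) − z(1 − Φ(z))] and a simplified periodic-review cost structure standing in for the paper's exact Eqs. (8)-(9); all parameter values are hypothetical, and the lost-sales constraint is enforced by filtering the search grid rather than through an explicit Lagrange multiplier:

```python
import math

def phi(z):
    # Standard normal probability density function
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    # Standard normal cumulative distribution function via erf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_shortage(q_m, mu, sd):
    # S(Q_m) = sd * [phi(z) - z * (1 - Phi(z))], with z = (Q_m - mu) / sd
    z = (q_m - mu) / sd
    return sd * (phi(z) - z * (1.0 - Phi(z)))

# Hypothetical parameters for one item (illustrative only)
D, sigma = 1200.0, 150.0   # annual demand mean and standard deviation
C_o, C_h = 80.0, 2.5       # order cost per cycle, holding cost per unit
C_b, C_L = 6.0, 9.0        # backorder and lost-sale unit costs
gamma, K_L = 0.6, 40.0     # backorder fraction, lost-sales cost limit

def annual_cost(q_m, n):
    # Demand over a review period of length n (years): Normal(D*n, sigma*sqrt(n))
    mu, sd = D * n, sigma * math.sqrt(n)
    s = expected_shortage(q_m, mu, sd)
    lost = (1.0 - gamma) * C_L * s / n   # expected annual lost-sales cost
    cost = C_o / n + C_h * (q_m - mu / 2.0) + gamma * C_b * s / n + lost
    return cost, lost

# Grid search over (Q_m, N), keeping only points satisfying the
# lost-sales cost constraint (the role played by the multiplier)
candidates = []
for n in [k / 52.0 for k in range(1, 27)]:       # review period: 1-26 weeks
    for q_m in range(50, 1501, 10):
        cost, lost = annual_cost(float(q_m), n)
        if lost <= K_L:
            candidates.append((cost, q_m, n))
best_cost, best_q, best_n = min(candidates)
```

A closed-form or derivative-based solution (as in the paper) is preferable when the cost expressions are available; the grid search merely demonstrates the structure of the constrained minimization.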
Application
A large store for iron and paints that sells its products wholesale follows a policy of reviewing all items periodically. Three items were selected (Tanner I, Lacquered II and Plastic III). This store allows refunding a part of the goods previously ordered. Hence, by the signed distance method, C̃_or = C_or + ¼(a_2r − a_1r), C̃_hr = C_hr + ¼(a_4r − a_3r), and similarly for the remaining parameters; the demand data in Table 8 are used with α = 0.05 (see the Appendix). However, for some unexpected reasons in some cycles, the store faces shortage and pays at least 7% for backorders and 6% for lost sales. Table 4 shows the allowable cost of lost sales K_Lr. The store manager wishes to establish the optimal values Q*mr and N*r that achieve the minimum expected annual total cost for different values of β ∈ (0, 1) when the demand follows the normal distribution, −∞ < x_r < +∞. Tables 1 and 2 show the crisp and fuzzy values of the cost parameters. Table 3 presents the crisp and fuzzy values of the average demand. Table 4 shows the maximum cost allowed (the limitation) for lost sales and its fraction. Table 5 presents the parameter values for the normal distribution. In Table 6, the results of crisp and fuzzy values for the normal distribution are calculated. Table 7 presents the optimal values Q*mr and N*r and the minimum expected annual total cost for crisp and fuzzy values. Table 8 shows the demand during the period 2016-2018 for 36 samples. Using the SPSS program, Table 9 shows the one-sample Kolmogorov-Smirnov test. The optimal values Q*mr, N*r and the minimum expected annual total cost min E(TC) for the three items are deduced in Table 7; the results are calculated for both the crisp and the fuzzy environment. Figures 3, 4 and 5 illustrate the crisp and fuzzy values of the expected annual total cost for the three items against the different values of β.
Conclusion
By conducting a realistic study of a large warehouse for iron and paints in Egypt, this paper introduced a new multi-item inventory model in which the warehouse allows customers to return a part of the goods they previously ordered. The inventory level of the warehouse is monitored periodically at equal time cycles. Shortages can occur despite having orders and are a mixture of backorders and lost sales. The demand is a random variable that follows the normal distribution with zero lead-time. A restriction is placed on the expected varying lost sales cost, since exceeding a certain limit would either cause a loss or increase the expected total cost. After solving the model in a crisp environment, it was resolved in a fuzzy environment, which is more suitable for real life. Increasing the value of β leads to a loss or an increase in the expected annual total cost; the minimum expected annual total cost is achieved at the minimum value of β (β = 0.1).
Time-dependent ROC curve analysis to determine the predictive capacity of seven clinical scales for mortality in patients with COVID-19: Study of a hospital cohort with very high mortality
Clinical data from hospital admission are typically utilized to determine the prognostic capacity of Coronavirus disease 2019 (COVID-19) indices. However, as disease status and severity markers evolve over time, time-dependent receiver operating characteristic (ROC) curve analysis becomes more appropriate. The present analysis assessed predictive power for death at various time points throughout patient hospitalization. In a cohort study involving 515 hospitalized patients with COVID-19 (General Hospital Number 1 of the Mexican Social Security Institute, Colima, Mexico, from February 2021 to December 2022), seven severity indices [the Pneumonia Severity Index (PSI), the PaO2/FiO2 ratio (arterial oxygen pressure/fraction of inspired oxygen; Kirby index), the Critical Illness Risk Score (COVID-GRAM), the National Early Warning Score 2 (NEWS-2), the quick Sequential Organ Failure Assessment score (qSOFA), the Fibrosis-4 index (FIB-4) and the Viral Pneumonia Mortality Score (MuLBSTA)] were evaluated using time-dependent ROC curves. Clinical data were collected at admission and at 2, 4, 6 and 8 days into hospitalization. The study calculated the area under the curve (AUC), sensitivity, specificity and predictive values for each index at these time points. Mortality was 43.9%. Throughout all time points, NEWS-2 demonstrated the highest predictive power for mortality, as indicated by its AUC values, followed by PSI and COVID-GRAM, with predictive power increasing as hospitalization duration progressed. Additionally, NEWS-2 exhibited the highest sensitivity (>96% in all periods) but showed low specificity, which increased from 22.9% at admission to 58.1% by day 8. PSI displayed good predictive capacity from admission to day 6 and excellent predictive power at day 8; its sensitivity remained >80% throughout all periods, with moderate specificity (70.6-77.3%).
COVID-GRAM demonstrated good predictive capacity across all periods, with high sensitivity (84.2-87.3%) but low-to-moderate specificity (61.5-67.6%). The qSOFA index initially had poor predictive power upon admission but improved after 4 days. FIB-4 had statistically significant predictive capacity in all periods (P=0.001), but with limited clinical value (AUC, 0.639-0.698) and with low sensitivity and specificity. MuLBSTA and the Kirby index exhibited low predictive power at admission and no predictive power after 6 days. In conclusion, in patients with COVID-19 with high mortality rates, NEWS-2 and PSI consistently exhibited predictive power for death during the hospital stay, with PSI demonstrating the best balance between sensitivity and specificity.
systems during the first 3 years of the pandemic (1). However, the World Health Organization has advised maintaining readiness and vigilance across healthcare systems at all levels to address potential increases in outpatient cases and hospitalization, especially during peak periods of other communicable diseases with high care demand (2). Despite most infections being self-limiting (3), the number of cases made COVID-19 one of the leading causes of mortality worldwide from 2020 to 2022. Nonetheless, this trend has diminished in recent years, partly due to vaccination strategies (4,5).
The prevalence of severe/critical COVID-19 cases and the need for hospitalization may vary based on regional factors (6).Globally, hospitalized patients with COVID-19 experienced mortality rates ranging from 1 to 52% (7), varying significantly based on the pandemic stage, ethnic and sociocultural characteristics, as well as vaccination or treatment strategies (8).
The emergency caused by COVID-19 has led to the necessity and implementation of clinical instruments with high predictive value to support decision-making in patients with severe and critical illness (12). Various clinical risk scales and severity indices for respiratory disease and the progression of organ failure have been implemented to monitor patients hospitalized due to COVID-19. Although some of these scales were developed to monitor bacterial infection, they have been adapted for use in COVID-19, such as the Pneumonia Severity Index (PSI), the National Early Warning Score 2 (NEWS-2) and the quick Sepsis-Related Organ Failure Assessment score (qSOFA) (13-15). Other scales were created specifically for COVID-19, such as the Viral Pneumonia Mortality Score (MuLBSTA: multilobular infiltration, hypo-lymphocytosis, bacterial coinfection, smoking history, hypertension and age) and COVID-GRAM (Guangzhou Institute of Respiratory Health Calculator at Admission) (16,17). The Kirby index (PaO2/FiO2, arterial oxygen pressure/fraction of inspired oxygen) is a tool used to measure lung capacity and functionality, particularly for diagnosing and prognosticating the severity of acute respiratory distress syndrome (18,19). The liver fibrosis index (FIB-4) is also worth studying, because previous studies have shown that it has promising predictive power for mortality in patients with COVID-19 without underlying liver disease and in all age groups (20,21).
Nevertheless, the prognostic capacity of these scales in COVID-19 has typically been evaluated through receiver operating characteristic (ROC) curve analysis using only clinical data or markers at hospital admission or within the first 48 h of hospitalization (22-24). However, both the disease status and the value of the clinical markers used in the scales change over time, especially in hospitalized patients with COVID-19 (25). Therefore, in diseases with changing clinical states, it has been proposed that time-dependent ROC curve analysis is more appropriate for assessing the predictive power of markers or indices (26).
ROC curves are generated at different time points to determine if a severity scale maintains its predictive capacity consistently or if it may weaken or strengthen as the target time moves away from the baseline (26).
The present study aimed to assess the predictive capacity for mortality of seven commonly used clinical indicators (PSI, Kirby index, COVID-GRAM, NEWS-2, qSOFA, FIB-4 and MuLBSTA) in patients with severe and critical COVID-19 upon admission and at 2, 4, 6 and 8 days of hospitalization using time-dependent ROC curve analysis. These clinical indicators were selected because they have demonstrated utility in predicting mortality and severity in patients with respiratory disease, including COVID-19. These tools incorporate clinical parameters such as vital signs, laboratory results and comorbidities to provide a comprehensive assessment of patient prognosis. Additionally, they have been previously validated in similar patient populations and have shown promising results in predicting outcomes in patients with COVID-19. Furthermore, the effectiveness of these predictive tools relies on the availability of relevant data types, including clinical observations, laboratory results and patient demographics (15,16,18,20,27-29). The present study aimed to identify severity indices maintaining consistent predictive capacity in patients with fluctuating health status, such as those hospitalized with COVID-19, within a cohort exhibiting one of the highest mortality rates globally.

The study was conducted in compliance with the Declaration of Helsinki and was approved by the local health research committee of General Hospital Number 1 of IMSS-Colima (approval no. R-2021-601-014). Following national legislation and institutional protocols, the local health research committee waived the requirement for written consent from patients involved in this observational study (article 23 of the Regulations of the General Health Law on Health Research in Mexico) (31,32), as it solely entailed analyzing data from a hospital database, posing no risk to patients. Patient confidentiality was maintained throughout the study, which was classified as low risk (31).
Patients. The inclusion criteria were non-pregnant patients aged >18 years diagnosed with COVID-19 based on positive results from Severe Acute Respiratory Syndrome Coronavirus 2 reverse transcription PCR (SARS-CoV-2 RT-PCR) or antigen tests. The study enrolled patients admitted to regular hospital floors, high-flow oxygen rooms or intensive care units. Exclusion criteria included patients receiving only emergency room care without admission and those with incomplete clinical records. A total of 515 patients were included in the analysis. The mean age was 63.3±16.1 years; 61.9% of the patients were male and 38.1% were female.
Measures and follow-up. Patient information, including medical history, COVID-19 vaccination status and clinical parameters from admission to discharge (due to either improvement or death), was retrieved from clinical records. Data collected included age, sex, medical history (comorbidities, Charlson comorbidity index score) (33), history of prior COVID-19 infection, smoking status (based on the Glossary of the National Health Interview Survey of the United States of America) (34), admission disease phase (severe/critical), clinical, laboratory and imaging data for each day of hospitalization, and reason for discharge (death or improvement). Arterial hypertension was identified by criteria aligned with the guidelines set forth by the Eighth Joint National Committee (JNC 8) for hypertension; these criteria encompassed a documented history in the clinical records (prior to hospitalization due to COVID-19 infection) of blood pressure readings equal to or exceeding 140/90 mmHg, a prior diagnosis of hypertension, or a positive record of antihypertensive therapy (35,36).
Data collected during hospitalization included variables necessary to calculate the scores of severity scales, laboratory parameters (such as D-dimer, ferritin, markers of renal or liver function, complete blood count), use of mechanical ventilation or hemodialysis and administration of medication (paracetamol, anticoagulants, antibiotics, vasopressors, steroids, and diuretics).
qSOFA is used to clinically classify septic patients and as a predictor of hospital mortality (27). It consists of three clinical indicators: respiratory rate (≥22/min), altered mental status and low systolic blood pressure (≤100 mmHg), each scoring one point, for a total score of 0 to 3 (51). The components of qSOFA allow for an early and simple evaluation in hospital settings (25,27,51,52).
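The qSOFA scoring rule described above is simple enough to express directly in code; the following is a minimal sketch using the thresholds stated in the text (one point each for respiratory rate ≥22/min, systolic blood pressure ≤100 mmHg and altered mentation):

```python
def qsofa(resp_rate, systolic_bp, altered_mentation):
    """qSOFA score: 1 point per criterion, total range 0-3.
    A score >= 2 is conventionally used to flag high risk."""
    return (int(resp_rate >= 22)         # respiratory rate >= 22/min
            + int(systolic_bp <= 100)    # systolic BP <= 100 mmHg
            + int(bool(altered_mentation)))  # altered mental status

assert qsofa(24, 95, True) == 3   # all three criteria met
assert qsofa(18, 120, False) == 0  # none met
```

This simplicity is precisely what makes qSOFA usable at the bedside without laboratory data.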
The FIB-4 index is commonly used for non-invasive assessment of liver fibrosis in chronic liver disease due to its accessibility, cost-effectiveness and validated reliability, offering a safer and more convenient alternative to invasive liver biopsy (20,21). It is calculated using four parameters: age, aspartate aminotransferase (AST) level, alanine aminotransferase (ALT) level and platelet count. A score of ≤1.3 indicates a low risk of fibrosis, >1.3-2.67 a moderate risk and >2.67 a high risk of fibrosis (21,40). The FIB-4 score predicts mortality better than liver transaminases and may serve as a simple tool to identify patients with COVID-19 with a poorer prognosis in the emergency department (20,39).
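The FIB-4 computation can be sketched as follows, using the standard formula FIB-4 = (age × AST) / (platelets × √ALT) and the risk bands stated above; the input values in the example are hypothetical:

```python
import math

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    # FIB-4 = (age [years] x AST [U/l]) / (platelets [10^9/l] x sqrt(ALT [U/l]))
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

def fib4_risk(score):
    # Risk bands from the text: <=1.3 low, >1.3-2.67 moderate, >2.67 high
    if score <= 1.3:
        return "low"
    return "moderate" if score <= 2.67 else "high"

# Hypothetical patient: age 65, AST 80 U/l, ALT 64 U/l, platelets 150 x10^9/l
score = fib4(65, 80, 64, 150)   # = 5200 / (150 * 8) ~ 4.33 -> "high"
```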
Statistical analysis. The Kolmogorov-Smirnov test was used to assess the normality of the data and Levene's test was used to confirm the equality of variances. Qualitative variables are expressed as absolute numbers or percentages, while quantitative variables are expressed as mean ± standard deviation or 95% confidence intervals. Quantitative data with non-normal distribution are expressed as median and range or 25-75th percentile (Q1-Q3). The unpaired Student's t-test was used to compare numerical data with normal distribution (body mass index and age), whereas the Mann-Whitney U test was used to compare data with non-normal distribution (length of hospital stay). Categorical values were compared using Fisher's exact test. Univariate linear mixed-effects models were used to compare the evolution of clinical parameters (PSI, NEWS-2 and COVID-GRAM) between patients according to their reason for discharge (improvement or death; fixed effect) during the hospitalization period (repeated observations), employing two random variables (month of hospital admission and length of hospital stay). Additionally, mixed-effects multinomial logistic regression models were constructed for the analysis of longitudinal nominal data [yes vs. no; patients in critical condition, with mechanical ventilation, elevated serum D-dimer, lactate dehydrogenase, ferritin or blood urea nitrogen (BUN), or use of antibiotics or amines], comparing the baseline values with the values of subsequent days. To determine the predictive capacity for mortality of the clinical severity scales and indices, the areas under the ROC curve (AUCs) were calculated for the different scales with their 95% confidence intervals, cut-off points and P-values, along with sensitivity, specificity and predictive values upon admission and at 2, 4, 6 and 8 days of hospitalization. Predictive capacity was classified based on AUC values as follows: 0.50-0.60, failed; 0.61-0.70, worthless; 0.71-0.80, poor; 0.81-0.90, good; and >0.90, excellent, as previously described (54,55). For each scale (PSI, Kirby index, COVID-GRAM, NEWS-2, qSOFA, MuLBSTA and FIB-4), the cut-off point was selected as the point on the curve that provided the highest sensitivity and specificity (56). Sensitivity and specificity were classified as follows: high, >80%; moderate, 65-80%; and low, <65% (57). The statistical analysis was performed using SPSS software, version 20 (IBM Corp.).
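The AUC and cut-off selection described above can be illustrated in plain Python. This is a minimal sketch, not the study's SPSS workflow: the empirical AUC is computed via the Mann-Whitney statistic, a Youden-style search selects the cut-off maximizing sensitivity + specificity, and the time-dependent aspect is mimicked by recomputing both on the patients still at risk at each time point (the data below are toy values, not study data):

```python
def auc(scores, died):
    # Empirical AUC (Mann-Whitney form): probability that a randomly chosen
    # non-survivor scores higher than a randomly chosen survivor.
    pos = [s for s, d in zip(scores, died) if d]
    neg = [s for s, d in zip(scores, died) if not d]
    wins = sum(1.0 if p > n else (0.5 if p == n else 0.0)
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def youden_cutoff(scores, died):
    # Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1,
    # with "score >= cut-off" predicting death.
    best = None
    for c in sorted(set(scores)):
        tp = sum(1 for s, d in zip(scores, died) if d and s >= c)
        fn = sum(1 for s, d in zip(scores, died) if d and s < c)
        tn = sum(1 for s, d in zip(scores, died) if (not d) and s < c)
        fp = sum(1 for s, d in zip(scores, died) if (not d) and s >= c)
        sen, spe = tp / (tp + fn), tn / (tn + fp)
        if best is None or sen + spe - 1 > best[0]:
            best = (sen + spe - 1, c, sen, spe)
    return best[1], best[2], best[3]

# Time-dependent use: recompute at each time point on patients still at risk.
by_day = {
    0: ([4, 5, 7, 8, 6, 9], [0, 0, 1, 1, 0, 1]),  # (scores, outcomes) at admission
    4: ([5, 8, 9, 10], [0, 1, 1, 1]),             # toy subset still hospitalized
}
for day, (scores, died) in by_day.items():
    cut, sen, spe = youden_cutoff(scores, died)
```

Dedicated time-dependent ROC estimators additionally handle censoring, which this toy version ignores.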
Results
Patient characteristics and outcomes. During the study period from February 1, 2021, to December 31, 2022 (Fig. 1), 1,747 patients were admitted to the respiratory area of the internal medicine service at General Hospital Zone #1, Villa de Álvarez, Colima. Of these, 1,247 were excluded due to bacterial or influenza pneumonia (without COVID-19), pregnancy, age under 18 years or incomplete medical records, leaving 515 patients included in the analysis. The mean age was 63.3±16.1 years, with a difference between those who survived and those who died (60.9±16.7 vs. 66.7±14.2 years, respectively). The percentage of male patients was 61.9%, with no sex differences in mortality. Patients who died had a higher comorbidity index, greater use of amines, hemodialysis and invasive ventilatory support, as well as higher scores in all severity indices analyzed upon hospital admission (except the Kirby index, whose value is inversely proportional to disease severity; Table I). The median length of hospital stay was 7.0 days (range, 1-38), being shorter for patients discharged due to improvement (median, 4-6 days; range, 1-29) than for those discharged due to death (median, 8.0 days; range, 1-38; Table I). The characteristics of patients upon admission, as well as the primary treatments used during hospitalization according to their final discharge status (alive or deceased), are summarized in Table I. A total of 31.9% of patients presented with critical illness at the time of admission. Mortality in the analyzed cohort was 43.9%.
Variability of clinical markers during hospitalization. Fig. 2A illustrates the progression of patient outcomes from admission (baseline) to day 8. A significant increase in patient mortality was observed: by day 7, 44% of admitted patients had died, while among those still hospitalized on day 7, the mortality rate increased to 53%; on day 8, the mortality rate rose further to 56%. In patients hospitalized with COVID-19, the disease state was not static; the proportions of patients with critical illness and requiring mechanical ventilation increased over time (Fig. 2B and C). Accordingly, the values of clinical markers changed throughout the hospitalization period. The proportion of patients with elevated serum levels of D-dimer, lactate dehydrogenase, ferritin and BUN increased with hospital stay (Fig. 2D-G), as did the need for antibiotic treatment or support with amines (Fig. 2H and I). Furthermore, PSI, NEWS-2 and COVID-GRAM remained relatively constant over time, although their values differed depending on the reason for discharge from the hospital (improvement or death; Fig. 2J-L).

Table II shows predicted mortality at each time point. The AUC was calculated to determine the optimal cut-off point for each variable in predicting death at different time points (Table II). At all time points, the index with the highest predictive power for mortality (according to its AUC values) was NEWS-2, followed by PSI and COVID-GRAM; these parameters increased in predictive power as the hospitalization time progressed. NEWS-2 had good predictive power up to day 2 and excellent power from day 4. NEWS-2 had the highest sensitivity for predicting death (>96% in all periods evaluated), but its specificity was low (22.9% on admission to 58.1% on day 8 of hospitalization). PSI had good predictive capacity from admission to day 6 and excellent power at day 8; its sensitivity was high (>80%) in all periods, with moderate specificity ranging from 70.6 to 77.3%. COVID-GRAM had good predictive capacity at all time points, with high sensitivity (84.2-87.3%), albeit with low-to-moderate specificity (61.5-67.6%). The qSOFA index had worthless predictive power on admission (AUC, 0.697), improving its predictive capacity from 96 h (AUC, 0.842). MuLBSTA and the Kirby index had poor predictive power on hospital admission (AUC, 0.726 and 0.748, respectively), which decreased after 6 days; for the Kirby index, the AUC shown represents predictive power for patient survival. MuLBSTA and qSOFA had high sensitivity at all time points (85-99%) with low specificity (14-33%). The Kirby index showed low sensitivity (57.9% on day 0 and 57.1% on day 2) and high specificity in the first 2 days (82.5% on day 0 to 84.2% on day 2). However, after 6 days, both sensitivity and specificity decreased (45.7 and 59.4%, respectively). FIB-4 demonstrated statistically significant predictive capacity at all time points, albeit with limited clinical value (AUC, 0.639-0.698) and with low sensitivity and specificity.

Fig. 3 plots the AUCs of the indices over time, showing that NEWS-2 and PSI had the highest predictive capacity, which increased with length of hospital stay.
Predictive capacity of mortality according to severity scales and indices over the course of hospitalization.
The AUC was calculated to determine the optimal cut-off point for several common clinical biomarkers [neutrophil/lymphocyte ratio (NLR), serum lactate dehydrogenase (LDH), D-dimer and ferritin] in predicting death at various time points (Table III). All of these biomarkers exhibited variable predictive capacity depending on the evaluated time point. Although serum ferritin showed statistically significant predictive capacity at all time points, it was classified as worthless (Table III). LDH demonstrated poor predictive capacity in all analyses. NLR and D-dimer showed worthless predictive ability on the admission day (AUC, 0.645 and 0.692, respectively) and the second day (AUC, 0.649 and 0.652, respectively), but improved to poor on the fourth day (AUC, 0.754 and 0.728, respectively). Notably, NLR significantly enhanced its predictive capacity on days 6 and 8 of hospitalization (AUC, 0.855 and 0.833, respectively), while D-dimer maintained poor predictive capacity (AUC, 0.680 and 0.787, respectively).
Discussion
In patients hospitalized with severe and critical COVID-19, there are variations among severity indices regarding their ability to predict death, which may also change as the hospital stay progresses. NEWS-2 and PSI were the best indices for predicting death in patients hospitalized with COVID-19 from admission to day 8, although PSI showed the best balance between sensitivity and specificity. PSI assigns considerable weight to comorbidity and age variables, which could bias the risk assessment, especially if other clinically relevant factors do not receive the same weight. This could result in an overestimation of risk for certain patients, potentially leading to inappropriate clinical decisions such as unnecessary hospitalization or overly aggressive treatment (37,43), although this does not affect its predictive capacity in COVID-19. The present study identified potential factors that could enhance the sensitivity and specificity of predictive models for mortality in patients with severe and critical COVID-19. Longitudinal data on specific clinical markers, such as the NLR or serum levels of D-dimer, lactate dehydrogenase and ferritin, throughout the hospitalization period could assist clinicians in evaluating patient prognosis. However, the utility of these markers varied, and they did not surpass the predictive capacity of the PSI, NEWS-2 or COVID-GRAM indices. LDH exhibited poor predictive capacity, albeit consistent over time. Conversely, D-dimer and NLR lacked predictive utility upon admission and on the second day, though their predictive capacity improved from day 4 onwards; NLR displayed good predictive capacity on days 6 and 8 (AUC, 0.855 and 0.833, respectively). These findings align with previous studies (10,60,61). Additionally, integrating demographic variables such as age, comorbidities and vaccination status may help predict the prognosis of each patient (10). The use of steroids in the present cohort was significantly higher in patients who survived, which may have contributed to improved prognosis, consistent with evidence supporting the use of steroids in patients with COVID-19, especially those requiring mechanical ventilation (62). These insights underscore the importance of considering temporal trends in clinical markers, such as serum D-dimer levels, which demonstrated increasing predictive power for mortality as hospitalization progressed.
NEWS-2 has shown variability in its predictive capacity for mortality across different studies and populations: an AUC of 0.68 (with low sensitivity and specificity) was reported in a UK population, while a study in a Spanish population obtained an AUC of 0.81, with moderate sensitivity and low specificity (12,47,49). Other indices, such as qSOFA, also show notable variability in their predictive capacity in different populations, ranging from an AUC of 0.67 to 0.95 (22,24,58); therefore, its relevance for predicting hospital mortality across various diseases remains controversial (27,51,52). This is consistent with the results of the present report, in which qSOFA showed variability in its predictive capacity, ranging from worthless to good at the different evaluation time points (AUC, 0.69 to 0.89). The MuLBSTA scale has been considered to have potential clinical utility for stratifying the progression of SARS-CoV-2 disease. However, this has been established mainly in Asian and Indian populations with mild-to-moderate COVID-19 (53,63), and in a Spanish cohort of hospitalized patients (64). Therefore, it was relevant to extrapolate the use of this scale to a Latin American population and to evaluate its use not only upon hospital admission and discharge. In hospitalized Spanish patients, the MuLBSTA scale had poor predictive capacity (AUC, 0.73) for mortality/mechanical ventilation, with the PSI and CURB-65 indices showing better predictive capacity (64). This is consistent with the results of the present study, in which the MuLBSTA scale was capable of predicting the death of patients hospitalized with COVID-19, but with variability depending on the evaluation time during the hospital stay (AUC, 0.69 to 0.82). COVID-GRAM had good predictive capacity at all time points, with high sensitivity (84.2-87.3%), albeit with low-to-moderate specificity (61.5-67.6%). This is consistent with previous studies reporting that, with a cut-off point (≥89) similar to that found in the present work (>86), COVID-GRAM had very high sensitivity (97.7%) but low specificity (32.7%) for the development of critical illness (16,46).
The variability in the predictive capacity reported for severity indices in COVID-19 may be due to differences in the characteristics of the analyzed populations, especially regarding risk factors (comorbidity, age, vaccination status and therapeutic strategies), which are also reflected in the variations in mortality rates among the cohorts analyzed (10,23,65). The present study was conducted in a cohort of hospitalized patients with COVID-19 with adverse prognosis and high mortality (45.5%, one of the highest in the world) (10), compared with other studies with lower mortality rates, ranging from 2.3 to 30.5% (22-24,58). Another strength of the present study is that predictive power was determined at different time points, whereas previous reports have generally evaluated the predictive power of indices only at hospital admission (22-24).
The present results reveal that there are indices whose predictive capacity remains relatively constant (COVID-GRAM, MuLBSTA and FIB-4), increases (NEWS-2, PSI and qSOFA) or decreases (Kirby index) as the hospital stay progresses. Each severity index is derived from clinical parameters, which may undergo varying degrees of change throughout hospitalization. Consequently, the predictive efficacy of each index may fluctuate based on the significance and temporal variability of the clinical parameters it encompasses. In particular, the variability in the predictive capacity of severity indices, including the decline in the predictive power of the Kirby index over time, could be influenced by the evolving clinical trajectory of the disease, the heterogeneous manifestations of COVID-19 and factors such as patient demographics and treatment strategies (18). Further research is warranted to understand the underlying mechanisms driving these changes and to optimize the integration of the Kirby index into clinical practice for prognostication in patients with COVID-19.
The FIB-4 index was confirmed as a tool capable of predicting mortality in patients with COVID-19, which agrees with previous studies (20,21). Its predictive capacity remained consistent across the evaluated periods, although it was lower (AUC, 0.639-0.698) compared with that previously reported in a Taiwanese population (AUC, 0.863) (20). These disparities may be because these populations exhibited significantly different mortality outcomes. For example, in the Taiwanese cohort (n=221), the median FIB-4 on admission was 1.91, with 4.5% of patients succumbing to the illness, while in the present study (n=515), these values were 4.68 and 43.9%, respectively (66).
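For readers unfamiliar with the index, the FIB-4 calculation can be sketched as follows. The formula is the standard published FIB-4 definition; the patient values below are hypothetical and are not taken from this cohort.

```python
# Illustrative sketch of the standard FIB-4 calculation.
# Patient values are hypothetical, not study data.
import math

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    """FIB-4 = (age x AST) / (platelets [10^9/L] x sqrt(ALT))."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

print(round(fib4(65, 50, 40, 150), 2))  # values above ~3.25 are conventionally taken to suggest advanced fibrosis
```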
The variations in the predictive capacity of severity indices among patients hospitalized with severe and critical COVID-19 underscore the complex nature of prognostication in this population. While NEWS-2 and PSI were the most reliable predictors of mortality, it is crucial to understand the factors contributing to the varying performance of indices over time. Notably, the present analysis revealed a decline in the predictive power of the Kirby index over time, which may reflect the dynamic changes in lung function and oxygenation status during hospitalization.
In standard ROC curve analysis, a marker is measured at one time, assuming that the marker value (or index) remains fixed throughout the study period. However, in practice, both the disease state and the level of prognostic biomarkers change over time (26). During the course of a disease, clinical status varies, making time-dependent ROC curve analysis appropriate. A ROC curve can be generated at various time points and the predictive capacity of the marker can be compared (26). Therefore, the time-dependent ROC curve is an effective tool for measuring the performance or robustness of a marker, given the changing clinical status. The predictive capacity of a marker may weaken or strengthen as the target time moves away from baseline. Using a time-dependent ROC curve for an index or marker that varies over time is most appropriate for guiding key medical decisions (26). This is relevant in conditions that can be highly fluctuating, such as COVID-19. In countries and hospitals with limited resources, it is key to obtain reliable clinical severity scales and indices that allow for effective and early medical care for patients at high risk of mortality. Identifying the best prognostic index, particularly one whose predictive power remains constant during the hospital stay, is key. Therefore, the results of the present study can be useful for clinicians. There are other severity scores for community-acquired pneumonia, such as the CURB-65 (confusion, uremia, respiratory rate, BP, age ≥65 years) and A-DROP (age, dehydration, respiratory failure, orientation disturbance, and low blood pressure) scores, whose predictive utility is specifically established in patients aged >65 and 70 years, respectively, as well as in bacterial pneumonia, with limited prognostic capacity for assessing severity in viral infection (67-69).
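The idea of recomputing the ROC AUC at successive time points can be made concrete with a minimal sketch. The AUC is computed here via the Mann-Whitney formulation (the probability that a randomly chosen non-survivor scores higher than a randomly chosen survivor); the index scores and outcomes below are invented for illustration, not study data.

```python
# Illustrative sketch: AUC of a severity index evaluated at two time points,
# to see whether its predictive power holds up during the hospital stay.
# Scores and outcome labels are hypothetical.

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: fraction of (non-survivor,
    survivor) pairs where the non-survivor has the higher score."""
    pos = [s for s, y in zip(scores, labels) if y == 1]  # died
    neg = [s for s, y in zip(scores, labels) if y == 0]  # survived
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical index scores for the same patients at admission and day 5
day0 = {"scores": [9, 7, 8, 4, 3, 5, 10, 2], "labels": [1, 1, 0, 0, 0, 0, 1, 0]}
day5 = {"scores": [11, 9, 6, 3, 2, 4, 12, 1], "labels": [1, 1, 0, 0, 0, 0, 1, 0]}

for name, d in [("admission", day0), ("day 5", day5)]:
    print(name, round(roc_auc(d["scores"], d["labels"]), 3))
```

A full time-dependent ROC analysis additionally accounts for censoring, but the core operation is the same: one AUC per evaluation time, compared across the stay.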
One important aspect is the possibility of simultaneously applying two or more scales during the clinical course to assess a patient's condition and guide treatment. While certain scales may not be effective at certain stages, they may provide valuable clinical insights for future considerations. This approach allows for a more comprehensive evaluation of a patient's progression and enables clinicians to adapt treatment strategies, leveraging the strengths of different scales to optimize patient care over time. The ROC curve provides a valuable tool for evaluating and enhancing the performance of assessment scales. Strategies to improve scales may include incorporating new biomarkers, refining inclusion criteria, external validation, optimizing cutoff points and considering confounding factors. These strategies can enhance the accuracy and reliability of scales, resulting in more effective and personalized clinical decision-making. However, one aspect that must be considered when the various predictive scales are used for clinical purposes is that there is currently no standard definition of high, moderate, or low specificity and/or sensitivity. Although this stratification has been used in various contexts (70,71), its interpretation depends on the clinical context and the specific disease or condition (57).
In conclusion, in hospitalized patients with COVID-19 and a high mortality rate, the NEWS-2 scale has the best predictive power; it has high sensitivity but low specificity, indicating that it is unlikely to give a false negative result. It would therefore identify most patients who are likely to die, but it would also incorrectly flag some patients who will not die. NEWS-2 (a test with high sensitivity) can thus be useful for ruling out, with good certainty, the possibility of death when a person has a negative result. On the other hand, PSI also has good to excellent predictive capacity, but additionally has a more balanced sensitivity and specificity (high and moderate, respectively), making it a useful and practical indicator for clinical use. Additionally, in hospitalized patients with COVID-19, where the disease and severity indices can be variable, using time-dependent ROC curves is an effective tool for measuring the predictive performance of various indices. The NEWS-2 and PSI indices were the most robust instruments for predicting patient death throughout the hospital stay.
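The rule-out logic described above can be sketched numerically: a highly sensitive test rarely misses a death, so a negative result argues strongly against mortality (high negative predictive value), even when low specificity produces many false alarms. The counts below are hypothetical, not taken from the study.

```python
# Minimal sketch of sensitivity/specificity/NPV/PPV from a 2x2 table.
# Counts are invented to mimic a high-sensitivity, low-specificity test.

def diagnostics(tp, fn, tn, fp):
    sens = tp / (tp + fn)  # P(test+ | died)
    spec = tn / (tn + fp)  # P(test- | survived)
    npv = tn / (tn + fn)   # P(survived | test-): rule-out strength
    ppv = tp / (tp + fp)   # P(died | test+)
    return sens, spec, npv, ppv

# Few false negatives (fn), many false positives (fp)
sens, spec, npv, ppv = diagnostics(tp=45, fn=2, tn=30, fp=40)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} NPV={npv:.2f} PPV={ppv:.2f}")
```

With these counts, the high NPV illustrates why a negative result on such a test can rule out death with good certainty, while the modest PPV reflects the over-flagging caused by low specificity.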
Study design. An ambispective (bidirectional) cohort study was conducted longitudinally with data collected from patients with severe and/or critical (30) COVID-19 who were hospitalized from February 2021 to December 2022 at the COVID-19 unit at General Hospital Number 1 of the Mexican Institute of Social Security (IMSS)-Colima (Colima, Mexico).
Table I.
Clinical characteristics of patients.
PSI showed a good balance between specificity and sensitivity. These results are consistent with those previously reported by Artero et al (58) in hospitals in Spain, where it was shown that PSI and CURB-65 were better than qSOFA and MuLBSTA at predicting mortality in patients with COVID-19 and pneumonia, and that PSI had the highest sensitivity (84.1%) and specificity (72.2%). The predictive capability of PSI for hospital mortality was similar to that in other studies (AUC, 0.77-0.85) (22,24,59). The main drawback previously postulated for the PSI is the high score assigned to comorbidity.
Table III.
Predictive capacity of NLR, D-dimer, ferritin, and LDH for mortality in patients with COVID-19.
Absolute yeast mitochondrial proteome quantification reveals trade-off between biosynthesis and energy generation during diauxic shift
Significance This work offers a unique portrayal of yeast mitochondria through the characterization of its absolute proteome. The study of biophysical changes in the mitochondrial network associated with proteome profiling, throughout yeast growth and the transition from fermentative to respiratory metabolism, lays out the crucial role this organelle has in balancing the overall metabolic status of the cell. Using proteomic mass spectrometry, state of the art fluorescence microscopy, and lipidomics analysis, these data provide a highly quantitative description of key mitochondrial processes across three states of metabolism. In particular, the work highlights the significant contribution of functional and structural remodeling occurring during the diauxic shift of this subcellular organelle.
Saccharomyces cerevisiae constitutes a popular eukaryal model for research on mitochondrial physiology. Being Crabtree-positive, this yeast has evolved the ability to ferment glucose to ethanol and respire ethanol once glucose is consumed. Its transition phase from fermentative to respiratory metabolism, known as the diauxic shift, is reflected by dramatic rearrangements of mitochondrial function and structure. To date, the metabolic adaptations that occur during the diauxic shift have not been fully characterized at the organelle level. In this study, the absolute proteome of mitochondria was quantified alongside precise parametrization of biophysical properties associated with the mitochondrial network using state-of-the-art optical-imaging techniques. This allowed the determination of absolute protein abundances at a subcellular level. By tracking the transformation of mitochondrial mass and volume, alongside changes in the absolute mitochondrial proteome allocation, we could quantify how mitochondria balance their dual role as a biosynthetic hub as well as a center for cellular respiration. Furthermore, our findings suggest that in the transition from a fermentative to a respiratory metabolism, the diauxic shift represents the stage where major structural and functional reorganizations in mitochondrial metabolism occur. This metabolic transition, initiated at the mitochondria level, is then extended to the rest of the yeast cell.

mitochondria | absolute proteomics | Saccharomyces cerevisiae | diauxic shift

In the eukaryotic model organism Saccharomyces cerevisiae, glucose fermentation is a high-flux process that has proven to be catalytically more efficient than respiration, in terms of adenosine triphosphate (ATP) production per protein mass (1).
The presence of glucose induces a catabolite repression response (CRR) that prevents utilization of alternative carbon sources by down-regulating the transcription of essential metabolic genes, such as those required for mitochondrial biogenesis, oxidative phosphorylation, and the tricarboxylic acid (TCA) cycle (2). When extracellular glucose is exhausted, yeast cells undergo a highly regulated transition from fermentative to respiratory metabolism; the turning point between these two distinct metabolic states is known as the diauxic shift (3) (Fig. 1A). The switch toward respiratory metabolism begins gradually, when extracellular glucose is still available in the medium (Fig. 1 B and C). This is followed by major metabolic reorganization and adaptation to growth on ethanol, as determined by the concomitant activation of both the TCA cycle and the gluconeogenic pathway. At this stage, a marked change in the cellular protein pool is also accompanied by a decrease in growth rate (1,4). According to Zampar et al. (5), at the cellular level, progression toward the diauxic shift is characterized by three main steps: 1) a decline in glycolytic flux and a down-regulation of phosphofructokinase and pyruvate kinase ahead of glucose exhaustion; 2) a reversion of carbon flux through glycolysis after glucose depletion and an up-regulation of the glyoxylate cycle; and, finally, 3) an inhibition of the pentose phosphate pathway (PPP) and a reconfiguration of alternative strategies for nicotinamide adenine dinucleotide phosphate (NADPH) regeneration (5).
As semiautonomous organelles, mitochondria are considered a center of many essential metabolic processes, and, as such, they are tightly interconnected to other cellular functions and pathways that take place in separate cellular compartments (6). In particular, they are dynamic and interconnected organelles, which undergo continuous events of fusion and fission to form a transitional tubular structure known as the mitochondrial network (7). By being the site of cellular respiration, mitochondria are highly influenced
by changes in cellular growth conditions. The shift from fermentative to respiratory metabolism, therefore, triggers a substantial remodeling of mitochondrial metabolism, as well as the shape, structure, and volume of the mitochondrial network (6). To date, there have been several studies focusing on the metabolic transition from exponential to stationary phase in S. cerevisiae, with some of the most recent studies describing the transcriptional regulation that occurs and relative quantification of the yeast cell proteome during the diauxic shift (5,8-11). Fewer studies, however, have performed comparative proteomic analysis of yeast using mitochondria isolated from cells grown on different carbon sources, these principally being fermentable and nonfermentable (respiratory) carbon sources (12-15). To accurately understand the physiological adaptation during these growth phases and highlight the central role played by the mitochondrial network, we performed an absolute quantitative analysis of the mitochondrial proteome isolated throughout diauxic yeast growth. This study applies an approach toward studying the role of mitochondria in yeast cells, providing a realistic representation of the contribution of this organelle in the progression from a fermentative to a respiratory metabolism, illustrating the Janus-faced function of mitochondria in these distinct metabolic states.
Results
Absolute Proteomic Characterization of Yeast Mitochondria. To evaluate the progressive adaptation of the mitochondrial proteome, yeast cultures were grown in a controlled culture environment using batch bioreactors. Samples were collected in biological triplicates at 9, 13, and 20 h after inoculation (Fig. 1A), with cells at these time points undergoing fermentative metabolism, the diauxic shift, and respiratory metabolism, respectively. Mitochondria isolation was performed via differential centrifugation followed by additional sample concentration, according to a preexisting protocol, as described in ref. 16. The samples were then tested via Western blot analysis to confirm the purity of the isolated mitochondria (SI Appendix, Fig. S1A).
For estimation of the absolute protein abundance, we applied a method developed by Schwanhäusser et al. (17) using tandem mass tag (TMT) 10-plex for isobaric mass tag labeling to enable simultaneous quantification of our replicates and the reference sample. This procedure is based on the assumption that the sum of all peptide intensities per protein, divided by the theoretical number of tryptic peptides, is proportional to protein concentration (17). Absolute quantification was achieved using standards from UPS2, a set of distinct reference proteins of known concentration. The absolute proteomic data obtained gave a high level of replicate reproducibility (SI Appendix, Fig. S1 C-F), with proteome profiling resulting in the identification of 3,801 proteins across cell samples (∼65% of the expressed proteome) with a total of 3,700 proteins detected for the isolated mitochondrial samples. Despite the high number of proteins detected in the mitochondrial samples, only a low level of contaminating cytosolic proteins was identified, indicating the high purity of our mitochondrial fractions. Among the 3,700 proteins identified, in total, 1,036 were previously reported as mitochondrial (Dataset S1). These identifications were through both high-throughput studies and manual curation according to the Saccharomyces Genome Database (SGD) (18), with 824 proteins being identified as mitochondrial through manual curation alone. Additionally, two recently published studies, assigning a mitochondrial proteome of 901 and 986 proteins, respectively, were used as references for defining mitochondrial proteins (12,15). Our approach, which enabled 50 previously undescribed proteins to be identified, subsequently shows a 5% increase relative to previous studies.
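The quantification principle described above (summed peptide intensity divided by the theoretical number of tryptic peptides, anchored by spiked-in standards of known amount) can be sketched as follows. This is a hedged illustration, not the study's pipeline: the intensities, peptide counts, and standard amounts are invented, and the calibration is a simple log-log linear fit mimicking the role of the UPS2 standards.

```python
# Sketch of iBAQ-like absolute quantification: intensity / theoretical
# peptide count is assumed proportional to molar amount, and standards of
# known amount anchor a log10-log10 linear calibration. All numbers invented.
import math

def ibaq(total_intensity, n_theoretical_peptides):
    return total_intensity / n_theoretical_peptides

# (iBAQ value, known amount in fmol) for hypothetical spiked-in standards
standards = [(1e4, 0.5), (1e5, 5.0), (1e6, 50.0)]

# Least-squares fit of log10(amount) = a * log10(iBAQ) + b
xs = [math.log10(i) for i, _ in standards]
ys = [math.log10(amt) for _, amt in standards]
n = len(xs)
a = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
    (n * sum(x * x for x in xs) - sum(xs) ** 2)
b = (sum(ys) - a * sum(xs)) / n

def absolute_amount(total_intensity, n_peptides):
    """Calibrated absolute amount (fmol) for an unknown protein."""
    return 10 ** (a * math.log10(ibaq(total_intensity, n_peptides)) + b)

print(round(absolute_amount(3e5, 12), 3))  # fmol for a hypothetical protein
```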
Alongside our proteomic data acquisition, we aimed to characterize comprehensively several other cell parameters across the different metabolic states. Therefore, along with measuring cell dry weight (CDW) (SI Appendix, Fig. S1B), we collected data on cell number and cell volume for each time point, using an electrical current exclusion cell-counter and analyzer (CASY). Here, cell volume during the glucose phase and the diauxic shift were similar (41.4 ± 0.98 fL and 39.3 ± 1.99 fL, respectively). When the cells shifted to respiratory metabolism in the ethanol phase, the cell volume reduced to about half the volume measured at fermentative growth (21.7 ± 4.7 fL) (SI Appendix, Fig. S2 A-C) paired with a slower growth rate, as observed in postdiauxic yeast cells.
Determination of Mitochondrial Volume and Mass and Their Variations during the Diauxic Shift. To reveal the structural changes occurring at the mitochondrial level, we performed in vivo measurements of mitochondrial volume before, during, and after the diauxic shift. For this, we engineered the S. cerevisiae yeast strain CEN.PK113-7D, integrating the pMitoLoc plasmid (19) into its genome. CEN.PK113-7D MitoLoc constitutively expresses two fluorescent proteins, GFP and mCherry, bearing specific mitochondrial targeting sequences derived from the N-terminal localization sequence of the ATPase subunit 9 (preSU9) of Neurospora crassa and the cytochrome C oxidase 4 (preCOX4), respectively. The import of preCOX4-mCherry is strictly dependent on the presence of an active membrane potential, and import is, therefore, proportional to the membrane potential, causing the preCOX4-mCherry to relocate to the cytoplasm upon weakening of the membrane potential (19). The preSU9 is a well-established strong mitochondrial presequence that is less sensitive to the membrane potential than other presequences and can drive import even at low membrane potential (19-22). The import strength of the preSU9 presequence has been demonstrated in experiments using addition of increasing concentrations of the uncoupler cyanide m-chlorophenyl hydrazone (CCCP) in order to gradually decrease the membrane potential (19,22). In these experiments, the coupling of proteins to preSU9 enables import even at low membrane potentials where other presequences, such as preCOX4, are insufficient to drive translocation. Based on these principles, through application of MitoLoc, we were able to visualize mitochondrial morphology, determine the mitochondrial volume, and, at the same time, qualitatively ensure the mitochondrial functional integrity was intact.
The term mitochondrial morphology implies the ability of this organelle to adopt a variety of shapes, from small spheres and short or elongated tubules to reticular (net-like) networks, based on cell type and their metabolism (23). This intrinsic plasticity of mitochondria is regulated mainly by fission and fusion events, as well as branching/debranching and extension/retraction of outer and inner mitochondrial membranes, all operated by molecular effectors (23).
It has been demonstrated that changes in mitochondrial shape play a crucial role in mitochondrial respiration. The morphological visualization of yeast cells expressing the MitoLoc genes was performed using a Nikon A1 confocal microscope (n = 50 cells per each condition). Z-stacks were acquired and analyzed using NIS-Elements Confocal software enabling automatic deconvolution, three-dimensional projections, and, finally, volume measurement of the whole mitochondrial network (Fig. 2B).
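The volume measurement step amounts to segmenting the fluorescent network in a deconvolved z-stack and summing voxel volumes. A toy sketch of this idea (not the NIS-Elements pipeline; the stack, threshold, and voxel dimensions are all hypothetical):

```python
# Toy sketch: approximate a network volume from a 3D intensity stack by
# thresholding and counting voxels. Stack and parameters are hypothetical.

def network_volume_fl(stack, voxel_xyz_um=(0.1, 0.1, 0.3), threshold=0.5):
    """stack: nested lists [z][y][x] of normalized intensities.
    Returns volume in fL (1 fL = 1 um^3)."""
    voxel_vol = voxel_xyz_um[0] * voxel_xyz_um[1] * voxel_xyz_um[2]
    n = sum(1 for plane in stack for row in plane for v in row if v > threshold)
    return n * voxel_vol

# Build a 10x10x10 stack with a bright 5x4x6-voxel "network" region
stack = [[[0.0] * 10 for _ in range(10)] for _ in range(10)]
for z in range(2, 7):
    for y in range(3, 7):
        for x in range(1, 7):
            stack[z][y][x] = 1.0

print(round(network_volume_fl(stack), 3))  # 120 voxels x 0.003 um^3 each
```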
Analysis of the mitochondrial volume revealed that as cells begin to respire, the mitochondrial network dramatically expands, occupying 4.77% of the whole cellular volume during fermentation, increasing to 11.16% during the diauxic shift, and then to 35.43% when they have fully transitioned to respiratory metabolism in the ethanol phase of growth (Fig. 2 A and B). This would suggest that significant changes in the size of mitochondria begin to occur during the diauxic shift, at the time of glucose depletion and before the cells start assimilating ethanol at the start of the respiratory phase. This contrasts with what we observed for cell volume, which decreases only after the cells complete the diauxic shift. The volume of mitochondria, therefore, appears to be strictly bound to the presence and depletion of carbon-source availability for the cell. In fact, it has recently been shown that following the exhaustion of the carbon source, it is possible to recognize, in a heterogeneous population, quiescent and senescent cells based solely on the structure of their mitochondrial network. These findings tighten the association between mitochondrial role and functionality and processes of cellular aging, whereby aging populations are known to increase their composition of senescent cells showing defects in mitochondrial morphology (24).
Additionally, we found a strong correlation between the cell volume and mitochondrial volume under all of the three growth phases studied, with increases in cell size being reflected by a concomitant increase in their mitochondrial network ( Fig. 2 F-H). These findings are in line with what was already published by Rafelski et al. (25) and are consistent with the observations that mitochondria play a central role in regulating cell size (26).
Based on previous studies of the biophysical properties of mitochondria, it is possible to approximate that the yeast cell and the mitochondrial network have comparable densities (27,28) (SI Appendix, Fig. S2D). Given this assumption and the data collected on cell mass, cell volume, and mitochondrial volume, we were able to calculate the mass of the mitochondrial network for each condition. The data obtained on the mitochondrial mass at the three stages of yeast growth were then used to calculate the absolute protein abundance at the mitochondrial level.
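The mass estimate described above follows directly from the equal-density assumption: the mitochondrial mass fraction equals the measured volume fraction. The cell volumes and volume fractions below are taken from the text; the per-cell dry mass is a hypothetical placeholder inserted only to make the arithmetic concrete.

```python
# Back-of-the-envelope sketch: if cell and mitochondrial densities are
# comparable, mitochondrial mass fraction = mitochondrial volume fraction.
# Volumes and fractions are from the text; cell mass is hypothetical.

phases = {
    # phase: (mean cell volume in fL, mitochondrial volume fraction)
    "glucose": (41.4, 0.0477),
    "diauxic": (39.3, 0.1116),
    "ethanol": (21.7, 0.3543),
}

cell_mass_pg = 15.0  # hypothetical dry mass per cell, picograms

for phase, (cell_vol, frac) in phases.items():
    mito_vol = cell_vol * frac       # fL of mitochondria per cell
    mito_mass = cell_mass_pg * frac  # pg, assuming equal densities
    print(f"{phase}: mito volume {mito_vol:.2f} fL, mito mass {mito_mass:.2f} pg")
```

With these numbers, the mitochondrial volume per cell still rises from the glucose phase to the ethanol phase even though the cell itself shrinks, because the fraction grows faster than the cell volume falls.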
Identification of Significant Changes in Protein Abundance during the Glucose Phase, Diauxic Shift, and Ethanol Phase of Yeast Growth. To assess overall changes in the mitochondrial proteome as cells transition from fermentative to respiratory metabolism, we used biological triplicate proteomic data for significance testing of the protein abundance changes in a pairwise manner. Student's t test was used to calculate the significance of the changes in protein abundance along with their log2 fold changes (log2FC) between the three different conditions (Dataset S3). To classify the changes in abundance, we set a cutoff of P < 0.05 and an absolute log2FC > 1. This resulted in a total of 428 proteins being significantly regulated when comparing the diauxic shift and the glucose phase (413 up-regulated and 15 down-regulated), 309 being significantly regulated when comparing the ethanol and the glucose phase (273 up-regulated and 36 down-regulated), while only 16 proteins were found to be significantly regulated when comparing the ethanol phase and the diauxic shift (2 up-regulated and 14 down-regulated) (Dataset S4) (Fig. 3 A-C). Although only 16 proteins were significantly regulated between the ethanol phase and the diauxic shift, given the defined log2FC cutoff, 82 proteins were found to be significantly down-regulated (P < 0.05) (Dataset S4).
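The regulation call described above (P < 0.05 and |log2FC| > 1) can be sketched as a small classification function. The abundances and p-values below are invented for illustration; the p-values stand in for the Student's t test results computed from the triplicates.

```python
# Minimal sketch of the significance cutoff used above: a protein is called
# regulated when P < 0.05 and |log2 fold change| > 1. Values are invented.
import math

def classify(mean_a, mean_b, p_value, p_cut=0.05, lfc_cut=1.0):
    lfc = math.log2(mean_b / mean_a)
    if p_value < p_cut and abs(lfc) > lfc_cut:
        return "up" if lfc > 0 else "down"
    return "ns"  # not significant under the combined cutoff

# (protein, mean abundance in phase A, mean abundance in phase B, p-value)
examples = [
    ("Mgm1", 100.0, 410.0, 0.004),  # >2x up, significant
    ("Ecm33", 80.0, 30.0, 0.010),   # >2x down, significant
    ("Act1", 50.0, 55.0, 0.400),    # essentially unchanged
]
for name, a, b, p in examples:
    print(name, classify(a, b, p))
```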
To identify some general trends with regard to the time point of regulation, we compared those proteins with significant changes in abundance across the different phases (Fig. 3 D and E). Of the 482 unique proteins significantly regulated during any of the stages, 272 proteins were shared and significantly regulated, both when comparing the diauxic shift and the glucose phase and when comparing the ethanol and glucose phase (259 up-regulated and 13 down-regulated) (Fig. 3 D and E) (Dataset S4). To identify trends among processes being regulated, we next ran a gene ontology (GO)-term enrichment analysis on biological processes (Benjamini-Hochberg corrected P value: <0.05) using YeastMine (29). The results of this enrichment analysis showed that among the 272 significantly regulated proteins, the TCA cycle, the respiratory chain, and ATP synthase, as well as mitochondrial transporters, were the most represented processes (Dataset S5). These proteins are up-regulated in the diauxic shift and follow a further, although small, up-regulation throughout the ethanol phase, indicating that these proteins are part of an early response to glucose exhaustion that lasts throughout the entire diauxic shift. These results are in good agreement with findings previously reported by Murphy et al. (11), who investigated the temporal changes of the S. cerevisiae proteome during the diauxic shift. In that study, the TCA cycle and oxidative phosphorylation were found to be part of an early response to glucose exhaustion, with the up-regulation of protein abundances of these processes shown to occur throughout the diauxic shift.
From pairwise comparison of the three metabolic phases, we additionally identified significantly regulated proteins that were uniquely up- and down-regulated in each phase, respectively, enabling a sharper overview of what processes may be uniquely associated with the different metabolic stages of yeast growth (Dataset S4). Of the proteins with a significant change in abundance in the diauxic shift, versus the glucose phase, 155 were up-regulated, while only 3 were down-regulated (these being the Altered inheritance of mitochondria protein Aim6, the Phospholipid-transporting ATPase Dnf1 and the cell wall protein Ecm33). Among the 155 specifically up-regulated, several proteins of notable interest include the Dynamin-like GTPase Mgm1, the Mitochondrial distribution and morphology protein Mdm31, most of the mitochondrial ribosomal proteins (e.g., Mrp1, Mrp13, Mrpl1), the Mitochondrial translation factor Atp22, the Required for respiratory growth protein 1 Rrg1, the Cytochrome b translational activator protein Cbs1, the Cytochrome oxidase assembly factors Coa1 and Coa4, and the Mitochondrial respiratory-chain complexes assembly protein Yta12.
When considering the group of significantly regulated proteins in the ethanol phase versus the glucose phase, we detected 15 proteins that were uniquely up-regulated versus 23 that were uniquely down-regulated in this group. Among the proteins specifically up-regulated, we found the mitochondrial Acyl carrier protein Acp1, the Cytochrome c oxidase assembly protein Cox16, the Dynamin-related protein Dnm1, and MIOREX complex component 3 Mrx3. The larger group of down-regulated proteins includes the Iron-sulfur cluster assembly protein 2 Isu2, the ATP-dependent molecular chaperone Hsc82, Phosphoglycerate mutase 1 Gpm1, and the Membrane-anchored lipid-binding protein Lam1.
Finally, the group of ethanol phase versus diauxic shift includes only a few uniquely significantly regulated proteins, with only one up-regulated protein and 13 down-regulated proteins. The single up-regulated protein is the probable secreted beta-glucosidase Uth1, involved in the regulation of mitochondrial biogenesis (30). The group of down-regulated proteins includes the Ubiquinone biosynthesis protein Coq9, Ubiquinone biosynthesis O-methyltransferase Coq3, Cytochrome c iso-2 Cyc7, and the Stationary phase gene 1 and 4 proteins Spg1 and Spg4.
Through this analysis, it was possible to point out the high number of proteins specifically up-regulated in the diauxic shift (155 against the 15 proteins up-regulated in the ethanol phase versus glucose phase). Among the processes that are specifically up-regulated during the diauxic shift, we identified proteins involved in mitochondria morphology (i.e., MGM1), as well as in energy metabolism. This confirms the crucial role of this metabolic stage in the regulation of a variety of processes associated with the transition from a fermentative to respiratory metabolism.
Finally, we looked at a group of proteins that are significantly up-regulated in the diauxic shift compared with the glucose phase but have a log2FC < 1 when comparing the ethanol phase and the glucose phase. GO-term enrichment analysis on biological processes (Benjamini-Hochberg corrected P value: <0.05) showed that this group of proteins was significantly enriched in GO terms including mitochondrial gene expression, mitochondrial translation, mitochondrial organization, protein targeting to mitochondria, and respiratory-chain complex assembly, indicating that a reprogramming of the mitochondrial network and biogenesis-related functions is essential at an early stage of the adaptation of mitochondria to a respiratory metabolism (Dataset S5). These processes were not identified as significantly enriched among the regulated proteins found by Murphy et al. (11). We speculate that analysis of the cellular proteome alone, as performed by Murphy et al. and other studies (8-11), does not sufficiently portray changes occurring at the mitochondrial level, which often involve less abundant, membrane-bound mitochondrial protein groups, in contrast to our analysis of the mitochondrial proteome specifically. On top of providing an accurate representation of mitochondrial processes during the diauxic shift, our study therefore additionally highlights the benefit of focusing on the mitochondrial proteome specifically, alongside the cell proteome (Dataset S2), when analyzing metabolic processes that directly involve this organelle.
Stoichiometric Evaluation of ATP Synthase Subunits and Remodeling of the Cristae Structure during the Diauxic Shift. ATP synthase is a highly conserved complex responsible for the formation of ATP via a rotary mechanism. Through the application of a stoichiometric model, the F1Fo-ATP synthase has been identified as the key flux-controlling enzyme for the respiratory metabolism in S. cerevisiae (1,31). Therefore, to validate the quality of our absolute proteomic data, we investigated the subunit stoichiometry of the multiprotein ATP synthase complex during the three metabolic phases (Fig. 4 A and B and SI Appendix, Fig. S3 A and B).
To achieve this, we calculated the ratio of each subunit's copy number per cell to the median abundance across subunits. For comparison, we also included proteins whose quantification could not be considered reliable, given the low number of peptides identified (Fig. 4 A and B and SI Appendix, Fig. S3 A and B). Overall, these results showed that despite the changing conditions, cells maintained the ratio of subunits in the stoichiometry required for complex formation, with over 75% of the total complex-forming proteins being within −1.7 and 1.0 log2FC of the median in the glucose phase, −2.5 and 1.0 log2FC in the diauxic shift, and −1.8 and 1.0 log2FC in the ethanol phase (SI Appendix, Fig. S3 F and G).
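The stoichiometry check described above can be sketched as follows: each subunit's copy number is expressed as a log2 ratio to the median subunit abundance, and a complex with preserved stoichiometry keeps these ratios within a narrow band. The subunit names follow the yeast ATP synthase gene nomenclature, but the copy numbers below are hypothetical, not the measured values.

```python
# Sketch of the subunit-stoichiometry check: log2 ratio of each subunit's
# copy number to the median across subunits. Copy numbers are invented.
import math
import statistics

subunit_copies = {
    "Atp1": 22000, "Atp2": 24000, "Atp3": 8000,
    "Atp4": 9500, "Atp5": 10000, "Atp7": 11000,
}

median = statistics.median(subunit_copies.values())
log2_ratios = {s: math.log2(c / median) for s, c in subunit_copies.items()}
for s, r in sorted(log2_ratios.items()):
    print(f"{s}: {r:+.2f}")
```

In real data the ratios also reflect the fixed stoichiometry of the complex (for instance, the c-ring subunit is present in multiple copies per complex), so a constant offset from the median across conditions is still consistent with preserved assembly.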
Next, we analyzed protein allocation across the three growth phases, focusing on the ATP synthase complex and "cristae associated proteins," the latter including the respiratory-chain complex located in the cristae membranes and the MICOS complex situated in the cristae junctions (Fig. 4 C, D, F, G, and I).
From the analysis of the ATP synthase, we found that the onset of mitochondrial respiratory metabolism in the ethanol phase is marked by an ∼threefold increase in abundance of this complex (after summing all subunits) compared with growth on glucose. This increase in mitochondrial respiration, as reflected by ATP synthase becoming more abundant, was also coupled to a concomitant ∼threefold increase in the abundance of cristae-associated proteins, linking the transformation of mitochondrial functions to the remodeling of their morphology (Fig. 4 D, F, G, and I). Furthermore, we observed a drastic increase in protein abundance predominantly during the diauxic shift, with a 2.9-fold increase in the ATP synthase complex during the transition from glucose phase to diauxic shift versus a 0.4-fold increase from diauxic shift to ethanol phase. From the analyses of the respiratory-chain complex and the MICOS complex, we also observed the same trend, with proportional increases occurring mainly as cells transition from the glucose phase to the diauxic shift. Likewise, in the overall analysis of the cristae-associated proteins, we found a 3.5-fold increase from glucose phase to diauxic shift and a 0.3-fold increase from diauxic shift to ethanol phase (Fig. 4 D, F, G, and I).
The analyses of proteins located in the inner membrane boundaries (e.g., Yta12, Afg3, and Oxa1) (Fig. 4J) show the highest levels of these proteins during the diauxic shift, followed by a constant or slightly lower allocation in the subsequent respiratory phase. These proteins are required for the assembly of mitochondrial enzyme complexes (Yta12 and Afg3) and the insertion of mitochondrial and some nuclear-encoded proteins into the inner mitochondrial membrane (Oxa1) (32, 33). This suggests that changes in this part of the mitochondria, occurring principally during the diauxic shift, may perform a regulatory role in initiating the structural changes necessary for respiratory metabolism.
It has been previously established that a tight link exists between the structure and function of mitochondria (23)(24)(25)(26). Indeed, within the inner mitochondrial membranes, the cristae structure has been shown to be crucial for the function of the respiratory chain, due to its ability to 1) modulate the organization of respiratory-chain components into supercomplexes (SCs) and 2) affect the structural stability of these SCs (34). Among the proteins involved in the cristae structure, OPA1 (Mgm1 in S. cerevisiae), located in the inner membrane boundaries and also known as the master cristae shape regulator, plays a critical role in the stability and assembly of the respiratory-chain SCs (35,36). By looking at the distribution of this protein in the three different metabolic phases considered in this study, we notice a significant increase in its abundance, mainly in the diauxic shift (with a twofold increase from the glucose phase to the diauxic shift versus a 1.7-fold increase between the glucose phase and the ethanol phase) (Fig. 4E). Like OPA1, the MICOS complex also plays a critical role in connecting structural changes to this organelle's function. In particular, this complex is important for securing the architecture of mitochondrial membranes, by holding the cristae junction and stabilizing the cristae curvature (37). The stability of MICOS, in turn, is also aided by the inner membrane protein Aim24 (Fig. 4H), which, if deleted, results in impaired respiration, destabilization of the MICOS complex, as well as an alteration in the cardiolipin (CL) acylation pattern (38). From the analysis of Aim24's distribution during the three metabolic phases of yeast growth, we observe, as for the other aforementioned proteins, an increase in this protein's allocation during the diauxic shift that remains constant during the ethanol phase (Fig. 4H).
Altogether, these data support the hypothesis that the diauxic shift is the phase in which most of the structural and functional rearrangements in the mitochondria occur, in preparation for maximum respiratory function during growth on ethanol.
To reconcile further what we found for cristae at the protein level with its function in respiration, we next performed lipidomics on the crude mitochondrial extract to see if the lipid composition of the cristae was also changing in line with mitochondrial metabolism.

Fig. 4. Increasing proteome allocation toward respiratory metabolism is coupled to increased cristae formation. (A) Stoichiometry of the subunits of ATP synthase, in the glucose phase, expressed as the ratio to the median, taking the theoretical stoichiometry of ATP1 and ATP2 (three subunits per complex) into account. Data represent the means ± SD of three biological replicates. (B) Overview of ATP synthase subunit composition in yeast. Subunits in color were detected in this study; subunits in gray were detected with low coverage and quantification accuracy, possibly due to their presence in the inner mitochondrial membrane. Subunits in white were not detected. (C) Absolute abundance of the ATP synthase complex. Data represent the means ± SD of three biological replicates. Statistical analyses were performed using paired t test. (D–J) Absolute abundance of the cristae-associated proteins (D), with detailed analyses of the absolute abundance of the respiratory-chain complexes (E); proteins of Complex IV of the respiratory chain (F); the MICOS complex (G); the protein Mgm1, dynamin-like GTPase (H); the protein Aim24, altered inheritance rate of mitochondria (I); and the inner boundary membrane proteins Yta12, Afg3, and Oxa1 (J). Data represent the means ± SD of three biological replicates. Statistical analyses were performed using paired t test. Statistically significant differences are indicated as follows: ns (not significant), P > 0.05; *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001.
Our lipidomics data revealed that the mitochondrion-specific glycerophospholipid CL increased in abundance during the diauxic shift (from 1.81 ± 0.18 μg/mg to 5.25 ± 1.11 μg/mg), followed by a decrease (to 3.77 ± 0.30 μg/mg) as cells reached the ethanol phase (Fig. 5A), probably because the majority of mitochondrial remodeling occurs already during the diauxic shift. Due to its conical shape, CL is known to be abundant in areas of high membrane curvature; moreover, this lipid has been implicated in the stabilization of individual complexes of the respiratory chain (39). We found that as cells shift toward respiration, the acyl-chain composition of the CL species also moves toward a higher degree of unsaturation (Fig. 5 C and D). It has previously been speculated that the increase in CL unsaturation enables an increase in membrane fluidity, in turn allowing cristae to increase in curvature and accommodate more ATP synthase for respiration (39). This CL reconfiguration was further supported by the proteomic data, which showed an up-regulation of the mitochondrial CL-specific phospholipase Cld1; the mitochondrial phosphatidylglycerolphosphate (PGP) phosphatase Gep4; the phosphatidylglycerolphosphate synthase Pgs1, which catalyzes a rate-limiting step in the synthesis of CL; the mitochondrial phosphatidate cytidylyltransferase (CDP-DAG synthase) Tam41, also required for CL biosynthesis; and the lysophosphatidylcholine acyltransferase Taz1, involved in CL acyl-chain remodeling (Fig. 5E). These proteins, which are strictly involved in the generation of the mitochondrion-specific glycerophospholipid CL, show a peak allocation during the diauxic shift, reflecting the higher concentration of CL during this metabolic phase shown in Fig. 5A.

Fig. 5 legend (excerpt): Cld1, Gep4, Tam41, Taz1, and Pgs1 in the three stages of growth. Data represent the means ± SD of three biological replicates. Statistical analyses were performed using paired t test. (E) Abundance of PE scaled to picogram of MDW, calculated as the mean ± SD of three biological replicates, based on LC-MS analysis of isolated mitochondria. Statistical analyses were performed using paired t test. (F) Absolute abundance of the identified enzymes involved in PE metabolism, Psd1, Cho1, and Mdm35, in the three stages of growth. Data represent the means ± SD of three biological replicates. All absolute abundances are scaled to picogram of MDW. Statistical analyses were performed using paired t test. Statistically significant differences are indicated as follows: ns (not significant), P > 0.05; *P < 0.05; **P < 0.01.
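The shift toward more unsaturated CL acyl chains can be summarized with an abundance-weighted unsaturation index (mean double bonds per CL species). The species names and fractional abundances below are illustrative assumptions, not measured values; real profiles would come from the LC-MS lipidomics of isolated mitochondria.

```python
# Hypothetical cardiolipin (CL) species profiles, keyed by
# "total acyl carbons:total double bonds", with relative abundances.
glucose_phase = {"CL 68:4": 0.20, "CL 68:3": 0.35, "CL 68:2": 0.45}
ethanol_phase = {"CL 68:4": 0.55, "CL 68:3": 0.30, "CL 68:2": 0.15}

def unsaturation_index(profile):
    """Abundance-weighted mean number of double bonds per CL species."""
    total = sum(profile.values())
    return sum(int(name.split(":")[1]) * frac
               for name, frac in profile.items()) / total

ui_glc = unsaturation_index(glucose_phase)
ui_eth = unsaturation_index(ethanol_phase)
print(f"glucose: {ui_glc:.2f} double bonds/species, "
      f"ethanol: {ui_eth:.2f} double bonds/species")
```

A rise in this index between phases is one simple way to quantify the reported move toward tetraunsaturated CL species as cells begin to respire.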
Furthermore, the increase in unsaturation documented in Fig. 3 B and C is in line with published evidence that acyl-chain unsaturation occurs alongside an increase in respiratory activity (40). A recent study hypothesizes that the increased expression of oxidative phosphorylation complexes creates a stress in the lipid bilayer and is responsible for the induction of CL remodeling by Cld1 and Taz1. The increased content in tetraunsaturated CL species would, therefore, stabilize the lipid-protein interactions necessary for the correct functionality of protein complexes residing in the mitochondrial membrane (39).
Alongside CL, the other non-bilayer-forming phospholipid, phosphatidylethanolamine (PE), has been demonstrated to play a role in the maintenance of mitochondrial structure and function (41). Indeed, a synergistic activity between CL and PE has been shown to exist, with both being required for optimal mitochondrial import activity and for the respiratory-chain complexes to function (42). However, in contrast to what is observed for CL (Fig. 5A), PE appears to accumulate mostly during the ethanol phase, most likely to compensate for reduced levels of CL (Fig. 5B). These data are also reflected by the increased allocation of Psd1, the leading mitochondrial enzyme involved in the synthesis of PE (Fig. 5F). Altogether, the integration of the absolute mitochondrial proteome with lipidomic profiling, performed on isolated mitochondria, supports the position of CL and PE as regulators of mitochondrial respiratory function. Additionally, these data highlight modifications in membrane composition occurring during the diauxic shift, in particular the acyl-chain remodeling of CL, a feature connected to mitochondrial biogenesis and proliferation, which take place during this metabolic phase.
A study published by Casanovas et al. (10) that employed a joint proteomic and lipidomic approach to analyze proliferating S. cerevisiae cells showed, in alignment with our findings, an increase of CL levels during the diauxic shift, together with an up-regulation of Taz1 and an activation of CL remodeling. However, the increase in PE that we observed during the diauxic and postdiauxic phases was not reported by Casanovas et al., who conversely report an increase of PE, alongside other phospholipids, during fermentation. This discrepancy is possibly due to the additional measurement of phospholipid molecular species present elsewhere in the cell, outside the mitochondria, which may confound interpretation of how much is derived solely from this organelle. For example, the presence of a second PE-synthesizing enzyme (Psd2), located in the endoplasmic reticulum (ER), can supply the cells with additional required phospholipids (43). A whole-cell approach may, therefore, preclude the accurate quantification of mitochondrial PE, while a targeted analysis of isolated mitochondria enables a more accurate view of lipid dynamics in this organelle specifically.
Mitochondrial Absolute Proteome Allocation and Variation of the Organelle's Role Amid Diauxic Growth. To understand how the mitochondrial proteome is reallocated for respiration in the postdiauxic phase, we split the proteome into 19 groups based on each protein's molecular function. We then analyzed the relative abundance of these groups in the respiratory versus fermentative growth phase. Overall, the mitochondrial proteome showed a general shift in proteome allocation from biosynthetic processes, at the high growth rate, to energy-related processes, such as oxidative phosphorylation, at a lower growth rate (Fig. 6 A and B). Similarly, to analyze the allocation of the whole-cell proteome (Fig. 6C), we divided the proteome into 14 groups of similar function based on GO (44, 45) annotation and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways (46) (Dataset S6). These groups covered ∼90% of the proteome mass-wise, and the remaining proteins were assigned to an additional group (other processes). By examining the allocation between the groups in the different growth stages, we found that during exponential growth, the proteome is allocated mostly toward translation-related proteins (∼37%) and glycolytic proteins (∼20%). As the cells transitioned to respiratory metabolism, however, we saw a decrease in allocation toward translation by ∼20%, in line with previous work that showed the proportion of translation-related proteins in the proteome to scale with growth rate (47).
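The allocation analysis described above amounts to computing each functional group's share of total proteome mass. The sketch below uses made-up proteins, group labels, copy numbers, and molecular weights purely for illustration; the study's real grouping comes from GO/KEGG annotation of measured proteins.

```python
# Hypothetical per-protein data: (functional group, copies per cell,
# molecular weight in kDa). Mass of a protein pool = copies x MW.
proteins = [
    ("translation", 50000, 30.0),
    ("translation", 20000, 45.0),
    ("glycolysis",  40000, 36.0),
    ("respiration",  5000, 55.0),
    ("other",       10000, 25.0),
]

# Sum the mass contributed by each functional group.
group_mass = {}
for group, copies, mw in proteins:
    group_mass[group] = group_mass.get(group, 0.0) + copies * mw

# Express each group as a percentage of total proteome mass.
total = sum(group_mass.values())
allocation = {g: 100 * m / total for g, m in group_mass.items()}

for g, pct in sorted(allocation.items(), key=lambda kv: -kv[1]):
    print(f"{g}: {pct:.1f}% of proteome mass")
```

Repeating this per growth phase, and comparing the resulting percentages, reproduces the kind of allocation shift reported here (e.g., translation-heavy during exponential growth, respiration-heavy after the diauxic shift).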
Along with this decrease, there is a noticeable shift in allocation (from 6 to 26%) toward respiratory functions in the cell, the TCA cycle, and other mitochondrial processes (Fig. 6C). However, in contrast to what has been observed for the mitochondrial proteome, in the analysis of the whole-cell proteome, the allocation to mitochondrial functions and energy metabolism is more pronounced during the ethanol phase and to a lesser extent in the diauxic shift. In fact, by just looking at the whole-cell proteome, the diauxic shift seems to represent an intermediate state between the glucose and ethanol phases with respect to the allocation of proteins associated with mitochondrial energy metabolism. For instance, considering the whole-cell proteome, we measured an increase of 48% in allocation toward respiratory-chain proteins between the diauxic shift and ethanol phase.
However, when focusing solely on the mitochondrial proteome, the difference between the diauxic shift and the ethanol phase for the same class of proteins is only 10.7% (Fig. 6 C and E). This observation is further clarified when comparing side-by-side the percentage allocation of the respiratory-chain proteins in the mitochondrial proteome and the cellular proteome (SI Appendix, Fig. S4A). In the mitochondrial proteome, we observe a higher allocation during the glucose phase, followed by a rapid increase (of 43.7%) in proteome allocation for respiratory proteins in the diauxic shift and a smoother transition (with a 10.4% increase) to the ethanol phase. In contrast, looking at the cellular proteome, we see a milder increment (of 18.6%) between the glucose phase and diauxic shift, followed by a drastic rise (of 72.9%) in the protein complexes' allocation from the diauxic shift to the ethanol phase (SI Appendix, Fig. S4A). These results clearly highlight that subcellular-level proteomics is required to reveal organelle-specific dynamics, which may be less contrasting in terms of protein abundance at the cellular level. To show organelle-specific changes in proteome allocation and remove potential biases in protein abundance from analyzing the entire cellular proteome at once, we further investigated proteomics data generated from the analysis of isolated mitochondria. Once again, we looked at the total allocation of the 19 reported functional groups, as well as each group individually, in order to highlight the precise protein distribution during the glucose phase, diauxic shift, and ethanol phase (Fig. 6 D-F, SI Appendix, Fig. S5A, and Dataset S7).
The allocation of some specific functional groups of mitochondrial proteins, including those involved in mitochondrial-specific amino acid, sterol, and phospholipid biosynthetic processes, was shown to be significantly enriched during the fermentative growth stage, compared with the ethanol phase (Fig. 6D and SI Appendix, Fig. S5A). Regarding phospholipid metabolism in particular, we see an opposite trend between the biosynthetic pathways connected to the synthesis of CL and PE, involved in the regulation of respiratory mitochondrial functions (Fig. 5 A, B, E, and F), and the remaining proteins that are involved in the metabolism of bilayer-forming phospholipids. This is highlighted in more detail in SI Appendix, Fig. S5I by looking at the allocation of the proteins Cds1, Cpt1, Opi3, Pgc1, and Pis1, which are involved, respectively, in the synthesis of CDP-diacylglycerol from phosphatidic acid (PA) (Cds1), the biosynthesis of phosphatidylcholine (Cpt1 and Opi3), the regulation of phosphatidylglycerol (PG) accumulation (Pgc1), and the synthesis of phosphatidylinositol (PI) (Pis1).
The differential distribution in allocation based on these proteins' specific processes suggests three things: first, during fermentation, mitochondria predominantly support cell proliferation by supplying important building blocks and metabolic intermediates; second, proteins related to mitochondrial energy metabolism are most active after fermentation, with a steep increase during the diauxic shift that continues through to the ethanol phase; and finally, these observations confirm the dual activity of this organelle as a central hub for supplying biosynthetic precursors as well as for energy production (Fig. 6 D and E).
Aligned with this hypothesis, proteins involved in the structural constitution of mitochondrial ribosomes, and, to a lesser extent, the regulation of translation, are more abundant during the diauxic shift (Fig. 6B and SI Appendix, Fig. S5 E and G). This pattern can be justified by the fact that the core subunits of the respiratory-chain complexes are synthesized locally by mitochondrial ribosomes (48). By contrast, we observed that some functional protein groups (heme biosynthesis and proteins involved in mitochondrial genome maintenance; SI Appendix, Fig. S5 C and H) maintain a more consistent level of expression from the glucose phase through the diauxic shift to the ethanol phase, with a slight decrease in the allocation of the oxidative stress response-related proteins during the ethanol phase (SI Appendix, Fig. S5F). Similarly, we find that despite the tight connection between the energetic state of mitochondria and its import machinery, changes in allocation toward proteins involved in mitochondrial protein import, including chaperones and proteases, are independent of these two metabolic phases (Fig. 6B and SI Appendix, Fig. S5K).
Until now, the regulatory dynamics of the mitochondrial import system have remained unknown, with the complexes involved considered to be constitutively expressed and to maintain a constant import capacity. Recently, however, it has been shown that the functionality of the whole TOM complex is regulated by cytosolic kinases (49), and not by changes in the protein allocation of the TOM import system, consistent with the constant allocation we observe here for the latter (SI Appendix, Fig. S5K). Furthermore, the analysis of the other major import machinery complexes SAM, TIM22, and TIM23 shows a similar trend (SI Appendix, Fig. S5K). This kinase involvement indicates a direct link between fermentative metabolism and the inactivation of Tom70 through phosphorylation (50). In fact, an increase in glucose concentration is known to trigger cAMP accumulation and cAMP-dependent protein kinase (PKA) activation. The kinase phosphorylates a binding pocket of Tom70, blocking its interaction with the chaperone protein carrying the precursor protein and consequently arresting the import (50). In addition to the analysis of the complexes constituting the mitochondrial import machinery, we investigated the allocation of the chaperones and proteases, both of which have a crucial role in assisting the process of protein import into the mitochondria (SI Appendix, Fig. S5J). The overall distribution of these proteins throughout the three metabolic phases considered shows an increase during the diauxic shift and a quasi-identical allocation during the glucose phase and the ethanol phase. A more in-depth look at some of the key proteases and chaperones identified in this study shows again a higher, although not significant, allocation during the diauxic shift and a similar distribution during fermentative and respiratory growth (SI Appendix, Fig. S5J).
Finally, with respect to the dynamics between fusion and fission events, our results suggest that fission predominantly drives network structural remodeling across these three metabolic states. We find a more substantial fraction of the mitochondrial proteome to be directed toward fission events during the glucose phase, relative to fusion, in line with cells having a more fragmented mitochondrial network under this condition (51). Indeed, this change in proteome allocation toward fission-related proteins was shown to be significantly (P < 0.01) higher than the increase in fusion as cells began to respire following glucose exhaustion, confirming that the instigation of fission events is a predominant driver in the structural adaptation of mitochondria to a respiratory metabolism (Fig. 6F).
Discussion
In this study, we provide a thorough analysis of mitochondrial functions during yeast's diauxic growth, encompassing the mitochondria's metabolic roles coupled with their distinctive dynamic morphological adaptations. By using absolute quantitative subcellular proteomics alongside state-of-the-art optical imaging and lipidomics, we could track the transition of mitochondria from their role as a central biosynthetic hub during fast growth to a primary energy generator focused on respiratory energy metabolism at lower growth rates. In fact, during fermentative growth, we observed an increase in proteins associated with the production of amino acids, sterols, and phospholipids, which are synthesized by the mitochondria and constitute essential sources of intermediates for other critical cellular functions, such as translation and membrane biogenesis and proliferation. We found the shift toward respiratory metabolism to be contrastingly characterized by a reduction in biosynthetic functions in favor of processes linked to energy generation.
Through our approach, we combined the absolute proteomic data, obtained at different growth stages, with biophysical properties of this organelle (volume, structure, and mass), which was crucial for the parametrization and interpretation of the proteomic data. In fact, by analyzing the copy number of mitochondrial proteins per cell alone, without taking into account the changes in mitochondrial size and mass, we could only observe an increase in the mitochondrial proteome during the diauxic shift and especially in the ethanol phase, as a direct consequence of the expansion of the mitochondrial network (SI Appendix, Fig. S6A). Additionally, the structural rearrangements undertaken by the organelle could be clarified at the functional system level. For example, we found that fission proteins act as a control mechanism restraining mitochondrial expansion during fermentative metabolism, which complements the regulation of this organelle's expansion via posttranslational modifications, as speculated previously (51). Alongside a decrease in fission events, changes in CL composition toward higher unsaturation levels and an increase in ATP synthase appear to be cornerstone features of mitochondrial adaptation to metabolic constraints on the cell, underlining the inherently dynamic nature of this organelle (52).
The data presented in this study also suggest that as cellular metabolism switches from fermentation to respiration, the major structural and functional adaptations occurring at the mitochondrial level appear to be completed during the diauxic shift, with relatively few changes occurring in proteome allocation as cells shift from the diauxic shift to respiration. We, therefore, propose that the diauxic shift should not be defined as only a simple intermediate phase between the main metabolic stages of fermentation and respiration. Instead, it should be considered as a separate phase responsible for hosting significant reorganization events related to mitochondrial function, in particular with respect to its role in the early-stage adaptation of mitochondria to a respiratory metabolism. In contrast, at the cellular level, there appears to be a more gradual transition of the protein pool from fermentation to respiration, with the diauxic shift constituting an intermediary metabolic state. This would indicate that the reorganization of yeast metabolism principally occurs during the diauxic shift and is initiated at the mitochondrial level.
In summary, this study highlights two major trends in the metabolic reorganization of mitochondria occurring during yeast growth and the transition from fermentation to respiration. The first trend consists of the drastic up-regulation of energy-related processes occurring during the diauxic shift, which continues to a further, although minor, increase during the ethanol phase. The second trend encompasses mitochondrial biosynthetic processes related to, for example, amino acids and phospholipids, as well as cofactor biogenesis. These biosynthetic functions are most prevalent during the glucose phase to sustain exponential cell growth.
In addition, we observed that processes like mitochondrial genome maintenance and oxidative stress response are up-regulated during the diauxic shift, followed by a decrease in the ethanol phase. Similarly, when looking at the overall protein allocation for functions associated with protein import (including chaperones and proteases), we also observe a significantly higher protein distribution during the diauxic shift. However, looking closely at the individual proteins and complexes, the protein allocation of the relevant import complexes (i.e., TOM, TIM23, and SAM) appears evenly distributed across the three metabolic phases, and the increase observed in the diauxic shift for individual proteases and chaperones is not significant.
In conclusion, by offering a highly quantitative insight into the proteome allocation of mitochondria, this study provides an essential advancement in the systems analysis of eukaryotic subcellular dynamics. It can be used to complement genome-scale metabolic modeling experiments that incorporate enzymatic constraints at the subcellular level (53). Furthermore, these data constitute an important resource highlighting the mitochondrial role in bioenergetic and biosynthetic metabolism, as well as its involvement in the aging process and neurodegenerative diseases.
Materials and Methods
The details of the experimental model, media and growth conditions, highperformance liquid chromatography analysis of exometabolites, cell confocal microscopy, isolation of mitochondria, mitochondrial lipidomics, and the proteomics analysis are provided in SI Appendix, Supplementary Materials and Methods.
Data Availability. The mass spectrometry proteomics datasets have been deposited in the ProteomeXchange Consortium via the Proteomics Identification Database partner repository (54) with the dataset identifiers PXD012802 and PXD012803. All data pertaining to this study, as well as any code used for analysis, are available from the corresponding author upon request.
Empowering patient education on self-care activity among patients with colorectal cancer – a research protocol for a randomised trial
Background: Chemotherapy-induced side effects may have a negative effect on nutrition intake, thus increasing the risk of malnutrition and, consequently, other serious complications for patients with cancer. Malnutrition is common among patients with colorectal cancer. Nurse-led empowering education may have a positive effect on self-care activity in this patient group. Therefore, our purpose is to develop an empowering educational nursing intervention and test its effect on self-care activation and knowledge level among patients with colorectal cancer during chemotherapy. Secondary outcomes are quality of life and risk of malnutrition.

Methods: An interdisciplinary expert group developed a face-to-face empowering educational intervention using the teach-back method. A two-arm, single-centre, superiority trial with stratified randomisation (1:1) and pre-post measures will be used to assess the effect of the intervention compared to standard care. Patients (N = 40 + 40) will be recruited in one university hospital outpatient clinic in Finland. Eligibility criteria are adult patients diagnosed with colorectal cancer starting oral fluoropyrimidine or combination chemotherapy treatment. A registered nurse experienced in oncology will deliver the intervention 2 weeks after the first chemotherapy. Outcomes are measured before the intervention (M0) and after a two-month follow-up period (M1).

Discussion: This study will assess whether nurse-led empowering education using the teach-back method is effective on self-care activity among patients with colorectal cancer. If the intervention has a positive effect, it may be implemented into patient education in a corresponding context.

Trial registration: ClinicalTrials.gov: NCT04160650. Registered 12 November 2019 - retrospectively registered.
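Stratified 1:1 allocation of the kind described in the Methods is commonly implemented with randomly permuted blocks, which keep the two arms balanced within each stratum. The sketch below is illustrative only: the stratum labels, block size, seed, and participant codes are assumptions, and the protocol does not specify how its randomisation is implemented.

```python
import random

def stratified_block_randomise(strata, block_size=4, seed=42):
    """Allocate participants 1:1 to intervention/control within each
    stratum, using randomly permuted blocks to keep the arms balanced."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    allocation = {}
    for stratum, participants in strata.items():
        sequence = []
        while len(sequence) < len(participants):
            # Each block holds an equal number of each arm, then is shuffled.
            block = (["intervention"] * (block_size // 2)
                     + ["control"] * (block_size // 2))
            rng.shuffle(block)
            sequence.extend(block)
        allocation[stratum] = dict(zip(participants, sequence))
    return allocation

# Example: 8 participants in each of two hypothetical strata.
strata = {
    "oral fluoropyrimidine": [f"P{i:02d}" for i in range(1, 9)],
    "combination chemo": [f"P{i:02d}" for i in range(9, 17)],
}
alloc = stratified_block_randomise(strata)
for stratum, arms in alloc.items():
    n_int = sum(1 for a in arms.values() if a == "intervention")
    print(f"{stratum}: {n_int} intervention / {len(arms) - n_int} control")
```

Because every full block contains equally many intervention and control slots, each stratum of 8 participants ends up exactly 4:4, which is what "stratified randomisation (1:1)" guarantees.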
Background
People are increasingly affected by colorectal cancer (CRC), which is one of the most prevalent cancers globally, comprising about 10% of newly diagnosed cases [1,2]. In Finland, about 3300 patients are diagnosed annually with CRC, making it the second most common cancer type among both men and women [3]. Chemotherapy is a common treatment for patients with operated high-risk stage II and III CRC as well as advanced and metastatic CRC [4]. Chemotherapy-related toxicities that affect the ability to eat are known as nutrition impact side effects (NIS). Side effects such as nausea, diarrhoea, constipation, mouth sores, heartburn, loss of appetite, altered taste, cold sensitivity, pain, fatigue and distress may lead to inadequate nutritional intake and weight loss, thus increasing the risk of malnutrition [5][6][7][8].
Malnutrition is common among patients with CRC, the prevalence varying according to patients' age, cancer type and stage of cancer [7,8]. In malnutrition, a deficiency or excess of energy, protein, and other nutrients causes measurable adverse effects on body function and clinical outcome [9]. The prevalence of both malnutrition and NIS is higher in the older population [10]. For example, among geriatric patients with gastrointestinal system cancer (n = 153), about 38% were malnourished and 35% were at risk of malnutrition at the time of the first outpatient visit. Chemotherapy has been shown to increase the incidence of malnutrition and weight loss [7,11]. The complications of severe malnutrition may lead to greater chemotherapy toxicity, worse physical function and quality of life (QoL), and reduced overall survival [12,13]. Moreover, malnourished inpatients have a longer hospital length of stay and higher treatment costs [8]. Therefore, it is essential to develop effective interventions to support empowerment and promote the nutrition intake of patients receiving chemotherapy.
Nutrition-related interventions for patients with cancer have been studied extensively with mixed results. Interventions have included individualised nutritional support and counselling to reach protein and energy goals [14], personalised nutrition intervention [15][16][17], dietary counselling or advice [18,19], oral nutritional supplements (ONS), and nutrition advice with written information [18]. Interventions have proved to be effective on energy and protein intake [14][15][16][17][18][19], weight [15], QoL [14,16,17,19], morbidity and mortality [14,16,17], as well as the risk of adverse clinical outcomes at 30 days [14]. Among patients with head and neck cancer undergoing radiotherapy, individualised nutritional counselling compared to ad libitum diet and ONS was capable of sustaining a significant impact on patients' outcomes after 3 months' follow-up [16]. Conversely, some of the interventions mentioned above have not been effective on weight [18,19], nutritional status, QoL, functional status [18] and mortality. According to the authors, the lack of effect might have been related to small sample sizes, short follow-up periods or low methodological quality of the studies [19]. Alternatively, Internet-based interventions including information and some interactive activities with experts to manage common eating and nutritional problems during cancer treatments have been tested. The results have not shown statistically significant changes in patients' knowledge levels, anxiety and QoL, probably due to limited sample size or insufficient intervention [20].
Only a few studies have explored the effect of nurse-led nutritional interventions among this patient group, yet the results have been promising. A multidisciplinary team approach for nutritional interventions (individual recipes, nutritional risk screening, total energy requirement calculation, education and diet adjustments) conducted by specialist nurses has obtained a positive effect on pre-albumin levels among CRC patients undergoing chemotherapy [21]. An individualised educational program with face-to-face and telephone counselling gained positive results on energy and total protein intake among patients with CRC (n = 19 + 21) in a palliative care context [22]. The same type of intervention among patients with gastric cancer (n = 72 + 72) implemented by a nurse specialist had a positive effect on nutritional intake, haemoglobin, total serum protein and albumin levels, as well as on the chemotherapy compliance rate [23]. Our literature search on the subject did not find other results of nurse-led studies, so there is still a lack of nursing-specific outcomes of nutrition-related interventions. In general, educational nursing interventions have shown positive outcomes on the level of knowledge and symptom severity, yet the results have been inconsistent on QoL among patients with cancer [24].
Self-care at home between the chemotherapy cycles has an important role in the success of overall care. Self-care involves both the ability to care for oneself and the activities necessary to achieve, maintain or promote one's optimal health. Through self-care, various outcomes may be achieved, for example improved symptom control, coping with the illness, and QoL. In addition, health services usage and costs may decrease [25]. In this study, self-care is seen as an ability to manage NIS and gain control over one's health. We use empowering patient education and the teach-back method to support patients in their self-care.
Empowerment refers to the ability to manage the challenges of the illness and having a feeling of control over one's life; it is perceived as a person's inner strength [26,27]. Empowerment occurs when individuals' capacity to think critically and make informed decisions is supported and they make decisions about their own care [28]. Empowerment is created in dialogue in the nurse-patient relationship [29], as nurses support patients by offering knowledge and assist them to find, construct and use their own resources in self-care. The dimensions of empowerment can be categorised as experiential (patients' earlier experiences), functional (function of one's body and mind), ethical (feeling of being valued and respected), financial (affording support, technical aids), biophysiological (knowing one's own body and its symptoms), social (interaction with other people) or cognitive (knowledge for improving one's health) [26,30]. In this study, empowerment is seen as a process where the nurse supports patients' empowerment to be more active in self-care. During this process, patients gain knowledge to develop skills for NIS-related problem-solving in daily life [27,28] in order to reduce the risk of malnutrition and promote QoL.
Previous studies examining patients' expectations towards their care demonstrate the expectation of having knowledge [31][32][33] to manage NIS and one's own health independently at home. As nutritional care is seen as part of fundamental care [34], nurses have a good opportunity to support patients' empowerment in the self-care of NIS and prevent the risk of malnutrition during chemotherapy. With this research protocol, we describe how an empowering educational nursing intervention using the teach-back method will be tested for its effects on self-care activation and knowledge level (primary outcomes), and on QoL and risk of malnutrition (secondary outcomes), among patients with CRC during chemotherapy.
Aim, design and setting of the study
The aim of this intervention is to improve patients' empowerment in self-care. The design is a two-arm, single-centre trial with stratified randomisation (1:1) and repeated measures (Fig. 1). We hypothesise that patients with CRC who receive nurse-led empowering education on NIS will, compared to the control group (CG) receiving standard education, have a higher self-care activation level, a better knowledge level, a lower risk of malnutrition and less worsening of QoL at the 2 months' follow-up.
The study setting is a large university hospital in Southern Finland, responsible for specialised health care for about 2.1 million people. In the Cancer Centre outpatient clinic, about 4500 patients diagnosed with CRC (colon, rectum) receive chemotherapy treatment annually. Up to 40 new patients per month come for an evaluation of chemotherapy initiation.
Characteristics of participants
Eligible participants are adult men and women diagnosed with CRC, aged > 18 years, receiving oral fluoropyrimidine or combination chemotherapy. Exclusion criteria are physical, cognitive or psychological impairment preventing participation and insufficient comprehension of the Finnish language.

Intervention

Standard care

Patients in both the CG and the intervention group (IG) receive standard education delivered by a registered nurse (RN) during the first visit in the outpatient clinic and later in the infusion unit. Standard education includes the following verbal information:
- side effects and their self-care: nausea, diarrhoea, obstipation and sores in the mouth, peripheral neuropathy symptoms, local venous irritation, heart symptoms, mucous and skin irritation
- self-monitoring of NIS
- fluid intake
- medication dose changes
- effects of chemotherapy
- weight control
- taste alteration
- cold sensitivity
- importance of a varied diet
- oral nutritional supplements
- clinical nutritionist services

Standard education includes the following written information:
- 'Nutrition guide for cancer patients'
- 'Instructions for those receiving anticancer treatment'
- 'Information on strong opioids'
- 'Cancer pain management'
- 'Anti-nausea medication'
- 'When you have nausea'
- 'Management of diarrhoea'
- 'Management of constipation'
- 'Oral care instructions for cancer patients'

In addition to the RN, physicians give patients instructions on medication and related side effects.
Intervention protocol
The intervention was developed during autumn 2018 and spring 2019 by an interdisciplinary expert group consisting of two RNs experienced in oncology, a clinical nutritionist, an oncologist, and the researcher (LT). We held four shared meetings and additional discussions between each member and the researcher. According to the protocol, patients in the IG will receive knowledge of healthy diet and malnutrition. In addition, they will receive tailored knowledge of NIS. The tailoring is based on the side effect self-monitoring diary and on the patient activation level assessed with the Patient Activation Measure (PAM) [35]. In addition, patients receive knowledge of the prevention of NIS and self-care strategies based on organisation guidelines. The teach-back method is used to verify participants' understanding of the received knowledge, to tailor education, to uncover health beliefs, and to activate patients in dialogue [36]. Participants have to be able to teach back the main parts of the knowledge related to malnutrition as well as the reasons, prevalence, and self-care of the NIS they are suffering from. The teach-back method has shown a positive effect on self-care by improving outcomes in disease-specific knowledge, adherence and self-efficacy among people with chronic disease. It has also reduced hospital readmission rates [37].
Empowerment is supported by offering additional knowledge according to patients' expectations. In addition, the research nurse uses active listening and strengthening self-care strategies that have been successful. The progression of the discourse is based on patients' expectations and active involvement: the nurse provides the patient with expert knowledge and maintains an empathic connection to the patient throughout the session [38].
The content of the educational intervention is as follows: -Illustrating the purpose of the session.
- Exploring patients' current knowledge of healthy diet and malnutrition, offering additional knowledge using teach-back.

Due to the COVID-19 pandemic, the intervention was scheduled for the second chemotherapy course. The intervention will not entail any extra costs for patients. In case the intervention cannot be conducted face-to-face, it is possible to deliver it online, e.g. as a video call.
Intervention nurse
The eligibility criteria for the research nurse comprise being a registered nurse and experienced in oncology nursing. The research nurse is trained to deliver the intervention by the researcher in three three-hour face-to-face sessions. The research nurse answers open-ended questions related to the content and method of education before the intervention commences. The intervention is delivered systematically according to the prescheduled protocol. After each session, the research nurse documents the length and content of the intervention. The researcher and research nurse will meet once a week to check that the protocol is adhered to.
Outcome measures
Primary outcomes are activation in self-care and knowledge level. Secondary outcomes are risk of malnutrition and QoL. The schedule of enrolment, interventions, and assessments [39] is presented in Fig. 2.
Self-care activation
Self-care activation is measured, as achieving self-management is one of the most frequent consequences associated with patient empowerment [29]. In addition, there is evidence that patient activation is associated with health-related outcomes. It has been found that activated people are more likely to have received preventive care, less likely to smoke or have a high BMI, and have better clinical indicators (systolic blood pressure, LDL). In addition, they are less likely to be hospitalised or to use emergency services [40]. Self-care activation is measured using the Patient Activation Measure (PAM) instrument [41], which measures patient activation, a concept related to empowerment. The instrument covers the elements used to define empowerment (patients' capacities, knowledge, behaviours and support by others) [42]. PAM measures individuals' knowledge, skills and confidence to manage their own health. The questionnaire consists of 15 items on a 5-point Likert scale. Individuals fall into one of the following levels of activation along a 0-100-point scale: 1) being overwhelmed and unprepared to play an active role in one's own health, 2) lacking knowledge and confidence for self-care, 3) taking action but lacking confidence and skill to support behaviours, 4) adopting health-supporting behaviours, but possibly having difficulties maintaining them in stressful situations. The measure has proved to be highly valid and reliable with good psychometric properties, indicating its use in tailoring interventions and assessing changes. In this study, PAM scores are used to support patient activation individually. Creating situations where patients can experience success in taking control of their health is an essential part of effective self-care support [41,43,44].
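The mapping from a 0-100 activation score to the four levels can be sketched as follows (Python, for concreteness). The cut-off values below are hypothetical placeholders: the licensed PAM instrument defines its own official cut-offs, which are not given in this protocol.

```python
# Illustrative sketch only: maps a PAM-style 0-100 activation score to one of
# the four activation levels described above. HYPOTHETICAL_CUTOFFS are assumed
# upper bounds of levels 1-3, not the instrument's licensed values.
HYPOTHETICAL_CUTOFFS = (47.0, 55.2, 72.5)

def activation_level(score: float, cutoffs=HYPOTHETICAL_CUTOFFS) -> int:
    """Return the activation level (1-4) for a 0-100 activation score."""
    if not 0 <= score <= 100:
        raise ValueError("score must lie on the 0-100 scale")
    for level, upper in enumerate(cutoffs, start=1):
        if score <= upper:
            return level
    return 4
```

In real use, the scoring and level assignment would follow the PAM licence holder's scoring instructions rather than this sketch.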
Knowledge level
Knowledge is seen as essential for empowerment [29]. Knowledge tests have been used to measure the outcomes of educational interventions, to evaluate their effect or to monitor the learning progress during education [45]. Positive outcomes have been reported on disease-specific knowledge, adherence to medication and diet, as well as on self-efficacy among people with chronic disease. A positive but inconsistent effect has also been reported on self-care and hospital readmission rates [37]. For this study, a knowledge test based on the literature was developed in the research group, consisting of a clinical nutritionist, a physician, nurses experienced in oncology and the researcher. A clinical nutritionist and two patient experts validated the test. The test consists of 15 yes/no items. Each correct answer gives one point, and the total score is the sum of correct answers. The knowledge test covers the following topics:
1. Malnutrition: definition and prevalence in patients with CRC (2 items)
2. Impact of malnutrition on treatment, morbidity and mortality (2 items)
3. Chemotherapy-induced side effects that may reduce nutritional status: reasons, manifestation and self-care (11 items)
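The one-point-per-correct-answer scoring rule above amounts to a simple sum, sketched here for concreteness; the answer key shown is a made-up placeholder, not the validated instrument.

```python
# Minimal scoring sketch for a 15-item yes/no knowledge test:
# one point per correct answer, total = number of correct answers.
def knowledge_score(answers, key):
    if len(answers) != len(key):
        raise ValueError("answers and key must have the same length")
    return sum(a == k for a, k in zip(answers, key))

EXAMPLE_KEY = ["yes", "no", "yes"] * 5  # hypothetical 15-item key
```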
Quality of life
QoL is measured because the risk of malnutrition is strongly associated with QoL in cancer patients initiating adjuvant chemotherapy [46], and improved QoL is seen as a long-term consequence of patient empowerment [29]. QoL is assessed using the Functional Assessment of Cancer Therapy Scale - Colorectal (FACT-C) [47], which is a reliable and valid measure, sensitive to changes in functional status [48]. The questionnaire consists of 36 items on a 5-point Likert scale covering four areas of well-being - physical (0-28 points), social (0-28 points), emotional (0-24 points) and functional (0-28 points) - plus a CRC subscale (0-28 points). The total sum is 0-136 points, and a higher score means better QoL. Scores are categorised as low (0-34 points), satisfactory (34-68 points), average (68-102 points) and high (102-136 points). FACT-C has been shown to have good overall validity and reliability, to be short, to have flexible scoring, to be responsive to change in performance status, to be significantly correlated with other assessments of mood, and to show positive results in hypothesis testing [49,50].
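The subscale layout and band boundaries above can be expressed as a small scoring sketch. The half-open band boundaries are an assumption (the quoted ranges share their endpoints), and real scoring would follow the official FACIT scoring manual.

```python
# Illustrative FACT-C scoring sketch: subscale maxima as quoted in the text,
# total 0-136, mapped to the four quality-of-life bands (assumed half-open).
SUBSCALE_MAX = {"physical": 28, "social": 28, "emotional": 24,
                "functional": 28, "crc": 28}

def fact_c_total(subscores: dict) -> int:
    """Sum the subscale scores after checking each against its allowed range."""
    for name, value in subscores.items():
        if not 0 <= value <= SUBSCALE_MAX[name]:
            raise ValueError(f"{name} score out of range")
    return sum(subscores.values())

def qol_band(total: int) -> str:
    """Map a 0-136 total to the bands quoted in the text (assumed half-open)."""
    if total < 34:
        return "low"
    if total < 68:
        return "satisfactory"
    if total < 102:
        return "average"
    return "high"
```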
Risk of malnutrition
The risk of malnutrition is assessed because the educational intervention is presumed to prevent malnutrition or reduce its risk. In the diagnostic assessment of malnutrition, the following criteria are recommended: non-volitional weight loss, low body mass index (BMI), reduced muscle mass, reduced food intake or assimilation, and disease burden/inflammation [51]. To identify the patients at risk, we use the validated Nutritional Risk Screening 2002 tool (NRS2002) [52], which was developed to detect the presence of malnutrition and to predict whether malnutrition is likely to worsen due to the patient's illness [53]. Patients are assessed based on BMI, recent weight loss percentage and change in food intake (0-3 points), age (0-1 points) and severity of disease (0-3 points). The maximum total score is seven points. Patients with a total score of ≥ 3 are classified as nutritionally at risk.
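The NRS-2002 arithmetic summarised above (impaired nutritional status 0-3, disease severity 0-3, plus one age point, with a risk threshold of 3) can be sketched as follows. The age ≥ 70 years cut-off for the age point comes from the NRS-2002 tool itself; screening decisions naturally remain with clinicians.

```python
# Sketch of NRS-2002 total scoring: nutritional status component (0-3) +
# severity of disease component (0-3) + 1 age point if aged >= 70 years.
def nrs2002_total(nutritional_status: int, disease_severity: int,
                  age_years: int) -> int:
    if not (0 <= nutritional_status <= 3 and 0 <= disease_severity <= 3):
        raise ValueError("component scores must lie in 0-3")
    return nutritional_status + disease_severity + (1 if age_years >= 70 else 0)

def nutritionally_at_risk(total_score: int) -> bool:
    """Patients with a total score of >= 3 are classified as at risk."""
    return total_score >= 3
```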
Side effects
Side effect self-monitoring is used to assess the intensity of the side effects as well as to reinforce the intervention effect by reflecting on the self-care activities. After the first chemotherapy treatment, nurses give the self-monitoring diary to patients in both the IG and CG, to be returned to the researcher after the fourth (or last) chemotherapy cycle. Patients in the IG document the side effects and their intensity as they appear (NRS 0-10; 0 = not at all, 10 = the worst possible) before and after the performed self-care activities. In addition, patients document their individual expectations for additional knowledge, and this information is used during the educational intervention. Patients in the CG only self-monitor and document the intensity of each side effect as it appears (NRS 0-10; 0 = not at all, 10 = the worst possible).
Clinical data
Clinical data are gathered from electronic patient records. We are interested in finding out whether the educational intervention is related to better adherence to the treatment schedule. Therefore, data on patient-induced treatment changes, cancellations, transfers and interruptions are documented from baseline to 8 weeks. It has been indicated that worse nutritional status is related to greater morbidity [17]. Therefore, we collect data on patients' emergency room visits and hospitalisations from baseline to 8 weeks.
The sample size was calculated to detect a 7-point mean difference in the PAM scale between groups, assuming a standard deviation of 11 points for both groups, with 80% power and a significance level of 0.05. This leads to a required sample size of 40 participants per group. The meaningful difference between the average score of individuals who engage in healthy behaviours and those who do not is considered to be 4 points on the PAM scale [44]. To reach the target sample size (40 + 40), the researcher recruits participants by sending questionnaires with research information and contacting them by phone the day before they visit the outpatient clinic. The researcher also meets the eligible participants to provide verbal information about the trial and answer questions. The strategies to improve patients' adherence to the intervention protocol comprise the use of a self-monitoring diary and positive feedback from the research nurse. For individual participants, the intervention will be discontinued in the case of worsening condition, treatment change, or at their own request.
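The calculation above can be reproduced with the standard two-sample normal-approximation formula, n = 2((z_{1-α/2} + z_{1-β})σ/Δ)². With Δ = 7, σ = 11, 80% power and α = 0.05 this gives 39 per group; the protocol's 40 per group is consistent once the t-distribution correction and rounding up are applied.

```python
# Normal-approximation sample size per group for a two-sample comparison of
# means: n = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sd / delta)^2, rounded up.
from math import ceil
from statistics import NormalDist

def n_per_group(delta: float, sd: float,
                power: float = 0.80, alpha: float = 0.05) -> int:
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance level
    z_beta = z(power)            # target power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# n_per_group(7, 11) -> 39
```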
Assignment of interventions
The researcher (LT) enrols the participants, and those willing to participate are randomly assigned to the CG and IG using stratified randomisation according to stage of disease and existence of a stoma. The allocation sequence was generated using the blockrand package [54] in R version 3.6.1 [55]. For each block, the block size was randomly chosen from a set of 2, 4 and 6, and an allocation ratio of 1:1 was used. The statistician generated an unpredictable allocation sequence using sequentially numbered, opaque, sealed envelopes. The envelopes, numbered in advance, are opened sequentially after the participant's name is written on the appropriate envelope [56]. The researcher (LT) allocates participants to the intervention with equal probability, as a simple random sample is drawn from each group; thus, the person enrolling participants does not know in advance which treatment the next person will get [39]. Patients' blinding is not possible, as the RN informs the patients in the IG of the one-hour educational session. The research nurse cannot be blinded because she provides the intervention. The data analyst is blinded, as data are anonymised by using codes (001, 002, etc.).
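The stratified blocked allocation described above (random block sizes of 2, 4 or 6, balanced 1:1 within each block, one sequence per stratum) can be sketched as follows. This mirrors what the blockrand R package produces but is an illustrative re-implementation in Python, not the trial's actual allocation code; the stratum labels are placeholders.

```python
# Sketch of stratified blocked 1:1 randomisation with random block sizes.
import random

def blocked_sequence(n: int, block_sizes=(2, 4, 6), arms=("CG", "IG"),
                     rng: random.Random = None) -> list:
    """Concatenate shuffled balanced blocks until at least n allocations exist."""
    rng = rng or random.Random()
    seq = []
    while len(seq) < n:
        size = rng.choice(block_sizes)
        block = [arms[i % len(arms)] for i in range(size)]  # balanced: size even
        rng.shuffle(block)
        seq.extend(block)
    return seq[:n]

# One independent sequence per stratum (stage of disease x stoma; labels assumed):
strata = [(stage, stoma) for stage in ("II", "III", "IV") for stoma in (False, True)]
allocations = {s: blocked_sequence(20, rng=random.Random(i))
               for i, s in enumerate(strata)}
```

Because the sequence is truncated at n, the final (partial) block can leave a small imbalance, bounded by half the largest block size.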
Data collection
Baseline measurement (M0) is conducted before patients' first contact in the outpatient clinic. Follow-up measurement (M1) is conducted 8 weeks after the intervention. The duration of a chemotherapy treatment varies from 3 to 6 months. The occurrence of chemotherapy side effects is individual, and they are principally temporary, being most severe 3-7 days after each cycle. During the two-month follow-up period, patients will have received four cycles of treatment, which is considered sufficient to assess the effects of the intervention. The researcher sends research information and baseline questionnaires (demographic data, PAM, knowledge test, FACT-C, NRS2002) to the patients with a return envelope enclosed. She contacts potential participants by phone before their first treatment to inform them about the study and ask for their verbal consent. Patients return the questionnaires within a week before the first appointment in the outpatient clinic. In connection with this appointment, the researcher meets the participants to provide verbal research information and obtain written informed consent. Recruitment continues until the sample size is reached (40 + 40). The enrolment period will last 4-5 months, based on the calculated sample size, the assumption of 10 new CRC patients a week, and the expectation that approximately 10% of the patients will refuse to participate. Outcome data are not collected on participants who withdraw on their own initiative or deviate from the intervention protocol. The researcher manages data confidentially by entering and storing them on a password-protected computer. Original study questionnaires are kept at the participating site. The researcher codes the data for ease of data storage, review, tabulation and analysis. Files in electronic and paper form will be discarded after the publication of the research results (2022). In the information letter, patients are instructed to spontaneously report any unintended effects to an outpatient clinic nurse.
Nurses report these events to the physician and first author, who make the final decision to discontinue the study, if necessary. A formal data monitoring committee will not be set up due to the short duration of the study and because the harms are known to be minimal.
Data analysis
Statistical methods are used for analysing primary and secondary outcomes between the two groups and related factors. The main analysis compares the change from M0 to M1 in primary outcomes (patient activation and knowledge level) and secondary outcomes (QoL and risk of malnutrition) between the IG and CG. Categorical variables will be described using frequencies and percentages. Continuous variables will be expressed as means with standard deviations for normally distributed variables and medians with interquartile ranges for non-normally distributed variables. The differences in changes between groups in continuous outcomes will be compared with the two-sample t-test or Mann-Whitney U-test, and the changes within groups with the paired t-test or Wilcoxon signed-rank test, as appropriate. Logistic regression using generalised estimating equations (GEE) will be used to test the differences between and within groups in the risk of malnutrition (classified as 0-2 and 3-7). Results will be presented using estimates of group differences with 95% confidence intervals. The effect of missing data on the results will be examined using sensitivity analysis. An intention-to-treat analysis will be applied, and two-sided statistical tests with a significance level of 0.05 will be used in the statistical analyses. Methods for any additional analyses will be determined by the data.
Declaration of interests
None.
Access to data
The researcher has access to the final trial dataset. A contractual agreement is made with the statistician to handle the data confidentially according to the research protocol.
Discussion
We developed a study protocol to test the effectiveness of an empowering educational nursing intervention using the teach-back method on self-care activation and knowledge level (primary outcomes) and QoL and risk of malnutrition (secondary outcomes) among patients with CRC during chemotherapy. This study is currently in the recruitment phase (first enrolment 21.10.2019). Today, various Internet-based interventions are offered to patients with cancer, and a vast amount of information is available from different sources. However, patients with colorectal cancer are usually older people, and the Internet is not available to all. Individualised interventions have proved effective on patients' energy and protein intake, weight, QoL, morbidity, mortality and the risk of adverse clinical outcomes, but previous studies have concentrated mainly on clinical outcomes and QoL. Therefore, we are interested in testing the effectiveness of face-to-face individualised education on patients' self-care activation and knowledge level. The intervention has the potential to improve nutritional outcomes for patients affected by CRC. If the empowering patient education with the teach-back method proves to be effective, it will be implemented as part of RNs' daily work. Further research will provide valuable information on costs and benefits when implementing this educational programme. At the protocol phase, the cost of training the intervention nurse and staff (20 h) and the salary cost of the research nurse (8 months) is approximately €15,000.
Computerized Adaptive Testing with R: Recent Updates of the Package catR
The purpose of this paper is to list the recent updates of the R package catR. This package allows for generating response patterns under a computerized adaptive testing (CAT) framework with underlying item response theory (IRT) models. Among the most important updates, well-known polytomous IRT models are now supported by catR; several item selection rules have been added; and it is now possible to perform post-hoc simulations. Some functions were also rewritten or withdrawn to improve the usefulness and performance of the package.
Introduction
In the field of psychometrics, computerized adaptive testing (CAT) is an important area of current research, and practical implementations have increased markedly in the last decade. Unlike traditional linear testing, wherein all respondents receive the same set of items, the main purpose of CAT is to perform iterative and adaptive administration of the items. The items are selected and administered one by one, and the selection of the next item is conditional upon the previously administered items, the responses of the respondent, and the provisional estimate of the ability level. CAT has several advantages over linear testing: among others, it requires fewer items to reach the same level of precision for ability estimation, leading to shorter tests for the respondents, and ability estimates are available directly after the test administration for immediate feedback to the test takers.
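The adaptive cycle described above can be sketched as a minimal, dependency-free loop for dichotomous 2PL items. This is an illustrative re-implementation, not catR's code: the grid-based estimator with a weak prior, the maximum-information selection rule, and the fixed-length stopping rule are simplifying assumptions for the sketch.

```python
# Minimal CAT loop for 2PL items: estimate ability, pick the most informative
# remaining item, administer it (simulated response), repeat.
import math, random

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    p = p_2pl(theta, a, b)
    return a * a * p * (1 - p)

def estimate(responses, grid=None):
    """Grid-search ability estimate with a weak prior so that all-0/all-1
    patterns still yield a finite value."""
    grid = grid or [g / 10 for g in range(-40, 41)]
    def loglik(t):
        ll = -0.5 * t * t / 4  # weak N(0, 2) prior
        for (a, b), x in responses:
            p = p_2pl(t, a, b)
            ll += math.log(p if x else 1 - p)
        return ll
    return max(grid, key=loglik)

def run_cat(bank, true_theta, test_length, rng):
    responses, used = [], set()
    theta = 0.0  # starting value
    for _ in range(test_length):
        nxt = max((i for i in range(len(bank)) if i not in used),
                  key=lambda i: info_2pl(theta, *bank[i]))
        used.add(nxt)
        x = rng.random() < p_2pl(true_theta, *bank[nxt])  # simulated response
        responses.append((bank[nxt], int(x)))
        theta = estimate(responses)
    return theta, used
```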
Although the CAT literature has increased in the past two decades (e.g., van der Linden and Glas 2010; Wainer 2000), there is still a lack of open-source and flexible software to run CATs and to perform intensive simulation studies in this framework. The R (R Core Team 2016) package catR (Magis and Raîche 2012) was originally developed for this purpose. The package is available from the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=catR. It offers a variety of options to generate response patterns in a CAT environment, by providing first a pre-calibrated item bank, then by selecting all options related to CAT assessments (such as ad-interim and final ability estimators, a method for next item selection, and a stopping rule). Its general architecture makes catR flexible and easy to update, and several of its components can be used even outside the CAT framework (for instance, the ability estimation and related standard error computation routines). Though basically developed as a working routine for CAT studies, catR can also be used as the computational core of real CAT assessment platforms, such as the web-based platform Concerto (Kosinski et al. 2013).
Since its very first version 1.0, released in June 2010, the package received minor yet important updates to fix programming errors and improve it generally, leading to version 2.6 (released in March 2013). Recently, catR received a major update, due both to an increasing interest in the package and to the need for further developments to match more realistic situations. One major update was to incorporate the most common polytomous item response theory (IRT) models into catR. This mandatory extension was motivated by the fact that most questionnaires contain polytomous (e.g., multiple-choice) items for which specific models exist but were not yet available in catR.
The purpose of this note is to briefly review the major changes and improvements of catR from version 2.6 to its most recent version 3.12 (released in January 2017). Sections 2 to 4 present the three main updates of catR: the inclusion of polytomous IRT models, the implementation of additional item selection rules, and the option to run post-hoc simulations. Several technical details are also included in Appendix A. The package itself will not be described again, so we refer the interested reader to Magis and Raîche (2012) for more details.
Polytomous IRT models
As already mentioned, the main update of catR involves the inclusion of the most common polytomous IRT models: the graded response model (GRM; Samejima 1969, 1996), the modified graded response model (MGRM; Muraki 1990), the partial credit model (PCM; Masters 1982), the generalized partial credit model (GPCM; Muraki 1992), the rating scale model (RSM; Andrich 1978a,b) and the nominal response model (NRM; Bock 1972). These models were integrated into the package with the following requirements and guidelines: (a) catR function names were not modified; (b) by default, all functions remain operational with dichotomous IRT models; (c) all functions support polytomous IRT models and return similar yet appropriate output. These choices were made to prevent a deep modification of the current use of catR, especially for researchers who are currently using the package with dichotomous IRT models.
The specification of a polytomous IRT model is composed of two elements: an appropriately defined matrix of item parameters and the new argument model added to almost all existing functions. By default, model takes the NULL value and refers to dichotomous models (for which the item bank format is left unchanged from previous versions of catR). Other possible values are the polytomous model acronyms, for instance "GRM" for the graded response model, "PCM" for the partial credit model, and so on.
The format of a bank of item parameters under polytomous IRT models requires some explanation. First, the "one-row-per-item" structure was preserved in this framework. Second, since the models differ in their number of parameters per item, the number of columns in the bank varies from one item bank and one model to another. A complete description therefore requires a detailed presentation of the polytomous IRT models.
Parametrization of polytomous models
For a given item j, let the response categories be coded as 0, 1, ..., g_j, so that g_j + 1 response categories are available. Let X_j be the item response and θ the ability level of the respondent. Set also P_jk(θ) = P(X_j = k | θ) as the response category probability, that is, the probability that response category k (k = 0, 1, ..., g_j) is selected for item j.
The GRM and MGRM belong to the class of so-called difference models (Thissen and Steinberg 1986) and are defined by means of cumulative response probabilities P*_jk(θ) = P(X_j ≥ k | θ), that is, the probability of selecting a response category in {k, k+1, ..., g_j}, with the convention that P*_j0(θ) = 1 and P*_jk(θ) = 0 for any k > g_j. Response category probabilities are then computed as P_jk(θ) = P*_jk(θ) − P*_{j,k+1}(θ). Using the notations given in Embretson and Reise (2000), the cumulative probability of the GRM takes the following form:

\[
P^*_{jk}(\theta) = \frac{\exp[\alpha_j(\theta - \beta_{jk})]}{1 + \exp[\alpha_j(\theta - \beta_{jk})]} \quad (1)
\]

while the cumulative probability of the MGRM is written as:

\[
P^*_{jk}(\theta) = \frac{\exp[\alpha_j(\theta - b_j + c_k)]}{1 + \exp[\alpha_j(\theta - b_j + c_k)]} \quad (2)
\]

The GRM thus allows for category threshold parameters β_jk that vary across items, while the MGRM assumes the same number of response categories for all items (i.e., g_j = g for all items) and identical threshold parameters c_k across items.
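The construction of GRM category probabilities from cumulative probabilities described above can be sketched in plain Python (illustrative only; catR implements this in R):

```python
# GRM sketch: cumulative probabilities P*_jk via a 2PL-type curve per threshold,
# then category probabilities P_jk = P*_jk - P*_{j,k+1}, with P*_{g+1} = 0.
import math

def grm_cumulative(theta, alpha, betas):
    """Return [P*_0, ..., P*_g] with P*_0 = 1 by convention."""
    cum = [1.0]
    for b in betas:
        z = alpha * (theta - b)
        cum.append(1.0 / (1.0 + math.exp(-z)))
    return cum

def grm_category_probs(theta, alpha, betas):
    """Return [P_0, ..., P_g]; differences of consecutive cumulatives."""
    cum = grm_cumulative(theta, alpha, betas) + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(betas) + 1)]
```

With ordered thresholds the cumulative probabilities decrease in k, so all category probabilities are positive and sum to one.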
The PCM, GPCM, RSM and NRM, on the other hand, belong to the class of divide-by-total models (Thissen and Steinberg 1986). The respective response category probabilities are set as follows:

\[
P_{jk}(\theta) = \frac{\exp \sum_{v=0}^{k} \alpha_j(\theta - \delta_{jv})}{\sum_{r=0}^{g_j} \exp \sum_{v=0}^{r} \alpha_j(\theta - \delta_{jv})} \quad (3)
\]

for the GPCM,

\[
P_{jk}(\theta) = \frac{\exp \sum_{v=0}^{k} [\theta - (\lambda_j + \delta_v)]}{\sum_{r=0}^{g} \exp \sum_{v=0}^{r} [\theta - (\lambda_j + \delta_v)]} \quad (4)
\]

for the RSM, and

\[
P_{jk}(\theta) = \frac{\exp(\alpha_{jk}\theta + c_{jk})}{\sum_{r=0}^{g_j} \exp(\alpha_{jr}\theta + c_{jr})} \quad (5)
\]

for the NRM, with the conventions that the sums for v = 0 vanish in (3) and (4) and that α_j0 = c_j0 = 0 in (5). The PCM is a particular case of the GPCM (3) with the restriction α_j = 1.
The RSM assumes all items have an equal number of response categories (i.e., g_j = g for all j), while the other models allow for different numbers of response categories across items.
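The divide-by-total structure, with the GPCM as the leading case and the PCM as its α_j = 1 restriction, can be sketched in plain Python (again illustrative, not catR code):

```python
# GPCM sketch: cumulative exponents over thresholds, normalised softmax-style
# over all response categories (the k = 0 exponent is 0 by convention).
import math

def gpcm_category_probs(theta, alpha, deltas):
    """Response category probabilities for one GPCM item with thresholds
    delta_1..delta_g; returns a list of length g + 1."""
    exponents = [0.0]
    running = 0.0
    for d in deltas:
        running += alpha * (theta - d)
        exponents.append(running)
    m = max(exponents)  # stabilise the exponentials
    weights = [math.exp(e - m) for e in exponents]
    total = sum(weights)
    return [w / total for w in weights]

def pcm_category_probs(theta, deltas):
    """The PCM is the GPCM with alpha fixed to 1."""
    return gpcm_category_probs(theta, 1.0, deltas)
```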
Specification of the item bank
In order to correctly specify the polytomous item bank in catR, it is first mandatory that the items be calibrated using the same parametrization as in models (1) to (5) above. Then, since each item is coded as one row of the item bank, the ordering of the item parameters is central. It was decided to use the following ordering for any item j:

- for the GRM: (α_j, β_j1, ..., β_j,gj)
- for the MGRM: (α_j, b_j, c_1, ..., c_g)
- for the PCM: (δ_j1, ..., δ_j,gj)
- for the GPCM: (α_j, δ_j1, ..., δ_j,gj)
- for the RSM: (λ_j, δ_1, ..., δ_g)
- for the NRM: (α_j1, c_j1, α_j2, c_j2, ..., α_j,gj, c_j,gj)

In other words, the number of columns in the item bank varies from one model to another. If g_max stands for the maximum number of response categories across all items (g_max = g in the case of the MGRM and RSM), then the number of columns in the item bank (without the possible subgroup membership indicators) is g_max + 1 for the GRM, MGRM, GPCM and RSM, g_max for the PCM, and 2 × g_max for the NRM. If an item has fewer than the maximal number of response categories, the corresponding row of the item bank is completed with NA values for the missing response categories.
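The "one row per item, pad with NA" layout can be illustrated with a toy GRM bank (Python lists with None standing in for R's NA; the parameter values are invented for illustration):

```python
# Toy GRM item bank: column order (alpha, beta_1, ..., beta_gmax), padded with
# None (catR uses NA) when an item has fewer response categories than g_max.
grm_bank = [
    # alpha, beta_1, beta_2, beta_3
    [1.20, -1.00, 0.20, 1.10],   # 4 response categories (g_j = 3)
    [0.85, -0.40, 0.90, None],   # 3 response categories (g_j = 2), padded
    [1.45,  0.10, None, None],   # 2 response categories (g_j = 1), padded
]
# g_max is recovered as the largest number of non-missing parameters minus one
# (the alpha column), and the GRM bank width is g_max + 1 columns.
g_max = max(sum(v is not None for v in row) - 1 for row in grm_bank)
assert len(grm_bank[0]) == g_max + 1
```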
Additional item selection rules
The former version of the package included seven item selection rules, listed in Magis and Raîche (2012, p. 9). Now, catR holds five additional item selection rules that are briefly described below.
1. The thOpt procedure (Li and Schafer 2005; Magis 2013). In the thOpt rule, the item selected is the one belonging to the subset B of administrable items of the bank with minimum distance between the current trait level estimate θ̂ and the value θ_i^max at which the item achieves the maximum of its Fisher information function:

\[
i^* = \arg\min_{i \in B} \left| \hat{\theta} - \theta_i^{\max} \right|
\]

The computation of θ_i^max is done with the equations provided in Magis (2013).
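Once each item's information peak θ_i^max is available, the thOpt selection itself is a one-line argmin, sketched here with precomputed toy values (catR derives θ_i^max from the closed-form equations in Magis (2013), which are not reproduced here):

```python
# thOpt sketch: pick the administrable item whose information peak lies
# closest to the current ability estimate.
def th_opt(theta_hat, theta_max, administrable):
    """Return the index i in `administrable` minimising |theta_hat - theta_max[i]|."""
    return min(administrable, key=lambda i: abs(theta_hat - theta_max[i]))
```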
2. The Kullback-Leibler divergence criterion weighted by the posterior distribution (KLP; Chang and Ying 1996). The Kullback-Leibler (KL) information function evaluates the item discrimination capacity between any possible pair of trait levels; KL is therefore a global information measure. Chang and Ying (1996) proposed to weight the KL measure with the posterior trait level distribution, which is proportional to f(θ) L(θ), where f(θ) is the prior distribution of ability and L(θ) is the likelihood function. The item-level measure KL_i(θ̂ ∥ θ) is computed as described in van der Linden and Pashley (2010), with L(θ|X_i) being the contribution term of item i to the full likelihood L(θ).
3. The Kullback-Leibler divergence criterion weighted by the likelihood function (KL; Barrada, Olea, Ponsoda, and Abad 2009b). In this version of the KL selection rule, no prior distribution is considered, so the weighting function reduces to the likelihood L(θ) alone.

4. The progressive method (Revuelta and Ponsoda 1998). In the progressive method the selected item is the one for which the weighted sum (1 − W) R_i + W I_i(θ̂) of a random component R_i and the Fisher information I_i(θ̂) is highest, where R_i is a random number drawn from the interval [0, max_{i ∈ B} I_i(θ̂)] and I_i(θ̂) is the Fisher information function computed at the current estimate θ̂. At the beginning of the test, when the trait estimation error is high, the weight of the random component is maximal and the weight of the Fisher information is minimal. As the number of administered items increases (in fixed-length CATs) or as the estimated standard error approaches the standard error threshold (when the "precision" rule is applied), the weight of the random component decreases and the weight W of the Fisher information increases.
For fixed-length CATs, Barrada, Olea, Ponsoda, and Abad (2008) proposed an equation, governed by a parameter t, relating W to the item position in the test (ranging from 1 to Q). The t parameter marks the speed at which the weight of the random component is reduced, and thus the speed at which the importance of item information increases. Higher values of t imply a higher relevance of the random component in the item selection.
When the stopping rule is based on reaching a predefined standard error value, the W value is computed with an adaptation of the method proposed by McClarty, Sperling, and Dodd (2006), where I_stop is the Fisher information required for reaching the standard error threshold and M is the maximum test length.
5. The proportional method (Barrada et al. 2008; Segall 2004). While the rest of the selection methods implemented in catR are deterministic, the proportional method is stochastic. The probability of selecting item i is a function of its Fisher information and of an acceleration parameter H (defined below), where n is the size of the item bank and z_i indicates whether item i belongs (1) or not (0) to B. Once the probability of each item being selected is computed, a cumulative distribution of probabilities is derived. Then, a random number drawn from the uniform interval (0, 1) is used to identify the item to be selected.
For fixed-length CATs, Barrada et al. (2008) proposed defining H as a function of the item position in the test. The s parameter has the same role as the t parameter in the progressive rule.
For the "precision" stopping rule, H is computed in an analogous way.

Note that, for clarity, the formerly called Urry's method (Urry 1970) has been renamed as the bOpt criterion. Moreover, all item selection rules are available for both dichotomous and polytomous IRT models, except the thOpt and the bOpt methods (which are restricted to dichotomous models). Also, the progressive and proportional methods are not available for classification CATs.
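To make the mechanics of these rules concrete, here are minimal Python sketches (illustrations only, not catR code). The weighted-sum form of the progressive rule and the power form of the proportional selection probabilities follow Revuelta and Ponsoda (1998) and Segall (2004), respectively, and should be read as assumptions; the computation of W and H is left out:

```python
import random

def thopt_select(theta_hat, theta_max, administrable):
    # thOpt: pick the administrable item whose information-maximizing
    # ability value theta_max[i] is closest to the current estimate.
    return min(administrable, key=lambda i: abs(theta_hat - theta_max[i]))

def progressive_select(info, administrable, W, rng):
    # Progressive: maximize (1 - W) * R_i + W * I_i(theta_hat), with R_i
    # uniform on [0, max info over administrable items].
    top = max(info[i] for i in administrable)
    return max(administrable,
               key=lambda i: (1 - W) * rng.uniform(0, top) + W * info[i])

def proportional_select(info, z, H, rng):
    # Proportional: select item i with probability proportional to
    # z_i * I_i(theta_hat) ** H, via inverse-CDF sampling.
    weights = [zi * ii ** H for zi, ii in zip(z, info)]
    u, cum = rng.uniform(0, sum(weights)), 0.0
    for i, w in enumerate(weights):
        cum += w
        if u <= cum:
            return i
    return len(weights) - 1

info = {0: 0.2, 1: 0.9, 2: 0.5}      # illustrative information values
rng = random.Random(1)
best = thopt_select(0.5, {0: -1.2, 1: 0.3, 2: 1.8}, [0, 1, 2])
# With W = 1 the random component vanishes and the progressive rule
# reduces to maximum Fisher information:
mfi_like = progressive_select(info, [0, 1, 2], W=1.0, rng=rng)
picked = proportional_select([0.2, 0.9, 0.5], [1, 1, 0], 2.0, rng)
```

Note how the deterministic rules always return the same item for a given state, while the proportional rule only guarantees that ineligible items (z_i = 0) are never selected.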
Post-hoc simulations
The generation of a CAT response pattern is done by random draws from the Bernoulli distribution for each item response. More precisely, once the next item to administer is selected, the probability of answering this item correctly, say P_j(θ), is computed at the true ability level θ, and the item response X_j is drawn from the Bernoulli distribution with success probability P_j(θ). Note that with the inclusion of polytomous IRT models, this random sampling scheme was extended to draws from the appropriate multinomial distribution.
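The sampling scheme can be sketched as a single multinomial trial (a hedged Python illustration, not catR code; the probabilities are assumed to sum to one):

```python
import random

def draw_response(probs, rng=None):
    """Draw one item response from category probabilities
    P_0(theta), ..., P_g(theta): a single multinomial trial, which
    reduces to a Bernoulli draw when there are only two categories."""
    rng = rng or random.Random(7)
    u = rng.random()
    cum = 0.0
    for k, p in enumerate(probs):
        cum += p
        if u <= cum:
            return k
    return len(probs) - 1

resp = draw_response([0.1, 0.6, 0.3])   # a category in {0, 1, 2}
```

In catR itself the draws are made with R's rbinom() and rmultinom(); the inverse-CDF loop above is just the underlying idea.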
The package catR now allows for post-hoc simulations, that is, item responses that are not randomly drawn but picked from a given response pattern. This response pattern is directly provided to the randomCAT() or simulateRespondents() functions with the newly added arguments responses and responsesMatrix, respectively (see Appendix A). By default, these arguments take the NULL value, so that item responses are randomly drawn from the appropriate (Bernoulli or multinomial) distribution. Otherwise, responses must be a vector, and responsesMatrix a matrix, of item responses (either dichotomous or polytomous) of the same length as the number of items in the bank, and with the same ordering (i.e., the first response corresponds to the first item in the bank, etc.).
In the case of post-hoc simulations, the true ability level may not be provided (as it will be unknown in practical cases). The randomCAT() function nevertheless returns its value through the trueTheta argument, with a default value of zero (for compatibility with traditional random CAT generation).
In post-hoc simulations, when real examinees (not simulees) have responded to the full item bank, it is common to treat the ability estimated from the full vector of responses as the best guess of the true ability level. In those cases, the best trueTheta value can be obtained with the thetaEst() function. Otherwise, trueTheta can be fixed to any arbitrary value in the context of post-hoc simulations. In any case, the trueTheta argument is not used for the generation of item responses.
The post-hoc simulation feature can be applied to at least two different situations. First, the responses of examinees to the full item bank are available and the user wants to evaluate the effects of switching from a linear test to an adaptive test (see, e.g., Fischer et al. 2014; Gibbons et al. 2008). Second, the responses to the items come from a previous phase of the simulation process and must remain constant in the adaptive phase. For instance, with post-hoc simulations it is possible to simulate the effects of item parameter calibration error in adaptive testing (Olea, Barrada, Abad, Ponsoda, and Cuevas 2012; van der Linden and Glas 2000). An example is also provided in the next section.
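The difference from a standard simulation is only in where the responses come from: items are still chosen adaptively, but each response is looked up in the fixed pattern. A minimal Python sketch (toy selection and estimation functions, purely for illustration and unrelated to catR internals):

```python
def run_posthoc_cat(select_next, estimate, responses, test_length):
    """Post-hoc CAT: items are chosen adaptively, but each response is
    looked up in a fixed, previously observed full-bank pattern instead
    of being randomly drawn."""
    administered, pattern, theta = [], [], 0.0
    while len(administered) < test_length:
        item = select_next(theta, administered)
        administered.append(item)
        pattern.append(responses[item])   # fixed response, no sampling
        theta = estimate(administered, pattern)
    return administered, pattern, theta

responses = [1, 0, 1, 1, 0]               # full-bank pattern of one examinee
next_unused = lambda theta, done: min(i for i in range(5) if i not in done)
toy_estimate = lambda items, pat: sum(pat) / len(items)   # toy "estimator"
items, pat, theta = run_posthoc_cat(next_unused, toy_estimate, responses, 3)
```

Whatever selection rule or estimator is plugged in, the returned pattern is always a subset of the provided full-bank responses.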
Illustration
Let us now illustrate briefly the main updates of catR by displaying the full code to generate CAT patterns. The main steps have been described in Magis and Raîche (2012) and will not be detailed here; emphasis is put on new topics instead.
Throughout this section the following options will be selected and kept identical across examples for the sake of clarity (they can obviously be modified according to the user's interests).
1. An item bank of 500 items is randomly generated with the PCM as the IRT model. Moreover, each item has between two and five response categories.
2. Each CAT starts by selecting, from the item bank, the item that is most informative for the true ability level of zero. This means, among other things, that each CAT will start with the same item (this restriction can nevertheless be relaxed by using another approach; the current one, however, is commonly used in real CAT assessments).
3. Ad-interim ability is estimated with the maximum a posteriori (or Bayes modal) method, with the standard normal prior distribution of ability.
4. The next item to administer is selected by making use of the Kullback-Leibler (KL) divergence criterion.
5. The stopping rule is set as a precision criterion: Adaptive administration ends when the standard error of the ad-interim ability estimate becomes smaller than 0.3.
6. The final ability estimator is the traditional maximum likelihood (ML) estimator.
7. The examples do not contain any option for content balancing nor for item exposure control.
The first rows of the generated item bank (stored in the R object bank) can be inspected with:

R> head(bank)

According to the PCM parametrization in (3), items 1 and 2 hold four response categories, items 3, 4 and 6 have five response categories, and item 5 has only three response categories.
Example 1
In the first example, a single CAT pattern will be generated from the usual random response generation process, with a true ability level of one and all aforementioned CAT options. The corresponding R code is given below, followed by the corresponding output:
The CAT required only six item responses to reach the pre-specified level of precision in the ability estimation process. Note that the final SE value (0.305) is larger than the requested threshold (0.3), which is due to the change in ability estimator between the test and final steps. Moreover, the final ability estimate equals 1.415, not far from the true underlying ability level of one.
Example 2
In the second example, an illustration of post-hoc simulation is performed. First, for the sake of such an analysis, a response pattern must be provided. Here we make use of the new genPattern() function to create this pattern (see Appendix A for further details), though in practical situations it would often come from real test assessments.
R> x <- genPattern(th = 1, it = bank, model = "PCM")

Then, the CAT pattern is obtained using the following code. Note that in this context of post-hoc simulation, the true ability level need not be provided anymore (as it is only used to generate the item responses).
R> res2 <- randomCAT(itemBank = bank, responses = x, model = "PCM",

The output is very similar to the one for res1 in the previous section, so only the specific parts are displayed below:

Post-hoc simulation of a full bank provided response pattern

Here also, only six items are required to fulfill the CAT stopping rule. In this case, it is worth checking that the item responses returned by the CAT process

R> res2$pattern

are actually equal to the corresponding item responses from the input response pattern.
Example 3
In this final example, the new function simulateRespondents(), described in Appendix A, will be illustrated. An artificial set of 20 respondents is considered, with true ability levels equally spaced between −2 and 2. All other CAT options remain unchanged. The full R code is displayed below.
R> thetas <- seq(from = -2, to = 2, length = 20)
R> res3 <- simulateRespondents(thetas = thetas, itemBank = bank,
+ model = "PCM", start = start, test = test, stop = stop, final = final)
R> res3

The output of this function is displayed in a somewhat different setting than the output from randomCAT(). That is, summary statistics on the whole set of simulated patterns are returned instead (though all individual results can be retrieved from the elements of the output list res3, for instance by calling str(res3)). This output is reproduced below. These results can be saved by setting save.output to TRUE in the simulateRespondents() function. Note that the long computational time (about four minutes) is due to the use of the KL rule as the method for next item selection, which is very computationally intensive. Other methods, such as MFI for instance, would reduce this computational effort to a few seconds.
Final comments
The R package catR offers a flexible routine to generate response patterns under a CAT scenario. It has many options for ability estimation, next item selection, item exposure and content balancing control, as well as several rules for selecting the first items and stopping the CAT. Both dichotomous and polytomous IRT models are now supported by catR, and post-hoc simulations can also be considered as an alternative to usual random response draws.
Practical applications of catR are numerous. First, it was originally developed as a research tool to perform intensive and comparative simulation studies. Up to now, a common dynamic in the research area of CAT has been that each researcher develops his/her own code to perform the simulations. Making use of a common package like catR would alleviate some related problems: (a) it reduces the time needed to implement CAT routines; (b) it provides more consistency in research and allows replication studies; and (c) it facilitates the use of the more complex IRT models that are available in catR. Moreover, the modularity of its architecture and its open-source access imply that any researcher can use it, as it stands or by modifying some functions. The inclusion of polytomous IRT models and additional item selection rules will allow studies to broaden this area of research, for instance by comparing several item selection rules or ability estimators across various models and test situations.
The package catR can also be useful with real or simulated data. We can foresee several scenarios for which a freely accessible alternative such as catR can reduce costs.
1. Pre-operational analysis, to select the adequate protocol (item selection rule or trait level estimator) by simulation when considering starting a CAT implementation with real item banks.
2. Empirical evaluation of the gain in switching from linear to adaptive administration of previously developed and calibrated items using post-hoc simulations (e.g., Fischer et al. 2014;Gibbons et al. 2008).
3. Operational purposes, as support for a CAT administration platform. One example is the web-based platform Concerto (Kosinski et al. 2013), which requires catR as its underlying computational routine for CAT administrations.
Note that catR is not the only R package devoted to adaptive testing. Among others, mirtCAT (Chalmers 2016) seems to be a valuable alternative. Its main asset is to allow the creation of graphical user interfaces for administering CATs in real time. catR, on the other hand, is more complete in terms of CAT options for selecting the first item(s), next item selection and stopping rules. In addition, the mirtCAT package supports several multidimensional IRT models, which is currently not the case for catR.
Future updates of catR will focus on several modern aspects of CAT assessment. Some possible future extensions are: the inclusion of multidimensional IRT models (Reckase 2009); cognitive diagnosis CAT models (Cheng 2009; Kaplan, de la Torre, and Barrada 2015); new or other methods for item exposure and content balancing control; and testlet IRT models (Wainer, Bradlow, and Wang 2009).
A. Additional updates and modifications
Together with the previously described updates of the package, several technical modifications and improvements were performed. They are briefly listed below for completeness.
A.1. What remains unchanged
The general architecture of catR is such that some elements can be modified, updated or removed without needing to rewrite the whole package. Therefore, despite important improvements, the general structure of the package was left unchanged. That is, to generate a response pattern, one must provide a calibrated item bank in an appropriate format, a true ability level (or a full response pattern for post-hoc simulations), and four lists of options for the starting, testing, stopping and final steps of a CAT (see Figure 1 of Magis and Raîche 2012, p. 7 for further details). Hence, previous code developed for catR version 2.6 or before will remain valid with the most recent version of the package.
A.2. Removed or replaced features
The main modification in catR is the removal of the createItemBank() function and its replacement with a simpler function called breakBank(). The purpose of createItemBank() was to produce an item information matrix from which the Fisher information for a given item and ability level could be quickly picked up. This structure, however, was not very user-friendly and required the creation and storage of an information matrix, while on-the-fly computation of information functions is fast and straightforward with modern computers.
Another feature of createItemBank(), however, was to break down the item bank into two pieces (whenever supplied): the item parameters on the one hand and the subgroup membership of the items on the other hand (for content balancing purposes). This feature had to be preserved for the proper functioning of catR, and this was achieved by creating the simpler function breakBank() instead. This new function takes as input the original matrix with both item parameters and subgroup membership and returns as output a list with these two elements. Note that breakBank() is used internally in the main function randomCAT() of catR, so now only the original, full matrix of item parameters (plus perhaps subgroup membership) must be supplied as input information to randomCAT().
Note also that, in order to remove the former dependency of catR on the package sfsmisc (Maechler et al. 2016) for numerical integration, the updated package contains its own internal function for numerical integration, called integrate.catR().
A.3. Item bank and response pattern generation
Two functions were created to automatically generate item banks according to a pre-selected IRT model. These functions are called genDichoMatrix() and genPolyMatrix() for dichotomous and polytomous IRT models, respectively. Both share four identical arguments: items to specify the requested number of items in the bank; model to determine the IRT model; seed to set the random seed; and cbControl to specify the options for content balancing control. In addition, genDichoMatrix() also allows specification of the parent distribution of each of the four parameters. genPolyMatrix(), on the other hand, requires the maximum number of item categories and can force the items to have exactly the same number of categories. The parent distributions, however, are currently set to default distributions. The interested reader can find more details about these functions in the catR help files.
Another useful function, called genPattern(), was created. As its name suggests, it performs random generation of a response pattern given a set of item parameters (argument it), a targeted ability level (argument th) and a pre-specified model, either a dichotomous or a polytomous IRT model (argument model). As previously mentioned, this random generation is made by an appropriate call to the function rbinom() for dichotomous items and to rmultinom() for polytomous IRT models. The function returns a vector of random item responses with the same length as the number of rows in the item bank. Note that a single item can be specified by a vector of parameters (in the appropriate order according to the IRT model), and genPattern() converts it into an appropriate matrix for random response generation.
A.4. Multiple pattern generation
Finally, because the randomCAT() function can only produce one adaptive test at each call, an additional function was added to generate several response patterns simultaneously. This function, called simulateRespondents(), allows easy simulation of a large number of CAT administrations and provides both statistical summaries and plots regarding accuracy and item exposure control. The results and plots are provided for the overall sample of examinees and conditionally on the deciles of the trait level distribution. Ten different plots can be displayed and saved. The availability of the plots depends on the stopping rule used. The details can be checked in the help files of the simulateRespondents() function.
The function simulateRespondents() makes use of most of the arguments of randomCAT(), with three main exceptions. First, the argument trueTheta is replaced by thetas and can hold a vector of true ability levels: each value will be used to generate one response pattern through successive calls of randomCAT(). Second, in the case of post-hoc simulations, the argument responsesMatrix contains a matrix of response patterns (one pattern per row) from which the item responses will be drawn. Third, two methods are available for controlling the maximum exposure rate r_max that no item should surpass: the restrictive method (Revuelta and Ponsoda 1998) and the item-eligibility method (van der Linden and Veldkamp 2004). In both methods, exposure control parameters k_i are used to define the subset B of the bank that is available for administration for each examinee, and these parameters are computed on-the-fly, with each new examinee (Barrada, Abad, and Veldkamp 2009a).
In the restricted method, the control parameters can adopt just two values, 0 and 1. The k_i parameter is set to 0 if the exposure rate er_i^(1...g) of item i over the first g CAT administrations is greater than or equal to r_max; otherwise, the control parameter is set to 1. In the item-eligibility method, the parameters for the (g + 1)th examinee are calculated considering r_max, er_i^(1...g), and the exposure control parameters k_i^(g) for the previous examinee.
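The restricted method, which is fully specified above, can be sketched in a few lines of Python (an illustration only, not catR code; the function name is hypothetical):

```python
def restricted_eligibility(exposure_counts, g, r_max):
    """Restricted method: k_i = 0 (item ineligible) if item i's exposure
    rate over the first g examinees reached r_max, else k_i = 1."""
    return [0 if count / g >= r_max else 1 for count in exposure_counts]

# After g = 100 examinees: item 0 administered 30 times, item 1 ten times
k = restricted_eligibility([30, 10], g=100, r_max=0.25)
```

Items with k_i = 0 are simply excluded from the subset B of administrable items for the next examinee.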
Community gene annotation in practice
Manual annotation of genomic data is extremely valuable to produce an accurate reference gene set but is expensive compared with automatic methods and so has been limited to model organisms. Annotation tools that have been developed at the Wellcome Trust Sanger Institute (WTSI, http://www.sanger.ac.uk/) are being used to fill that gap, as they can be used remotely and so open up viable community annotation collaborations. We introduce the 'Blessed' annotator and 'Gatekeeper' approach to Community Annotation using the Otterlace/ZMap genome annotation tool. We also describe the strategies adopted for annotation consistency, quality control and viewing of the annotation. Database URL: http://vega.sanger.ac.uk/index.html
Introduction
High-quality manual annotation of a genome is enormously valuable to aid its interpretation and provide an accurate gene set which serves as a solid foundation for a wide array of further studies, as the value of a genome is only as good as its annotation. Manual annotation can prove to be costly, as it requires a considerable infrastructure, such as a large-scale automated analysis pipeline and specific tools, in order to be viable. The human and vertebrate analysis and annotation (Havana) team at the Wellcome Trust Sanger Institute (WTSI) (1) manually annotate the human, mouse and zebrafish genomes using the Otterlace/ZMap genome annotation tool (2). The manual annotation from the Havana team is released every three months and publicly available from the Vertebrate Genome Annotation (VEGA) database (3).
Despite it being over 10 years since the publication of the Stein paper, manual gene structure annotation is still lacking for many organisms and has been hard to adopt as a community effort because of the limitation of tools available. Where community annotation has been extremely successful is the wikigenes project (14). However, this has been associated with adding descriptive text to attribute functionality to existing gene structures rather than annotation of new gene structures. The main genome browsers have now adopted a mix of factory and museum models, which is employed by NCBI, UCSC, Ensembl and Ensembl Genomes (15). For human annotation, Ensembl and UCSC now display a merged geneset, which is a mix of manual and automated annotation, called the GENCODE geneset (16). This GENCODE annotation will comprise the first pass annotation of the whole of the human genome by the end of 2012. Many genome browsers now also make use of the Distributed Annotation System (DAS) (17) to display additional annotation about a region of interest, as DAS makes use of a common reference sequence as a basis to visualize additional annotation. The issue of lag time between browser builds is thus eliminated, and data can be accessed and displayed as soon as it is made available to the community.
Community annotation using Otterlace/ZMap
The Otterlace manual annotation system (18) comprises a relational database that stores manual annotation data and supports the graphical interface, ZMap. The Otterlace database schema is based on the Ensembl schema. The annotation data is stored in a MySQL database (19) and forms the backend to the Vega database. ZMap is a stand-alone sequence feature viewer derived from the Acedb FMap display (18). It is written in the C language for high performance and has a command interface so it can be integrated with other annotation software, such as Otterlace. It has a very flexible data model allowing the incorporation of new data sources (e.g., short reads) as they become available.
Funding for manual annotation is limited and therefore we have explored a community annotation approach, which utilizes our annotation software and analysis pipelines. We have used the 'Blessed Annotator' and 'Gatekeeper' approach within two projects.
Blessed annotator
A variation on the Museum approach. This has been applied to the knockout mouse project (KOMP) (20) and the North American Conditional Mouse Knockout project (NorCOMM) (21), both part of the International Knockout Mouse Consortium (IKMC) (22), of which WTSI is a member, that aims to generate mutants for all of the protein-coding genes in mouse. Since we had already developed in-house tools for the analysis of mouse knockout genes for the European Conditional Mouse Mutagenesis Program (EUCOMM) (23), we developed the 'Blessed Annotator' approach for KOMP and NorCOMM external annotators. In addition to gene annotation, the mouse projects required the identification of the critical exon, that is, the exon in a gene that can be removed to induce Nonsense Mediated Decay (NMD) (24) and so knock out the expression of that gene. Following on from this, the knock-out construct itself, missing the critical exon, was annotated in order to provide information for the vector constructs that the laboratory partners generate (25).
We conducted extensive training for a small group of annotators from Washington University for KOMP and one annotator at the University of Manitoba for NorCOMM, who were given remote access to our annotation tools so they could continue their work after the initial training period. After a period of close mentoring and quality control (QC), their annotation is considered to be of sufficient quality to be integrated into the mouse gene-build. Both groups have been using our software for 3 years to contribute to their projects.
Gatekeeper
We have also used the 'Gatekeeper' approach for multiple species. This is an extension and refinement of the party plus cottage industry approach. We have held several annotation jamborees at WTSI in Hinxton: Xenopus tropicalis in 2005 (cDNAs), cow in 2007 (WGS) (26) and pig in 2008 (WGS). These were week-long intensive jamborees to annotate cDNA and genomic sequence with our in-house annotation tools (Otterlace/ZMap), aimed mainly at the Principal Investigators (PIs) of interested groups. The disadvantage of the jamboree model is that the annotation is a one-off event and the PIs are usually unable to extend and refine the annotation subsequently, so this is not suitable for a longer-term annotation project. The development and refinement of our annotation tools, discussed in the following section, led to their use externally and hence opened up the possibility of external community annotation.
WTSI is involved in the sequencing of the swine genome as part of the swine genome sequencing consortium (27), and is finishing and manually annotating the pig X and Y chromosomes. Our involvement in pig annotation and the interest generated by the pig jamborees led to an approach by the Immune Response Annotation Group (IRAG) to annotate 1700 genes in pig that are involved in immune response. The genes were chosen by searching within Ensembl (30) for the gene ontology term for immune system process (GO:0002376), core genes involved in host-pathogen interplay (28) and gene sets under positive selection in humans (29).
A group of researchers working on resistance to disease and immunity in swine was identified to establish shared and species-specific immune responses and to refine the annotation of immunity-related genes. Group training was instigated at WTSI and Iowa State University, with regular follow-up meetings by web conferencing tools, such as WebEx and Skype. Groups of researchers were assigned genes of interest and annotated them using Otterlace/ZMap under the instruction and guidance of professional annotation staff.
Software and analysis tools
The Otterlace annotation client runs on a local machine and downloads all of its data from the WTSI web server. The genomic region being annotated is stored in a persistent annotation session directory on the user's computer, which can be recovered following system reboots. Annotation actions require only occasional network access, so the system is tolerant of interruptions to network connectivity.
The genomic sequence is run through an analysis pipeline that consists of homology searches, gene predictions and de novo sequence analysis. The pipeline analysis includes: BLASTX against SwissProt and TrEMBL proteins, BLASTN against ESTs and vertebrate mRNAs, tandem repeat finding, and Augustus (31) and Genscan (32) gene predictions. The results are displayed in the ZMap graphical interface (Figure 1B). ZMap is written in the C programming language to give good drawing performance and makes use of threading to load multiple datasets simultaneously, resulting in much faster startup times.
Large-scale data analysis, such as searches of mRNA libraries against the whole genome, are performed on WTSI systems, served by Otter CGI scripts, and presented in ZMap on the client where they can then be used to construct the annotation. Additional sources of evidence, such as BAM files on FTP or web servers anywhere in the world, can be configured on the server and then loaded into ZMap for display. As many of these data sources can be very large, ZMap allows the annotator to choose which tracks, and how much of each track, are loaded.
Access to the Otter system is restricted to authorized users. External annotators register themselves with the WTSI SingleSignOn system, using their email address at their Institute. This takes care of authentication, and access to each species (authorization) is controlled via a configuration file which lists their email address and which is administered by the Otter support staff at WTSI.
Locks are used to prevent more than one annotator making changes to the same region of the genome ( Figure 1A). Existing genes which are not contained entirely within the region being annotated cannot be edited in the otterlace session and appear 'greyed out' ( Figure 1C).
Quality control in Otterlace
The Otterlace client performs a number of quality and sanity checks as genes and transcripts are built by the annotator. The names of transcripts with problems are highlighted in red in the session window, and a 'tool tip' gives a brief description of the problem when the annotator mouses over the transcript name (Figure 1D). The transcript editing window shows the 2 bp in the intron immediately adjacent to each exon, and colours them green if they match a splice consensus and red if they do not (Figure 1E). Introns are checked to make sure that they are not too short. When present, the protein translation is checked for internal stop codons and completeness, and the transcript is checked to ensure that it is not subject to NMD (34) or, if it is subject to NMD, that it has been correctly flagged. The format of the transcript name is checked to ensure that it conforms to an approved naming convention. Transcripts must have evidence attached (accessions of the nucleotide or protein sequences used to build them), and more than one transcript in the same gene cannot share the same evidence. The locus must have the full name associated with the gene symbol added in the Full name field. A vocabulary of attributes that can be attached to transcripts or loci is provided to avoid keying errors, and these appear in the transcript window with green shading (Figure 1E). This integrated QC within Otterlace proved a valuable tool for external annotators, as it flags errors as they occur and reduces the need for QC by Havana annotators. For the Blessed annotator model, thanks to the extended training period, there is minimal manual QC over a period of several years for several thousand genes. For the Gatekeeper annotator model, however, the manual QC is much more extensive owing to the much shorter training period of the annotators. Thus, this model requires more frequent input by professional annotators, but over a shorter timescale compared with the KOMP and NorCOMM projects.
The annotators were all trained with reference to the Havana team annotation guidelines (35), which was essential to assure the quality of the annotation.
The annotation
The annotation for the KOMP and NorCOMM projects took advantage of the customized software features already available for the EUCOMM project (25), in particular identifying critical exons and making knock-out constructs. The number of genes targeted for annotation is 5000 for KOMP and 500 for NorCOMM, and they are complementary to the EUCOMM project. This Blessed annotation makes use of the full complement of biotypes available within Otterlace and is integrated into the gene set for mouse available from the VEGA website. Gene targets for knockouts are identified from Ensembl predictions. Figure 2 gives an example of the importance of manual annotation for this project.
The IRAG project has 30 external annotators working through a list of 1700 genes. For the pig project a condensed version of the biotypes was used, owing to the dearth of sequence evidence available for pig and the lower quality of the genome sequence. The reduced numbers of pig mRNA and SwissProt entries available, which are required to assign a coding locus the biotype Known_CDS, resulted in many more Novel_CDS loci built from cross-species mRNA evidence. Working with unfinished genomic contigs was a challenge for both the software and the annotators, as for high-quality finished genomes, such as human, the annotation is added to finished BAC sequences. For the pig autosomes many BACs consist of several, often unordered, contigs that are not finished to a high quality. Figure 3 shows an example of how manual annotation can assist in assessing the quality of a genome assembly.
In order to find genuine deletions and duplications of pig genes relative to the human genome, a high-quality genome is required. The current pig assembly, 9.2, is thought to be missing 10% of the genome. The process of gene annotation identifies assembly and sequencing errors, but as full finishing will only be performed on the X chromosome it is unlikely that these errors will be resolved under current plans.
Despite the concerns about the quality of the genome, with reference to high-quality manual annotation, the group has already identified at least 12 genes that show genuine duplication, for example the REG3A gene. Genes that are thought to be absent in the swine genome will be re-assessed when the new genome build is available to ensure that they are not artefactual deletions.
Discussion
Compared with other community annotation projects, it is apparent that the 'Blessed Annotator' and 'Gatekeeper' models can give a much wider range of biotypes with a relatively small number of annotators. This comparison is shown in Table 1. The majority of the projects, including Methanosarcina acetivorans, cow and bee, only annotate coding genes. The Drosophila project includes non-protein-coding RNAs, as does rice. The Drosophila project also includes pseudogenes, but reports very low pseudogene numbers (17 in the whole genome). The rice project gives a five-level coding gene breakdown, but these are based on automated annotation with a final manual curation step. The annotation using Otterlace/ZMap includes the full set of biotypes that we have developed in the Havana team; please see the annotation guidelines for further information (35). These include: (1) coding loci: Known_CDS, Novel_CDS, Putative_CDS, NMD; and (2) pseudogene loci: transcribed_unprocessed_pseudogene, unitary_pseudogene, transcribed_unitary_pseudogene, polymorphic_pseudogene.
We also annotate polyA signals and sites to ensure that we have annotated the full-length gene. These detailed biotypes give a much more informative picture of the genome and increase the value of the manual annotation when compared to other annotation methods.
The goal of both of our approaches to community annotation is to manually annotate all of the genes required by the projects, with as much depth of detail as possible. Due to the use of the SingleSignOn system for access to the annotation tools, we can track authorship, and as multiple users are prevented from annotating the same region of a genome at the same time, there is no duplication of annotation. We routinely transfer annotation over to new genome builds when they become available. For the IRAG project, the annotation will be transferred over to build 10.2 and then reviewed and checked for additional supporting evidence. This is particularly important where genes are partial, or thought to be duplicated or deleted with reference to the human genome, due to the unfinished nature of the pig genome. Transfer issues may arise where contigs have been re-ordered relative to the previous genome build and where new sequence has been incorporated into the new build. The use of annotation guidelines is essential to ensure consistency throughout the annotation of all annotation groups, although discussion is valuable when appropriate to aid in their interpretation. The Gatekeeper annotation approach is particularly challenging, as consistent and timely QC is required to address the diverse levels of experience and expertise throughout the group. This method is being successfully adopted by the Bovine Genome Database, which is using the annotation tool Apollo to allow collaborators to annotate new gene structures (38), with a professional curator validating the data.
It is essential to ensure consistency in gene naming, and the IRAG project has provided a good start to establishing this for the pig, thus highlighting the need for a Swine Genome Nomenclature Committee. It has also demonstrated the added value of manual annotation compared to automated annotation in giving accurate gene structures and gene locations and aiding the production of a reference gene set. A possible next step in this community annotation effort could be the annotation of gene families across the genomes of multiple species by utilising the experience of experts in the field. We are also looking at using information from RNA-seq to confirm and expand alternative transcripts in many different tissue types.
Antifungal Activity of Lactobacillus pentosus ŁOCK 0979 in the Presence of Polyols and Galactosyl-Polyols
The antifungal activity of Lactobacillus pentosus ŁOCK 0979 depends both on the culture medium and on the fungal species. In the control medium, the strain exhibited limited antagonistic activity against indicator food-borne molds and yeasts. However, the supplementation of the bacterial culture medium with polyols (erythritol, lactitol, maltitol, mannitol, sorbitol, xylitol) or their galactosyl derivatives (gal-erythritol, gal-sorbitol, gal-xylitol) enhanced the antifungal properties of Lactobacillus pentosus ŁOCK 0979. Its metabolites were identified and quantified by enzymatic methods, HPLC, UHPLC-MS coupled with QuEChERS, and GC-MS. The presence of polyols and gal-polyols significantly affected the acid metabolite profile of the bacterial culture supernatant. In addition, lactitol and mannitol were used by bacteria as alternative carbon sources. A number of compounds with potential antifungal properties were identified, such as phenyllactic acid, hydroxyphenyllactic acid, and benzoic acid. Lactobacillus bacteria cultivated with mannitol synthesized hydroxy-fatty acids, including 2-hydroxy-4-methylpentanoic acid, a well-described antifungal agent. Scanning electron microscopy (SEM) and light microscopy confirmed a strong antifungal effect of L. pentosus ŁOCK 0979.
Introduction
Filamentous fungi and yeasts are present in almost all types of ecosystems due to their high adaptation ability and low nutritional requirements. Filamentous fungi are widespread food spoilage microorganisms responsible for significant economic losses in the agri-food industry [6]; they are also a major health concern due to mycotoxin production. The most common genera of spoilage fungi include Penicillium, Fusarium, Aspergillus, Cladosporium, and Rhizopus [21]. Commercial foodstuffs are usually protected from such microorganisms by physical and chemical techniques. However, as chemical preservatives have become less socially acceptable, natural preservation methods are being sought. Lactic acid fermentation has been known and used for these purposes since antiquity. In recent years, lactic acid bacteria (LAB) have been extensively investigated for their antifungal properties and bioprotective cultures have been proposed as a promising biotechnological approach [22,24,25]. Of particular application interest are lactobacilli, which convert carbohydrates into lactic and acetic acids (primary metabolites), as well as a range of secondary metabolites, such as carbon dioxide, ethanol, hydrogen peroxide, fatty acids, acetoin, diacetyl, cyclic dipeptides, bacteriocins, and bacteriocin-like inhibitory substances [3]. Since these metabolites exhibit only weak antifungal properties, many research teams are seeking Lactobacillus strains with a higher natural ability to inhibit fungal and yeast growth [4,9,14,15,26]. Ryu et al. [26] reported that Lactobacillus plantarum HD1 synthesizes 5-oxododecanoic acid (MW 214), 3-hydroxydecanoic acid (MW 188), and 3-hydroxy-5-dodecenoic acid (MW 214), which are considered antifungal.
In turn, Magnusson [14,16] showed that some LAB can convert glycerol to 1,3-propanediol, which inhibits fungal growth. While the qualitative and quantitative composition of antifungal compounds generated by LAB is species-or even strain-specific, it can be modulated by culture medium modification. For instance, Lipińska et al. [13] adjusted the antifungal spectrum of lactobacilli by adding polyols and their galactosyl derivatives, proving that the antagonistic activity of LAB depends on culture medium composition, the LAB species, and the sensitivity of the fungal species. It was found that in the presence of xylitol and gal-xylitol in the bacterial culture medium Lactobacillus pentosus ŁOCK 0979 effectively inhibited the growth of A. niger, A. alternata, A. brassicicola, F. lateritium, and M. hiemalis [13]. The modulation of LAB metabolism by supplementing the culture medium with various, often atypical, compounds may give rise to new systems inhibiting the growth of spoilage microorganisms.
The objective of the study was to determine the antifungal properties, metabolite profile, and enzymatic activity of the strain L. pentosus ŁOCK 0979 cultured in the presence of polyols, namely, erythritol, xylitol, maltitol, mannitol, sorbitol, and lactitol, and their transglycosylation derivatives (galerythritol, gal-xylitol, and gal-sorbitol).
Microbiological Strains and Polyols
The study material consisted of the bacterial strain L. pentosus ŁOCK 0979 and 10 fungal strains deposited with the Pure Cultures Collection of Industrial Microorganisms of the Institute of Fermentation Technology and Microbiology, Lodz University of Technology (ŁOCK 105). The indicator fungi included the yeasts Candida vini 0008 and 0009 and the molds Mucor hiemalis 0519, Geotrichum candidum 0511, Alternaria alternata 0409, Alternaria brassicicola 0412, Aspergillus niger 0433, Fusarium lateritium 0508, Aspergillus ochraceus, and Penicillium sp. Two of the tested fungi, A. ochraceus and Penicillium sp., were newly isolated from spoiled food.
The polyols used in the study (erythritol, lactitol, maltitol, mannitol, sorbitol, xylitol) occur naturally in some foodstuffs and may be added to others, e.g., as sweeteners. In turn, gal-erythritol, gal-xylitol, and gal-sorbitol are modern prebiotics which confer beneficial effects [8], as demonstrated in studies of the blood and digesta of laboratory rats (Klewicki 2007).
Determination of Antifungal Activity of Lactobacillus pentosus ŁOCK 0979 in the Presence of Polyols and Galactosyl-Polyols

The antagonistic activity of L. pentosus ŁOCK 0979 against the indicator fungi was tested using the double-layer method described by Lipińska et al. [13]. First, 10 μL of overnight bacterial culture was dropped on MRS agar plates (Merck or BTL) supplemented with 1% (m/v) polyols, galactosyl-polyols, or galactose, separately. The control group consisted of MRS agar plates (Merck) with lactobacilli colonies cultured with neither polyols nor gal-polyols. After 18-24 h, the plates were overlaid with Sabouraud 4% dextrose agar (Merck) inoculated with an indicator fungal strain (10 5 -10 6 spores × mL −1 ). Indicator strain inhibition zones around Lactobacillus sp. colonies were measured after 24-72 h of cultivation at 30°C. The results were given as fungal inhibition diameters minus the diameter of Lactobacillus sp. colonies.
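As a worked example of the readout above, the net zone (fungal inhibition diameter minus the Lactobacillus colony diameter) can be binned into the symbolic scale used in Table 2. The bin edges below are taken from the Table 2 key (0.5-2, 2.1-10, 10.1-20, >20 mm); the function name is purely illustrative.

```python
def inhibition_score(total_diameter_mm, colony_diameter_mm):
    """Net inhibition zone and its symbolic score on the Table 2 scale:
    '-' no zone, '+/-' 0.5-2 mm, '+' 2.1-10 mm, '++' 10.1-20 mm, '+++' >20 mm."""
    net = total_diameter_mm - colony_diameter_mm
    if net < 0.5:
        return net, "-"
    if net <= 2.0:
        return net, "+/-"
    if net <= 10.0:
        return net, "+"
    if net <= 20.0:
        return net, "++"
    return net, "+++"

print(inhibition_score(18.0, 6.0))  # 12 mm net zone -> '++'
```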
Preparation of Cell-Free Supernatant After Lactic Acid Fermentation
Following lactic acid fermentation by L. pentosus ŁOCK 0979 in media supplemented with one of the polyols or galactosyl-polyols, samples of cell-free supernatant (CFS) were prepared in order to identify and quantify the antifungal agents produced by the bacteria in the modified MRS media.
Determination of the Content of Polyols and Saccharides Using HPLC
Each CFS sample was diluted 10-fold. The obtained solution was passed through a 5-mL column BAKERBOND® spe Octadecyl (18) (J.T. Baker, USA) with cation and anion exchange resins (1:2 v/v). The first fraction (3 mL) was discarded, and the second one (3 mL) was collected for HPLC analysis. The content of saccharides and polyols was determined using an Aminex HPX-87C column from Bio-Rad (0.78 × 30 cm, mobile phase: water, flow rate: 0.5 mL × min −1 , 85°C). An RI detector and an integrating system from Knauer were used. The tests were done in triplicate and prepared in three parallel columns. Statistical analysis consisted of the Duncan test (p ≤ 0.05).
Spectrophotometric Determination of Glucose
Glucose concentration in the pure culture medium and in postfermentation CFS was determined spectrophotometrically according to the instructions supplied with the enzymatic kit (BioMaxima) and a calibration curve. CFS and pure culture medium samples were diluted 50-and 100-fold, respectively, relative to the initial glucose content of the medium (10 g × L −1 ). Subsequently, 1 mL of the reagent (glucose oxidase and glucose peroxidase) and 0.01 mL of a tested sample (diluted culture or pure medium) were placed in a cuvette and mixed. After 5 min (37°C), the absorbance of the tested sample was measured relative to the reagent blank (λ = 540 nm), with the results being proportionate to glucose content in the sample. Based on the prepared calibration curve (in the range of 0.01-4 g glucose × L −1 ) glucose concentration was determined both in medium and CFS samples, accounting for their dilution. The tests for each sample were done in triplicate, and statistical analysis involved one-way ANOVA (p ≤ 0.05).
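The absorbance-to-concentration step above can be sketched as a linear calibration followed by a dilution correction. The calibration points below are hypothetical placeholders spanning the stated 0.01-4 g/L range, not values from the BioMaxima kit, and the function name is invented for the example.

```python
import numpy as np

# Hypothetical calibration standards (g/L) and their absorbances at 540 nm,
# spanning the assay's 0.01-4 g/L working range.
standards_g_per_l = np.array([0.01, 0.5, 1.0, 2.0, 4.0])
absorbance_540 = np.array([0.002, 0.11, 0.22, 0.44, 0.88])

# Least-squares linear calibration: concentration = slope * A540 + intercept.
slope, intercept = np.polyfit(absorbance_540, standards_g_per_l, 1)

def glucose_g_per_l(sample_abs, dilution_factor):
    """Convert a sample absorbance to glucose concentration, correcting
    for the 50- or 100-fold dilution used in the assay."""
    return (slope * sample_abs + intercept) * dilution_factor
```

With a 50-fold diluted CFS reading A540 = 0.44, this returns roughly 100 g/L of glucose in the original sample under these placeholder standards.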
Concentration of D-Lactic Acid, L-Lactic Acid, and Acetic Acid
The quantification of D-lactic, L-lactic, and acetic acid requires enzymatic reactions described in the assay procedures: K-DLLATE 07/14 and K-ACET 11/05 (Megazyme International Ireland). In the case of D-and L-lactic acids, the manufacturer's procedure for the sequential assay of both optical isomers was applied. The concentration of all tested acids was estimated using colorimetric tests with the absorbance measured (λ = 340 nm) in a control sample (non-inoculated medium) and diluted CFS. The calculations were made according to the manufacturer's recommendations, taking into consideration the dilution factor (F = 50). The tests were done in triplicate, and statistical analysis involved one-way ANOVA (p ≤ 0.05).
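The manufacturer's calculation for NADH-linked assays of this type follows the Beer-Lambert law. The sketch below is a generic version, not the exact Megazyme formula; it assumes an NADH extinction coefficient of about 6300 L × mol−1 × cm−1 at 340 nm (an approximate literature value), the study's dilution factor F = 50, and invented volume arguments.

```python
NADH_EXT_COEFF = 6300.0  # L x mol-1 x cm-1 at 340 nm (approximate value)

def acid_g_per_l(delta_a340, total_vol_ml, sample_vol_ml, mw_g_per_mol,
                 dilution_factor=50, path_cm=1.0):
    """Generic NADH-based assay calculation: the absorbance change at
    340 nm gives the molar concentration in the cuvette (Beer-Lambert),
    which is scaled back by the assay volumes, the analyte's molar mass,
    and the sample dilution factor F (F = 50 in this study)."""
    molar_in_cuvette = delta_a340 / (NADH_EXT_COEFF * path_cm)
    return molar_in_cuvette * (total_vol_ml / sample_vol_ml) \
        * mw_g_per_mol * dilution_factor

# Example: ΔA340 = 0.63 in a 3.0-mL cuvette with a 0.1-mL sample,
# for lactic acid (MW 90.08 g/mol).
print(acid_g_per_l(0.63, 3.0, 0.1, 90.08))
```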
Quantification of Antifungal Acids Using UHPLC-MS in Conjunction with QuEChERS
Antifungal metabolites produced by L. pentosus ŁOCK 0979 in the presence of polyols and their galactosyl derivatives were quantified using the QuEChERS method and an ultra-high-performance liquid chromatography-mass spectrometry (UHPLC-MS) system according to a protocol modified from Oliveira et al. [20]. In the sample preparation step, 1 mL of formic acid, 10 mL of ethyl acetate, and 10 mL of a CFS sample were added to a Falcon test tube containing 4 mg of magnesium sulfate and 1 g of sodium chloride. The mixture was shaken for 1 min and centrifuged (10 min, 1077×g). Then 5 mL of the organic solvent was removed and added to an Agilent dSPE kit (150 mg of C18, 900 mg of magnesium sulfate). The mixture was shaken for 1 min, centrifuged (10 min, 1077×g), and decanted into a test tube with 100 mL of dimethyl sulfoxide (DMSO). The solutions were concentrated for 3.5 h in a ScanVac ScanSpeed 40 centrifuge evaporator (2000 rpm, 45°C) equipped with a CoolSafe 110-4 Pro cold trap (Labogene, Lynge, Denmark) until only 100 μL DMSO remained. The concentrated solution was mixed with 400 μL of 10% (v/v) acetonitrile, centrifuged (10 min, 10,000×g), and transferred into a 1.5-mL amber vial.
A Dionex UltiMate 3000 ultra-high-performance liquid chromatograph from Thermo Fisher Scientific (Germering, Germany) coupled with a diode array detector (DAD) and a Q Exactive Orbitrap mass spectrometer (MS, Thermo Fisher Scientific, Bremen, Germany) was used for LC-MS analysis. Chromatographic separation was performed using a 150-mm C18 column with a 2.1-mm internal diameter and 2.6-μm particle size (Kinetex 2.6u, Torrance, CA, USA). The column temperature was maintained at 30°C, and the injection volume was 2.5 μL. The mobile phase consisted of the following: A was water containing 0.1% formic acid, and B was a mixture of acetonitrile and water (90:10, v/v) containing 0.1% formic acid. The flow rate was 0.5 mL/min. The following gradient was used: 0-16.5 min, 5-40% B; 16.5-17.5 min, 40-95% B; 17.5-20 min, 95% B; 20-22 min, 95-5% B; 22-27 min, 5% B. After DAD detection, the separated compounds entered the MS system via a heated electrospray ionization (H-ESI) source at a flow rate of 0.5 mL/min. Analyses were carried out in the negative ion mode. Chromatographic data were collected using Xcalibur software (Thermo). The source parameters were as follows: a vaporizer temperature of 400°C, an ion spray voltage of 4 kV, a capillary temperature of 380°C, and sheath and auxiliary gas flow rates of 60 and 15 units, respectively. The detector was operated in either full MS or full MS/dd-MS2 scan mode. In the full MS mode, a scan range of m/z 50-400 was used. The full MS/dd-MS2 scan mode was used to generate MS2 data. In this mode, the selected precursor ions entered a high-energy collision-induced dissociation (CID) cell, where they were fragmented with normalized collision energy (NCE) to obtain product ion spectra (MS2). In our experiments, the NCE used to generate MS2 spectra was set to 30.
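The published gradient can be expressed as a piecewise-linear program. A small helper (illustrative only; the segment table reproduces the gradient stated above, with linear ramps assumed) that returns %B at any time point of the 27-min run:

```python
def percent_b(t_min):
    """%B in the mobile phase at time t for the published gradient:
    0-16.5 min 5-40% B; 16.5-17.5 min 40-95% B; 17.5-20 min hold 95% B;
    20-22 min 95-5% B; 22-27 min hold 5% B (linear interpolation)."""
    segments = [(0.0, 16.5, 5.0, 40.0),
                (16.5, 17.5, 40.0, 95.0),
                (17.5, 20.0, 95.0, 95.0),
                (20.0, 22.0, 95.0, 5.0),
                (22.0, 27.0, 5.0, 5.0)]
    for t0, t1, b0, b1 in segments:
        if t0 <= t_min <= t1:
            return b0 + (b1 - b0) * (t_min - t0) / (t1 - t0)
    raise ValueError("time outside the 27-min run")

print(percent_b(8.25))  # midpoint of the first ramp
```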
Tuning and optimization were performed using direct injection of the standard solution diluted in an 80:20 (v/v) mixture of mobile phases A and B at a flow rate of 0.25 mL/min. Acids were quantified using the selected ion monitoring (SIM) mode. The standard curves of these compounds were used for quantification. Table 1 gives acquisition parameters for 13 acids in the tested solution.
Acid quantification was performed in triplicate, and statistical analysis was conducted using the Duncan test (p ≤ 0.05).
Identification of Fatty Acids and Hydroxylated Fatty Acids by Gas Chromatography Coupled with Mass Spectrometry
Lactobacillus pentosus ŁOCK 0979 was grown at 30°C in 150 mL of MRS broth (Merck) with 1% (m/v) mannitol. The 48-h culture was centrifuged (10 min, 12,000×g, 20°C), and supernatant pH was adjusted to 4.0 with hydrochloric acid. Then, 100 mL of the sample was extracted with 30 mL of dichloromethane, mixed for 3 min, and settled for 10 min. Extraction of the aqueous phase was repeated twice using further portions of dichloromethane. The organic phases were combined, dried over anhydrous sodium sulfate, and, following filtration, concentrated to approx. 0.5 mL in a rotary evaporator. The residue was derivatized with 200 μL of 0.25 M trimethylsulfonium hydroxide solution (TMSH, Sigma Aldrich) in methanol.
The samples were analyzed by gas chromatography coupled with mass spectrometry (TRACE GC Ultra-ISQ) using a Stabilwax-DA capillary column (30 m × 0.25 mm i.d., film thickness 0.25 μm). The operating conditions were as follows: temperature program-50°C (3 min)-240°C (30 min) at 4°C/min, injection temperature-240°C, carrier gas-helium (constant flow 1 mL/min). Mass spectrometer parameters were as follows: 33-550 amu, ionization energy 70 eV, ion source temperature 200°C. Identification of compounds was based on a comparison of their mass spectra with computerized libraries (Wiley Registry 10th Edition/NIST Mass Spectral Library 2012).
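Library matching of this kind is commonly scored by spectral similarity. Below is a minimal cosine-similarity sketch over spectra represented as {m/z: intensity} dicts; the example spectra are invented, and this is not the Wiley/NIST search algorithm itself.

```python
import math

def cosine_similarity(spec_a, spec_b):
    """Cosine match score between two mass spectra given as
    {m/z: intensity} dicts; 1.0 means identical relative spectra."""
    mzs = set(spec_a) | set(spec_b)
    dot = sum(spec_a.get(mz, 0.0) * spec_b.get(mz, 0.0) for mz in mzs)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (norm_a * norm_b)

# Invented spectra: an unknown peak and a close library candidate.
unknown = {74: 100.0, 87: 45.0, 143: 20.0}
library_hit = {74: 98.0, 87: 50.0, 143: 18.0}
print(round(cosine_similarity(unknown, library_hit), 3))
```

Production search engines additionally weight intensities by m/z and apply noise thresholds, but the ranking principle is the same.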
Mannitol was chosen as one of the best agents enhancing the antifungal effect of L. pentosus ŁOCK 0979. Moreover, in the presence of mannitol, many signals from acidic compounds were detected by UHPLC-MS analysis, but these compounds could not be identified by that method due to their molecular structure (data not presented).
API®ZYM 25200 Test of Bacterial Enzymatic Activity
Bacteria were grown for 24 h in 9 mL of MRS broth (Merck) with 1% (m/v) polyols or galactosyl-polyols, added one by one. The cultures were centrifuged (10 min, 12,000×g, 20°C), and the biomass was suspended in saline to obtain a cell concentration corresponding to 5-6 on the McFarland scale (approx. 1.5 × 10 9 cells × mL −1 ). API ZYM strips were placed in API ZYM boxes humidified with distilled water. The strips were inoculated with 65 μL of the sample and incubated for 4 h at 37°C. Then, the reagents ZYM A and ZYM B (bioMerieux) were added dropwise. The strips were placed under a powerful light source for 10 s and then exposed to daylight for 5-10 min. Results were read according to the manufacturer's recommendations.

Fungal strains sensitive to the metabolites of polyols or gal-polyols were selected based on the antagonistic activity of L. pentosus ŁOCK 0979 cultured in different culture media. CFS samples were added to Sabouraud 4% dextrose agar (Merck) in the amount of 10% (v/v). The medium was subsequently placed in 6-well plates and inoculated with selected fungal strains using an inoculation loop. Microscopic examination was carried out after 2 days (yeasts) and 7 days (molds) using a light microscope. Additionally, yeast morphology was examined using a scanning electron microscope (JEOL JCM-6000, Tokyo, Japan) after coating with gold particles for 45 s (JEOL JFC-1200 Fine Coater, Tokyo, Japan). The experiments were conducted in duplicate. The control samples consisted of fungi cultured on Sabouraud agar without bacterial CFS.
Antifungal Activity of Lactobacillus pentosus ŁOCK 0979 in the Presence of Polyols and Galactosyl-Polyols
The antagonistic activity of L. pentosus ŁOCK 0979 against the tested yeasts was weak, but its anticandidal properties were enhanced in the presence of galactosyl-polyols, and especially gal-erythritol. The addition of gal-sorbitol and gal-xylitol to the bacterial culture medium led to inhibition of only one of the two strains of yeast, that is, C. vini ŁOCK 0009 (Table 2).
Mold inhibition by L. pentosus ŁOCK 0979 depended both on the culture medium composition and on mold species. The A. alternata test strain and the A. ochraceus and Penicillium sp. strains isolated from the environment exhibited the greatest sensitivity to lactic acid fermentation products both in the controls and in samples with polyols and gal-polyols ( Table 2). In contrast, the growth of A. brassicicola and A. niger was inhibited by L. pentosus ŁOCK 0979 only if the bacterial culture medium was supplemented with polyols (both mold strains) or gal-polyols (only A. brassicicola). The antifungal activity of L. pentosus ŁOCK 0979 was also improved by polyols and their galactosyl derivatives with respect to F. lateritium. Its growth was inhibited by the bacteria cultured in the presence of maltitol and sorbitol, as well as all tested galactosyl-polyols. Antagonistic activity against G. candidum and M. hiemalis was weak ( Table 2).
In a similar way, additional control trials were conducted using MRS medium with glucose (Merck) and 1% (m/v) galactose, as well as a glucose-free MRS medium (BTL) with 1% (m/v) galactose. These media enhanced the antagonistic activity of L. pentosus ŁOCK 0979 only against one indicator fungal strain (F. lateritium) as compared to bacteria cultivated on MRS agar (Merck). Therefore, it can be assumed that the small amounts of galactose released as a result of galactosyl-polyol hydrolysis (Table 3) are not a critical determinant of antifungal properties.
Content of Polyols and Saccharides in Cell-Free Supernatant
(Key to Table 2: −, no inhibition zone; +/−, inhibition zone between 0.5 and 2 mm; +, between 2.1 and 10 mm; ++, between 10.1 and 20 mm; +++, above 20 mm; nt, not tested.)

The content of polyols and saccharides before and after lactic acid fermentation by L. pentosus ŁOCK 0979 in the presence of polyols and gal-polyols was determined using HPLC, and the concentration of glucose was evaluated spectrophotometrically (Table 3). The results show that in lactic acid fermentation the bacteria used glucose as a primary carbon source, while the galactosyl-polyols and some polyols (lactitol, mannitol) were used as additional carbon sources to varying degrees (Table 3). The mannitol and lactitol present in the culture media were partially used by L. pentosus ŁOCK 0979 (as reflected in 10 and 26% declines in concentration after lactic acid fermentation, respectively). Galactosyl-polyols (gal-erythritol, gal-xylitol, and gal-sorbitol) were hydrolyzed to galactose and the respective polyols. Residual galactose was found in postfermentation CFS from samples supplemented with gal-erythritol and gal-sorbitol (Table 3). The content of erythritol, xylitol, maltitol, and sorbitol in the medium did not change significantly following lactic acid fermentation (Table 3).
Acidity and Production of Acetic and Lactic Acids
The concentration of acetic acid and lactic acid (L- and D-enantiomers separately) was evaluated enzymatically using Megazyme kits. The total content of acetic acid was from 4.09 ± 0.178 to 7.62 ± 0.010 g × L −1 , while that of lactic acid was from 4.58 ± 0.390 to 20.26 ± 1.489 g × L −1 ; in all samples, the dominant stereoisomer was L-lactic acid, accounting for 61-100% of the total (Table 4). The mean pH of the postfermentation supernatant was higher in the case of gal-polyols (pH 4.13 ± 0.12) than polyols (pH 3.88 ± 0.290), except for sorbitol (pH 4.4). Samples with higher pH (gal-polyols, sorbitol) exhibited a lower concentration of lactic acid (4.58-9.49 g × L −1 ) and a higher concentration of acetic acid (5.76-7.62 g × L −1 ). L. pentosus ŁOCK 0979 generated the highest amounts of lactic and acetic acids in the presence of xylitol (20.26 ± 1.489 g × L −1 ) and gal-xylitol (7.62 ± 0.010 g × L −1 ), respectively (Table 4).
Effects of Polyols and Gal-Polyols on the Content of Antifungal Acids
The following 13 antifungal acids were quantified using a UHPLC-MS system coupled with QuEChERS: DL-3-phenyllactic acid (PLA), DL-p-hydroxyphenyllactic acid (HPLA), benzoic acid (BA), hydrocaffeic acid (HCaA), hydrocinnamic acid (HCiA), vanillic acid (VA), 4-hydroxybenzoic acid (4-HBA), catechol (Cat), caffeic acid (CaA), ferulic acid (FA), 3-hydroxybenzoic acid (3-HBA), 2,4-dihydroxybenzoic acid (2,4-dHBA), and p-coumaric acid (p-CoumA). It was found that the minimum inhibitory concentrations of the tested acids are many times higher than their actual concentrations in CFS (Table 5). Statistically significant differences in the content of PLA and HCaA were linked to the composition of the culture medium. PLA content in CFS from the bacterial culture with gal-xylitol differed from that found in cultures with lactitol and xylitol. In the presence of gal-erythritol and gal-xylitol, L. pentosus ŁOCK 0979 produced double to triple the amount of vanillic acid and half the amount of Cat as compared to the other CFS. The content of HCaA, HCiA, VA, 4-HBA, Cat, CaA, FA, 3-HBA, 2,4-dHBA, and p-CoumA was low and ranged from approx. 0 (<LOD) to 0.161 mg × L −1 (Table 5). (Key to Table 5: means designated with the same lowercase letter are not significantly different, Duncan's multiple range test; <LOD, below the limit of detection.)
Production of Hydroxy Fatty Acids in the Presence of Mannitol
Volatile compounds with potential antifungal properties (fatty acids, hydroxy fatty acids) synthesized by L. pentosus ŁOCK 0979 cultured in MRS and in the presence of 1% (m/v) mannitol were identified ( Table 6). Mannitol induced antagonistic activity of L. pentosus ŁOCK 0979 against some of the test fungi (A. brassicicola, A. niger, F. lateritium), which must therefore be attributable to one or more metabolites synthesized in the presence of this polyol. Moreover, CFS samples revealed 2-hydroxy-4-methylpentanoic acid, a compound described by Ndagano et al. [19] as a strong antifungal agent.
Enzymatic Activity of Lactobacillus pentosus ŁOCK 0979
Examination of the activity of enzymes metabolizing lipids, proteins, and phosphates revealed some minor differences between L. pentosus ŁOCK 0979 cultures conducted in media supplemented with various polyols and gal-polyols (Table 7). As compared to the controls, esterase activity was found only in the presence of lactitol and gal-sorbitol, while esterase lipase activity was found only in the presence of gal-erythritol.
Effects of Polyols and Gal-Polyols on Fungal Growth and Morphology
The yeasts C. vini ŁOCK 0008 and ŁOCK 0009 and the mold A. brassicicola were examined microscopically following culture in Sabouraud agar with 10% (v/v) CFS. The results are given in Tables 8 and 10. Scanning electron micrographs of C. vini ŁOCK 0009 cultivated in the presence of CFSs are presented in Table 9.
Light microscopy revealed greater cell differentiation in the yeast C. vini ŁOCK 0008 grown in Sabouraud agar with 10% (v/v) CFS from L. pentosus ŁOCK 0979 cultured in the presence of gal-erythritol than in the control (Table 8). Candida vini ŁOCK 0008 cells were at different developmental stages and included both single cells and some initial degrees of pseudomycelium formation. The same was true of C. vini ŁOCK 0009 grown in Sabouraud agar with 10% (v/v) CFS from L. pentosus ŁOCK 0979 cultured in the presence of gal-polyols (gal-erythritol, gal-xylitol, gal-sorbitol). What is more, the addition of gal-polyols to the bacterial medium led to fungal deformation and gave rise to blastoconidia (CFS gal-erythritol, CFS gal-xylitol, CFS gal-sorbitol) (red arrows in Table 8). The yeast cells were narrower, and some of them (Table 8).
SEM provided more details about the form and surface of C. vini ŁOCK 0009. In the control, yeasts were elliptical and developed pseudohyphae with smooth and flat surfaces (Table 9). The cells of C. vini ŁOCK 0009 cultivated with the CFS of L. pentosus ŁOCK 0979 in the presence of gal-polyols were strongly deformed. Their shape was warped, the surface rough, and cell damage was visible in the form of concave areas on the surface (Table 9). Additionally, in the presence of CFS gal-xylitol, yeast cells were coated with extracellular matrix (Table 9). Morphological changes were also observed in the mycelium of the mold A. brassicicola grown in Sabouraud agar with 10% (v/v) CFS from L. pentosus ŁOCK 0979 cultured in bacterial media supplemented with erythritol and xylitol. Furthermore, growth inhibition and mycelium deformation were found for all gal-polyols in the bacterial medium (Table 10). The addition of lactitol and mannitol to the bacterial medium led to complete inhibition of fungal growth (hence no photomicrographs).
Discussion
The control of spoilage microorganisms and, by the same token, the extension of the shelf-life of foodstuffs still pose formidable challenges. While the use of Lactobacillus sp. as natural bioprotective agents was already reported by Magnusson [14], a definitive explanation of their mechanism of action against undesirable fungi was not provided.
Lactobacilli inhibit the growth of other bacteria (of the same or different species) as well as that of fungi, including pathogenic and toxin-producing molds [1,7]. While many authors have reported the antifungal properties of LAB [3,10], their exact underlying mechanisms remain elusive. Nevertheless, it is known that a major role is played by some bacterial metabolites, especially organic acids, hydroxy fatty acids, cyclic dipeptides, and low molecular weight proteinaceous compounds [22]. In addition to primary metabolites (lactic and acetic acids), which are produced by all Lactobacillus sp., some LAB species synthesize secondary metabolites, which may selectively affect other microorganisms; these include propionic, hexanoic, salicylic, succinic, formic, 2-pyrrolidone-5-carboxylic, 3-phenyllactic, and 4-hydroxyphenyllactic acids [2,14,17]. One of the best described antifungal products of lactic acid fermentation is 3-phenyllactic acid (PLA), synthesized by LAB such as L. casei, L. fermentum, L. rhamnosus, L. reuteri, and L. sakei [14,17,18]. While L. pentosus ŁOCK 0979 does produce PLA, its concentration in the CFS is much lower than the minimum inhibitory concentration reported by other authors [20]. The antifungal metabolites of Lactobacillus sp. constitute a rich mixture of active compounds whose qualitative and quantitative composition largely depends on the compounds found in the bacterial culture medium. Ndagano et al. [19], who evaluated the effects of different concentrations and proportions of acetic and lactic acids on fungal viability, observed significant synergies: the mixture was more potent than its individual components combined. Synergies may also arise from some parameters of the culture medium, such as pH.
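The synergy observation by Ndagano et al. can be made quantitative with the standard fractional inhibitory concentration (FIC) index. The sketch below is purely illustrative; the function names, concentrations, and MIC values are invented, not taken from the study.

```python
# Illustrative only: quantifying synergy between two antifungal acids with
# the fractional inhibitory concentration (FIC) index,
#   FIC = C_A / MIC_A + C_B / MIC_B
# for the lowest inhibitory combination (C_A, C_B). FIC <= 0.5 is
# conventionally read as synergy and FIC > 4 as antagonism.

def fic_index(c_a: float, mic_a: float, c_b: float, mic_b: float) -> float:
    """FIC index of a two-compound mixture (concentrations in the same unit)."""
    return c_a / mic_a + c_b / mic_b

def interpret(fic: float) -> str:
    if fic <= 0.5:
        return "synergy"
    if fic <= 4.0:
        return "additive/indifferent"
    return "antagonism"

# Invented example values (mg/mL): 2 lactic acid (MIC alone 16) combined with
# 1 acetic acid (MIC alone 8) inhibits growth.
fic = fic_index(2.0, 16.0, 1.0, 8.0)
print(fic, interpret(fic))  # 0.25 synergy
```

A mixture whose inhibitory combination sits well below the individual MICs, as in this invented example, would be scored as synergistic.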
The study presented herein was preceded by the evaluation of 60 Lactobacillus sp. strains, including L. pentosus ŁOCK 0979, cultured in the presence of polyols and galactosyl-polyols as alternative carbon sources [13], to select bacteria with strong antagonistic properties against as many indicator fungal strains as possible. L. pentosus ŁOCK 0979 was selected for further research as one of the most promising antifungal strains. In this context, the use of polyols and their galactosyl derivatives to enhance the inhibitory properties of lactobacilli is a novel solution which offers a promising method for modulating LAB metabolism. In situ studies on food products describing the enhanced antifungal activity of lactobacilli on fruits have been presented by Lipinska et al. [13].
In the presented experiments, L. pentosus ŁOCK 0979 exhibited the ability to partially absorb lactitol, mannitol, and all the tested galactosyl-polyols. While Tyler et al. [30] isolated Lactobacillus florum 2F, a heterofermentative strain which can biosynthesize erythritol and mannitol, the consumption of polyols and galactosyl-polyols by Lactobacillus bacteria represents a new line of research with scant available literature.
Since the antifungal activity of lactobacilli consists of a complex set of interactions beginning at the cellular level, considerable attention was given in this study to the enzymatic activity of the bacteria in the presence of polyols and gal-polyols. Some small differences were found in the enzymes metabolizing lipids, proteins, and phosphates, in particular in esterase and esterase lipase, which catalyze the hydrolysis and synthesis of organic acid esters, primarily from water-soluble substrates such as triacylglycerols containing short-chain fatty acids. Bacterial esterase activity promotes the hydrolysis of a wide spectrum of substrates to acids [5,28], including antifungal metabolites. LAB enzymes can be used to modify the gustatory and olfactory properties of wines and cheeses and to produce ingredients of foodstuffs, pharmaceuticals, and cosmetics [12]. Fatty acids and hydroxy fatty acids synthesized by LAB affect fungal viability by irreversibly weakening and deforming the lipid bilayer [23]. The morphological changes in the cell walls of C. vini (strains 0008 and 0009) and in A. brassicicola mycelia, revealed in this study by SEM and light microscopy, corroborate this mechanism of action for the obtained CFSs, which caused strong deformation of the yeast surface; similar morphological changes in Candida sp. were described by Shengli et al. [27]. In the presence of CFSs, yeasts can produce an extracellular matrix to promote their adherence and protect cells from environmental insults [29].
Conclusions
The antifungal activity of L. pentosus ŁOCK 0979 depends on the bacterial culture medium as well as on the fungal strain. The present study shows changes in the antifungal profile of the studied bacterial strain linked to the composition of the culture medium. Although no single decisive factor (metabolite) was found to be responsible for inhibiting fungal growth, the results indicate how bacterial metabolite profiles may be beneficially modulated. Thus, the authors have broken new ground in developing natural ways of ensuring the microbiological safety of food.
Acknowledgements The authors thank Jaroslaw Arkusinski for technical assistance.
Funding This study was funded by the National Science Center (grant no. 2013/09/B/NZ9/01806).
Compliance with Ethical Standards
Ethical Statement All authors of this paper have read and approved the final version submitted. The contents of this manuscript have not been copyrighted or published previously.
1. The contents of this manuscript are not now under consideration for publication elsewhere.
"year": 2017,
"sha1": "3fc88deb8ff73927be3e3c292365f16c7e61d5a4",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12602-017-9344-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "3fc88deb8ff73927be3e3c292365f16c7e61d5a4",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Flow-Shop Predictive Modeling for Multi-Automated Guided Vehicles Scheduling in Smart Spinning Cyber–Physical Production Systems
Aimed at the problem of high complexity in production management in the multi-stage spinning industry, where mixed-flow batch production is often used in response to customers' personalized demands, manual can handling involves a large number of tasks, and semi-finished products have a long turnover period. A novel heuristic study was conducted that considered mixed-flow shop scheduling problems with automated guided vehicle (AGV) distribution and path planning to prevent conflict and deadlock by optimizing distribution efficiency and improving the degree of automation of can distribution in a draw-out workshop. In this paper, two AGV predictive scheduling strategies, a cross-region shared resource pool and an inter-regional independent resource pool, are established for the ring-spinning combing process. Besides completion time, the AGV utilization rate and unit AGV time are also analyzed with respect to the bottleneck process of the production line. The results of the optimal computational experiment prove that a draw frame equipped with multiple AGVs under coordinated scheduling optimization significantly improves the efficiency of can distribution. Flow-shop predictive modeling for multi-AGV resources is scarce in the literature, even though such modeling also produces, for each AGV, a control mode and, if essential, a preventive maintenance plan.
Introduction
Spinning is the lifeblood of the textile industry: fiber is drawn out, twisted, and wound onto a bobbin using state-of-the-art twisting techniques. The ring-spinning production process includes opening and cleaning (opening, blending, and cleaning take place in a blow room), carding, pre-combing drawing, lap forming, combing, post-combing drawing, roving (simplex), ring spinning, and winding. The spinning process is dynamic, involves multiple discrete workshops, and uses more than 80 pieces of production equipment. The spindle, the leading apparatus of these extremely sophisticated systems, can reach speeds of up to 25,000 r/min. Although these smart, or intelligent, systems are relatively efficient, they also suffer from various problems; in particular, in the draw-out workshop, manual can distribution involves a large number of demanding tasks, and semi-finished products have a long turnover period.
1. A novel approach is proposed to effectively reduce the makespan and total completion time; to the best of our knowledge, this is the first approach that handles both cross-region shared and inter-regional independent resource pools.
2. Based on the intended categories of scheduling tasks, an AGV transportation route strategy is developed for mass production in spinning CPPS.
3. A mathematical model for real-time task processing for dissimilar cotton and polyester processing in multi-AGV scheduling is designed to prevent conflict and deadlock by assigning different tasks: AGV assignments, AGV sorting, and task sources.

Our results demonstrate the adequacy of the presented methods: once the number of AGVs grows beyond a certain point, the completion time stops dropping sharply, which reduces AGV utilization.
The paper is organized as follows: Section 2 imparts a comprehensive background of the analysis of the production process of the spinning workshop, a mathematical model of multi-AGV scheduling, model assumptions, object function, and the uniqueness constraint. Section 3 concentrates on the explanation of the multi-AGV scheduling simulation modeling, simulation model construction, and the AGV resource pool strategy-based bottleneck analysis. Section 4 emphasizes the results by doing a comparative study of two AGV resource pool strategies and a comprehensive analysis of multi-AGV scheduling in the process flow. Finally, conclusions are summarized in Section 5.
Analysis of the Production Process of the Spinning Workshop
We examined the previously discussed drawing shops to improve our earlier approach in two key areas. The first is the series structure, in which the raw materials of each process depend on only one process and the processed products are used by a single subsequent process, as shown in Figure 1. The second is the parallel structure, in which the raw material of each process depends on multiple processes, or the product of one process is used as the raw material of several other processes, as shown in Figure 2.
Electronics 2020, 9, x FOR PEER REVIEW
A traditional draw shop has a low level of automation, and semi-finished products in the various processes mainly rely on manual transportation. Moreover, the efficiency of manual handling is poor and cannot meet the demand for rapid response; this has become a bottleneck for speeding up production lines, causing higher labor costs, a heavier workload for employees, and ultimately insufficient production. Using an AGV instead of manual handling between traditional production processes can effectively improve the automation level of a workshop [28][29][30]. By relying on information technology to enhance the accurate response capabilities of an AGV, precisely guided tasks such as advanced stocking and rapid shipment can be achieved without depending on the traditional experience of employees [31,32]. Relying on AGVs can therefore effectively improve the production efficiency of a workshop.

A spinning workshop is a mixed-flow workshop. A typical production line contains multiple operational processes that involve various workshops, and the corresponding process sequence must be satisfied for the numerous methods involved.
Therefore, a spinning workshop is regarded as a multi-stage parallel production assembly-line system. Figure 3 illustrates the ring-spinning production process of a spinning workshop. The first stage covers the opening and cleaning of the cotton bales and then carding, mainly to remove impurities from the cotton or polyester and to produce strip-shaped semi-finished products. The second stage mainly covers pre-drawing and drawing after carding: because the density of cotton or polyester is not uniform, pre-drawing and drawing mix the raw materials further and more evenly, and mixing the two raw materials, cotton and polyester, produces semi-finished products with different blend ratios. The third stage covers the roving and fine-spinning steps, whose purpose is further processing that makes the final products meet customers' demands for the desired package radius. It can be observed from the figure that each stage of the flow process includes multiple processes, that each process is performed by one machine or by several identical machines, and that each process relies on the semi-finished products of one or more operations as raw materials. The mixing process in a spinning and drawing workshop uses cotton slivers produced in the combing process and polyester slivers produced in the polyester process as raw materials, whereas the second mixing passage uses the products of the first mixing passage as raw materials. This article mainly studies the two-stage part of the production process; for convenience, it is called the drawing-out flow-shop.
Processing Equipment Definition

Multiple parallel processes at the same level represent multiple production lines of different semi-finished products, and a production line of semi-finished products is composed of multiple serial processes. Suppose the workshop has i levels of operations; Z_i is the number of product lines under the i-th level of operations, and P_iz is the number of consecutive operations of the z-th product line. Each process p_iz is served by K_piz identical pieces of processing equipment, and k_piz denotes the k-th machine that executes the process p_iz. Each piece of processing equipment has different attributes: in addition to its production efficiency, there are the production time per unit of product, the type and quantity of raw material required for processing, the number of products processed in one batch, the starting processing time, and the processing completion time. Among these, the production efficiency of the equipment, the type and quantity of raw materials, and the number of products obtained in one processing run are inputs of the model; the other variables are decision variables.
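The attribute list above maps naturally onto a small data model. The following sketch is illustrative only; the class and field names (Operation, Batch, unit_time, batch_in, batch_out) are ours, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class Operation:
    """Process p_iz: the p-th operation of product line z at level i."""
    level: int          # i
    line: int           # z
    index: int          # p
    unit_time: float    # time to process one unit of raw material
    batch_in: int       # raw-material units consumed per processing batch
    batch_out: int      # products produced per processing batch
    machines: int = 1   # K_piz identical machines serving the process

@dataclass
class Batch:
    """One batch l_piz with its scheduling decision variables."""
    op: Operation
    machine: int        # k in 1..op.machines
    start: float = 0.0  # S_piz,l,k
    end: float = 0.0    # T_piz,l,k

    def schedule(self, start: float) -> float:
        """Set start/end, with end = start + unit_time * batch_in."""
        self.start = start
        self.end = start + self.op.unit_time * self.op.batch_in
        return self.end

draw = Operation(level=2, line=1, index=1, unit_time=0.5, batch_in=6, batch_out=1)
batch = Batch(op=draw, machine=1)
print(batch.schedule(10.0))  # 13.0
```

The inputs of the model (efficiency, raw-material needs, batch yield) are plain fields, while the start and end times are filled in by the scheduler, mirroring the input/decision-variable split described above.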
Raw Material and Product Definition
Considering the particularity of assembly-line production, the raw material processed by one device is a semi-finished product produced by another device. Accordingly, J^+_{p_iz p_i'z'} denotes the total number of products of the operation p_i'z' required by the process p_iz; j^+_{p_iz p_i'z'} denotes the j-th raw material that the process p_iz receives from the process p_i'z'; J^-_{p_iz} denotes the total number of finished products processed by the process p_iz; and j^-_{p_iz} denotes the j-th finished product of the process p_iz. In addition to the initial raw materials of the assembly line, all raw materials carry two attributes: the processing completion time in the source process p_i'z' and the transportation cut-off time of the destination process p_iz. After processing starts, the state of the material becomes the product status; therefore, the initial processing properties of the material need not be considered.
AGV Definition
Considering that there are V identical AGVs in a spinning draw frame, each AGV transports between any two machines, and the transportation time depends on the actual physical distance and road conditions between the two machines.
An AGV has two travel states in the scheduling system. The first is without-load: when an AGV receives a transportation task, it must travel empty from its current position to the starting position of the task (if it is already at the starting position, the without-load distance is 0). The second is with-load: the AGV travels from the starting position of the transportation task to its end position. Each AGV carries several attributes, including the without-load start time, the without-load end time, the without-load starting point of a transport task, the load start time, the load end time, the load route of a transport task, and the specific raw materials to be transported. Table 1 shows that the process p_iz on each machine can only start once the required number of raw materials has been collected; that is, p_iz must gather all required raw materials before one processing run, which is recorded as a processing batch. When p = 1, p_iz is the first sub-process of its level, and its raw materials may depend on several semi-finished products of the previous-level process, with i' = i + 1. When p > 1, p_iz depends on the semi-finished products of the same product line at the same level, with i' = i, p' = p + 1, and z' = z. Table 1 lists the symbol definitions, and Table 2 lists the time decision variables.
Table 1. Symbol definitions.

Z_i: number of product types output by the i-th operation to the next operation.
z_i: the z-th product of the i-th operation; when i = 0, this denotes the starting process.
N^I_z: quantity of the final product type z.
P_iz: number of operations owned by the z-th product line of the i-th operation.
p_iz: the p-th operation of the i-th level of the z-th product line.
R^+_{p_iz p_i'z'}: number of products from the operation p_i'z' required to process one batch of the operation p_iz.
R^-_{p_iz}: number of products produced in one processing batch of the process p_iz.
L_{p_iz}: total number of batches of the operation p_iz.
l_{p_iz}: the l-th batch of the operation p_iz.
J^+_{p_iz p_i'z'}: total number of products of the operation p_i'z' required by the operation p_iz.
j^+_{p_iz p_i'z'}: the j-th raw material that the process p_iz receives from the process p_i'z'; when p = 1, p_iz is the entrance process of its stage and depends on products of the previous level (i' = i + 1), and when p > 1 it relies on products of the same line (i' = i, p' = p + 1, z' = z).
J^-_{p_iz}: total number of finished products processed by the operation p_iz.
j^-_{p_iz}: the j-th finished product of the process p_iz.
K_{p_iz}: total number of machines executing the operation p_iz.
The time required by a machine of the operation p_iz to process a unit of raw material (symbol lost in the source).

Table 2. Variable definitions.

S_{p_iz,l,k}: start time of the l-th batch of the operation p_iz on the machine k_piz.
T_{p_iz,l,k}: end time of the l-th batch of the operation p_iz on the machine k_piz.
T_{j,p_iz,k}: completion time of the finished product j^-_{p_iz} on the machine k_piz.
C_{j,p_iz,p_i'z'}: transportation task that ships the raw material j^+_{p_iz p_i'z'} to the process p_iz.
t_{p_iz,p_i'z'}: AGV transit time between the operation p_iz and the operation p_i'z'.
S'_{C,v}: time at which car v starts the without-load phase of the transportation task C_{j,p_iz,p_i'z'}.
T'_{C,v}: time at which car v ends the without-load phase of the transportation task.
S_{C,v}: time at which car v starts the load phase of the transportation task.
T_{C,v}: time at which car v ends the load phase of the transportation task.

Model assumptions:

1. There is only one product per piece of processing equipment. Some machines process different proportions of raw materials to produce various products; this article defines them as separate machines, i.e., each machine is set to handle one type of product. In an actual environment, the parameters of a machine differ when producing different products, and during assembly-line production the machine does not automatically adjust its settings.
2. The processing equipment responsible for the same process has the same processing performance.
3. Each AGV can transport only a unit quantity of raw material at a time, and the mass and volume of each unit of raw material do not exceed the AGV's rated load.
4. There is sufficient avoidance space at intersections and around the equipment, and the AGV avoidance passage time is negligible.
5. AGV loading and unloading times are included in the transportation time.
6. The vehicle has no faults during transportation [34,35].

The model only considers the processes that require AGV transportation and simplifies those that do not. The first-stage process is the first process that requires an AGV to participate in shipping, and the last-stage process is correspondingly the last such process.
Objective Function
This study seeks the AGV scheduling plan with the highest production efficiency in the drawing shop. The objective function minimizes the time required for the drawing shop to complete all finished products, i.e., the maximum completion time. Here, Equation (1) defines U as the complete set.
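Equation (1) itself did not survive extraction, but a makespan objective of this general shape conveys the idea (our notation, reusing the batch end times T_{p_iz,l,k} defined in Table 2, with U the complete set of operations):

```latex
\min C_{\max} \;=\; \min \; \max_{p_{iz} \in U,\; l,\; k} \; T_{p_{iz},\,l,\,k}
```

That is, the latest batch completion time over all operations, batches, and machines is minimized.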
Uniqueness Constraint
Unique constraints include those on processing materials, production equipment, and the AGV.
Raw-Material Processing Uniqueness
Each raw material to be processed corresponds one-to-one with a process, a batch, and a piece of production equipment. Only when the goods to be transported by the AGV are determined can the transport route of the AGV be determined; these constraints therefore restrict each raw material to one batch under one process on one machine. Equation (2) states that the input raw materials of a given process can only be attributed to one batch under that process. Equation (3) represents the constraint on the batch to which raw materials belong in the first step of each stage of the production line, and Equation (4) represents the corresponding constraint for the other levels. The products of a given process can only be produced by one batch under that process, as shown in Equation (5).
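Equations (2)–(5) likewise did not survive extraction. A generic assignment-uniqueness constraint of the standard form below conveys the idea (our notation; x is a binary indicator assigning raw material j to batch l of the process p_iz):

```latex
\sum_{l=1}^{L_{p_{iz}}} x^{\,j}_{p_{iz},\,l} \;=\; 1
\qquad \forall\, j \in \{1,\dots,J^{+}_{p_{iz}}\}
```

Each raw material is thereby counted in exactly one batch of its process, which is the one-to-one correspondence stated above.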
AGV Uniqueness Constraint
Raw material j is transported in only one AGV transportation process, as shown in Equations (6) and (7). Since the first process of each stage of the production line differs from the other processes, the case of p is discussed separately in Equations (6) and (7).
Unique Constraints of Production Equipment
Concerning the limitation on processing equipment, a machine processes only one raw material at a time, and this constraint is determined by the attributes of the processing equipment for different processes in the drawing shop, as shown in Equations (8) and (9). When p = 1, the raw materials for machining may come from different production lines. When p > 1, the raw materials for machining come from the same production line. Additionally, one machine can only produce one product at a time. Equation (10) defines the processing equipment of the same process that provides the same number of products in one operation, and the number of products produced by the processing equipment of different methods is not necessarily the same.
Multi-AGV Scheduling Simulation Modeling
The case study here was mainly based on the multi-AGV scheduling mathematical model and an optimal batch distribution strategy. Based on the Jingwei Wuxi ring-spinning production process, Siemens Plant Simulation was used to establish a multi-AGV scheduling simulation model of the drawing shop. Simulation scenarios were built for each scheduling strategy, and the impact of different AGV numbers on scheduling performance, i.e., completion time, was analyzed together with the factors that affect the AGV fleet configuration.
Production Operation Drawing Workshop Simulation Input
By exploiting state-of-the-art mechanical technology, we considered the production of two product types: the first was the 55%/45% product, comprising 55% cotton and 45% polyester in the mixing process; the second was the 60%/40% product, comprising 60% cotton and 40% polyester in the mixing process. Table 3 shows the cotton production process parameters, and Table 4 shows the polyester production process parameters. The process parameters of product one (55%/45%) are shown in Table 5, and those of product two (60%/40%) in Table 6. Table 7 shows the raw material requirements for each device (disregarding the binocular device). Table 8 shows the processing time of each device.
In our simulation, each region initialized an AGV resource pool. Depending on the set policy, the regions in which an AGV could operate differed, and AGVs ran differently between different areas. Table 9 shows the connectivity and running time between the different areas. The drawing shop layout was divided into four areas; the first area covered pre-combing drawing, lap forming, and combing and was called 'Region A.'
(Table 9 columns: starting area, final area, operation hours/s.)
The production efficiency of the production line was analyzed under different AGV quantities throughout multiple simulations [36]. The number of AGVs in the resource pool of each area was increased from 1 to 3, and a total of 81 groups of simulations were performed. AGV1, AGV2, AGV3, and AGV4 denote the number of AGVs initialized in the four regions. Completion time refers to the time required to process all products; the production requirement for each of the two products was set to 50. The following is a comparative analysis of the factors affecting the AGV allocation and of their effect on the completion time in the application scenario of the batch distribution strategy.
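To illustrate the qualitative effect only (this is not the paper's simulation model, and all parameters below are invented), a toy event simulation shows how completion time plateaus and per-AGV utilization falls once the fleet outgrows the release rate of transport tasks:

```python
import heapq

# Toy model: can-transport tasks are released by the production line every
# `gap` seconds; each trip occupies one AGV for `trip` seconds. Beyond a
# certain fleet size the completion time is limited by the line itself,
# so extra AGVs only lower utilization.

def simulate(n_tasks=50, gap=30.0, trip=90.0, n_agvs=1):
    free = [0.0] * n_agvs              # times at which each AGV becomes free
    heapq.heapify(free)
    finish = 0.0
    for i in range(n_tasks):
        release = i * gap              # task appears at the machine
        agv_ready = heapq.heappop(free)
        start = max(release, agv_ready)  # wait for the task or for an AGV
        done = start + trip
        heapq.heappush(free, done)
        finish = max(finish, done)
    utilization = n_tasks * trip / (n_agvs * finish)
    return finish, utilization

for v in (1, 2, 3, 4, 5):
    makespan, util = simulate(n_agvs=v)
    print(v, makespan, round(util, 2))
```

With these invented numbers the makespan drops steeply from one to three AGVs and then plateaus, while utilization falls, which is the qualitative behavior reported above for the 81 simulation groups.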
Machine Module
The machine tool of the draw frame is characterized by the need to process multiple raw materials at a time, with the finished products then produced one after another; in the simulation, the input and output of one processing batch therefore differ. To describe this type of machine in Siemens Tecnomatix Plant Simulation, several components were required. The composition of the combed pre-parallel device, which is divided into two major modules, is shown schematically in Figure 4. The first module realizes the requirement that a processing batch needs a fixed amount of raw materials, and the second module realizes the processing of the materials once the required raw materials are available.
To correctly model the function of processing various raw materials in one processing batch, component assembly was introduced for continuous manufacturing. A single processor was used to constrain the processing time of a batch: after the required number of raw materials was satisfied, the machine started processing, executed the 'Method' method, and released the exit of 'Buffer3'; the entities in 'Buffer3' then entered 'Buffer1.' When an entity flowed out of 'Buffer3,' the 'Method1' method was executed to count the number of products that one processing batch could produce. Afterward, the entities in 'Buffer1' flowed out one after another. The constraint that a processing batch must consume multiple raw materials at the same time in order to successively produce several products was thus realized.
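Outside Plant Simulation, the batching behavior just described can be sketched as follows: a machine accumulates raw materials (the role of 'Buffer3') and, once a full batch is present, emits its products (the role of 'Buffer1'). The class and parameter names are ours, not the tool's.

```python
from collections import deque

class BatchingMachine:
    """A machine that starts only once a full batch of raw materials arrives."""

    def __init__(self, batch_in: int, batch_out: int):
        self.batch_in = batch_in    # raw-material units required per batch
        self.batch_out = batch_out  # products emitted per batch
        self.buffer = deque()       # accumulating inputs (role of 'Buffer3')
        self.output = deque()       # finished products (role of 'Buffer1')

    def feed(self, item) -> None:
        """Accumulate one raw material; run a batch when the count is met."""
        self.buffer.append(item)
        if len(self.buffer) >= self.batch_in:
            for _ in range(self.batch_in):
                self.buffer.popleft()
            self.output.extend("product" for _ in range(self.batch_out))

machine = BatchingMachine(batch_in=6, batch_out=2)
for i in range(7):                  # the 7th sliver waits for the next batch
    machine.feed(f"sliver{i}")
print(len(machine.output), len(machine.buffer))  # 2 1
```

Feeding seven slivers triggers exactly one six-sliver batch, leaving two products in the output and one sliver waiting, which is the constraint the buffer/method construction enforces.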
The simulation implementation of a machine that required multiple raw materials was similar, except that, in this case, there were numerous input ports. Taking a device with a mixed process as an example, Figure 5 shows the mixed-process machine composition of a 55%/45% product.
AGV Transportation Simulation
The "workers" resource was used to simulate AGV transportation. In the simulation model, the function of an AGV was to transport the product from one process to the next, and this could be achieved by "workers." Figure 6 shows the loading point of one process, the driving route of the AGV, and the unloading point of the next process. The single processor at the entrance served as the unloading point of the AGV, for which the exiting property of the processor needed to be set. The AGV could then carry the goods to the specified location and unload; AGV behavior was simulated in this way.
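One AGV job as described above decomposes into loading, driving the route, and unloading. A minimal sketch of that timing model, with purely illustrative load/unload durations and speed (none of these values appear in the source):

```python
def transport_time(distance_m, speed_m_s, load_s=10.0, unload_s=10.0):
    """Duration of one AGV job: pick up at the loading point, drive
    the route, drop off at the unloading point of the next process.
    The load/unload times and speed are illustrative assumptions."""
    return load_s + distance_m / speed_m_s + unload_s

t = transport_time(distance_m=60.0, speed_m_s=1.0)
print(t)  # 80.0 seconds for a 60 m route at 1 m/s
```

Such a per-job time is what competes with machine processing time once transport resources become scarce.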
Scheduling Policy
The machine selection strategy of the tight node was controlled by a flow controller, with the exit strategy set to cyclic sequence; filling in the corresponding list of the configured batch distribution policy completed the setting. With the cyclic strategy selected, batches were distributed in rotation, spreading them evenly, as shown in Figure 7.
In the Plant Simulation resource pool policy, assigning the same broker to the machines operating in one region allowed the operations within that region to share AGVs. The broker, a component of the 'Tecnomatix Plant Simulation' environment, mainly cooperated with the exporter and importers of the station, the parallel station, the assembly station, and the dismantling station. Under the cross-regional independent resource pool policy, different brokers were configured for different regions, so AGVs in different areas could not call each other; a cross-zone shared resource pool was implemented by setting the same broker for the connected regions. Figure 8 shows the overall simulation model of the multi-AGV flow-shop scheduling workshop.
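The two mechanisms above (cyclic machine selection and broker-based pool sharing) can be sketched compactly; the class and variable names are illustrative, not Plant Simulation objects:

```python
from itertools import cycle

class CyclicDispatcher:
    """Round-robin machine selection, mimicking the cyclic exit
    strategy: successive batches are handed to the candidate
    machines in turn, so the load spreads evenly."""
    def __init__(self, machines):
        self._ring = cycle(machines)
    def next_machine(self):
        return next(self._ring)

d = CyclicDispatcher(["M1", "M2", "M3"])
picks = [d.next_machine() for _ in range(6)]
print(picks)  # ['M1', 'M2', 'M3', 'M1', 'M2', 'M3']

# Resource-pool policies expressed as broker assignments:
independent = {"A": "brokerA", "B": "brokerB", "C": "brokerC", "D": "brokerD"}
shared      = {r: "broker0" for r in "ABCD"}  # one broker -> AGVs shared
```

Under the shared mapping every region resolves to the same broker, so any region's request can pull any AGV; under the independent mapping requests stay within their own region.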
Based on the simulation scenario of the shop-floor assembly line, the following simulation experiments and analysis were carried out.
Simulation Model
First, the AGV cross-regional shared resource pool and the AGV cross-regional independent resource pool policies were compared. Second, under the optimal production equipment distribution strategy and AGV resource pool strategy, the influence of the AGV quantity on multi-AGV scheduling performance was examined and the AGV quantity allocation factors were analyzed. Through this analysis, the multi-AGV scheduling decision results were obtained. Because the machine selection strategy based on batch distribution outperformed the uniform distribution strategy, while the cross-regional independent and cross-regional shared resource pool strategies ranked inconsistently across situations, we combined the actual case of the spinning workshop to further compare the performance of the two resource pool strategies.
AGV Resource Pool Strategy Based on Bottleneck Analysis
In order to improve simulation efficiency, a production efficiency analysis was carried out for each production process. Since the critical issue in this paper was the production processes involving AGV transportation, the procedures before the raw cotton and polyester materials were not considered, nor was the third mixing process. Cotton production was a serial assembly-line process, so the output efficiency of the raw cotton material was determined by the bottleneck among its three processes. The raw material consumption and finished-output speeds of the three cotton processes were as follows.
1. Pre-drawing combs: each machine produced five cans every 10 min, with a total of 2 machines.
2. Drawing roller: each machine consumed 24 cans every 2 min and produced 30 cotton rolls, with a total of 1 machine.
3. Combing: each machine consumed eight cotton bobbins every 16 min and produced five cotton laps, with a total of 5 machines.
In the polyester production process, there was only a polyester blending process, in which each machine produced five cans per 10 min, with a total of 2 machines. Disregarding transportation time, and assuming that the output of the immediately preceding process was always sufficient, the output efficiency of the cotton raw materials equaled that of the bottleneck among the three processes. Pre-drawing combing produced an average of one cotton can per minute; the consumption of 12 cotton bobbins yielded 15 cotton laps, while combing consumed 40 cotton bobbins every 16 min. The bottleneck process was pre-drawing combing, and the final cotton raw material output averaged 5 cotton rolls every 4 min. Because polyester involved only one process, its output efficiency was that of this process: an average of one polyester can per minute. Figure 9 shows the production process.
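The serial-line bottleneck reasoning above reduces to finding the stage with the smallest output rate. A brief sketch using the per-minute rates stated in the text (helper name and the units/min framing are ours):

```python
def bottleneck(stage_rates):
    """Identify the bottleneck as the stage with the lowest output
    rate; the serial line's throughput equals that rate."""
    stage = min(stage_rates, key=stage_rates.get)
    return stage, stage_rates[stage]

# Output rates in items per minute, derived from the listed speeds:
cotton_stages = {
    "pre-drawing combs": 2 * 5 / 10,  # 2 machines x 5 cans / 10 min = 1.0
    "drawing roller":    1 * 30 / 2,  # 30 rolls / 2 min = 15.0
    "combing":           5 * 5 / 16,  # 5 machines x 5 laps / 16 min ~ 1.56
}
print(bottleneck(cotton_stages))  # ('pre-drawing combs', 1.0)
```

The minimum rate belongs to pre-drawing combing, matching the conclusion in the text that it limits the cotton line.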
In the mixed process of cotton and polyester, different machines could be used for different products, and the required ratios of raw materials also differed. The same processes existed for the different products, but machines were not shared between products, so each product's subsequent processing remained a serial process whose output was determined by its bottleneck. The following are the inputs and outputs of the first, second, and third mixing processes for the two products.
Product One (55%/45%):
Each machine consumed three cotton rolls and three polyester drums every 10 min in its first mixing process and produced a total of six cans. In the second and third mixing process with one machine, it consumed six cans in every 15 min to produce six cans.
Product Two (60%/40%):
Each machine consumed four cotton rolls and three polyester cans every 10 min in its first mixing process, yielding a total of seven cans. In the second and third mixing process with one machine, it consumed six cans in every 15 min to produce six cans.
If the supply of raw materials was sufficient, product one consumed three cotton rolls and three polyester cans every 15 min, yielding six cans, and product two consumed four cotton rolls and three polyester cans every 15 min, yielding seven cans; the second and third mixing processes were thus the bottleneck processes. Based on the raw material processing and mixing in the spinning and drawing workshop, the raw material stage supplied, on average, 25 cotton cans and 16 polyester cans every 16 min. In the mixing stage, with all machines running, consumption averaged seven cotton cans and six polyester cans every 15 min, producing 6 cans of product one and 6 of product two, i.e., 12 products in total. Comparing the efficiency of the raw material stage with that of the mixing stage showed that the bottleneck lay in the mixing stage: the output efficiency of the cotton pre-drawing combs and polyester strips matched the raw material bottleneck, and the first, second, and third mixing processes ultimately became the output bottleneck.
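The supply-versus-consumption comparison above can be checked with a few lines of arithmetic. In this sketch (function name ours), supply exceeding demand for every material confirms that the mixing stage, not the raw material stage, limits output:

```python
def limiting_section(supply_per_min, demand_per_min):
    """Flag each material whose supply rate falls below the mixing
    section's consumption rate; if none is flagged, the mixing
    section itself is the bottleneck."""
    return {m: supply_per_min[m] < demand_per_min[m] for m in supply_per_min}

# Rates in cans per minute, from the figures in the text:
supply = {"cotton": 25 / 16, "polyester": 16 / 16}  # raw material stage
demand = {"cotton": 7 / 15,  "polyester": 6 / 15}   # mixing stage intake
print(limiting_section(supply, demand))
# supply exceeds demand for both materials -> mixing is the bottleneck
```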
According to workshop scheduling theory, the bottleneck process is the crucial section that restricts the output of an assembly line, and the above analysis identified it. Therefore, to increase the output speed of the assembly line, the capacity of the bottleneck process must be increased. The above calculations ignored the time required to move products between processes; once handling time is considered, the output falls further. Moreover, owing to the bottleneck process, the machines of the non-bottleneck processes were not fully utilized. In practice, if transportation resources are sufficient and the transportation time is shorter than the product processing time, the transportation time can be ignored. After introducing AGVs, however, transportation resources become scarce, and different transportation tasks compete for them. The next section therefore analyzes how to schedule the capacity resources so that the processing requirements of each machine are met and the specified number of products is produced in the shortest time.
Analysis of Simulation Results Based on Cross-Regional Shared Resource Pools
Table 10 shows the simulation results under the cross-regional shared resource pool. The numbers of AGVs in the four regions are written in the form "A1-B1-C1-D1," indicating that regions A, B, C, and D each had one AGV. The following mainly analyzes how changes in the number of AGVs across regions affected the completion time. Figure 10 shows a line chart of the maximum completion time under different numbers of AGVs, where the horizontal axis is the number of AGVs and the vertical axis is the completion time.
As the number of AGVs increased, the completion time gradually decreased until the number of AGVs reached 10, after which it stabilized; adding AGVs beyond that did not significantly improve production efficiency. The main reason was the processing time itself: producing one batch in a process took a relatively long time. Thus, increasing the number of AGVs significantly improved productivity while AGV resources were scarce, but as their number grew, the production efficiency of the processes gradually became the new bottleneck, and production efficiency could not be improved further merely by adding AGVs.
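The saturation behavior described here, i.e., completion time stabilizing once enough AGVs are available, can be detected programmatically. The sketch below uses hypothetical completion times (the real values come from Table 10, which is not reproduced here) and an illustrative minimum-gain threshold:

```python
def saturation_point(completion_times, min_gain_min=5.0):
    """Smallest AGV count after which adding one more AGV no longer
    shortens the completion time by at least `min_gain_min` minutes.
    Data and threshold are illustrative assumptions."""
    counts = sorted(completion_times)
    for prev, cur in zip(counts, counts[1:]):
        if completion_times[prev] - completion_times[cur] < min_gain_min:
            return prev
    return counts[-1]

# Hypothetical completion times (minutes) by total AGV count:
times = {6: 360, 7: 330, 8: 310, 9: 295, 10: 290, 11: 289, 12: 289}
print(saturation_point(times))  # 10: beyond it the gain falls under 5 min
```

With data shaped like the paper's, such a routine would flag 10 AGVs as the point where further additions stop paying off.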
From Table 10, another issue can be seen: with the same total number of AGVs, different allocations still affected the completion time. Taking totals of six and nine AGVs as examples, the effect of the various AGV allocation schemes on the completion time is analyzed. There were 10 allocation schemes with a total of six AGVs and 16 schemes with a total of nine. Figure 11 shows the completion time for each scheme with a total of six AGVs. The allocation with the longest completion time was "A1-B1-C1-D3," and the shortest was "A2-B2-C1-D1," which showed that under the cross-regional shared resource pool, regions A and C could be connected, B and D could be connected, and A and B were connected. Under "A2-B2-C1-D1," regions A and C together had three AGVs and B and D together had three AGVs, so the resources were allocated more uniformly. Except for "A1-B1-C1-D3," the completion time differences among the remaining schemes with one AGV in region A were not significant. With two AGVs in region A, there was a clear downward trend across the three schemes: "A2-B1-C1-D2" was higher than "A2-B1-C2-D1," and "A2-B2-C1-D1" was the smallest. The difference between "A2-B1-C2-D1" and "A2-B2-C1-D1" was that scheduling in area D took too long, consuming too much time in empty-load operation, whereas "A2-B1-C2-D1" tilted resources toward the raw material area, i.e., cotton and polyester production and processing. This indicated that, in the current production environment, raw material production accounted for a longer share of the total production time and needed more capacity resources tilted toward it. "A2-B2-C1-D1" had the lowest completion time because its uniform distribution of resources made dispatch flexible.
Figure 12 presents a comparison of the completion times under nine AGVs: "A2-B3-C2-D2" had the longest completion time among the 16 distribution schemes, while "A3-B2-C3-D1" had the shortest. Across the 16 scenarios the completion time varied little; as with six AGVs, the raw material stage required more resource tilt. In addition to the maximum completion time, the utilization rate of the AGVs needed to be analyzed. Figure 13 shows the AGV utilization rates under "A3-B3-C3-D3." The utilization rate of the AGVs was not high: the empty-running time of the AGVs in regions A and B was relatively high, so their utilization was relatively high as well, whereas the empty-running time in regions C and D was low. The overall utilization rate of each AGV was only 10-20%, and the utilization rate in region D was below 10%, indicating that the number of AGVs there was over-allocated and resources overflowed.
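The over-allocation check used here, i.e., flagging regions whose AGVs sit below a utilization threshold, is simple to express. The per-region rates below are hypothetical stand-ins for the values read off Figure 13:

```python
def utilization(busy_s, total_s):
    """Fraction of the run an AGV spent working (loaded or empty travel)."""
    return busy_s / total_s

def over_allocated(region_rates, threshold=0.10):
    """Regions whose AGV utilization falls below the threshold, a sign
    that the pool there is over-sized (the paper flags region D this way)."""
    return [r for r, u in region_rates.items() if u < threshold]

# Hypothetical per-region utilization under "A3-B3-C3-D3":
rates = {"A": 0.18, "B": 0.16, "C": 0.12, "D": 0.08}
print(over_allocated(rates))  # ['D']
```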
Table 11 shows the simulation results under a distinct resource pool across regions. As with the cross-regional shared resource pool policy, the effect of changes in the number of AGVs in the different areas on the completion time was analyzed.
Table 11. Cross-regional independent resource pool simulation results.
Figure 14 shows a line chart of the maximum completion time under different AGV numbers under the cross-region independent resource pool strategy; the horizontal axis is the number of AGVs and the vertical axis is the completion time. As the number of AGVs increased, the completion time gradually decreased until, at 10 AGVs, it tended to stabilize, as under the cross-region shared resource pool. The completion time under the cross-region independent resource pool, however, was much shorter: with ten AGVs configured, the completion time was only about 2 h and 50 min. Increasing the number of AGVs further did not significantly improve production efficiency, for the same reasons as under the shared resource pool.
Moreover, comparing the minimum completion times at the same configuration under the two resource allocation strategies, the cross-region shared resource pool required close to 5 h, while the cross-region independent resource pool achieved a minimum completion time of 2 h and 48 min, far less than the shared pool. This indicated that, under the current simulation configuration, independent resource pools across regions could effectively improve production efficiency.
As with the shared resource pool across regions, we analyzed the simulation results for the same total number of AGVs under different configuration schemes, taking totals of 7 and 10 AGVs as examples. There were 16 allocation schemes with a total of seven AGVs and 10 schemes with a total of ten AGVs. Figure 15 shows the completion time for each scheme with a total of seven AGVs: the allocation with the longest completion time was "A1-B3-C2-D1," and the shortest was "A2-B1-C1-D3." Figure 16 shows a comparison of the completion times under 10 AGVs, for which "A1-B3-C3-D3" had the longest completion time among the 10 distribution schemes, whereas "A3-B2-C2-D3" had the shortest. In all ten scenarios the completion time varied little, as with 7 AGVs, and the raw material stage required more resource tilt. Unlike the shared resource pool across regions, the overall utilization of the AGVs was higher, because cross-regional AGV scheduling was not allowed under the cross-regional independent resource pool policy.
Figure 17 presents a comparative analysis of AGV utilization under the independent resource pool across regions, with regions A and C corresponding to the cotton and polyester raw material areas, respectively; the AGVs in regions B and D were waiting most of the time, indicating that AGV resources there were over-allocated. Based on the above comparative analysis, which reflects practice, the cross-regional independent resource pool was superior to the cross-regional shared resource pool in the actual situation because of the high load caused by cross-regional scheduling, whether AGV resources were scarce or sufficient. With other parameters unchanged, the distance between regions affects the time of cross-region scheduling: when the distance is large, cross-regional scheduling increases the load rate, which lengthens the completion time; when the distance is small, cross-regional scheduling can effectively improve the utilization of AGV resources and shorten the completion time.
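The distance-based trade-off stated here suggests a simple decision heuristic: prefer independent pools when the empty-run penalty of cross-region trips is large, shared pools otherwise. The sketch below is our own illustration of that rule; the cost model, threshold, and function name are assumptions, not values from the study:

```python
def choose_pool_policy(inter_region_dist_m, empty_run_cost_per_m=1.0,
                       threshold=50.0):
    """Heuristic from the discussion above: large inter-region
    distances make cross-region (shared-pool) dispatch pay a heavy
    empty-run penalty, so prefer independent pools; small distances
    favor sharing. All numbers here are illustrative."""
    penalty = inter_region_dist_m * empty_run_cost_per_m
    return "independent" if penalty > threshold else "shared"

print(choose_pool_policy(120.0))  # 'independent' for far-apart regions
print(choose_pool_policy(20.0))   # 'shared' for nearby regions
```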
Comprehensive Analysis of Multi-AGV Scheduling in the Workshop
To further analyze the characteristics of the multi-AGV scheduling problem in the drawing shop, the number of required products was increased to examine whether the AGV scheduling resources would still tilt toward the raw material area. Figure 18 shows the scheduling times when the quantity of each of the two products was 100, and Figure 19 shows the scheduling times when it was 200. The results are presented as completion times over the experiments, with their confidence intervals. Between Experiment 1 and Experiment 3, the average completion time declined significantly and showed a periodic law; there was a regularity between the completion time and the AGV configuration. The reason is that the cycle length when increasing the number of AGVs was 3, which means that increasing the number of AGVs could significantly improve production efficiency and shorten the completion time. This also showed that when the number of products doubled while AGV resources were insufficient, the completion time nearly doubled as well. When AGV resources were relatively sufficient, however, the completion time did not increase proportionally, indicating that the production line was not in a balanced state and the number of products produced was small: not all machines worked for long during production, and there was a substantial warm-up period. The effect of increasing the number of AGVs on the completion time was then not very obvious, but as the number of products increased, the warm-up period gradually shrank as a share of the process.
In Based on the above comparison analysis (which originates in practice), due to the high load caused by cross-regional scheduling, the cross-regional independent resource pool was superior to the cross-regional shared resource pool in an actual situation, whether in the case of an AGV resource shortage or when the AGV resources are sufficient. With other unchanged parameters, the distance between regions will affect the time of cross-region scheduling. When the distance is more significant, cross-regional scheduling increases the load rate, which affects the completion time. When the distance is small, cross-regional scheduling can effectively improve the utilization of AGV resources and shorten completion time.
Comprehensive Analysis of Multi-AGV Scheduling in the Workshop
To further analyze the characteristics of the multi-AGV scheduling problem in the drawing shop, the number of required products was increased to test whether the AGV scheduling resources would still tilt toward the raw material area. Figure 18 shows the scheduling time when the number of each of the two products was 100, and Figure 19 shows the scheduling time when the number of each was 200. The results are presented as completion times over the experiments, with their confidence intervals. Between Experiment 1 and Experiment 3, the average completion time declined significantly, and it did so with a periodic pattern: there was a regularity between the completion time and the AGV configuration. The reason is that the cycle length when increasing the number of AGVs was 3, which means that increasing the number of AGVs could significantly improve production efficiency and shorten the completion time. This also showed that when AGV resources were insufficient, doubling the number of products nearly doubled the completion time as well. However, when AGV resources were relatively sufficient, the completion time did not increase proportionally, indicating that the production line was not in a balanced state when the number of products was small: not all machines were working for long during production, and there was a substantial warm-up period. In that regime, the effect of adding AGVs on the completion time was not very obvious, but as the number of products increased, the warm-up period took up a gradually smaller share of the process. In conclusion, increasing the number of AGVs could significantly shorten the completion time until the AGV resources were saturated.
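The scaling behavior just described — completion time nearly doubling with the product count when AGVs are scarce, but not when transport capacity is ample — can be illustrated with a toy lower-bound model. All numbers below are hypothetical placeholders, not the simulated values of Figures 18 and 19:

```python
# Toy makespan lower bound: the schedule can finish no earlier than the
# warm-up period plus the larger of the transport workload (shared by
# the AGV fleet) and the bottleneck machine's processing workload.
# All parameters are illustrative assumptions, not measured values.

def makespan_lower_bound(n_products, n_agvs,
                         transport_per_product=6.0,
                         bottleneck_per_product=2.0,
                         warmup=50.0):
    transport = n_products * transport_per_product / n_agvs  # AGV-bound term
    processing = n_products * bottleneck_per_product         # machine-bound term
    return warmup + max(transport, processing)

# Scarce AGVs: doubling the products nearly doubles the makespan.
print(makespan_lower_bound(100, 2), makespan_lower_bound(200, 2))  # 350.0 650.0

# Ample AGVs: the machine bottleneck dominates, so adding AGVs
# beyond saturation no longer changes the bound.
print(makespan_lower_bound(100, 6) == makespan_lower_bound(100, 12))  # True
```

The bound is transport-limited exactly when the fleet-shared transport workload exceeds the bottleneck machine's workload, which is the regime where doubling the products (or removing AGVs) nearly doubles the makespan.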
AGV Quantity Decision Analysis
The factors that need to be considered in deciding the number of AGVs are the completion time and the utilization rate of each AGV. Here, we analyzed the unit-time contribution of each AGV, i.e., the reduction in completion time contributed by each additional AGV. Under the cross-region shared resource pool strategy, increasing the number of AGVs effectively shortened the completion time, but the rate of reduction differed across configurations. Figure 20 shows that when the number of AGVs increased from four to five, the impact on the completion time was massive, with a reduction rate close to 25%, before the curve gradually flattened. After the number of AGVs reached nine, the impact was small, and the completion time stabilized. Figure 21 shows the impact of increasing the AGV count on the completion time under the cross-regional independent resource pool strategy: the effect on completion time was largest at six AGVs, and with eight AGVs the completion time stabilized.
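The per-AGV contribution analysis above can be sketched as a small routine that computes, from completion times per AGV count, the reduction rate brought by each additional AGV and the count at which the gain saturates. The completion times below are illustrative placeholders chosen to mimic the reported ~25% drop from four to five AGVs, not the measured values of Figures 20 and 21:

```python
# Sketch: marginal contribution of each added AGV to the completion time.
# The completion times are illustrative placeholders (any time unit).

def reduction_rates(completion_times):
    """Relative completion-time reduction gained by each added AGV.

    completion_times maps AGV count -> makespan.
    Returns {agv_count: fractional reduction vs. the previous count}.
    """
    counts = sorted(completion_times)
    rates = {}
    for prev, cur in zip(counts, counts[1:]):
        t_prev, t_cur = completion_times[prev], completion_times[cur]
        rates[cur] = (t_prev - t_cur) / t_prev
    return rates

def saturation_point(completion_times, threshold=0.02):
    """First AGV count whose marginal reduction falls below threshold."""
    for count, rate in sorted(reduction_rates(completion_times).items()):
        if rate < threshold:
            return count
    return None

times = {4: 1000, 5: 750, 6: 640, 7: 590, 8: 560, 9: 555, 10: 552}
print(reduction_rates(times)[5])  # 0.25 -> the ~25% drop from 4 to 5 AGVs
print(saturation_point(times))    # 9 -> completion time has stabilized
```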
Electronics 2020, 9, x FOR PEER REVIEW
Figure 20. Impact of AGV quantity on completion time under the cross-regional shared resource pool.
Figure 21. Impact of AGV quantity on completion time under a cross-regional independent resource pool.
Optimization Analysis
According to the above analysis of the completion time as a function of AGV count and product quantity, when the number of products is small, a relatively small number of AGVs can be selected to increase AGV utilization; when the number of products is large, the number of AGVs needs to be increased to shorten the completion time. Figure 22 shows, as a Gantt chart, the same scheduling scenarios under manual scheduling. Comparing completion times, whether under the cross-region shared resource pool or the cross-region independent resource pool, the completion time was less than that of the manually scheduled job, which illustrates the effectiveness of the method. Introducing AGV transport of semi-finished products in a drawing shop can reduce the labor intensity of workers, improve transport efficiency, and shorten the completion time.
Conclusions and Future Work
In this paper, we investigated the actual production process parameters and characteristics of a highly distributed manufacturing system, the textile ring-spinning combing section. This work was aimed at resolving the confronting problems of scheduling, real-time can distribution, and path planning in the continuous production of spinning by means of mixed flow-shop predictive modelling. To effectively reduce makespans and total completion time, it was vital to define the properties and features of the workshop, equipment, products, and AGVs. Based on the two AGV scheduling strategies, a novel approach handling both cross-regional shared resource and cross-regional independent resource pools was analyzed. For the dissimilar cotton and polyester draw-out processing, we established an overall mathematical model of multi-AGV scheduling to solve the can-distribution problem and to prevent conflict and deadlock by assigning different tasks: AGV assignment, AGV sorting, and task source. Moreover, for the intended categories of scheduling tasks, an AGV transportation route strategy was also developed for mass production in a spinning CPPS. Extensive computational experiments were performed using Siemens Tecnomatix Plant Simulation software, according to the production of a certain number of products and the two scheduling strategies. Analysis of the simulation results of 81 groups, with completion time as the optimization target, showed a specific effective range of AGV counts: as the number of AGVs increased, the completion time decreased, and once the number of AGVs reached a certain threshold, the completion time stabilized. On this basis, the utilization rate and the completion time of the products were also analyzed: when the number of AGVs rose beyond a certain point, the contribution of additional AGVs to the completion time decreased sharply, thus reducing AGV utilization.
By comparing the results of the 81 sets of simulations under the two strategies, it was found that the cross-regional independent resource pool strategy was better than the cross-regional shared resource pool strategy in the actual scenario. These results demonstrated the adequacy of the methods used and showed that flow-shop predictive modeling, when multi-AGV resources are scarce, also yields a control mode for each AGV and, if necessary, a preventive maintenance plan. Compared with manual scheduling, the multi-AGV scheduling results have distinct advantages and can significantly shorten the completion time.
In the future, it will be attractive to examine multi-AGV scheduling under different scheduling algorithms, in order to investigate the potential mechanisms responsible for transportation task set scheduling decisions using a genetic algorithm.
Author Contributions: B.F. and Q.M. Wen mainly conceptualized the idea for the study and were responsible for project administration. Q.M. Wen was also responsible for preprocessing all the data and for the formal analysis by investigating the accuracy of the results. B.F. wrote the initial draft of the manuscript and was responsible for investigating resources, revising, and improving the manuscript according to the reviewer's comments. All the work was done under the supervision and guidance of J.B. All authors have read and agreed to the published version of the manuscript.
Synthesis and stereochemistry of some multi methyl-substituted 1,3-dioxanes
Several multimethyl-substituted 1,3-dioxanes [trans-2,4,4,6-tetramethyl- (1), r-2,4,4,c-5,t-6-pentamethyl- (2), r-2,4,4,t-5,t-6-pentamethyl- (3) and trans-2,4,4,5,5,6-hexamethyl-1,3-dioxane (4)] with 2,6-trans-disubstitution have been prepared via the Grignard reaction of the corresponding axial 2-methoxy-1,3-dioxanes. Inspection of their 13C NMR chemical shifts in respect of different substituent effects showed that 1 and 3 attain exclusively the 1,4-twist form, whereas 2 and 4 still clearly favor the chair form due to the very strong steric interaction caused by the pseudo-axial methyl groups at position 5. We also managed to equilibrate 1 and its cis-epimer (5), although less than 1% of 1 was present at equilibrium. Thus only −ΔG° = 12.9 ± 0.5 kJ mol⁻¹ could be given, and it compares well with some literature values. Since the conformational energy of the 4-axial methyl group in 5 is 12.2 kJ mol⁻¹, ΔH(1,4-CT) is equal to 25 kJ mol⁻¹, again in good agreement with an earlier estimate.
Introduction
A great number of methyl-substituted 1,3-dioxanes have been prepared earlier. 1 However, only after Eliel and Nader 2 developed their special method for preparing trans-2,4,4,6-tetramethyl-1,3-dioxane did it become possible to synthesize other trans-2,6-methyl-substituted 1,3-dioxanes. 3−5 It was concluded quite some time ago that if there is no pseudo-axial substituent in the twist form, the 2,4-syn-diaxially substituted derivatives attain a 2,5- or 1,4-twist form (Figure 1) depending on the location of the geminal substitution in position 2 or 4, respectively. 3,6 The 1,4-twist form appears to be ca. 3 kJ mol⁻¹ more stable than the 2,5-twist form [ΔH(1,4-CT) 25.0 kJ mol⁻¹; ΔH(2,5-CT) 28.7 kJ mol⁻¹]. 3,5 By applying the method of Eliel and Nader, 2 we prepared a few trans-2,6-methyl-substituted derivatives where one or the other of these substituents occupies an axial orientation (if in a chair form), to get further insight into the chair-twist problem.
Results and Discussion
The experimental 13C NMR chemical shifts for the compounds 1−4 are given in Table 1, together with those estimated for 2, 3 and 4 using the shift increments reported earlier by Pihlaja et al.: 3−5

δC(x) = δCp(x) + Σ SE(x)

In this equation, δC(x) is the 13C chemical shift of the xth carbon, δCp(x) the shift of this carbon in the parent compound, and Σ SE(x) the sum of substituent effects influencing the xth carbon.
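A minimal sketch of this additive scheme, with hypothetical increment values rather than the actual increments of refs. 3−5:

```python
# Additive estimation of a 13C shift: parent-compound shift plus the
# sum of substituent effects acting on that carbon.

def estimated_shift(parent_shift_ppm, substituent_effects_ppm):
    """delta_C(x) = delta_Cp(x) + sum of SE(x), all in ppm."""
    return parent_shift_ppm + sum(substituent_effects_ppm)

# A ring carbon at 70.0 ppm in the parent, perturbed by two
# hypothetical methyl increments (+5.5 and -2.25 ppm):
print(estimated_shift(70.0, [5.5, -2.25]))  # 73.25
```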
It has already been shown that 1 attains the 1,4-twist form, 2,3 and by comparing its chemical shifts with those of 3 it is easy to believe that 3 is also predominantly in the 1,4-twist form (Fig. 2), since 2-Me, 6-Me and 5-Me there attain pseudo-equatorial positions and both methyl groups at C-4 are isoclinal, thus being able to avoid any major interactions. Table 1 lists the chemical shifts estimated for 3 by adding, to the chemical shifts of 1, additional increments based on the orientation of the 5-methyl substituent (pseudo-equatorial) in the 1,4-twist form in relation to the other substituents. Despite the fact that the additional increments were originally derived for the chair form, the very good agreement between the calculated and estimated chemical shifts (Table 1) proves that 3 greatly favors the 1,4-twist form. The J(H-5,H-6) = 10.4 Hz also fits very well this 1,4-twist structure, where both H-5 and H-6 are pseudo-axial. In fact, the sum of the J(H-4,H-5) couplings in the 1,4-twist form of 1 is 15 Hz (2 × 7.5 Hz 8), corresponding roughly to an average of 5 and 10 Hz.
The equilibration of 1 and 5 (Fig. 5) showed that cis-2,4,4,6-tetramethyl-1,3-dioxane (5) is clearly more stable than 1, which attains the 1,4-twist form. 2,3,5 Although we carried out the equilibrations at three temperatures (Table 3), the amount of 1 at equilibrium with 5 was so small that the integration of its peak had to be carried out manually, which caused a substantial deviation (Table 3); therefore we simply accept the average −ΔG° = 12.9 kJ mol⁻¹ as the standard Gibbs energy difference between 5 and 1. Since 5 has a 4-axial methyl group, the conformational energy 3 of which is 12.2 kJ mol⁻¹, this gives a value of ca. 25 kJ mol⁻¹ for ΔH(1,4-CT). The former value (−12.9 kJ mol⁻¹) is in good agreement with the calculations of Burkert 6 based on molecular mechanics computations, but is far from the estimate −ΔG° = 22.8 kJ mol⁻¹ given by Eliel and Nader. 2 The value 25 kJ mol⁻¹ has also been reported for ΔH(1,4-CT) earlier.
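The numbers here follow directly from −ΔG° = RT ln K. A quick sketch, in which the 99:1 ratio is an illustrative stand-in for "less than 1% of 1 at equilibrium":

```python
# Standard Gibbs energy difference from an epimer equilibrium:
# -dG = R*T*ln(K), with K = [more stable]/[less stable].
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def minus_delta_g_kj(fraction_minor, T):
    """-dG (kJ/mol) for K = (1 - f_minor) / f_minor at temperature T (K)."""
    K = (1.0 - fraction_minor) / fraction_minor
    return R * T * math.log(K) / 1000.0

# 1% of the minor isomer at 298 K already implies -dG of about 11.4 kJ/mol,
# i.e., the same order as the reported 12.9 +/- 0.5 kJ/mol average:
print(round(minus_delta_g_kj(0.01, 298.0), 1))  # 11.4
```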
Preparation of 2-methoxy-1,3-dioxanes.
A 250 ml three-necked flask was equipped with a magnetic stirrer, a heating mantle and a distillation system. 0.3 mol of trimethyl orthoformate together with an equivalent amount of a 1,3-diol was placed in the flask. 60 ml of cyclohexane (Merck, reinst) was purified by distillation and added to the flask together with a catalytic amount of p-toluenesulfonic acid (Merck, p.a.). Thereafter the mixture was heated, whereupon the azeotropic mixture formed by the methanol product and cyclohexane was distilled off. The heating was continued until the vapor reached a temperature of 353 K. At this stage the reaction mixture was allowed to cool to room temperature. Then 1−2 g of K2CO3 was added and the mixing was continued for two hours to neutralize the acid catalyst. Then the mixture was filtered with mild suction. The precipitate was washed three times with ether, and the solvent was removed from the combined liquid phase by evaporation at ordinary pressure. The product was distilled under reduced pressure through a short Vigreux column. The crude product boiled within a range of 12−15 degrees. The isomeric products (when possible) were separated with a Perkin Elmer 251 Auto Annular Still precision distiller at reduced pressure. The distillate was collected on anhydrous K2CO3 to avoid epimerization. 2-Methoxy-4,4,5,6-tetramethyl-1,3-dioxane decomposed in the precision distiller; a 90% pure product was obtained by distilling it through a Hempel column equipped with a vacuum mantle.
13C NMR spectra. The noise-decoupled spectra were recorded on a Jeol GX-400 spectrometer operating at 100.53 MHz for 13C (and 399.78 MHz for 1H). All spectra were recorded in 5 mm o.d. tubes using the solvent (CDCl3) deuterium signal for field locking. Internal TMS was used as the reference.
The reflux was ceased when the color of the Grignard reagent vanished and a dark oily precipitate formed. The reaction mixture was allowed to cool to room temperature before ca. 3.5 ml of ice-cold saturated ammonium chloride solution was added with vigorous stirring. The white precipitate formed was separated with mild suction and washed with six 50 ml portions of warm ether. The combined ether extracts were dried with anhydrous MgSO4; excess ether was evaporated on a rotavapor and the rest by distillation under normal pressure. The product was fractionated at reduced pressure and collected on anhydrous K2CO3 to avoid epimerization.
It has been shown earlier that 1 attains the 1,4-twist form and 5 exists in the chair form. 3,4 The peaks of both isomers were well separated in the gas chromatogram, and therefore this technique was applied for the analysis of their equilibrium mixtures (Fig. 5) at three different temperatures. The equilibration was carried out in ether solution which was 0.1 molar with respect to both the substrate and the catalyst, trifluoroacetic acid (EGA Chemie, purum). The samples were sealed in glass ampoules and equilibration was carried out at 298, 313 and 333 K. The equilibrium mixtures were analyzed on a Perkin Elmer Sigma 2 B gas chromatograph using a 30 m XE 60 capillary column. The samples were neutralized before analysis with triethylamine (Fluka AG, purum). The results of equilibrations after 100 days are shown in Table 4.
Table 2. 13C NMR shift effects caused by the 4a5e6a-Me3 and 4a5a6a-Me3 substitutions
Table 3. Equilibrium constants and standard Gibbs energy differences between cis-5 and trans-
Molecular mapping of neuronal architecture using STORM microscopy and new fluorescent probes for SMLM imaging
Abstract. Imaging neuronal architecture has been a recurrent challenge over the years, and the localization of synaptic proteins is a frequent challenge in neuroscience. To quantitatively detect and analyze the structure of synapses, we recently developed the free SODA software to detect the association of pre- and postsynaptic proteins. To take full advantage of spatial distribution analysis in complex cells, such as neurons, we also selected some new dyes for plasma membrane labeling. Using the Icy SODA plugin, we could detect and analyze synaptic association in both conventional and single-molecule localization microscopy, giving access to a molecular map at the nanoscale level. To place those molecular distributions within the neuronal three-dimensional (3D) shape, we used MemBright probes and 3D STORM analysis to decipher the entire 3D shape of various dendritic spine types at the single-molecule resolution level. We report here the example of synaptic proteins within a neuronal mask, but these tools have a broader spectrum of interest since they can be used whatever the proteins or the cellular type. Together with the SODA plugin, MemBright probes thus provide the perfect toolkit to decipher a nanometric molecular map of proteins within a 3D cellular context.
Introduction
Imaging neuronal architecture has been a recurrent challenge over the years. The use of the Golgi technique by Ramón y Cajal paved the way for the first characterization of neuronal architecture using microscopy on fixed brains. Indeed, metallic impregnation with silver salts provided an opportunity to see and reconstruct the dendritic architecture of various types of neurons in the depth of the nervous tissue. Although Golgi staining is still used in widefield microscopy, its use in confocal microscopy remains limited for 3D analysis. The use of fluorescent labeling in conjunction with 3D microscopy led to the production of large amounts of published data that are on the way to being classified and made accessible through various infrastructures or free repositories (eBrains, Zenodo, etc.). This huge amount of data and its accessibility raise the question of new automated, unbiased statistical analyses.
Colocalization and Coupling Analysis in Conventional and Super-Resolution Microscopy
To analyze the structure of synapses quantitatively, we recently developed, in association with statisticians, free software to detect the association of pre- and postsynaptic proteins. This software, called SODA for statistical object distance analysis, makes it possible to identify and measure the spatial distribution of either clusters (conventional microscopy) or single molecules (single-molecule localization microscopy) and provides the distance of association when those are statistically found associated. From the mathematical point of view, SODA analyzes the cellular shape and spot density (Fig. 1) to evaluate the expected spatial distribution using Ripley's function. If clusters are more frequently associated than expected for a random distribution, then they are identified as associated spots. As a proof of concept, we analyzed the distribution of three synaptic molecules named synapsin, homer, and PSD-95 using SIM microscopy. Using only 15 pictures, we were able to analyze about 50,000 synapses and found that the distance between a synapsin and a postsynaptic PSD-95 cluster was 107 ± 73 nm, while the PSD-95-homer distance was 64 ± 48 nm. Beyond raw distances, this system allows detection, in a quantitative manner, of any morphological variations that may occur in various mutants, physiological conditions, or certain synaptopathies. SODA can be used either with sparse labeling (>30−100 objects per image) 4 or with high-density labeling as in single-molecule localization microscopy, 1,2 where several thousands of localizations can be retrieved. The only limitation of SODA is the need to get the cell boundary to correctly evaluate the objects' density. In contrast to methods using Voronoï tessellation 5
that are limited, up to now, to two-dimensional analysis, SODA can be used in 3D, which is an added value to analyze thick 3D-volume STORM images. Because SODA does not rely on any overlap method, it is far less sensitive to high-density false-positive colocalization artifacts. 1,2 SODA can also be used for all other associations (either direct or distant) in neurosciences 6-10 and even outside this field, for example, in cell biology, 11-18 virology, 12 bacteria 19 or plants. 20-22

Fig. 1 Workflow of the Easy SODA protocol available in Icy Software. Colocalization or distant association can be analyzed using the user-friendly Easy SODA protocol that is freely available in Icy Software. This protocol is a graphical programming automation routine that allows analysis of synaptic proteins' distribution within the neuronal cell shape. Here, neurons are labeled for two proteins (green and cyan channels) that are distributed in clusters. Clusters are segmented using wavelet segmentation through the "Spot Detector" plugin. Cell shape is extracted using a MAP2 stain, and segmentation is done using the "Hierarchical KMeans" plugin. Cluster distribution within the cellular mask is then analyzed through the "SODA" plugin using Ripley's analysis. Statistical associations are detected if any, and the proportion of associated clusters with their distance is provided with a p-value indicating the statistical robustness of the association. If many pictures are analyzed in batch mode, all results can be exported to Excel files. A molecular map is exported for each picture with an association color code. For example, for a red-green spot analysis: isolated green spots remain green, green spots associated with red are cyan, isolated red spots remain red, and red spots associated with green are pink. Localization of significant associations is thus visible at a glance over the cell mask (here in deep blue).

Imaging Plasma Membrane in Live or Fixed Cells
Optimizing Live Plasma Membrane Imaging with MemBright Probes
In order to be able to place the molecules within the cell shape, and in collaboration with chemists, we have selected new membrane probes capable of revealing the cell shape and imaging fine structures such as dendritic spines. 23 These membrane probes, named MemBright, upon insertion in the membrane emit fluorescence with narrow emission peaks, allowing correlation with other conventional fluorophores for multi-color labeling. A family of seven members is now available and covers the fluorescence spectrum from 480 to 750 nm. Upon incubation with living cells, these probes insert directly into the membrane through a lipid anchor and thus reveal the cell shape without the use of any transfection or viral vectors (Fig. 2). This strategy makes it possible to reveal all neurons and/or glial cells in a few minutes without any toxicity.
A big advantage of MemBright probes is also their fluorogenic property. Indeed, MemBright probes are non-fluorescent in the medium and become fluorescent upon reaching the plasma membrane. This means that the probe can be left in the cell culture medium without causing fluorescence background under the microscope. Leaving the probe in the cell chamber during live acquisition allows perpetual replacement of the probes if bleaching occurs, thus leading to persistent bright labeling of the membrane.
MemBright probes are more efficient when incubated on cells in the absence of serum, to avoid any titration of the probes by serum fat. In neurons and glial cells, we usually incubate MemBright probes in Krebs-Ringer solution at a concentration of 200 nM at 37°C under the microscope. The absence of serum optimizes the labeling, and the absence of phenol red lowers the fluorescence background. Fluorescence at the plasma membrane appears very fast, within a few minutes. To avoid saturation of the plasma membrane with a huge amount of lipids, it is crucial not to use a high concentration of probes. MemBright probes are sufficiently bright to be used in the nanomolar range, whereas other commercial probes have to be used in the micromolar range. Moreover, it should be stressed that illumination of any fluorescent probe may induce the production of reactive oxygen species that can imbalance the intracellular redox state and be deleterious to the cell. 24 Thus, it is also good practice to minimize illumination time and frequency to the minimal amount needed for the right sampling of a biological event. Previously, we could follow neuronal growth over time during 13 h, imaging every 2 min with low laser power (0.2% of the 561 nm laser line of an Elyra PS.1), without any detrimental effects. 23 At last, we have shown that MemBright probes are resistant to permeabilization when fixed properly with a mixture of 4% paraformaldehyde-0.2% glutaraldehyde, and can thus be combined with conventional immunolabeling with primary and secondary antibodies.
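As a practical aside, the working dilution follows from C1·V1 = C2·V2; a minimal sketch, in which the stock concentration and chamber volume are hypothetical examples rather than values from the protocol above:

```python
# Volume of probe stock to add for a target working concentration.
# With nM, ml and uM as units, the result conveniently comes out in ul.

def stock_volume_ul(target_nM, chamber_ml, stock_uM):
    """C1*V1 = C2*V2, solved for the stock volume V1 (in microlitres)."""
    return target_nM * chamber_ml / stock_uM

# 200 nM in a 1 ml imaging chamber from a hypothetical 100 uM stock:
print(stock_volume_ul(200, 1.0, 100))  # 2.0
```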
MemBright staining can be correlated either with live antibody staining or with immunochemistry on fixed samples. We could show double labeling of the plasma membrane and live endocytosed L1-CAM antibodies. We also showed that the intracellular vesicular transporter VGLUT could be revealed by immunochemistry within the 3D neuronal cell shape reconstructed using MemBright. This property thus allows identifying surface or internal protein locations using MemBright counterstaining to visualize the cell shape. We selected MemBright probes for their stability at the plasma membrane and their slow endocytosis. However, on long-term incubation, they are finally endocytosed and can thus be used to label endocytic pathways [Fig. 3(c)]. 25
Imaging Plasma Membrane of Various Cell Types
We originally showed that MemBright probes could be used in various cell types, such as epithelial cells in culture (HeLa cells or KB cells), 23,26-28 dissociated hippocampal neurons, 23 and hippocampal astrocytes. 23 We could also use MemBright probes to label live brain (hippocampus, cortex, and cerebellum) or liver slices, allowing labeling in depth and imaging using confocal or two-photon microscopy. 23 Since our first paper in 2019, MemBright probes have been used by several other labs and cited in more than 60 articles and 39 reviews. They have been used in B lymphocytes, 58 in A431 cells, 59 and again in neuronal cells to label growth cones and initial segments of hippocampal neurons, 60 presynaptic terminals, 29 and postsynaptic compartments. 53 They have also been used to label apoptotic bodies (AB), microvesicles (MV), and small EVs (sEV) isolated from MIN6 pancreatic beta cells exposed to inflammatory, hypoxic, or genotoxic stressors. 61 Since we were asked several times whether MemBright could be used on bacteria, we recently performed live labeling of E. coli with MemBright-Cy3.5. As shown in Fig. 3(a), the bacterial 3D shape can be efficiently labeled live.
Imaging Extracellular Vesicles
MemBright has been widely adopted by the extracellular vesicles community 51 to track extracellular vesicles both in vitro and in vivo in hippocampal 63 or cortical neurons, 79 zebrafish, 62,67,72 breast cancer cells or tumors, 66,71,80 myotubes, 82 and red blood cells. 74,76 Particle size distribution and zeta potential analysis of EVs derived from A375 cells using nanoparticle tracking analysis (NTA) showed that EVs have almost no change in size and only a slight shift of zeta potential before and after MemBright labeling. 75 Due to its ease of use and brightness, MemBright has thus been widely used to label exosomes. However, it should be stressed that MemBright is not specific to extracellular vesicle labeling. MemBright will label any membrane in contact with the probe. That means that membrane debris trails left behind by migrating cells will be labeled, whatever the nature of the membrane (EVs or not). Any membranous organelles (tubules, endosomes, lysosomes, synaptosomes, etc.) retrieved by ultracentrifugation can be labeled when incubated with MemBright. Some controls are thus needed before labeling to ensure that the fraction is homogeneous and not contaminated by different organelles.
Hyenne et al. 62 showed that MemBright can be used in pulse-chase experiments and that some CD63-GFP EVs are labeled with MemBright while others are not. 62 Sung et al. concluded that MemBright can label exosomes as well as plasma membrane-derived EVs, but that it does not label all exosomes. 47 In a pulse-chase experiment, it is expected that not all endosomal compartments will be labeled: only those deriving from the plasma membrane exposed to MemBright at a given time t will be visible. A proportion of EVs that were generated before or after incubation with MemBright, and that are then stored in the cell, will likely not be labeled by the MemBright wave. It is therefore essential to properly calibrate the labeling incubation time and the chase time to observe the desired events.
Imaging Plasma Membrane with Single Molecule Localization Microscopy
MemBright probes can be used in conventional and super-resolution microscopy (Fig. 4 and Video 1) and thus make it possible to observe the molecular distribution of synaptic proteins in correlation with the structural morphology at the nanoscale using STORM. Together with the SODA plugin, MemBright probes thus provide the perfect tools to access a nanometric molecular map of various proteins within the 3D cellular context. These high-resolution techniques were set up on fixed samples, but the need for super-resolution imaging with small molecular probes that allow imaging of living samples is expected to grow and will stimulate the development of new molecular probes. Indeed, chemical development will be required in the coming years, since molecular probes still lag far behind recent developments in instrumentation, which can reach a resolution of 3 to 4 nm. 83 One interesting track will probably be the development of self-blinking dyes 84-86 or of new convertible fluorophores 28 for live SMLM. We are now working on a photoswitchable version of MemBright that could be photoswitched without any reducing buffer, to be used in live single-molecule localization microscopy. 87 Moreover, since MemBright has been used extensively in the extracellular vesicle community, membrane dyes are clearly of great interest for people working on vesicular trafficking. Another challenge in the coming years will be to develop new MemBright probes devoted to fast internalization, to be able to decipher different vesicular pathways with various colors.
biology and integrative neuroscience from Sorbonne University. Within the facility, he provides a unique offer of biological sample preparation ranging from cells to tissues. He is also working on several projects that involve live imaging and high-resolution microscopy using STED and SMLM.
Fig. 4 STORM imaging of hippocampal neurons labeled with the Cy3.5-MemBright probe and imaged in conventional widefield microscopy [fire LUT in (a)] and in STORM microscopy (blue spheres in rectangle). (b) Magnification of the plasma membrane 3D STORM image shows the single-molecule organization of MemBright all over the plasma membrane. Light blue localizations are closer and deep blue are deeper. (c)-(e) Examples of stubby and mushroom dendritic spines in 3D STORM. All the localizations found in (c) (published previously in a different form in Ref. 3) can be used to reconstruct the 3D shape of the spine as a wireframe. This 3D shape can then be used for volumetric estimation or fine measurements of the spine neck.
Danglot is a senior researcher at INSERM and scientific director of the NeurImag imaging facility at the Institut de Psychiatrie et Neurosciences de Paris. She received her BS degree in biochemistry from the University Pierre and Marie Curie in 1999 and her PhD in neuroscience in 2004. She is the author of 35 journal papers and 4 reviews. Her current research aims to understand the mechanisms of mammalian neurological synapse morphogenesis using multi-scale imaging and super-resolution techniques, notably SIM, STED, and single-molecule localization microscopy.
Antimicrobial Activity of Roselle-capped Silver Nanochip on Aggregatibacter actinomycetemcomitans
Abstract Objectives This article aimed to study the effects of the roselle-capped silver nanochip (SNP-Ro chip) against Aggregatibacter actinomycetemcomitans, and the toxicity of this film on fibroblast cells, in order to develop the SNP-Ro chip into a local chemical agent for the treatment of periodontitis in the future. Materials and Methods Using a microwave-assisted synthesis method, silver nanoparticles (SNPs) were prepared from a silver nitrate solution with roselle extract as a reducing and capping agent. Then, SNP-Ro chips were fabricated by mixing a solution of SNP-Ro with alginate gel. The antimicrobial effect of the synthesized SNP-Ro chips was assessed by the disc diffusion technique and a time-kill assay. The cytotoxic effect was determined by the MTS assay. Statistical Analysis One-way analysis of variance (ANOVA) and Scheffe's method were used to analyze the data for this experiment. Results All three ratios of the SNP-Ro chip produced inhibition zones ranging between 18.75 ± 2.08 and 19.03 ± 2.25 mm. In the killing-time study, the three groups of SNP-Ro chips completely eradicated A. actinomycetemcomitans within 180 minutes. The percentage of viable SNP-Ro chip-treated human gingival fibroblasts (HGFs) was significantly increased when compared with the alginate chip-treated cells (p < 0.05). Conclusion This study developed a new method for the deposition of SNPs in alginate gel to make a thin, small chip for the sustained release of the SNPs in a periodontal lesion. Therefore, the SNP-Ro chip has the potential to be developed as an adjunctive locally delivered antimicrobial agent in periodontal therapy.
Introduction
Periodontitis is an inflammatory disease initiated by specific bacterial species in dental plaque, resulting in periodontal tissue destruction, tooth mobility, and finally, tooth loss. [1][2][3][4] Because conventional treatment, such as scaling and root planing (SRP), does not completely eliminate periodontal pathogens, especially in deep periodontal pockets, antimicrobial agents can be used as an adjunctive therapy. 5,6 Local drug delivery is also highly attractive due to the ability to deliver the antimicrobial agent within the periodontal pockets, with the therapy targeted at specific pathogens. However, the local application must reach the intended site of action, achieve therapeutic concentration, and last for a sufficient amount of time to achieve a positive effect. Currently available local delivery drugs satisfying the above criteria can be obtained in various forms such as a chip, gel, and fiber. [7][8][9] Silver in the form of nanoparticles has been widely used in the medical and dental fields. 10,11 Previous studies indicated that SNPs have the ability to kill bacteria without causing bacterial resistance. 12 The synthesis of SNPs can be done in three ways: physical, chemical, and biological. SNPs are most commonly synthesized by chemical methods, but these can cause biological toxicity. Recently, the biosynthesis of SNPs with biomaterials, such as plant extracts, has been widely used. Roselle (Hibiscus sabdariffa L.) was found to have antioxidant and antibacterial activities. [13][14][15] Moreover, previous studies have reported that SNPs synthesized using roselle extract exhibited a 99.94% reduction of Aggregatibacter actinomycetemcomitans. 16 Therefore, SNP-Ro was molded into a thin film by using an alginate gel. The antibacterial activity against A. actinomycetemcomitans and the cytotoxicity against human gingival fibroblasts (HGFs) were evaluated.
Thus, this research aimed to develop an SNP-Ro chip to use as a local chemical for the treatment of periodontitis. The SNP-Ro chip was developed at different concentration levels. Alginate solution was used for the formation. Then, the film was tested for the activities against A. actinomycetemcomitans by a disc diffusion assay and time-kill assay. Toxicity against fibroblast cells was also tested by an MTS assay.
Preparation of the Roselle-Capped SNP (SNP-Ro) Chips
A solution of silver nitrate (AgNO3) was mixed with roselle extract to make final ratios between AgNO3 and roselle extract of 1:0.5, 1:1.5, and 1:2.5, respectively. All solutions were then heated in a microwave (800 W) for 2 minutes. 12 After 48 hours, the synthesized SNP-Ro was analyzed via ultraviolet-visible (UV-Vis) spectroscopy. The SNP-Ro chips were fabricated by mixing each concentration of SNP-Ro with 10% alginate solution (w/v). The mixture was poured into a beaker and dried at 60°C for 24 hours. The thin film was removed from the beaker, and a punching machine was then used to make a circular chip.
Analysis of the Antimicrobial Activities

Disc Diffusion Assay
An individual colony of A. actinomycetemcomitans was suspended in brain-heart infusion (BHI) broth and incubated for 24 hours. The density of the bacterial culture was adjusted to a 0.5 McFarland standard and diluted 1:100 in nutrient broth. A. actinomycetemcomitans was swabbed uniformly on a BHI agar plate. Different concentrations of the SNP-Ro chip were then gently pressed in the designated positions. Also, 0.2% chlorhexidine gluconate (CHX) was used as the positive control and the alginate chip was used as the negative control. The culture plates were incubated at 37°C with 5% CO2 for 24 hours. After incubation, the diameters of the inhibition zones were measured.
Time-Kill Assay
Different concentrations of the SNP-Ro chip were added to 1,000 μL of BHI broth. Then, 10 μL of the prepared bacterial suspension was added to the nutrient broth containing each ratio of the SNP-Ro chip, as well as to 0.2% CHX and the alginate chip. Serial dilutions of the sample were performed from 1/10 to 1/10,000, and 10 μL of each diluted sample was dropped onto nutrient agar dishes. The culture plates were incubated at 37°C for 48 hours. Colonies on individual plates were counted and expressed as CFU/mL. The experiment was repeated with incubation times of the SNP-Ro chips with A. actinomycetemcomitans of 30, 60, 90, 120, 180, 240, 300, and 360 minutes, respectively.
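Converting the drop-plate colony counts to CFU/mL involves multiplying by the dilution factor and dividing by the plated volume; a minimal sketch in Python (the function name and example numbers are illustrative, not from the paper):

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.01):
    """Convert a plate's colony count to CFU/mL of the original sample.

    colonies         -- colonies counted on the plate
    dilution_factor  -- e.g. 1e4 for the 1/10,000 dilution
    plated_volume_ml -- volume dropped on the agar (10 uL = 0.01 mL)
    """
    return colonies * dilution_factor / plated_volume_ml

# e.g. 25 colonies on the 1/10,000 plate from a 10 uL drop:
print(cfu_per_ml(25, 1e4))  # 2.5e7 CFU/mL
```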
Cytotoxicity to Human Gingival Fibroblasts
To detect the effect of the SNP-Ro chip on the HGFs, each chip was added to 1 mL of a serum-free medium for 30 minutes. The HGFs (~5 × 10^4 cells/well) were seeded into 24-well plates. After 24 hours of incubation, the cells were treated for another 24 hours with the prepared solution of each concentration of the SNP-Ro chips and the alginate chip. The cytotoxicity of the SNP-Ro chips was evaluated by the CellTiter 96 Aqueous One Solution Cell Proliferation Kit (MTS assay; Promega, Wisconsin, United States) according to the manufacturer's protocol.
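The MTS readout is typically normalized to untreated controls to express percent viability; the following is a generic sketch of that standard normalization (the formula and names are assumptions, not stated in the paper):

```python
def percent_viability(a_treated, a_control, a_blank=0.0):
    """Percent viable cells relative to the untreated control, computed from
    background-corrected absorbance readings of the MTS assay."""
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

# Treated wells reading higher than control indicate increased viability:
print(percent_viability(0.90, 0.80))  # ~112.5
```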
Characterization of the SNP-Ro Chips
The synthesized SNP-Ro showed a specific absorption pattern at 350 to 450 nm, which indicated the formation of the SNPs (►Fig. 1A). Plasmon resonance band spectra of all mixture ratios also displayed specific peaks at a similar wavelength. The absorbance spectra increased in correspondence with the concentration of the extract in the mixtures. 17,18 When the SNP-Ro of the three proportions was fabricated with the alginic acid into chips, the chips had a circular shape with a 3-mm radius and 0.01 ± 0.005 mm thickness. The colors of the chips were yellow to dark brown depending on the quantity of the roselle (►Fig. 1B). Under the scanning electron microscope (SEM), the synthesized SNP-Ro chips had a flat surface with white circular particles diffused over the chips. In comparison, the alginate chip without any SNPs had a surface that was not flat, with no white circular particles appearing on it (►Fig. 2).
Antimicrobial Property of the SNP-Ro Chips
From the results of the disc diffusion screening, the SNP-Ro chips were shown to clearly possess antibacterial properties against A. actinomycetemcomitans. All three ratios of the SNP-Ro chip produced inhibition zones ranging between 18.75 ± 2.08 and 19.03 ± 2.25 mm with no statistically significant differences (p > 0.05) among the SNP-Ro chip groups. However, the inhibition zone diameters of each SNP-Ro chip showed a significant difference (p < 0.05) when compared with the alginate gel chip without SNPs 19-21 (►Fig. 3). In studying the killing time, the three groups of SNP-Ro chips completely eradicated A. actinomycetemcomitans within 180 minutes (►Fig. 4).
Cytotoxic Effect of the SNP-Ro Chips on Human Gingival Fibroblasts
The percentage of viable SNP-Ro chip-treated HGFs was significantly increased when compared with the nontreated cells or alginate chip-treated cells (p < 0.05). 13 The comparison of the cytotoxic effect between each concentration of the SNP-Ro chip showed no significant difference (p > 0.05; ►Fig. 5).
Discussion
At present, SNPs have been developed for many medical uses and are mostly applied as an antibiotic substance. However, synthesis that uses chemicals as reducing agents introduces biological toxicity. As a consequence, the synthesis of SNPs with natural extracts has been found to reduce this problem. Roselle can be used as a reducing agent and capping agent in place of chemical reagents. 16 Additionally, apart from being a natural herb, roselle was found by Jung et al to have antimicrobial properties against Bacillus subtilis and Staphylococcus aureus. 22 Furthermore, the use of roselle as a capping agent to prevent the precipitation of the SNPs corresponded to the study of Rodríguez-León et al, who used ginseng extract to synthesize silver nanoparticles and counter precipitation. 23 As for the absorption of the SNP-Ro, using UV-Vis spectroscopy to measure the absorption of the solutions of the three proportions, the range was 350 to 450 nm, which is the absorption range of SNPs. This conforms with the study of Prakash et al on the synthesis of silver nanoparticles using the leaf extract of Spanish cherry as a reducing agent, which showed a result of 434 nm. 24 Additionally, the absorption in this research might be slightly different when compared with other studies due to the different types of extracts; however, the range of absorption was similar.
In addition, the SNP-Ro chip could be formed into a thin chip by using alginic acid, a natural substance that causes no harm to humans, resulting in the capability of carrying the SNPs before they are released at specific sites for a particular treatment. Analysis of the SNP-Ro chip using SEM revealed that the surface was flat and that white circular particles were spread over the chip. This corresponded to the experiment of Lee et al, who made a chip from poly(ether sulfone) with the characteristics of hybrid nanocomposite membranes by adding silver nanoparticles; using SEM, white circular particles of silver nanoparticles were found spread all over the membrane. 25 In the study of the antimicrobial effect of the SNP-Ro chips against A. actinomycetemcomitans, it was found that the SNP-Ro chips of the three proportions could release SNPs that destroy the bacterial cells. However, the antibacterial effects of the three chips displayed no statistically significant differences (p ≥ 0.05). Alternatively, an alginate chip without SNPs was incapable of resisting A. actinomycetemcomitans. The antimicrobial results of the SNP-Ro chips from this experiment confirmed the study of Bindhu and Umadhevi, who examined the antimicrobial effects of plant-synthesized SNPs. In that study, SNPs were developed using beetroot, and their antimicrobial activities against Escherichia coli, Pseudomonas aeruginosa, and Staphylococcus aureus were indicated. 26 Two main mechanisms seem to be responsible for the nanoparticles' antimicrobial activities. Bindhu and Umadhevi proposed that the silver particles release a positive charge that pairs with the negative charge on the plasma membrane of the microorganism, resulting in structural changes of the plasma membrane, the formation of small holes, and a loss of the ability to control the transport of substances in and out of the cell. This mechanism results in cell death. 26 Dakal et al demonstrated another mechanism in which the silver particles react with the chemical composition of the DNA, resulting in the inability of the cell to divide, and the cell later dies. 27 For the effect of the SNP-Ro chip on the HGFs, it was shown that all chips were not toxic to the HGFs. This corresponded to the experiment of Suwannakul et al, who also found that SNPs synthesized using Glycyrrhiza glabra root as a reducing agent, which demonstrated antimicrobial activity against Streptococcus mutans, were harmless to human gingival fibroblasts. 13 A standard initial treatment for periodontitis is scaling and root planing (SRP). This method can efficiently remove the primary etiological factor, dental plaque, and a local contributing factor, calculus, 28 by using hand and ultrasonic scalers. Nevertheless, hand instrumentation smoothens the root better than an ultrasonic scaler; as a result, hand instrumentation is a better method for reducing the adherence of subgingival plaque. 29 However, it was shown that pocket depths deeper than 6 mm were more difficult to scale. Some bacteria, such as A. actinomycetemcomitans, can invade the gingival tissue, which makes them impossible to eliminate completely from the pocket. Therefore, adjunctive treatment with antibiotics and antimicrobial agents may be required to overcome these bacteria. It has been found that probing depth and gain of clinical attachment level improved significantly following a combination of SRP and locally delivered antimicrobials. A single episode of subgingival irrigation with tetracycline HCl significantly altered the subgingival bacterial morphotypes towards those of periodontal health. 30 In this research, the silver nanochip could be another choice for local antimicrobial periodontal therapy because of its activity against A. actinomycetemcomitans, a key pathogen of periodontitis, within 180 minutes, while being nontoxic to fibroblast cells.
Furthermore, the manufacturing cost of silver nanochips was not high. However, experiments in animal models and clinical trials should be conducted before they are introduced to clinical practice for treatment of periodontitis.
Limitations
A limitation of this research is that the activities of the silver nanofilm were tested only against A. actinomycetemcomitans. Further studies should therefore test other types of bacteria, for example, Porphyromonas gingivalis and Prevotella intermedia.
Conclusion
This study developed a new method for the deposition of SNPs in alginate gel to make a thin, small chip for the sustained release of SNPs in periodontal lesions. All synthesized SNP-Ro chips containing different ratios of roselle extract demonstrated antimicrobial activity against A. actinomycetemcomitans without exhibiting cytotoxicity to HGFs. These findings suggested that the SNP-Ro chip has the potential to be developed as an adjunctive locally delivered antimicrobial agent for periodontal therapy.
Funding
This project was supported by the Science Classroom in the University Affiliated School Project (SCiUS) under Naresuan University, Naresuan University Secondary Demonstration School, and the Faculty of Dentistry, Naresuan University. The funding of the SCiUS was provided by the Ministry of Science and Technology, Thailand, which was highly appreciated.
Handwritten Devanagari Script Segmentation: A non-linear Fuzzy Approach
The paper concentrates on improvement of segmentation accuracy by addressing some of the key challenges of handwritten Devanagari word image segmentation technique. In the present work, we have developed a new feature based approach for identification of Matra pixels from a word image, design of a non-linear fuzzy membership functions for headline estimation and finally design of a non-linear fuzzy functions for identifying segmentation points on the Matra. The segmentation accuracy achieved by the current technique is 94.8%. This shows an improvement of performance by 1.8% over the previous technique [1] on a 300-word dataset, used for the current experiment.
Introduction
Segmentation of documents into lines and words, and words into individual characters and symbols extracted from optically scanned document images of handwritten text, is one of the major problems of optical character recognition (OCR). Extraction and localization of candidate characters, different modified shapes of characters and character components from isolated word images is often significant enough to make a decisive contribution towards the overall performance of the system. The better is the segmentation process, the lesser is the ambiguity encountered in recognition of candidate characters or word pieces.
In this paper we have considered the problem of segmenting handwritten text in Devanagari. The problem of segmenting extracted words into constituent characters is difficult, especially for Devanagari, an important Indian script widely used in India. Many Indian languages including Sanskrit, Marathi and Hindi (the official language of India) use the Devanagari script. Several other languages such as Gujarati, Punjabi and Bengali use scripts which are very similar to Devanagari. Devanagari is a derivative of the ancient Brahmi script, the mother of all Indian scripts. Devanagari is more complex than the familiar Roman script in several ways: (a) it has many more basic characters in its alphabet; (b) vowels are written as modifications of the consonant characters.
The work relating to OCR of Devanagari script has relatively few references in the literature; instances are found in [1], [14], [15], [16], [17]. The problem of Devanagari text segmentation has been addressed in [1], [14], [17]. In one of our earlier works [1], a fuzzy technique was proposed for segmentation of handwritten Devanagari word images. Although the technique of Devanagari character segmentation described in [17] has shown a high success rate, properly segmenting nearly 88% of characters of printed text, it will not be effective for handwritten text, where the Matras are not strictly horizontal as they are in printed words. Compared to Roman script, some special features of Devanagari script make the task of character segmentation complex for words appearing in pieces of Devanagari text. There are some characters in Devanagari script, called modified shapes, which are not positioned in a strict left-to-right non-overlapping sequence with adjacent characters in a word. For all these reasons, segmenting Devanagari words just by observing the valleys in the histogram, drawn by adding the column-wise pixel densities of the word image, is not possible.
Appearance of consecutive characters in overlapping column positions over a text line makes the problem of Devanagari word segmentation more complex compared to segmentation of English words. The problem becomes compounded with handwritten Devanagari words because of variation in sizes and shapes of handwritten characters.
In comparison to our previous work on development of a fuzzy technique for handwritten Devanagari word segmentation [1], the current work concentrates on improvement of segmentation accuracy by re-addressing some of the key modules of our previous work. The most significant contributions of the present work are on development of a new feature based approach for identification of Matra pixels from a word image, design of non-linear fuzzy membership functions for headline estimation, identification of connected components for further segmentation and finally design of non-linear fuzzy functions for identifying segmentation points on the Matra. Following sections briefly describe the key functional modules developed for the current work.
Noise Elimination
Depending on the data acquisition type, the raw data is subjected to a number of preliminary processing steps to make the data usable. Preprocessing aims to produce data that are easy to operate on accurately in document image processing. In the present work, we have used several computing metrics based on spatial attributes of pixels of the binary image. Therefore, noise pixels appearing in the background and along the contour of the word image may affect the segmentation accuracy. To remove noisy pixels and to smooth the contours, we have applied a sequence of erosion and dilation, two basic mathematical morphological operators [14], to the input handwritten word images.
Zone Determination in a Word Image
Word images, written in Devanagari script, can be partitioned horizontally into three adjacent zones as shown in Fig. 1. The portion of each word on and above the Matra is identified as the 'upper zone'. The main body of the characters in a word and the portion of the word below the main body are identified as the 'middle zone' and the 'lower zone' respectively. So the three adjacent zones of a word image, mentioned before, need to be identified before segmenting it into constituent characters. More specifically, the top row of the upper zone (R 1 ), the top row of the middle zone (R 2 ), the mid line (R 3 ) of the middle zone, the bottom row of the middle zone (R 4 ) and the bottom row of the lower zone (R 5 ) are to be identified first from the word image.
A horizontal pixel scan of the word image from top towards bottom identifies the first row with at least a single black pixel as the top row (R 1 ) of the upper zone. Similarly, another horizontal scan from bottom towards top identifies the first row again with at least a single black pixel as the bottom row (R 5 ) of the lower zone. Identification of the top row (R 2 ) and bottom row (R 4 ) boundaries of the middle zone is a difficult task in handwritten Devanagari words.
For each black pixel in each row of the word image, the length of the longest run of black pixels in the horizontal direction is computed. Then the sum of lengths of the longest runs of all black pixels in each row is computed and plotted, as discussed in [1]. The row with the highest sum represents the row with maximum horizontalness of consecutive black pixels; this row signifies the upper boundary of the middle zone (R2). To identify the lower boundary of the middle zone, the number of transitions from text to background pixels and vice versa is computed along each row, starting from R2. For each row from the bottom of the lower zone to the top of the middle zone, the sum of all such transitions is computed. The average of these row-wise sums, denoted n_ave, is then computed over all rows of the middle and lower zones. Finally, the first row from the bottom with a transition sum exceeding n_ave is identified as the lower boundary of the middle zone (R4).
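The row-profile heuristics above can be sketched in Python. The snippet below covers R1, R5 and the longest-run estimate of R2 (the transition-count step for R4 is analogous); the image is assumed to be a binary matrix of 0/1 rows, and all function names are illustrative:

```python
def longest_run_sums(img):
    """For each row of a binary image (1 = text pixel), sum over its black
    pixels the length of the horizontal run each belongs to: a run of
    length L contributes L for each of its L pixels, i.e. L*L per run."""
    sums = []
    for row in img:
        total, count = 0, 0
        for px in row + [0]:          # sentinel closes a trailing run
            if px:
                count += 1
            else:
                total += count * count
                count = 0
        sums.append(total)
    return sums

def zone_rows(img):
    """Estimate R1 (top row with ink), R2 (row of maximum horizontalness,
    taken as the Matra / top of the middle zone) and R5 (bottom inked row)."""
    inked = [r for r, row in enumerate(img) if any(row)]
    r1, r5 = inked[0], inked[-1]
    sums = longest_run_sums(img)
    r2 = sums.index(max(sums))
    return r1, r2, r5
```

On a toy word image with a wide Matra stripe, the row containing the long run dominates the sum and is picked as R2.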
Headline Estimation: Fuzzy Approach
Matra or the common headline of a word image may be identified as the continuous horizontal stripe of black pixels appearing at the top of most of the characters and some of modified shapes in the word. All the component characters and modified shapes appear to touch each other only through the Matra of the word. However, in a cursive handwriting the appearance of a Matra is often disjoint and wavy. This makes the identification of potential Matra pixels a challenging task. In the present work, we have developed two fuzzy measures to identify the membership value of each pixel for its potential of belongingness to Matra.
Horizontalness feature
The horizontalness feature of Devanagari script is very unique and appears very prominently in word images. This horizontalness property of the Matra may be extracted from the row-wise sum of continuous runs of black pixels. The value is normalized with respect to the maximum longest-run value of any pixel within the word image.
Verticalness feature
Many characters and modified shapes in Devanagari script have a vertical stripe of black pixels as part of their shapes. This vertical stripe often appears at the right side, middle or left side of the characters. These stripes touch the Matra of a word image and often extend to the bottom of the respective characters or modified shapes. In the present work, we have developed a technique to identify prominent vertical stripes in the word image and determine their average top and bottom rows within the principal segments. This verticalness property may be extracted from the column-wise count of continuous runs of black pixels. The value is normalized with respect to the maximum longest-run value of any pixel within the word image.
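Both run-length features can be computed with the same scan applied to the image and to its transpose; a sketch with illustrative names, assuming a binary image given as a list of 0/1 rows:

```python
def run_lengths_along_rows(img):
    """Per-pixel length of the horizontal black run containing that pixel."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        c = 0
        while c < w:
            if img[r][c]:
                s = c
                while c < w and img[r][c]:
                    c += 1
                for k in range(s, c):
                    out[r][k] = c - s   # every pixel of the run gets its length
            else:
                c += 1
    return out

def horizontalness_verticalness(img):
    """Per-pixel horizontal and vertical run lengths, each normalized by the
    maximum run length found anywhere in the image."""
    h_runs = run_lengths_along_rows(img)
    transposed = [list(col) for col in zip(*img)]
    v_t = run_lengths_along_rows(transposed)
    v_runs = [list(col) for col in zip(*v_t)]    # transpose back
    h_max = max(max(row) for row in h_runs) or 1
    v_max = max(max(row) for row in v_runs) or 1
    h = [[x / h_max for x in row] for row in h_runs]
    v = [[x / v_max for x in row] for row in v_runs]
    return h, v
```

Matra pixels then score close to 1 in horizontalness, while pixels on character stems score high in verticalness.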
Fuzzy Membership Function for Headline Estimation
We have designed a bell-shaped membership function to map the horizontalness feature value of each row to determine its belongingness in the Matra region. The generalized bell function depends on three parameters a, b, and c:

f(x; a, b, c) = 1 / (1 + |(x - c)/a|^(2b))

where the parameter b is usually positive, the parameter c locates the center of the curve (i.e., R2), and x is the row index of any black pixel P_xy in the word image. For computation of the fuzzy feature values, we have designed a fuzzy function, viz., f_h(x_h, f(x; a, b, c)) for the horizontalness feature, where x_h is the normalized horizontalness component of each pixel P_xy under consideration and 0 ≤ x_h ≤ 1. Fig. 5 shows a diagrammatic representation of the bell-shaped fuzzy membership function designed for the present work.
A pixel P_xy is identified as a headline pixel if its value exceeds the mean of all such f_h(P_xy) values within the region R1-R3.
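The generalized bell function is straightforward to implement; a minimal sketch in Python (the paper later chooses a = 2, b = 1 and assigns the row index R2 to c):

```python
def gbell(x, a, b, c):
    """Generalized bell membership function:
    f(x; a, b, c) = 1 / (1 + |(x - c) / a| ** (2 * b)).
    a sets the width, b (usually positive) the slope, c the center."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

# The value is 1 at the center c and falls to 0.5 at x = c +/- a:
print(gbell(10, 2, 1, 10))  # 1.0
print(gbell(12, 2, 1, 10))  # 0.5
```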
Fuzzy Segmentation Features
After the Matra of a word is identified, the next task is to identify certain column positions on the Matra from which the word can be vertically segmented into constituent characters. Such column positions are called terminal points of segments. One prominent feature for identifying terminal points is the number of black pixels along each vertical column position on the Matra: the fewer the black pixels along a vertical column position on the Matra, the higher its degree of belongingness (µ1) to the set of terminal segment points. On this basis a bell-shaped fuzzy membership function (µ1), as discussed in Section 4.3, is designed. Another feature (F2) is considered within the region R2-R3; here again, the greater the distance, the lower the degree of belongingness (µ2) of the associated point to the set of segment terminal points. Detailed descriptions of these features are already given in [2]. The necessary membership functions (µ1, µ2) for these features are shown in Fig. 6.
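The column-wise feature µ1 can be sketched by passing each Matra column's black-pixel count through the same kind of bell function, centered at a zero count so that fewer pixels yield higher membership; the names and default parameters here are illustrative, not the paper's exact design:

```python
def gbell(x, a, b, c):
    """Generalized bell membership function."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def terminal_point_memberships(img, matra_rows, a=2.0, b=1.0):
    """mu_1 per column: the fewer black pixels a column carries across the
    Matra rows, the higher its membership among terminal segment points.
    Centering the bell at zero realizes 'fewer pixels -> higher mu_1'."""
    width = len(img[0])
    counts = [sum(img[r][c] for r in matra_rows) for c in range(width)]
    return [gbell(n, a, b, c=0.0) for n in counts]
```

Columns where the Matra stripe is thin or broken then score near 1 and become candidate segmentation points.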
Results and discussion
To evaluate the performance of the technique described here for segmentation of word images, a total of 300 word images were collected from different documents to include a variety of writing styles. The documents were digitized using a flatbed scanner at a resolution of 300 dpi, and the digitized documents were finally binarized through simple thresholding. Due to the non-availability of standard datasets for handwritten Devanagari word images, the performance of the current technique could not be compared with the existing segmentation algorithms described in [11][12][13][14]. However, we have compared the performance of the present work with one of our previous works, described in [1].
For the design of the fuzzy function, as shown in section 4.3, the values of the two positive constants a and b were chosen as 2 and 1, respectively. As discussed earlier, the row index of the lower boundary of the upper zone (R_2) is assigned to the third constant c in the said fuzzy function.
Some sample images of Devanagari words that were properly segmented by the present technique are shown in Fig. 7, and some on which the technique fails at certain points are shown in Fig. 8. In Figs. 7-8, the pixel positions identified as potential segment points on the Matra are shown with darker shading, whereas the other pixel positions on the same Matra are shown with lighter shading. To evaluate the segmentation performance of the present technique, the following expression is developed.
Success rate = (C_t / (C_t + C_u)) × 100, where C_t is the number of segment terminal points producing true segmentation and C_u is the number of segment terminal points producing under-segmentation. Whether a segment terminal point identified by the present technique produces true segmentation or under-segmentation is determined here through visual observation. On this basis, the success rate of the present technique is computed to be 94.8% over the 300 word images. On the same word dataset, our previous technique shows a success rate of 93%, so the present technique yields an improvement of 1.8% over the technique reported in [1]. The improved performance on the same dataset validates our choice of a non-linear fuzzy membership function over the previous choice of triangular membership functions. We also observed that the use of the horizontalness and verticalness features refines the selection of headline pixels relative to the previous technique. Finally, considering the enormous complexity of the Devanagari script, the contribution of the present approach may be considered significant, with satisfactory segmentation performance. | 2015-01-22T04:05:25.000Z | 2015-01-22T00:00:00.000 | {
"year": 2015,
"sha1": "34126af8f455399cb5a5e4d7e7b257e2aa47f11f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "34126af8f455399cb5a5e4d7e7b257e2aa47f11f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
136134116 | pes2o/s2orc | v3-fos-license | The Determination of Interfacial Shear Strength in Short Fiber Reinforced Poly Ethylene Terephthalate by Kelly-Tyson Theory
The measurement accuracy of the interfacial shear strength value obtained by the modified Kelly-Tyson equation method was studied and compared with the nano-indentation testing method. The results and an influential factor are described, and a source of error in the modified Kelly-Tyson equation is identified so that incorrect measurements can be avoided. To study the different interfacial shear strength behaviors, short fiber reinforced PET composites were fabricated. In this study, an advanced fabricating technique for short fiber reinforced composites, the direct fiber feeding process, was used to fabricate GF/recycled PET for studying the interfacial shear strength. The results indicate that the modified Kelly-Tyson equation method provides an accurate interfacial shear strength value if it is applied to samples without horizontally aligned fibers; high fiber loading content samples should therefore be avoided to obtain more accurate results. A large horizontally aligned fiber area in the specimens resulted in extremely incorrect measurement of the interfacial shear strength value by the modified Kelly-Tyson equation method. The fiber agglomeration factor and the sensitive horizontally aligned fiber area must be considered for improving the effectiveness of the equation.
Introduction
The fiber-matrix interfacial shear strength is the main factor determining the mechanical properties of composite materials. Practically, a stronger composite has better adhesive strength between the fibers and the matrix. The magnitude of the interfacial shear strength is based on the surface properties of both components [1]-[6]. DiBenedetto [7] studied the effects of different fiber surface treatments on the interfacial shear strength of composites. Several studies have estimated the interfacial shear strength by different approaches; for example, Takaku [8] measured directly the interfacial shear strength (or critical fiber length) between fiber and matrix. Regarding interfacial shear strength measurement, alternative accurate and economical measurement methods have been studied. To accurately measure the interfacial shear strength of composites, Kelly and Tyson [9]-[14] proposed an equation to determine the interfacial shear strength. The well-known Kelly-Tyson model considers the effect of fibers together with the fiber length. By assuming a constant shear stress, the interfacial shear strength can be determined through a simple force balance on a fiber fragment, as shown in Equation (1):

τ = σ_f d / (2 l_c)    (1)

where τ is the interfacial shear strength, σ_f is the ultimate fiber strength at the critical length, d is the fiber diameter, and l_c is the fiber critical length.
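Equation (1) is straightforward to evaluate; the sketch below uses illustrative numbers (not measured values from this study), with all lengths in millimetres.

```python
def kelly_tyson_tau(sigma_f, d, l_c):
    """Interfacial shear strength from Eq. (1): tau = sigma_f * d / (2 * l_c)."""
    return sigma_f * d / (2.0 * l_c)

# Illustrative values: sigma_f = 1500 MPa, d = 13 um, l_c = 0.5 mm.
tau = kelly_tyson_tau(sigma_f=1500.0, d=13e-3, l_c=0.5)   # lengths in mm
print(round(tau, 2))   # → 19.5 MPa
```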
The assumption of a constant shear stress was originally used by Kelly and Tyson. In this study, the Kelly-Tyson equation was used to find the fiber critical length in order to measure the interfacial shear strength. The composite strength form of the equation is shown in Equation (2) [9]-[14]:

σ_c = Σ_{l_i < l_c} (τ l_i V_i / d) + Σ_{l_j ≥ l_c} σ_f V_j [1 − σ_f d / (4 τ l_j)] + σ_m V_m    (2)

The equation includes two main fiber contributions: fibers with sub-critical length shorter than l_c and fibers with length longer than l_c (V_i and V_j are the corresponding volume fractions, and σ_m V_m is the matrix contribution). In addition, in order to find the critical fiber length more accurately, the Kelly-Tyson equation is modified by adding one more important factor, the fiber orientation efficiency factor (f_0). The modified Kelly-Tyson equation is presented in Equation (3) [9]-[14]:

σ_c = f_0 [ Σ_{l_i < l_c} (τ l_i V_i / d) + Σ_{l_j ≥ l_c} σ_f V_j (1 − σ_f d / (4 τ l_j)) ] + σ_m V_m    (3)

In this study, the measurement accuracy of the interfacial shear strength value obtained by the modified Kelly-Tyson equation method was investigated. Short fiber reinforced polyethylene terephthalate (PET) composites were fabricated and their interfacial shear strength values measured. The accuracy of the modified Kelly-Tyson equation method is compared to that of the nano-indentation testing method. The results and an influential factor are described. A source of error in the modified Kelly-Tyson equation is identified so that incorrect measurements can be avoided. In addition, an advanced fabricating technique for short fiber reinforced composites, the direct fiber feeding process, is used and the resulting interfacial shear strength is revealed.
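A sketch of how Equations (2) and (3) are typically evaluated over a measured fiber-length histogram. The discrete Bowyer-Bader-style form below, including the matrix term σ_m V_m, is a common implementation of the modified Kelly-Tyson model and may differ in detail from the exact form used in this study; all numbers are illustrative.

```python
def composite_strength(lengths, vols, tau, sigma_f, d, sigma_m, v_m, f0=1.0):
    """Modified Kelly-Tyson prediction of composite tensile strength.

    lengths[i], vols[i]: fiber length and volume fraction of class i.
    Sub-critical fibers (l < l_c) carry tau*l*V/d; super-critical fibers
    carry sigma_f*V*(1 - sigma_f*d/(4*tau*l)).  f0 is the fiber
    orientation efficiency factor; sigma_m*v_m is the matrix term."""
    l_c = sigma_f * d / (2.0 * tau)
    sub = sum(tau * l * v / d for l, v in zip(lengths, vols) if l < l_c)
    sup = sum(sigma_f * v * (1.0 - sigma_f * d / (4.0 * tau * l))
              for l, v in zip(lengths, vols) if l >= l_c)
    return f0 * (sub + sup) + sigma_m * v_m

# In practice tau is fitted so that this prediction matches the measured
# tensile strength of the composite; the inputs below are illustrative.
print(round(composite_strength(
    lengths=[0.2, 0.8], vols=[0.05, 0.10],
    tau=20.0, sigma_f=1500.0, d=13e-3, sigma_m=50.0, v_m=0.85, f0=0.6), 1))
```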
Materials
The high-quality recycled PET pellets (RPET), recycled from PET plastic bottles, were supplied by Negoro Sangyo Co., Ltd., with an IV of 0.65 dl/g and a molecular weight of 12,600 g/mol. Glass fiber roving (EX-1844) was supplied by Nippon Electric Glass Co., Ltd., Japan. Glass fiber reinforced recycled-PET (GF/RPET) composites were fabricated by the direct fiber feeding injection molding (DFFIM) process.
Specimen Preparation
To study the different interfacial shear strength behaviors, short fiber reinforced recycled-PET (GF/RPET) composites were fabricated. Dumbbell specimens of the DFFIM GF/RPET composite samples were fabricated using the direct fiber feeding injection molding process. In the DFFIM process, the fiber roving is directly fed into the venting hole while the neat recycled PET is fed into the hopper; in addition, the matrix feeding speed is varied to fabricate GF/RPET composites with different fiber loading contents. By changing the matrix feeding speed, four different fiber loading contents of GF/RPET composites were obtained for studying the interfacial shear strength: 16 wt.%, 28 wt.%, 43 wt.% and 55.2 wt.%. A schematic of the DFFIM process is shown in Figure 1. The barrel temperatures at the locations just before and after the venting hole were set at 260˚C-280˚C and 250˚C-275˚C, respectively. The mold temperature was set at 60˚C with 30 s of cooling time. The screw rotational speed was fixed at 150 rpm.
Tensile Test
To apply the modified Kelly-Tyson equation, the tensile strength of the composite sample is required. Following ASTM D638, the specimens were tested at a speed of 1 mm/min with a 115 mm grip distance.
Morphology Observation
The observations on phase morphology were carried out by the scanning electron microscope (JEOL/JSM-5200), which was set at 15 kV.Gold was sputtered onto the specimens for electron conductivity.
The Fiber Length Measurement
The remaining glass fibers were cast on a glass slide and then observed by optical microscope; their lengths were measured using ImageJ software. The number average fiber length (L_N) was calculated using the following Equation (4) [14]:

L_N = Σ N_i L_i / Σ N_i    (4)

where N_i is the number of fibers of length L_i.
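Equation (4) is a simple weighted average over the measured length histogram; a sketch with illustrative counts:

```python
def number_average_length(counts, lengths):
    """Equation (4): L_N = sum(N_i * L_i) / sum(N_i)."""
    return sum(n * l for n, l in zip(counts, lengths)) / sum(counts)

# Illustrative histogram: 10 fibers of 0.2 mm, 30 of 0.5 mm, 10 of 1.0 mm.
print(number_average_length([10, 30, 10], [0.2, 0.5, 1.0]))  # → 0.54
```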
Fiber Orientation Index Measurement
The dumbbell specimens were cut at the middle part. To observe the cross section, the cut dumbbell samples were prepared using a polishing machine. After polishing, the cross-section of the dumbbell was photographed using an optical microscope in order to observe the fiber orientation within the specimen. An example optical photograph is shown in Figure 2(a). Two kinds of fiber cross-section can be found: a circular fiber shape and an elliptical fiber shape. The circular fiber shape indicates that the fiber is perpendicular to the section plane. On the other hand, the elliptical fiber shape indicates that the fiber is at an angle to the section plane. The elliptical fiber shapes were marked on the cross section to determine the angle θ, as shown in Figure 2(b). The fiber orientation index can then be calculated using Equation (5) [14] [15], where θ_n = cos⁻¹(b/a), with a and b the major and minor axes of the ellipse.
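The out-of-plane angle follows directly from the measured ellipse axes via θ_n = cos⁻¹(b/a). The cos²-type averaging into a single index shown below is for illustration only, since the exact form of Equation (5) is not reproduced above.

```python
import math

def fiber_angle(a, b):
    """theta_n = arccos(b/a), with a, b the major and minor ellipse axes."""
    return math.acos(b / a)

def orientation_index(axes):
    # Illustrative cos^2-average; the exact Eq. (5) may differ.
    return sum(math.cos(fiber_angle(a, b)) ** 2 for a, b in axes) / len(axes)

axes = [(10.0, 10.0), (20.0, 10.0), (14.1, 10.0)]  # (major, minor) pairs
print(round(fiber_angle(20.0, 10.0), 3))  # → 1.047 rad (60 degrees)
print(round(orientation_index(axes), 2))
```

A circular cross-section (a = b) gives θ = 0, i.e. a fiber perpendicular to the section plane; elongated ellipses give larger angles and a lower index.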
Nano-Indentation Test
The dumbbell specimens were cut at the middle part and then polished to reduce the thickness to 200 µm. The prepared samples were measured for interfacial shear strength with an Agilent Technologies Nano Indenter G200. The maximum load and loading rate were 400 mN and 2 mN/s, respectively.
Results and Discussion
To verify the accuracy of the modified Kelly-Tyson equation, the nano-indentation test was carried out. The glass fiber reinforced recycled-PET composite made by the direct fiber feeding injection molding process (DFFIM GF/RPET) was measured using the nano-indentation test (Table 1). In this calculation, the fiber strength was assumed to be 1500 MPa [17]. For the four different fiber loading contents measured by the equation, the calculated interfacial shear strength values are different.
Indeed, using the same material, the calculated interfacial shear strength value is supposed to be the same; thus, using samples with different fiber loading contents should not change it. However, the calculated l_c values increase when the calculation is conducted on high fiber loading content samples. The average fiber orientation values of all samples appear almost similar; nevertheless, the SEM photographs contradict the average fiber orientation results. SEM photographs of the dumbbell cross sections at the central position of the DFFIM GF/RPET composites are shown in Figure 3.
The SEM photographs indicate that there is a horizontally aligned fiber area, especially at the central position; furthermore, the size of this area tends to increase with increasing fiber loading content. This horizontally aligned fiber area is likely very sensitive for the strength of the composite and must strongly affect the measurement by the modified Kelly-Tyson equation, because the fibers lying flat in this area cannot provide good reinforcing efficiency; the strength in this area is therefore very low, which affects the measurement accuracy. Regarding this horizontally aligned fiber area, the fiber orientation values of the DFFIM GF/RPET samples were further investigated at each part of the dumbbell specimen. The measured positions in the dumbbell sample are shown in Figure 4, and the relationship between the fiber orientation values and the normalized thickness of the specimens is shown in Figure 5. The results indicate that at the central position the fiber orientation values are poor compared with the skin position, because in the central core region the fiber orientation is normally forced by the orientation imposed by the gate. Usually the melt develops from the gate into the mold, and this disturbed flow tends to align fibers perpendicular to the flow direction. At the injection-flow edge, the orientation is caused by the fountain flow at the flow front, which generally aligns fibers along the flow direction; this shearing flow between the wall and the melted polymer results in preferential alignment of the fibers approximately along the flow direction [18]. In addition, the fiber orientation values become worse as the fiber loading content increases. These results show that the average fiber orientation value is not good enough for use in the modified Kelly-Tyson equation if the calculation is conducted with a high fiber loading content sample, because the poor fiber orientation at the central position results in an increasingly incorrect calculation. However, using the modified Kelly-Tyson equation method on a low fiber loading content sample provides a very accurate interfacial shear strength value. From these results, it can be concluded that to measure the interfacial shear strength by the modified Kelly-Tyson equation, samples containing high fiber agglomeration or high fiber loading content should be avoided.
Conclusion
The modified Kelly-Tyson equation method accurately provides the interfacial shear strength value, provided it is applied to samples without large horizontally aligned fiber areas; accordingly, high fiber loading content samples should be avoided.
Figure 1. Schematic of the direct fiber feeding injection process.
Figure 2. (a) An example of an optical photograph at the cross section of a dumbbell specimen for measuring fiber orientation, and (b) definition and determination of the fiber orientation [16].
The calculated interfacial shear strength value tends to decrease as the fiber loading content increases. The measured interfacial shear strength values of the 16 wt.%, 28 wt.%, 43.3 wt.% and 55.2 wt.% GF/RPET composites were compared with the corresponding nano-indentation values, giving percent differences of −1.2%, −8.0%, −35.7% and −31.6%, respectively. The 16 wt.% fiber loading content thus gave the most accurate result relative to nano-indentation.
Figure 4. The measured positions of fiber orientation in the dumbbell sample.
Figure 5. The relationship between orientation factors and normalized thickness of the DFFIM GF/RPET composite.
Table 1. The required and measured data of GF/RPET composites with different fiber loading contents for calculating the interfacial shear strength by the modified Kelly-Tyson equation, and the comparison of τ between the modified Kelly-Tyson equation and nano-indentation. | 2019-04-29T13:17:28.341Z | 2017-07-26T00:00:00.000 | {
"year": 2017,
"sha1": "ca4589c52333e094b225d45237abcac9f4133338",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=77990",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ca4589c52333e094b225d45237abcac9f4133338",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
219720877 | pes2o/s2orc | v3-fos-license | An almost Zoll affine surface
An affine surface is said to be an affine Zoll surface if all affine geodesics close smoothly. It is said to be an affine almost Zoll surface if thru any point, every affine geodesic but one closes smoothly (the exceptional geodesic is said to be alienated as it does not return). We exhibit an affine structure on the cylinder which is almost Zoll. This structure is geodesically complete, affine Killing complete, and affine symmetric.
geodesics will in general be different. The geometries are said to be strongly projectively equivalent if ω is exact, i.e., if ω = dφ for some smooth function φ.
In this setting, we shall say that φ provides a strong projective equivalence from M = (M, ∇) to φ M := (M,∇). We say that M is strongly projectively flat if M is strongly projectively equivalent to a flat connection. There is a question of which projective structures can be metrized, i.e. arise as the Levi-Civita connection of some metric; we refer to Bryant et al. [7] for further details concerning this question.
One says that an affine surface is an affine Zoll surface if all of the geodesics are simple closed curves; this is a projective question. LeBrun and Mason [13] (see also later related work in [14]) showed that the only compact surfaces which admit affine Zoll structures are S² and RP². Consequently, it is natural to weaken the condition just a little. We say that an affine surface is almost Zoll if for every point of the surface, there is a single geodesic, which will be called the alienated geodesic, which does not close, and such that all other geodesics thru that point are simple closed curves which return to the initial point. In this brief note, we present several examples of this phenomenon. In Section 2, we discuss the quasi-Einstein equation and present its basic properties in Theorem 2.1. In Section 3, we introduce the affine surfaces M(c) that will form the focus of our investigation. In Theorem 3.1, we show that M(c) is affine homogeneous, we determine the Lie algebra of affine Killing vector fields of M(c), and we determine the connected component G(c) of the 4-dimensional Lie group of affine diffeomorphisms. In Theorem 3.2, we show that the geometries M(c) and M(c̃) are neither strongly projectively equivalent nor locally affine equivalent for c ≠ c̃.
The quasi-Einstein equation
The solution space of the quasi-Einstein equation is a fundamental invariant in the theory of affine surfaces. We define the Hessian by setting H f := ∇df, i.e., (H f)(X, Y) = XY f − (∇_X Y)f. Let Q(M) be the solution space of the quasi-Einstein equation H f + f ρ_s = 0, where ρ_s is the symmetrized Ricci tensor. There is a close relationship between strong projective equivalence and the solutions of the quasi-Einstein equation. We refer to Brozos-Vázquez et al. [6] and to Gilkey and Valle-Regueiro [8] for the proof of the following result, as well as for a further discussion of the quasi-Einstein equation and its use, through the modified Riemannian extension, to study Walker metrics on the cotangent bundle of an affine surface.
The affine structures M(c)
The following family of surfaces was introduced by D'Ascanio et al. [2] in their study of geodesically complete homogeneous affine surfaces. Let M(c) be the affine structure on R² whose only (possibly) non-zero Christoffel symbols are Γ_22^1 = (1 + c²)x_1 and Γ_22^2 = 2c.

Theorem 3.1. Let G(c) be the set of all smooth maps from R² to R² of the form T(α, β, γ, δ).

Proof. We follow the discussion in Gilkey et al. [9]. A direct computation shows that {e^{cx_2} cos(x_2), e^{cx_2} sin(x_2), x_1} ⊂ Q(M(c)). Consequently, by Theorem 2.1, M(c) is strongly projectively flat, and each element of G(c) is a diffeomorphism of R² preserving the affine structure. We show that G(c) is a Lie group with Lie algebra g(c) by computing the composition of two such maps and the inverse map explicitly.
Following the notation of Opozda [16], an affine structure on R² is said to be Type A if the Christoffel symbols are constant; work of Arias-Marco and Kowalski [1] and of Vanzurova [20] shows that if the Ricci tensor of such a structure is non-zero, then it is not metrizable, i.e., it does not arise as the Levi-Civita connection of a pseudo-Riemannian metric. Let M_0(c) be the affine structure on R² where the only (possibly) non-zero Christoffel symbols are Γ_11^1 = 1, Γ_22^1 = 1 + c², and Γ_22^2 = 2c. This is a Type A geometric structure on R² in the notation of Opozda [16]; it is linearly equivalent to the surfaces M_5^c of [5] but is in a slightly more convenient form for our purposes. Let (u^1, u^2) be coordinates on R². A direct computation shows that the Ricci tensor of M_0(c) is ρ = (1 + c²) du² ⊗ du², so Rank{ρ} = 1. Results of Brozos-Vázquez et al. [5] show that if M is a Type A structure on R² with Rank{ρ} = 1, then an affine invariant α(M) is defined.

Theorem 3.2.
(1) The geometries M(c) and M(c̃) are not locally affine equivalent for c ≠ c̃.
(2) The geometries M(c) and M(c̃) are not strongly projectively equivalent for c ≠ c̃.
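The rank-one Ricci tensor used above can be checked symbolically from the constant Christoffel symbols of M_0(c). The sketch below uses the trace convention ρ(Y, Z) = tr(X ↦ R(X, Y)Z), which is an assumption about the paper's conventions; with it, one recovers ρ = (1 + c²)du² ⊗ du², consistent with the tensor ρ = (1 + c²)dx² ⊗ dx² stated in Section 3.3 for M(c).

```python
import sympy as sp

c = sp.symbols('c', real=True)
n = 2
# Christoffel symbols of M_0(c): only Gamma^1_{11} = 1,
# Gamma^1_{22} = 1 + c^2, Gamma^2_{22} = 2c are non-zero.
# They are constant, so all derivative terms in the curvature vanish.
G = [[[sp.Integer(0)] * n for _ in range(n)] for _ in range(n)]  # G[k][i][j]
G[0][0][0] = sp.Integer(1)
G[0][1][1] = 1 + c**2
G[1][1][1] = 2 * c

def R(k, i, j, l):
    # (R(d_i, d_j) d_l)^k with constant Gammas: quadratic terms only.
    return sp.expand(sum(G[k][i][m] * G[m][j][l] - G[k][j][m] * G[m][i][l]
                         for m in range(n)))

# Ricci rho_{jl} = R^i_{i j l} (trace over the first slot).
rho = sp.Matrix(n, n, lambda j, l: sum(R(i, i, j, l) for i in range(n)))
print(rho)   # expected: Matrix([[0, 0], [0, c**2 + 1]]), so Rank(rho) = 1
```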
Results of [5] show that α(c) is an affine invariant in this setting. One computes α(M(c)) explicitly; its dependence on c yields Theorem 3.2. The curves σ_{u,v;a,b}(t) are the geodesics of M(c) with initial position (u, v) and initial velocity (a, b). For c > 0, M(c) is geodesically incomplete since these curves are only defined for the parameter range 1 + 2bct > 0.
Proof. One verifies directly that the curves σ_{u,v;a,b} are geodesics with the given initial conditions. Since they are defined for all time if c = 0, M(0) is geodesically complete. On the other hand, they fail to be defined when 1 + 2bct = 0, and thus M(c) is geodesically incomplete for c > 0.
We give below a picture of the geodesic structure at the origin (u, v) = (0, 0); we emphasize that the choice of base point makes no difference since the geometry is homogeneous. The geodesics fall into two families: those with b > 0 all meet at the point (0, π); those with b < 0 all meet at the point (0, −π); and those with b = 0 lie along the x_1-axis. We also present similar pictures for the geodesics that start at (u, v) = (1, 0) and at (u, v) = (2, 0).
Figure 1. Geodesic structure
Let φ(x_1, x_2) = (e^{cx_2}x_1, x_2). By Theorem 2.1, φ*M(c) is strongly projectively equivalent to M(0). Thus, modulo the diffeomorphism φ, the picture of the geodesic structure for M(c) is the same as that of M(0).
3.1.
An affine geometry on the cylinder. Let Φ(x_1, x_2) = (x_1, x_2 + 2π) generate a fixed point free action of Z on R²; this corresponds to taking α = 0, β = 0, γ = 0, and δ = π in Theorem 3.1. We divide by this action to define an affine structure on the cylinder C := (R × S¹, ∇). We verify immediately that if c = 0, then Φ is in the center of the group G(0), and hence G(0) extends to a transitive affine action on C; this is a homogeneous geometry. All the geodesics with a non-trivial vertical component close smoothly, and the horizontal geodesic is the alienated geodesic. This is the desired almost Zoll affine geometry. The exponential map is not surjective on R². The structure is globally affine homogeneous, affine geodesically complete, and affine Killing complete.
3.2.
An affine geometry on the Möbius strip. Let Ψ(x_1, x_2) := (−x_1, x_2 + π); this generates a fixed point free action of Z on R². Let L := R²/Ψ(Z) be the quotient; this is the Möbius strip. In a purely formal sense, this corresponds to taking α = π√−1, β = γ = 0, and δ = π in Theorem 3.1. A direct computation shows that this is a homogeneous affine structure as well, and we have a double cover Z₂ → C → L on which G(0) acts equivariantly. With this structure, the Möbius strip is affine homogeneous, geodesically complete, affine complete, and almost Zoll. Let τ(t) = log(1 + 2bct). For c > 0, this geometry is geodesically incomplete; the geodesics are only defined for the parameter range 1 + 2bct > 0. Still, all geodesics thru the origin either focus vertically above or below the x_1-axis or are horizontal, and the general pattern is the same. A geodesic σ is an alienated geodesic if and only if ρ(σ̇, σ̇) = 0. Dividing by Z yields an almost Zoll affine geometry. This geometry is locally homogeneous but not globally homogeneous, since Φ is not in the center of G(c) for c ≠ 0, owing to the presence of the exponential factor e^{cx_2}. Let Ť(α, δ)(x_1, x_2) = (αx_1, x_2 + δ) define an affine action of (R − {0}) × R on R². This action commutes with Φ; there are two orbits on C: the horizontal axis and everything else. Thus this geometry is "almost" affine homogeneous; the complement of the alienated geodesic thru (0,0) is homogeneous, as is the alienated geodesic thru (0,0). We also obtain an almost Zoll geometry on the Möbius strip.
3.3. Speed. We have ρ = (1 + c²)dx² ⊗ dx². We use ρ to define a positive semi-definite inner product and let ρ(X, X) be the "speed". We suppose c ≠ 0. A direct computation then shows that the alienated geodesics are exactly the null geodesics in the geometry M(c). Although the remaining geodesics all return to the base point in the cylinder, the speed increases to ∞ as t → −1/(2bc); the return to the base point occurs more and more rapidly.
3.4. The projective tangent bundle. We digress briefly to relate this example to the results of LeBrun and Mason [13]. If M is a smooth manifold, let P(M) be the projective tangent bundle. If ∇ is an affine structure on M, the tangents of lifted geodesics define a natural foliation on P(M); the affine structure is said to be tame if this foliation on P(M) is locally trivial. LeBrun and Mason showed (see Lemma 2.7 and Lemma 2.8):
(1) The universal cover of M with the induced affine structure is tame Zoll.
(2) M is compact and any two points of M can be joined by an affine geodesic.
Our examples show that these results fail for almost Zoll structures. Let M be an almost Zoll surface. Associating to any point of M the tangent to the alienated geodesic through that point defines a natural section of P(M); let P̊(M) be the complement of this section. We adopt the notation of Section 3.1 to define C; the alienated geodesics are the horizontal geodesics, and σ_{u,v;a,b} is not alienated if and only if b ≠ 0. Since we are working projectively, we may set b = 1. Let χ(r, s, t) := σ_{r,0,s,1}(t) parametrize P̊(C); this identifies P̊(C) with R × R × S¹ since, of course, σ_{r,0,s,1}(t) is periodic with period 2π in t. This shows that the foliation of P̊(C) by lifted geodesics is a trivial circle bundle over R² and hence is tame. Similarly, if we lift to the universal cover, the foliation of P(TR²) − S(R²) by lifted geodesics is a trivial R bundle over R² and hence tame. Clearly, however, the affine structure on R² is no longer almost Zoll, and we cannot join any two points of R² by geodesics. And the cylinder does not have finite fundamental group. On the other hand, the cylinder is the oriented double cover of the Möbius strip.
3.5. Global topology. As noted above, the tangent to the alienated geodesic thru any point of an almost Zoll manifold is a section to P(T M ). Consequently, if M is compact, then the Euler-Poincare characteristic of M vanishes. Thus, in particular, the only compact surfaces which could potentially admit an almost Zoll structure are the torus and the Klein bottle. The example we have constructed passes to the cylinder and the Möbius strip; it does not pass to the torus or the Klein bottle. We do not know if these admit an almost Zoll structure but we suspect the answer is no.
Effect of the fundamental group
Up to affine equivalence, there is a unique surface with a Z₃ symmetry. Let T ∈ GL(2, R) satisfy T³ = id and T ≠ id. Let e₂ := Te₁ and set Te₂ = ae₁ + be₂. The equation T³ = id forces a = b = −1, and we have | 2020-06-18T01:01:33.877Z | 2020-06-17T00:00:00.000 | {
"year": 2020,
"sha1": "1dced649bbcd516717fa760ac9156f47178b5414",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1dced649bbcd516717fa760ac9156f47178b5414",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
6703 | pes2o/s2orc | v3-fos-license | The borderline resectable/locally advanced pancreatic ductal adenocarcinoma: EUS oriented
Staging pancreatic cancer is mandatory for clinical practice. Endoscopic ultrasound (EUS) is a valuable technique with high accuracy in local invasion assessment. EUS can be considered as one stop shop for pancreatic diseases offering valuable information concerning diagnosis, staging, and therapy decisions. For an accurate staging of pancreatic cancer, clinicians have important imaging tools in clinical practice: computed tomography (CT) scan, magnetic resonance imaging (MRI), EUS as well as diagnostic laparoscopy. The aim of accurate staging is to establish the optimal therapy in these patients. Although surgery is the only curative option in resectable tumors, in clinical practice, it is often difficult to obtain an accurate staging due to inherent limitations of imaging procedures.
T STAGING
Some patients with pancreatic cancer are classified as borderline, with locally advanced disease. In this set of patients, imaging methods such as EUS seem to represent an accurate means of selecting patients for curative surgery. The assessment of pancreatic cancer resectability is based mainly on the extent of peripancreatic vascular involvement by the tumor mass. [1] According to the American Joint Committee on Cancer, [2] a pancreatic tumor is considered surgically resectable (curative) in a few situations: no involvement of the superior mesenteric vein (SMV) or SMV-portal vein (PV) confluence (defined as occlusion or encasement); no direct extension to the superior mesenteric artery (SMA); no direct extension to the inferior vena cava (IVC), aorta, or celiac trunk; no extensive peripancreatic or celiac lymph node involvement; and no distant metastases (liver, peritoneal, etc.). There are some situations in which the primary tumor is considered borderline resectable: SMV/PV impingement or short-segment SMV occlusion; SMA abutment; encasement of the gastroduodenal artery up to its origin at the hepatic artery (HA); limited IVC involvement; and colon or mesocolon invasion.
There is variability in the definition of the tumor-vascular relationships. Thus, the MD Anderson Cancer Center classified locally advanced borderline pancreatic cancers (LAPC) into three types: Type A (local tumor-artery abutment), Type B (questionable distant metastasis), and Type C (patients with altered performance status). [3] A multidisciplinary approach is highly recommended in the treatment of patients with LAPC. [4,5] CT and MRI had similar sensitivities and specificities for both diagnosis and vascular involvement in patients with pancreatic cancer. [6] Multislice CT (MSCT) seems to have a very good sensitivity in detecting resectable pancreatic tumors, reaching 100% in some studies. [7,8] However, CT staging was not predictive of resectability and pathological response in patients treated with neoadjuvant chemotherapy. [9] Resectability assessment based on dual-source CT angiography showed higher sensitivity, specificity, and diagnostic accuracy than that obtained from MSCT angiography. [1] According to a recent meta-analysis, EUS is a reliable and accurate diagnostic tool for TN staging and the evaluation of vascular invasion in pancreatic cancer [Figure 1a-d].
Thus, the sensitivity of EUS for vascular involvement is 87%, with a very good specificity reaching 90%. The sensitivity of EUS for T1-T2 stages is 76%, but it is significantly higher in patients with T3-T4 stages, reaching 90%. The accuracy of EUS in nodal staging is lower, the sensitivity being 62% with a specificity of 74%. [10] EUS is a reliable method for the selection of patients with borderline resectable pancreatic cancer owing to its high sensitivity and specificity for staging T3-T4 tumors.
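The sensitivity and specificity figures above determine post-test probabilities only once a pretest prevalence is assumed. A minimal worked sketch of that arithmetic: the 87%/90% values are the vascular-involvement figures quoted above, while the 30% pretest probability is purely a hypothetical assumption, not a value from this article.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Convert a test's sensitivity/specificity into PPV and NPV via Bayes' rule."""
    tp = sensitivity * prevalence              # true positives (per unit population)
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# EUS for vascular involvement: sensitivity 87%, specificity 90% (quoted above);
# the 30% pretest probability is an illustrative assumption.
ppv, npv = predictive_values(0.87, 0.90, 0.30)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV = 0.79, NPV = 0.94
```

The same sensitivity/specificity pair yields a higher PPV (and lower NPV) as the assumed pretest probability rises, which is why the selection of patients referred for EUS matters as much as the test characteristics themselves.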
The main limitation of CT is its lack of sensitivity for early pancreatic lesions. EUS provides an excellent complement to CT for both diagnosis and staging of pancreatic cancer and allows easy access for needle aspiration and tissue diagnosis. [11] Although EUS is generally considered superior to CT for the diagnosis and local staging of pancreatic cancer, it is limited by availability and by its inability to assess for distant metastases. [12] Thus, EUS is considered superior for the detection of clinically suspected lesions, especially if the results of other cross-sectional imaging modalities are equivocal. The major advantage of EUS is its high negative predictive value, approaching 100%, indicating that the absence of a focal mass reliably excludes pancreatic cancer. [13] In a study published in 2011, the authors compared tumor size measured by CT ± EUS before surgery with that measured on the resected specimen. In 84% of patients, the primary tumor was a median of 7 mm larger on pathology than on CT. EUS was somewhat more accurate, the pathologic tumor size being a median of only 5 mm larger than the EUS measurement. [14] Nevertheless, a cost-minimization analysis supported the sequential strategy, MSCT followed by EUS, in potentially resectable cancers; [15] if both methods confirm resectability, there is general agreement among experts that the patient can proceed to surgery. [16] Newly developed EUS techniques, such as contrast enhancement combined with three-dimensional (3D) acquisitions, could lead to better accuracy of the method for the assessment of vascular involvement. [17] The technique has some disadvantages: it is time-consuming, and the examiner should be experienced in EUS and the novel techniques. The newest refinements, such as contrast-enhanced EUS, EUS elastography, and tridimensional EUS, are slowly becoming important tools for staging pancreatic tumors. [13] However, new CT-based techniques have also improved T staging.
Thus, a peripancreatic 3D vascular reconstruction can reveal the vascular anatomy, variations of the peripancreatic vessels, and tumor-induced vascular changes. [18]

N STAGING

EUS is useful as a complementary method to MSCT for N staging in pancreatic cancer. Peripancreatic and distant (mediastinal) lymph nodes can be evaluated by EUS [Figure 2]. Moreover, fine-needle aspiration (FNA) further improves the accuracy of the method, representing a major advantage compared with (positron emission tomography [PET]) CT or MRI. The sensitivity and specificity of EUS are only 62% and 74%, respectively. [10] Contrast-enhanced EUS (CE-EUS) and real-time elastography show potential to improve the accuracy of EUS for the differential diagnosis of benign and malignant lymph nodes. [19] Computer-enhanced dynamic analysis based on hue histograms of EUS elastography movies represents a promising method that might allow the differential diagnosis of benign and malignant lymph nodes [20,21] [Figure 2]. Coagulation necrosis has also been described in malignant lymph nodes. EUS features of coagulation necrosis as a marker of malignant invasion have a sensitivity of 54% but a very good specificity of 91%. [22]

M STAGING

EUS can be useful for M staging if the distant metastases are located near the digestive tract. Thus, left lobe liver metastases can be evaluated, and EUS-FNA is possible in this situation [Figure 3]. Distant (mediastinal) lymph nodes can also be assessed and punctured.
EUS has the ability to detect much smaller volumes of ascites than traditional CT or MRI, and EUS-guided FNA might be a useful modality for the standard metastatic workup of any newly diagnosed or suspected malignancy. [23] Patients with pancreatic cancer may also develop remote malignant thrombi (RMT), defined as a malignant intravascular thrombus noncontiguous with the primary tumor. Intravascular FNA is a potentially safe procedure to detect radiographically occult RMT, which has an impact on staging and resectability. [24] The European Society of Gastrointestinal Endoscopy consequently suggests performing EUS-guided sampling from distant lymph nodes, left liver lobe metastases, and ascites in patients with digestive cancers. [25]

CONCLUSION

EUS is a complementary method to CT/MRI for TNM staging of pancreatic cancer, with the advantage of tissue sampling by EUS-guided FNA. The newly developed techniques (3D, contrast enhancement, and elastography) lead to better and more accurate diagnosis and staging. | 2018-04-03T03:03:23.625Z | 2017-12-01T00:00:00.000 | {
"year": 2017,
"sha1": "12e704594a6776e6f6da74aa7ae60df6e3afc915",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc5774081",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "b33c06a9d07e88d7f48f622a1b5907a8b0d395e1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
67817690 | pes2o/s2orc | v3-fos-license | Synthesis and Investigation of Pseudo Binary System CaTiSiO 5-YFeSnO 5
The research is devoted to the multicomponent system CaTiSiO5-YFeSnO5. The synthesis of solid solutions Ca1−xYxTi1−xSnxSi1−xFexO5 (x = 0-1.0, Δx = 0.1) was conducted in the low-temperature plasma of a hydrogen-oxygen flame. It was found that the Ca2+, Ti4+ and Si4+ ions in the titanite molecule may be substituted with Y3+, Fe3+ and Sn4+ ions. In this case, the system produces two phases of variable composition with broad regions of homogeneity. The boundaries of the formed phases and the crystallographic and electrical parameters of the solid solutions were determined. All solid solutions have a semiconductor type of conductivity, whose value depends linearly on the temperature and composition of the sample.
Introduction
Metal oxides containing transition elements possess unique physical properties; they are the subject of numerous scientific studies and are successfully used in modern electronics. Complex oxides containing d-elements, including oxides with the sphene structure (titanite, CaTiSiO5), are of particular practical interest.
The synthesis and study of solid solutions with the sphene structure containing transition elements is of both theoretical and practical interest. The propensity of compounds with the sphene structure to form solid solutions with wide homogeneity regions containing transition elements indicates the promise of their application as radio materials.
The possibility of full or partial replacement of the atoms in the crystal lattice of sphene leads to a natural change in the physical properties, which is of both theoretical and practical interest. It was shown that the titanium atoms in the sphene structure can be replaced by tin atoms [1] [2]. Gradual replacement of titanium atoms by tin atoms leads to the formation of a continuous series of homogeneous solid solutions and to a natural reduction of the conductivity of the samples [3]. No significant change in the crystal lattice was observed even at full replacement of the silicon atoms by germanium, leading to the formation of the germanium analogue of titanite (CaTiGeO5) [4]. However, it has been found that simultaneous replacement of the calcium and silicon atoms with two atoms of trivalent elements such as Cr, Mn, Fe, Eu, etc. leads to a significant change in the lattice structure: the monoclinic crystal lattice of sphene becomes an orthorhombic structure, with the formation of pseudobrookite [5]-[7]. As a result of the complete replacement of the Ca2+ and Si4+ ions in sphene by Fe3+ and Nd3+ ions, a single crystal of composition NdFeTiO5 was obtained [6]. The investigation of CaTiSiO5-Fe2TiO5 [7] has shown that the replacement Ca2+ + Si4+ → 2Fe3+ leads to a significant increase in the electrical conductivity of the samples, but replacing more than 50% of the calcium and silicon by iron results in the formation of solid solutions with the pseudobrookite structure.
The present communication is devoted to the study of the simultaneous heterovalent substitution of all three cations in sphene: Ca2+ + Ti4+ + Si4+ → Y3+ + Sn4+ + Fe3+. Simultaneous substitution of one divalent and two tetravalent atoms (Ca, Ti, Si) in the CaTiSiO5 lattice by two trivalent and one tetravalent atoms (Y, Fe, Sn) is of both theoretical and practical interest. To solve this problem, the previously undescribed pseudo-binary system CaTiSiO5-YFeSnO5 was investigated. The phase diagram of the system was constructed, and the crystallographic and electrical parameters of samples of compositions (CaTiSi)1−x(YFeSn)xO5 were determined.
Experimental Procedure
Synthesis of the solid solutions Ca1−xYxTi1−xSnxSi1−xFexO5 (0 ≤ x ≤ 1, Δx = 0.1) was carried out in parallel in the low-temperature plasma of a hydrogen-oxygen flame (LP) and by ceramic technology (CT) [8]. Ultrapure oxides of the relevant elements were used as starting substances: CaO, TiO2, SiO2, Y2O3, SnO2 and Fe2O3. Samples synthesized by the LP and CT methods were then heated at 1170 K for 6 hours, followed by rapid cooling on a copper substrate. The calculations were performed using the software package [9]. The density of the samples was determined pycnometrically.
The conductivity of the samples was measured by the compensation method in air, using four silver electrodes [10]. The temperature dependence of the electrical conductivity was determined in the temperature range 293-900 K. Samples for measurement were prepared from powder passed through a laboratory CO-200 sieve (63 microns) and pressed into pellets (d = 20 mm, l = 2.1 mm) at a pressure of 400 MPa. The permittivity of the samples was measured using a flat capacitor (20 V, 800 Hz). The molar polarization of the samples was calculated by the Clausius-Mossotti equation.
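The Clausius-Mossotti step mentioned above relates the measured relative permittivity and density to a molar polarization. A minimal sketch of that calculation (the numerical inputs are illustrative only, not measured values from this work; the molar mass used is that of CaTiSiO5):

```python
def molar_polarization(eps_r, molar_mass, density):
    """Clausius-Mossotti relation: P = (eps_r - 1)/(eps_r + 2) * M/rho,
    giving P in cm^3/mol for M in g/mol and rho in g/cm^3."""
    return (eps_r - 1.0) / (eps_r + 2.0) * molar_mass / density

# Illustrative inputs: eps_r = 10, M(CaTiSiO5) ~ 196 g/mol, rho = 3.5 g/cm^3.
P = molar_polarization(10.0, 196.0, 3.5)
print(f"P = {P:.1f} cm^3/mol")  # P = 42.0 cm^3/mol
```

Because the (eps_r − 1)/(eps_r + 2) factor saturates toward 1 at large permittivity, the molar volume M/rho dominates the composition trend, consistent with the jump in polarization reported at the phase transition where the cell volume changes abruptly.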
Results and Discussion
Comparison of the X-ray diagrams of samples of the same composition synthesized by LP and by CT revealed their identity. The data presented below were obtained for the samples synthesized by LP.
Based on the X-ray diagrams, it was found that under these conditions the system CaTiSiO5-YFeSnO5 forms two phases (α and β) of variable composition with wide homogeneity regions: the α-phase at 0 ≤ x ≤ 0.45 and the β-phase at 0.70 ≤ x ≤ 1.00. Samples of composition 0.45 ≤ x ≤ 0.70 contain two phases (Figure 1). α-phase. Solid solutions of compositions from CaTiSiO5 to Ca0.55Y0.45Ti0.55Sn0.45Si0.55Fe0.45O5 crystallize in the sphene lattice. Thus, replacement of up to 45% of the Ca2+, Ti4+ and Si4+ ions by Y3+, Sn4+ and Fe3+ ions did not result in a significant rearrangement of the sphene crystal lattice (Table 1). As seen from the data, the introduction of Y3+, Sn4+ and Fe3+ ions instead of Ca2+, Ti4+ and Si4+ ions in the α-phase leads to a significant increase in the unit cell parameters. As could be expected, the densities of the obtained solid solutions also grow (Figure 2). The introduced Y3+ ions occupy the seven-vertex polyhedral voids that were previously occupied by Ca2+ ions. The radii of these ions differ little from each other: r(Y) = 0.116 nm, r(Ca) = 0.130 nm [11]. Such substitution is also favoured by the similarity of the electron shells of these ions: 3s2 3p6 (Ca2+) and 4s2 4p6 (Y3+). The Fe3+ ions, in turn, occupy the tetrahedral interstices previously occupied by Si4+ ions.
Despite the greater tendency of Fe3+ ions to occupy octahedral interstices, there are numerous examples where Fe3+ ions reside in tetrahedral voids of an oxygen environment, as in the crystal lattice of Fe3O4, where 1/3 of the Fe3+ ions occupy tetrahedral voids [12]. As a result, even replacement of 45% of the calcium and silicon ions in the α-phase does not result in significant changes in the sphene lattice. According to the authors of [13], the belonging of sphene to the monoclinic system is a consequence of the presence of calcium ions in the lattice, which occupy the seven-vertex polyhedral voids formed by distorted TiO6 octahedra. Hence, we expected that the replacement of Ca2+ ions by Y3+ ions would lead to a gradual ordering of the lattice, the deformed seven-vertex polyhedra turning into octahedra. In fact, samples containing more than 70 at% yttrium form a crystal lattice of higher symmetry.
β-phase. When 55 at% or more of Y3+, Fe3+ and Sn4+ ions are introduced instead of Ca2+, Si4+ and Ti4+, the solid solutions crystallize with orthorhombic symmetry in the pseudobrookite lattice. The homogeneity region of the β-phase extends over x = 0.70-1.0, which corresponds to compositions from Ca0.30Y0.70Ti0.30Sn0.70Si0.30Fe0.70O5 to YFeSnO5.
The values of the lattice parameters of the solid solutions of the β-phase are shown in Table 2. As in the α-phase, the unit cell parameters of the solid solutions, as well as their densities, are essentially linear functions of the composition (Figure 3). As in the α-phase, an increase of the value of x in the β-phase leads to an increase in the cell volume. However, at the transition from the monoclinic to the orthorhombic structure an abrupt decrease in cell volume is observed (Figure 4). In contrast to the α-phase, increasing the content of Y3+, Fe3+ and Sn4+ ions in the β-phase leads to a noticeable increase in the lattice parameters.
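The near-linear composition dependence described above is the usual Vegard-type behaviour of solid solutions, so within a single-phase region any lattice parameter (or density) can be estimated by linear interpolation between the end members. A minimal sketch (the end-member values below are hypothetical placeholders, not numbers from Table 1 or Table 2):

```python
def vegard(p_x0, p_x1, x):
    """Vegard-type linear interpolation of a lattice parameter (or density)
    between the end members of a solid-solution series, at composition x in [0, 1]."""
    return (1.0 - x) * p_x0 + x * p_x1

# Hypothetical end-member a-parameters, in angstroms:
a = vegard(9.70, 9.85, 0.8)
print(f"a(x = 0.8) = {a:.3f} angstroms")  # a(x = 0.8) = 9.820 angstroms
```

Conversely, a measured parameter that deviates from this straight line, or jumps discontinuously as at the α→β transition here, signals that the sample has left the single-phase homogeneity region.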
The synthesized solid solutions of both the α- and β-phases are dielectrics with a semiconductor character of conductivity. The results of the determination of the conductivity, dielectric constant, band gap, molar polarizability and polarization of the synthesized solid solutions are shown in Table 3.
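Band-gap values such as those in Table 3 are typically extracted from the measured temperature dependence of the conductivity, assuming intrinsic activation σ = σ0·exp(−Eg/2kT). A minimal two-point sketch of that extraction (the conductivity values are hypothetical, not data from this work):

```python
import math

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def band_gap_from_sigma(T1, sigma1, T2, sigma2):
    """Estimate the thermal band gap Eg (eV) from conductivities at two
    temperatures, assuming sigma = sigma0 * exp(-Eg / (2 * k * T))."""
    return 2.0 * K_B * math.log(sigma2 / sigma1) / (1.0 / T1 - 1.0 / T2)

# Hypothetical data: sigma rises from 1e-8 to 1e-4 S/cm between 300 K and 600 K.
Eg = band_gap_from_sigma(300.0, 1e-8, 600.0, 1e-4)
print(f"Eg ~ {Eg:.2f} eV")  # Eg ~ 0.95 eV
```

In practice the fit is done over the whole 293-900 K range (slope of ln σ versus 1/T) rather than from two points, but the two-point form shows why a steeper temperature dependence corresponds to a wider gap.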
As can be seen from the data presented in Table 3, the introduction of tin, yttrium, and iron (p-, d-, and d-elements) in place of calcium, silicon, and titanium (s-, p- and d-elements) leads to a narrowing of the band gap and improves the conductivity of the solid solutions. The dielectric constant and band gap of the samples are thereby reduced. An increased content of yttrium, tin and iron leads to an increase in the molar polarization of the samples in the regions of both phases. At the same time, there is a sharp, abrupt decrease in these parameters at the phase transition (α→β), which is due to the sharp decrease in cell volume and the increase in the density of the samples at the transition. The concentration dependences of all the parameters are linear within the fields of both phases (Figure 5).
Conclusion
The multicomponent system CaTiSiO5-YFeSnO5 was the subject of the research. It was found that the Ca2+, Ti4+ and Si4+ ions in the titanite molecule can be substituted with Y3+, Fe3+ and Sn4+ ions, which leads to the formation of two phases of compositions Ca1−xYxTi1−xSnxSi1−xFexO5 with broad regions of homogeneity. The boundaries of the uniform phases at 1170 K were determined. Samples of the α-phase (x = 0.0-0.45) crystallize in the titanite structure. Samples of the β-phase (x = 0.70-1.0) crystallize in the orthorhombic pseudobrookite structure. The electrical properties of the formed solid solutions were also investigated. All samples are insulators with a semiconductor character of electrical conductivity. Complete replacement of the Ca2+, Ti4+ and Si4+ ions increases the conductivity of the samples by a factor of two.
Figure 1. Phase composition of the Ca1−xTi1−xSnxSi1−xYxFexO5 system.
Figure 2. Dependence of the unit cell parameters (a, b, c) and the density (d) of solid solutions of the α-phase of the Ca1−xTi1−xSnxSi1−xYxFexO5 system on the composition.
Figure 4. Dependence of the density of solid solutions (d) and the unit cell volume (∎) of the α- and β-phases of the Ca1−xTi1−xSnxSi1−xYxFexO5 system.
Table 1. Characteristics of the unit cell of solid solutions of the α-phase of the system CaTiSiO5-YFeSnO5. | 2018-12-27T20:03:02.806Z | 2015-01-05T00:00:00.000 | {
"year": 2015,
"sha1": "cf146ff5728bfc725baf6156dfad2e090cbe95f1",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=52921",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "cf146ff5728bfc725baf6156dfad2e090cbe95f1",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
214716007 | pes2o/s2orc | v3-fos-license | Review article: Gastrointestinal features in COVID-19 and the possibility of faecal transmission.
Summary Background There is little published evidence on the gastrointestinal features of COVID‐19. Aims To report on the gastrointestinal manifestations and pathological findings of patients with COVID‐19, and to discuss the possibility of faecal transmission. Methods We have reviewed gastrointestinal features of, and faecal test results in, COVID‐19 from case reports and retrospective clinical studies relating to the digestive system published since the outbreak. Results With an incidence of 3% (1/41)-79% (159/201), gastrointestinal symptoms of COVID‐19 included anorexia 39.9% (55/138)-50.2% (101/201), diarrhoea 2% (2/99)-49.5% (146/295), vomiting 3.6% (5/138)-66.7% (4/6), nausea 1% (1/99)-29.4% (59/201), abdominal pain 2.2% (3/138)-6.0% (12/201) and gastrointestinal bleeding 4% (2/52)-13.7% (10/73). Diarrhoea was the most common gastrointestinal symptom in children and adults, with a mean duration of 4.1 ± 2.5 days, and was observed before and after diagnosis. Vomiting was more prominent in children: about 3.6% (5/138)-15.9% (32/201) of adult patients and 6.5% (2/31)-66.7% (4/6) of paediatric patients presented with vomiting. Both adult and paediatric patients can present with digestive symptoms in the absence of respiratory symptoms. The incidence of digestive manifestations was higher in the later than in the early stage of the epidemic, but no differences in digestive symptoms among different regions were found. Among the group of patients with a higher proportion of severe cases, the proportion of gastrointestinal symptoms in severe patients was higher than that in nonsevere patients (anorexia 66.7% vs 30.4%; abdominal pain 8.3% vs 0%); in the group of patients with a lower rate of severe cases, the proportion with gastrointestinal symptoms was similar in severe and nonsevere cases (nausea and vomiting 6.9% vs 4.6%; diarrhoea 5.8% vs 3.5%).
Angiotensin converting enzyme 2 and virus nucleocapsid protein were detected in gastrointestinal epithelial cells, and infectious virus particles were isolated from faeces. Faecal PCR testing was as accurate as respiratory specimen PCR detection. In 36% (5/14)-53% (39/73) of patients, faecal PCR became positive 2-5 days later than sputum PCR. Faecal excretion persisted after sputum excretion in 23% (17/73)-82% (54/66) of patients, for 1-11 days. Conclusions Gastrointestinal symptoms are common in patients with COVID‐19, and had an increased prevalence in the later stage of the recent epidemic in China. SARS‐CoV‐2 enters gastrointestinal epithelial cells, and the faeces of COVID‐19 patients are potentially infectious.
| INTRODUCTION
Up to the submission date, case reports and retrospective clinical studies relating to the digestive system in infection with the novel coronavirus (severe acute respiratory syndrome coronavirus 2, SARS-CoV-2) published in China were reviewed in this paper, with a view to providing reference for prevention and control, as well as diagnosis and treatment of the disease.
| Inclusion and exclusion criteria
We included data on patients with confirmed COVID-19 reported in case reports and retrospective clinical studies relating to the digestive system that were published in English or Chinese from the end of December 2019 to the end of February 2020. Studies that did not mention digestive symptoms were excluded. Most of the patients were from China, including Wuhan city and areas outside Wuhan.
| Data extraction
We reviewed eligible studies and extracted data on province or city, study time period, patient age group range, study size, severity of illness, symptom categories and the incidence of symptoms. We also extracted sensitivity of faecal PCR test and time window between faecal and respiratory PCR test, if mentioned. When extracting information from the studies, pairs of researchers conferred to compare findings and reach consensus. Where consensus was not reached, an independent researcher was consulted.
| Gastrointestinal pathological findings
The first autopsy report was of an 85-year-old man with COVID-19.
This showed segmental dilatation and stenosis of the small intestine. 21
| Faecal test for SARS-CoV-2
Substantial evidence from previous studies of SARS supported the gastrointestinal tract tropism of SARS-CoV, which was verified by viral detection in biopsy specimens and stool. 22 Similarly, SARS-CoV-2 was first reported in stool samples of the first case in the United States. 23 However, the assessment of loss of appetite was difficult because of its subjective nature; diarrhoea was a more objective finding. Interestingly, they pointed out that patients with digestive symptoms were inclined to have a worse prognosis than those without digestive symptoms (34.3% discharged vs 60% discharged). We noticed that there were 74 (36%) critically ill patients in this paper, and the severe and critical rate was much higher than the large-scale statistics rate in the CDC report, which was 18.5%. 20 The results supported our finding that groups of patients with a high rate of severe cases were more likely to manifest digestive symptoms. We may speculate that the high rate of severe cases indicates a high density and virulence of virus, which damages the digestive system. The reason for this phenomenon is unclear, and should be verified with larger clinical data in future research.
| DISCUSSION
The proportion of children with vomiting was higher than that of adults. The vast majority of children with gastrointestinal symptoms were noncritically ill; only one of 57 children in the literature we reviewed was critically ill. 7,11,17 Gastrointestinal symptoms were also present in critically ill children. 28 Early studies indicated that individuals infected with SARS-CoV-2 might shed and spread the virus while they were pre-symptomatic or asymptomatic. [31][32][33] Considering that viral shedding might last for more than a month, 34 we should pay attention to minimising the risk of faecal transmission. The latest treatment protocol in China stipulates that two RT-PCR tests of respiratory specimens carried out more than 24 h apart should be negative before a patient is discharged from the hospital, and that the patient should be isolated for 14 days after discharge. 19 In view of the possibility that stool samples of a discharged patient could still be positive, we suggest that the patient implement a more thorough protocol for hand hygiene during isolation, thoroughly disinfect toilets and sinks, and try to avoid sharing toilets with family members. Meanwhile, we recommend a test for faecal nucleic acid before a patient is released from isolation.
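The discharge rule cited above (two negative respiratory RT-PCR tests taken more than 24 h apart) has a simple time-interval logic that is easy to mis-apply. A minimal sketch of a checker for that part of the rule (the function name and data layout are our own illustration, and the sketch deliberately considers only the two most recent results):

```python
from datetime import datetime, timedelta

def meets_discharge_criteria(pcr_results):
    """pcr_results: list of (collection_time, is_positive) tuples in
    chronological order. Returns True only if the two most recent
    respiratory RT-PCR results are both negative and more than 24 h apart."""
    if len(pcr_results) < 2:
        return False
    (t1, pos1), (t2, pos2) = pcr_results[-2], pcr_results[-1]
    return (not pos1) and (not pos2) and (t2 - t1 > timedelta(hours=24))

t0 = datetime(2020, 3, 1, 8, 0)
print(meets_discharge_criteria([(t0, False), (t0 + timedelta(hours=30), False)]))  # True
```

A respiratory-only rule of this kind is exactly what the faecal-PCR findings in this section argue is insufficient, since stool samples can remain positive after the respiratory criteria are met.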
Medical staff who perform gastrointestinal endoscopy for isolated convalescent patients should consider all patients to be confirmed cases and take strict protective measures. Proper disinfection of toilets is crucial in endemic regions; otherwise, sanitation facilities can turn into 'virus traps'.
ACKNOWLEDGEMENTS
We are grateful to the authors of the literature involved in this article for their contribution to the fight against COVID-19. We also want to thank our families for their support and understanding in this particular period, and we send best wishes to the health workers around the world who are fighting on the front lines of this pandemic.
Declaration of personal and funding interests: None. | 2020-03-31T13:02:59.730Z | 2020-03-29T00:00:00.000 | {
"year": 2020,
"sha1": "e82e59d29537869cfc53306b5b105e35e326d6c4",
"oa_license": null,
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/apt.15731",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "44ebe26be742ce126ac5597fd055e33c92a13ec6",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
809113 | pes2o/s2orc | v3-fos-license | 2D ultrasonography and contrast enhanced ultrasound for the evaluation of cavitating mesenteric lymph node syndrome in a patient with refractory celiac disease and enteropathy T cell lymphoma
Background The cavitating mesenteric lymph node syndrome (CMLNS) is a rare manifestation of celiac disease, with an estimated mortality rate of 50%. Specific infections and malignant lymphoma may complicate its clinical course and contribute to its poor prognosis. Diagnosing the underlying cause of CMLNS can be challenging. This is the first report on contrast enhanced ultrasound (CEUS) findings in enteropathy associated T-cell lymphoma (EATL) complicating CMLNS in a gluten-free compliant patient with persistent symptoms and poor outcome. Case presentation We present the case of a 51-year old Caucasian male patient, diagnosed with celiac disease and CMLNS. Despite his compliance to the gluten-free diet the symptoms persisted and we eventually considered the possible development of malignancy. No mucosal changes suggestive of lymphoma were identified with capsule endoscopy. Low attenuation mesenteric lymphadenopathy, without enlarged small bowel segments were seen on computed tomography. CEUS revealed arterial rim enhancement around the necrotic mesenteric lymph nodes, without venous wash-out. No malignant cells were identified on laparoscopic mesenteric lymph nodes biopsies. The patient died due to fulminant liver failure 14 months later; the histopathological examination revealed CD3/CD30-positive atypical T-cell lymphocytes in the liver, mesenteric tissue, spleen, gastric wall, kidney, lung and bone marrow samples; no malignant cells were present in the small bowel samples. Conclusions CEUS findings in EATL complicating CMLNS include arterial rim enhancement of the mesenteric tissue around the cavitating lymph nodes, without venous wash-out. This vascular pattern is not suggestive for neoangiogenesis, as arteriovenous shunts from malignant tissues are responsible for rapid venous wash-out of the contrast agent. CEUS failed to provide a diagnosis in this case.
Background
Cavitating mesenteric lymph node syndrome (CMLNS) is a rare, poorly understood complication of celiac disease characterized by central necrosis of mesenteric lymph nodes [1]. Multiple areas of necrotizing cavitation in reticulo-endothelial tissues are frequently associated with splenic hypo-function, the presence of Howell-Jolly bodies, monocytosis, lymphocytosis and increased platelet count [1,2]. The estimated mortality rate is 50% [1]. Enteropathy associated T cell lymphoma (EATL) is a possible complication of the necrosis of mesenteric lymph nodes [2,3]. Hepatic failure is a rare complication of celiac disease of unknown pathogenesis; the predisposition to autoimmunity may, in part, explain the involvement of the liver [4]. In patients associating CMLNS, septicemia or malignant T-lymphocyte infiltration of the liver may explain the liver insufficiency [2,3].
Diagnosing the underlying condition of the cavitating mesenteric lymph node syndrome (either infectious or malignant) can be challenging; a lymphoma can still be notoriously difficult to diagnose, despite multiple biopsies [5]. Even lymphomas with T-cell immunophenotypic features have been reported in extra intestinal sites (liver, spleen), complicating celiac disease, but without intestinal involvement [2].
Computed tomography (CT) or magnetic resonance imaging (MRI) can detect fat-fluid levels in mesenteric cystic masses, considered specific findings for CMLNS [6,7]; furthermore, these techniques can identify segmental enlargement of the small bowel, suggestive of lymphoma. More recently developed, contrast enhanced ultrasound (CEUS) can depict the blood flow in small vessels, under 40 microns, which is useful for describing tumor angiogenesis patterns [8]. Intense, anarchic arterial enhancement with rapid venous wash-out of the contrast agent is highly suggestive of malignancy, since malignant tissue is characterized by numerous arteriovenous shunts [8].
We report a case of celiac disease complicated with CMLNS, with persisting symptoms despite adherence to the gluten-free diet. The imaging and histological examinations failed to document the presence of malignancy during the patient's lifetime. A fulminant liver failure caused the patient's death 14 months after the initial diagnosis. The postmortem histopathological examination revealed abnormal T-cell lymphocytes in the liver, mesenteric tissue, spleen, gastric walls, kidney, lungs and bone marrow; no malignant cells were present in the examined small bowel samples.
Case presentation
A 51-year old male was investigated for chronic diarrhea, episodes of moderate diffuse abdominal pain and 10-kg weight loss. On physical examination, the patient presented muscle wasting, without any fever, hepatosplenomegaly or jaundice. The stool studies were positive for steatorrhea. The laboratory workup revealed moderate iron deficiency anemia, signs of hyposplenism: Howell-Jolly bodies on the peripheral blood smear, elevated platelet count, hypocalcaemia and an elevated alkaline phosphatase level. The serum endomysial and tissue transglutaminase IgA antibodies were positive in high titre. Total villous atrophy, crypt hyperplasia, increased intraepithelial lymphocytes and increased plasma cells and lymphocytes in the lamina propria were found on the duodenal biopsy ( Figure 1) performed by upper digestive endoscopy. The intraepithelial lymphocytes were small, without atypical features. The immunohistochemistry testing found intraepithelial lymphocytes positive for CD3 ( Figure 2) and few lymphocytes positive for CD8 in lamina propria ( Figure 3); CD30 staining revealed isolated positive cells in lamina propria ( Figure 4). The abdominal ultrasound revealed fluid-distended small bowel loops with an enlarged, hyper echoic mesentery ( Figure 5), anechoic cysts corresponding to mesenteric lymph nodes and a reduced spleen size. We established a diagnosis of celiac disease complicated with CMLNS. A gluten-free diet was recommended and a three-month monitoring schedule proposed.
Three months later, the patient complained of persistent symptoms. The stool examination revealed a Yersinia enterocolitica infection. The patient received adequate antibiotic therapy, resulting in stool sterilization.
After six months of gluten-free diet, the clinical manifestations were similar, despite diet adherence, confirmed by the decline in tissue transglutaminase IgA titre. An intestinal lymphoma was suspected and capsule endoscopy performed, which investigated the entire small bowel. This examination revealed an atrophic villous pattern in the proximal jejunum, without mucosal changes suggestive of lymphoma; a "bulging" mass with normal mucosal surface was described (Figure 6) and was interpreted as compression from a mesenteric lymph node. Mesenteric cystic masses with central low attenuation and a thin enhancing rim were found on oral and IV-contrast enhanced computed tomography (CT) (Figure 7). No small bowel wall segmental enlargements were present on CT enteroclysis. As the clinical suspicion of malignancy was still high, a contrast-enhanced ultrasound (CEUS) was considered in order to describe the vascular pattern of the mesenteric tissue. After peripheral venous injection of 4.8 ml of ultrasound contrast agent (SonoVue), an arterial rim enhancement (Figure 8) was seen around the necrotic lymph nodes, without wash-out of the contrast agent in the venous phase (Figure 9). Some of the investigated masses had septa exhibiting the same vascular pattern, suggesting an anarchic vascularization. A diagnostic laparoscopy was performed with removal of two lymph nodes. These cystic masses were found to contain a milky fluid. The histopathological examination of the samples revealed central homogeneous acidophilic material and fibrotic walls with a rim of normal lymphocytes at the periphery of the necrotic lymph nodes, with no signs of malignancy or infection. The gluten-free diet and monitoring were continued.
Figure 1. Duodenal biopsy: villous atrophy, crypt hyperplasia, intraepithelial lymphocytes without atypical features.
After another five months, the patient presented with fever (39°Celsius) and severe liver failure. The blood cultures were negative. The serological markers for viral or autoimmune hepatitis and leptospirosis were also negative. The already severe clinical status worsened, with the development of hepatic encephalopathy and severe upper gastrointestinal bleeding. Despite intensive supportive treatment, the patient died 48 hours after admission. On necropsy, many nodular grayish-white masses, either fluctuant or firm, containing a milky fluid, were found in the mesentery ( Figure 10). The microscopic examination of necrotic mesenteric lymph nodes revealed a central homogeneous acidophilic material, fibrotic walls and rare lymphocytes and plasmocytes in the periphery (Figure 11). Infiltration of abnormal T-cell lymphocytes, with atypical nuclear features, was present in the surrounding adipose tissue (Figure 12). The immunohistochemistry testing was positive for CD3 ( Figure 13) and CD30. The same infiltrative tumor cells were present in the liver, spleen, gastric walls, kidney, lungs and bone marrow; no malignant cells were present in the small bowel samples examined.
Discussion
The pathogenesis of CMLNS is not fully understood at present. One proposed theory involves excessive antigenic exposure of the immune system via a damaged intestinal mucosa, leading to depletion of cellular lymphoid elements in the mesenteric lymph nodes and spleen and causing cystic change or 'cavitation' in some patients with celiac disease. An alternative hypothesis is necrosis of mesenteric lymph nodes triggered by localized immune-mediated complement activation and intravascular coagulation [7]. There have been several reports of lymph node necrosis associated with intestinal infection with Mycobacterium spp., Yersinia spp. [1] or Tropheryma whippelii [9]. Our patient had an infection with Yersinia enterocolitica during the course of his illness. However, this infection cannot be regarded as the cause of CMLNS, because the symptoms persisted even after adequate antibiotic treatment leading to stool sterilization.
Half of the CMLNS patients have a poor prognosis, but some reports mentioned a good outcome if a strict gluten-free diet was observed [10,11]. In celiac disease, however, a malignant lymphoma may also be the cause of mesenteric lymph node necrosis, being in part responsible for the poor prognosis [1].
The diagnosis of CMLNS is based on imaging and typical histopathological findings. The necrotizing mesenteric lymph nodes appear as anechoic cysts on abdominal ultrasound. Computed tomography studies reveal central low attenuation with enhancing rims [6]. Sometimes fat-fluid levels within the masses may be apparent on CT [6]; this feature is considered unique to the cavitated mesenteric adenopathies associated with celiac disease. The specific fat-fluid levels were also found in one CMLNS case on MRI examination [7].
The diagnosis of complicated cavitating mesenteric lymph node syndrome (either infectious or malignant) can be challenging; a lymphoma is still notoriously difficult to diagnose, despite multiple biopsies [5]. Lymphomas with T-cell immunophenotypic features have been reported in extra-intestinal sites (liver, spleen) as a complication of celiac disease, but without intestinal involvement [2]. In our case, the initial biopsies did not reveal abnormal T-cell lymphocytes, but the virtual lack of immunopositivity for CD8 intraepithelial lymphocytes might have been suggestive of a diagnosis of refractory celiac disease type II [12]. Also, the presence of isolated large CD30+ lymphocytes in the lamina propria in early biopsies could have represented minimal infiltration by the patient's lymphoma [13].
Capsule endoscopy and CT did not reveal any small bowel mucosal changes suggestive of enteric lymphoma, and no further small bowel biopsies were taken. The contrast-enhanced ultrasound was performed in order to obtain more specific information about the vascular pattern of the lymph node "walls" and the surrounding mesenteric tissue.
The assessment of neovascularization by CEUS is based on its ability to depict the blood flow in small vessels. In malignant tumors a rapid and intense enhancement is seen in the arterial phase, with rapid wash-out of the contrast agent in the venous phase. This vascular pattern is explained by arteriovenous shunts. The timing of hypo-enhancement on CEUS may correlate with tumor cell differentiation; well-differentiated tumors wash out more slowly than poorly differentiated ones [14]. Quantitative CEUS parameters for assessing neoangiogenesis have been documented in hepatocellular carcinoma and in malignant ovarian and breast tumors [15][16][17]. In malignant lymph nodes, CEUS depicts vessels penetrating the node's capsule away from the hilum; reactive adenopathies have a singular vascular pedicle at the hilum with regular branches towards the periphery. Based on these findings, CEUS can improve the results of Doppler ultrasound in the differential diagnosis of lymph nodes, with a sensitivity, specificity and accuracy rate of up to 84%, 79% and 80% respectively [18]. These studies were performed on superficial lymph nodes. However, in several studies, lymphomas seem to have a benign vascular pattern [19,20]. To date there is not enough evidence for the accuracy of CEUS in assessing different types of lymphoma [21]. In our case, intense enhancement was observed in the arterial phase in the mesenteric tissue surrounding the necrotic lymph nodes, without rapid vascular washout in the venous phase; this pattern was not suggestive of malignancy. CEUS failed to provide a diagnosis of tumor neoangiogenesis in this case. The lack of venous wash-out may be due to arborizing venules found in some peripheral T-cell lymphomas [22].
In this case, an 18F-FDG PET scan could have detected early EATL; previous studies documented its role in patients with refractory celiac disease, being more sensitive than CT in detecting sites affected by lymphoma [23]. This investigation was not available in our institution.
In patients with CMLNS, liver failure may be due to septicemia or to malignant T-lymphocyte infiltration of the liver [2,3]. In our case, the microscopic examination revealed abnormal T-cell lymphocytes in the liver, spleen, mesenteric tissue, gastric walls, kidney, lung and bone marrow. No malignant cells were observed in the small bowel samples examined. It is possible that the EATL was unsampled, despite multiple biopsies, as previously reported [5]. Other authors described the same features of a rare peripheral T-cell lymphoma associated with celiac disease, characterized by a rearrangement of the gamma-delta T-cell receptor, which is responsible for the aggressiveness of this tumor [2,24]. T-cell receptor PCR and flow cytometry were not performed in our case.
Conclusions
This is the first report of a CEUS examination in CMLNS complicated with EATL; it revealed an arterial rim enhancement around necrotic lymph nodes without venous wash-out. As this vascular pattern is not suggestive of tumor neoangiogenesis, the investigation failed to provide a diagnosis in this case.
Consent
The consent for publication was signed by the patient's son.
Palmitoylethanolamide and Cannabidiol Prevent Inflammation-induced Hyperpermeability of the Human Gut In Vitro and In Vivo - A Randomized, Placebo-controlled, Double-blind Controlled Trial.
Background and aims: We aimed to examine, for the first time, the effect of cannabidiol (CBD) and palmitoylethanolamide (PEA) on the permeability of the human gastrointestinal tract in vitro, ex vivo and in vivo. Methods: Flux measurements of FD10 and FD4 dextran across Caco-2 cultures treated for 24 h with IFNγ and TNFα (10ng·mL-1) were made, with or without the presence of CBD and PEA. Mechanisms were investigated using CB1, CB2, TRPV1, and PPARα antagonists, and PKA, NOS, PI3K, MEK/ERK, adenylyl cyclase and PKC inhibitors. Human colonic mucosal samples collected from bowel resections were treated as above. TRPV1, PPARα, PPARδ, PPARγ, CB1, CB2, GPR55, GPR119, and claudin -1, -2, -3, -4, -5, -7, and -8 mRNA were measured using multiplex. Aquaporin 3 and 4 were measured using ELISA. A randomised, double-blind, controlled trial assessed the effect of PEA or CBD on the absorption of lactulose and mannitol in humans taking 600mg aspirin. Urinary concentrations of these sugars were measured using liquid chromatography-mass spectrometry. Results: In vitro, PEA and CBD decreased the inflammation-induced flux of dextrans (p<0.0001), sensitive to PPARα and CB1 antagonism respectively. The effects of both PEA and CBD were prevented by PKA, MEK/ERK and adenylyl cyclase inhibition (p<0.001). In human mucosa, inflammation decreased claudin-5 mRNA, a change prevented by CBD (p<0.05). PEA and CBD prevented an inflammation-induced fall in TRPV1 and increase in PPARα transcription (p<0.0001). In vivo, aspirin caused an increase in the absorption of lactulose and mannitol, which was reduced by PEA or CBD (p<0.001). Conclusion: CBD and PEA reduce permeability in the human colon. These findings have implications in disorders associated with increased gut permeability, such as inflammatory bowel disease.
Introduction
The gut provides a barrier between the external and internal environment. This selectively permeable barrier allows absorption of nutrients and water from gastrointestinal contents, whilst preventing the transfer of noxious material such as bacteria and lipopolysaccharide. During episodes of inflammation, the barrier becomes compromised, allowing transfer of noxious material into the systemic circulation, leading to disease states such as inflammatory bowel disease (IBD) and septic shock 1 .
The use of Cannabis sativa for its analgesic and anti-inflammatory effects has been well described 2 . Interest in the psychoactive properties of Cannabis sativa led to the characterisation of its major active compounds Δ 9 -tetrahydrocannabinol (THC) and cannabidiol (CBD), and of approximately eighty other compounds 3,4 . Discovery of the receptors for these ligands followed, with cloning of the CB1 and CB2 receptors 5,6 . The endogenous ligands anandamide and 2-arachidonoylglycerol, which act at these receptors, have been well described [7][8][9] . Endocannabinoid-like compounds acting alongside these, such as palmitoylethanolamide (PEA), have emerged, with affinity for receptors such as the GPR55 receptor and the nuclear peroxisome proliferator-activated receptors (PPARs) [10][11][12] . PEA is widely used as a food additive and is being assessed for clinical use in pain and inflammation 13,14 . Similarly, CBD is already in use for the treatment of multiple sclerosis (as part of the medicine Sativex), and by itself in childhood epilepsy 15,16 .
We previously demonstrated that inflammation causes an increase in the permeability of fully confluent Caco-2 monolayers, measured by trans-epithelial electrical resistance (TEER) 17 . We identified that application of CBD or PEA rescued falls in TEER (i.e. reduced inflammation-induced increases in permeability). The effects of CBD were blocked by antagonism of the CB1 receptor, while those of PEA were blocked by antagonism of the PPARα receptor 17,18 . Our recent meta-analysis demonstrated that preclinical studies with both CBD and PEA show promise in animal models 19 . In light of this background, we hypothesised that CBD and PEA reduce inflammation-induced permeability in the human gut, and here examined the effects of PEA and CBD on the human colon using Caco-2 cells and ex vivo human colonic tissue, identifying their mechanisms of action. Based on these positive findings, we went on to examine the potential of CBD and PEA to modulate intestinal permeability in humans in vivo using a randomised controlled trial.
In vitro permeability studies
Caco-2 cells were purchased from the European Collection of Cell Cultures (Wiltshire, UK; passages …). Cells were cultured in Eagle's minimum essential medium supplemented with L-glutamine, 10% foetal bovine serum (FBS), 1% penicillin/streptomycin and 1% non-essential amino acids mixture (all Sigma-Aldrich).
mRNA and Protein expression
Caco-2 cells were cultured in 6-well plates (Corning Incorporated, ME, USA) for 14 days.
Following 24 h treatments with CBD or PEA, cells were washed twice with ice-cold phosphate buffered saline (PBS), and treated with radioimmunoprecipitation assay (RIPA) buffer containing phosphatase inhibitor and protease inhibitors (Sigma-Aldrich) at 4°C for one h on a rocking platform to cause cell lysis. Cell lysates were then collected and stored at -80 °C until analysis.
Experiments on ex vivo human colonic tissue were performed on healthy colonic tissue taken from colon removed at elective resection for bowel cancer at Royal Derby Teaching Hospital NHS Trust, Derbyshire (n=8). After informed consent, samples of macroscopically normal colon at least 10cm proximal or distal to any bowel tumour were obtained immediately after resection within the operating theatre. Ischaemic times following pedicle clamping were not recorded. Sections of tissue 2cm x 2cm were excised and transferred on ice to the laboratory within ten minutes, in pre-chilled Eagle's minimum essential medium supplemented with 1% FBS, 1% penicillin/streptomycin and 1% non-essential amino acids mixture (Sigma-Aldrich). Mucosa with submucosa was dissected free from the underlying muscularis layer. Samples were further dissected into ~2mm x 2mm sections and placed in individual wells of 24-well polystyrene plates (Corning Incorporated, USA) at 37°C in 5% CO2 and 95% humidity, and treated with PEA and CBD. After 24 h, the tissue was washed with ice-cold PBS and stored frozen at -80°C until homogenisation and analysis. Samples were cryohomogenised as previously described by von Zeigler 20 . Collected homogenates were then dissolved in 100µl of RIPA buffer containing phosphatase inhibitor and protease inhibitor (Sigma-Aldrich), incubated on an oscillating thermomixer for 30 minutes at 60°C, and then centrifuged at 10,000G for 15 minutes.
Randomised, placebo-controlled, double-blind trial
In healthy human subjects we induced a state of increased gut permeability with aspirin, measuring the urinary concentrations of sugar probes. Participants were treated with oral PEA and CBD, and the change in gut permeability measured in a randomised, double-blind, controlled trial. All experiments and procedures received prior approval of the University of Nottingham Ethics Committee (approval number J16122016); this trial was not registered on a trials registry. Healthy male participants between the ages of 18 and 50 were recruited after informed consent. Participants were screened to exclude any gastrointestinal medical conditions or symptoms, regular medications or recreational drugs, heavy alcohol use, previous abdominal surgery, or personal or family history of IBD. Participants were asked to refrain from the use of any pro- or prebiotics the week before the study, and to refrain from alcohol, non-steroidal anti-inflammatory use or heavy exercise for three days before the study. Participants separately attended fasted at 08:00. Aspirin 600mg was administered orally with 400ml water, together with CBD 600mg, PEA 600mg or placebo. At 09:00, 1g lactulose and 1g mannitol in 600ml water were further administered, then a baseline urine sample was collected. Urine was collected hourly for 5 hours until 14:00. A further 400ml water was administered at 12:00.
Urinary samples were immediately centrifuged at 1,000G for 7 minutes at 3°C and then frozen at -80°C until analysis 21 . Based on a paired t-test (effect size = 1.0), it was calculated that detecting a difference in LMR of 0.02 between treatment and placebo arms with 80% power and α of 0.05 would require a sample size of 10 participants, should a difference exist between the treatment and placebo groups. Using the block method in a 1:1:1 ratio, participants were assigned to receive CBD, PEA or placebo. Both participants and investigators were blinded to the group assignments. Participants were numbered sequentially and were assigned to the treatment groups in the order of recruitment. Code assignment was sealed within an envelope until after the samples were analysed. Participant assignment was only revealed at the end of the study.
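The quoted sample size can be reproduced approximately. The sketch below is illustrative only (the function name is ours): it uses the normal approximation for a two-sided paired t-test, which gives n ≈ 8 for an effect size of 1.0; the small-sample t correction adds roughly two subjects, in line with the 10 participants quoted.

```python
from math import ceil
from statistics import NormalDist

def paired_t_sample_size(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation n for a two-sided paired t-test:
    n ~= ((z_{1-a/2} + z_{1-b}) / d)^2. The exact t-based calculation
    adds roughly 1-2 subjects for samples this small."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for 80% power
    return ceil(((z_alpha + z_beta) / effect_size) ** 2)

n = paired_t_sample_size(effect_size=1.0)  # parameters quoted in the trial
print(n)  # normal approximation gives 8; the t correction brings this to ~10
```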
CBD was gifted by Artelo Biosciences (CA, USA), extracted from Cannabis sativa, and analysed at an independent laboratory to assure >99.65% purity. PEA was obtained from Russel Science (Nicosia, Cyprus). Placebo capsules were manufactured and gifted by Artelo Biosciences using the cellulose base used in the preparation of CBD. Aspirin was obtained from Aspar Pharmaceuticals (London, UK). Lactulose was obtained from TEVA (Castleford, UK). D-mannitol was obtained from Sigma-Aldrich (MO, USA). All products used in the test solution were intended for human use and tested safe for oral consumption. The purity of the sugar probes was reported to be >99.0%.
Concentrations of urinary lactulose and mannitol were determined using a liquid chromatography-mass spectrometry (LC-MS) method. 20µl aliquots of urine were thawed and diluted with 980µl of 90% acetonitrile, to precipitate any excess salt, to which the internal standards xylitol and raffinose were premixed at 0.5µg/ml final concentration. These were vortexed and incubated at -20⁰C for 4 hours. Previous work demonstrated that it is necessary to monitor sucrose to ensure the assay is specific, correctly identifying lactulose; sucrose was therefore included as an analyte 22 . Calibration standards were made as a dilution series from 2.5 to 250µg/ml of the analytes mannitol and lactulose. The method had been previously validated by creating 6 independently prepared dilutions of 5, 50 and 500µg/ml as above 22 . The LC method was based on that of Kubica 23 . The LC column was a Sequant ZIC-pHILIC (150 x 4.6mm, 5µm) kept at 40⁰C. The mobile phases were A, acetonitrile, and B, 5mM ammonium acetate adjusted drop-wise to pH 6.85 with 5mM ammonium hydroxide solution. Samples were kept in a chilled auto-sampler and 2µl volumes were injected for analysis. The detector was a Sciex 4000 QTrap operating in negative-ion electrospray mode with the source at 450⁰C and curtain, nebuliser and auxiliary gases set to 10, 40 and 20 respectively. The ion-spray voltage was -4200V. A minimum of 5 points were used for each analyte. Xylitol was used as the internal standard to normalise the lactulose signal, but the raw peak area was used to calculate the mannitol concentrations. Raffinose had been included as an internal standard as it has been used in previous studies 22,24 . A standard prepared at 50µg/ml, injected every 20 samples, was used to monitor precision and accuracy. The CVs were 14.1% and 7.3% for lactulose and mannitol respectively; the accuracies were 107% and 89%. Calibration lines were linear over a range greater than 2.5 to 250µg/ml, with R 2 >0.9995 for both analytes.
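The quantification workflow for lactulose (normalise the analyte peak area to the xylitol internal standard, fit a linear calibration line, back-calculate the unknown) can be sketched as follows. All peak-area and ratio values here are invented for illustration; only the general workflow reflects the method described, and mannitol was quantified from raw peak areas rather than ratios.

```python
def fit_line(x, y):
    """Ordinary least-squares slope/intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Calibration standards: known concentrations (ug/ml) vs. analyte/IS peak-area ratio.
concs = [2.5, 25.0, 100.0, 250.0]   # within the 2.5-250 ug/ml range used
ratios = [0.05, 0.50, 2.00, 5.00]   # invented, perfectly linear responses
slope, intercept = fit_line(concs, ratios)

def quantify(analyte_area, is_area):
    """Back-calculate a sample concentration from its peak areas."""
    return (analyte_area / is_area - intercept) / slope

print(quantify(analyte_area=1200.0, is_area=1000.0))  # about 60 ug/ml here
```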
For in vitro and ex vivo experiments, data are presented as mean (or mean percentage) change from baseline where indicated, ± standard error of the mean (SEM). Permeability study results were compared using two-way ANOVA. Caco-2 and human tissue group results were compared using one-way ANOVA. The calculated concentration of each urinary sugar was assessed by repeated-measures ANOVA, with probabilities of post hoc comparisons subject to the Bonferroni correction. The differences in excretion of lactulose and D-mannitol over the study period were compared using two-way ANOVA.
Normality was assessed using the D'Agostino & Pearson normality test. A p-value of <0.05 was considered statistically significant. All statistical analyses were performed using GraphPad Prism 7.01 for Windows (GraphPad Software, San Diego, USA). All authors had access to the study data and reviewed and approved the final manuscript.
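The Bonferroni correction applied to the post hoc comparisons is mechanically simple: each raw p-value is multiplied by the number of comparisons and capped at 1. A minimal sketch (function name and example p-values are ours, for illustration):

```python
def bonferroni(p_values):
    """Bonferroni-adjusted p-values: p_adj = min(1, p * m) for m comparisons."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# e.g. three post hoc comparisons against baseline (invented raw p-values)
print(bonferroni([0.004, 0.020, 0.500]))
```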
Effects of CBD and PEA on membrane-bound AQP channels
In Caco-2 cultures IFNγ and TNFα increased the presence of AQP3 (figure 6 A), and this was prevented by PEA (apical, p<0.05), or CBD (apical, p<0.05). PEA or CBD alone did not affect AQP3 compared to control. In human colonic tissue, inflammation did not change AQP3, though AQP3 levels were increased in the presence of inflammation and PEA, and also inflammation and CBD (p<0.01 and <0.001 respectively, figure 6 C). PEA or CBD alone did not affect the levels of AQP3 in human mucosal tissue compared to vehicle.
In Caco-2 cells the inflammatory protocol did not affect the presence of AQP4, though treatment with CBD or PEA alone did cause an increase in the presence of AQP4 (p<0.05, figure 6 B). IFNγ and TNFα increased the presence of AQP4 in human colonic mucosa (p<0.001), which was not affected by treatment with PEA or CBD; however, treatment of colonic mucosa with PEA or CBD alone decreased the presence of these proteins compared to vehicle (p<0.01 and <0.05 respectively, figure 6 D).
Absorption and excretion of urinary sugars in vivo
30 male participants aged between 22 and 51 years (median age 28.7) successfully completed the study. No exclusions were made on the grounds of fitness. No participants reported any side effects in experimental sessions. Urinary concentrations of mannitol excreted over the 6-hour study period were normally distributed.
In participants receiving placebo only, aspirin administration caused an increase in the urinary concentration of mannitol and lactulose over the 6-hour study period (p<0.0001 and p<0.001 respectively, table 1). Maximal increases in LMR were found at 2 hours following aspirin and placebo administration (p<0.0001 compared to baseline).
In participants administered both CBD and aspirin, urinary lactulose and mannitol concentrations were also increased across the study period (p<0.001 and p<0.01 respectively, table 1). LMR across the experimental period was also increased (p<0.001 compared to baseline); however, compared to the placebo and aspirin group, LMR was reduced, reaching maximal difference at 2 hours post-administration (p<0.0001, figure 7).
In the PEA and aspirin group, urinary lactulose, but not mannitol, concentrations were also increased (p<0.01, table 1). In 6 patients in this group mannitol levels were undetectable at baseline and subsequent timepoints, and hence LMR could not be calculated; these patients were excluded from the results. LMR across the study period was increased (p<0.01 compared to baseline). Compared to the placebo and aspirin group, LMR was reduced, reaching maximal difference at 3 hours post-administration (p<0.001, figure 7). There was no significant difference between PEA and CBD in the reduction of LMR compared to placebo.
Discussion
The aims of this study were to assess the effect of PEA and CBD on the permeability of the human gastrointestinal tract and identify underlying mechanisms of action. In vitro and ex vivo, PEA and CBD prevented inflammation-induced permeability, and these effects were mediated by different receptors but similar intracellular pathways, and associated with changes in claudin, AQP and receptor expression. We then measured the excretion of sugar probes in vivo using an aspirin-induced pro-inflammatory model, with or without 600mg PEA or 600mg CBD, in healthy volunteers. Aspirin increased the LMR suggesting an increase in intestinal permeability, and this was prevented by both CBD and PEA.
Together, these findings suggest that CBD and PEA decrease intestinal permeability, and may have future therapeutic use in IBD.
We chose to examine the effect of CBD and PEA on the transfer of FD4 and FD10 dextrans as these molecules are of a similar size to lipopolysaccharide from Escherichia coli and Pseudomonas species (20-40Å) 25,26 . PEA and CBD both reduced FD4 and FD10 transfer across Caco-2 membranes after an inflammatory protocol, blocked by PPARα and CB1 antagonism respectively, in line with previous reports 18,27 .
We hypothesized that there may be additive effects if applied simultaneously, but found this not to be the case. Looking at the intracellular mechanisms of action, we found that inhibition of PKA, ERK/MEK and adenylyl cyclase prevented the actions of both drugs, suggesting that, although the membrane receptors of CBD and PEA are different, they exert their actions through similar intracellular pathways and hence demonstrate no additive effects. This is the first report of the signalling pathways through which PEA acts in human colonic mucosa, and is in line with those explored in murine colitis models, where PEA acts through phosphorylation of ERK 28 , and in neuronal tissue, where it acts through PKA 29 . Similarly, our findings match previously described CBD actions at CB1 through adenylyl cyclase 5 .
We hypothesized that permeability changes caused by inflammation may be due to transcriptional changes in tight junction (TJ) proteins such as claudin, as changes in the presence of these paracellular proteins have been shown in active Crohn's disease 30 .
TJs are composed of two transmembrane proteins, occludin and claudin, with a third adjacent protein, the junctional adhesion molecule, within the intercellular space. These proteins are fused to identical molecules on neighbouring epithelial cells, at which point the intercellular space is sealed around a charged pore. The flux of material through this paracellular pore is determined by TJ structure, dependent on claudin type 31 . Claudin-2 increases the permeability of the TJ, whereas claudin-5 and -8 decrease permeability 32 . We found inflammation had no effect on mRNA for claudin proteins in Caco-2 cultures, and this was uninfluenced by PEA or CBD treatment. We compared these findings with experimentally inflamed human colonic tissue, finding a decrease in claudin-5 mRNA in response to inflammation; this change was prevented by treatment with CBD. Claudin-5 is highly expressed in the human colon, and acts to strengthen the mucosal barrier by decreasing permeability through cysteine residues 33 .
Inflammation is known to cause a decrease in the presence of claudin-5 and promote increased permeability across mucosal types; hence this may be a mechanism by which CBD affects permeability 30,34 . We also found PEA decreased the transcription of claudin-3, a protein providing a barrier function by decreasing permeability to charged ions in the healthy colon 35 . The finding that PEA may decrease this transcription in human colon, yet decreases hyperpermeability in Caco-2 cultures, suggests that either the role of claudin-3 in the gut is incompletely understood, or that PEA does not simply affect permeability in terms of pore formation. In support of our findings, Zeissig and colleagues (2007) found that claudin-3 expression is unaffected by Crohn's disease 30 ; however, in cell culture experiments, expression is reduced by TNFα treatment 36 , but not by IL-13 35 . As no other reports have examined the effect of cannabinoids on claudin expression in human tissue, we are unable to determine whether the effect of PEA on claudin-3 is a mechanism by which permeability may be affected, and this requires further study.
AQPs have been found to have an increasingly important role in both permeability to water and the immune response, therefore we hypothesized that changes in the expression of two AQPs, AQP3 and AQP4 may be a mechanism by which cannabinoids affect permeability. These proteins allow transport of water and solute through epithelial barriers via a transcellular route. Similarly to TJs, epithelial AQP populations may change dynamically in response to varying physiological environments 37 . AQP3 expression in the ileum is reduced in IBD, which was suggested as a mechanism to reduce oxidative stress through limiting water loss 38 , although it has been shown that knockdown of AQP3 paradoxically impairs gut barrier function and increases permeability 39 . We found that in Caco-2 cells, inflammation increased levels of membrane-bound AQP3, which was prevented both by PEA and CBD. Conversely, in human tissue, the inflammatory protocol alone had no effect on AQP3 levels, whereas in the presence of the inflammatory protocol and PEA or CBD, AQP3 levels were increased. Previous work demonstrated that PEA and CBD are not anti-inflammatory in Caco-2 cultures, but do prevent the increased secretion of pro-inflammatory cytokines in human colonic tissue 40 . A potential mechanism therefore for PEA and CBD on the inflammatory response may be through upregulation of functional membrane-bound AQP3 and glycerol uptake, although this may not be the direct mechanism through which permeability is affected in Caco-2 cultures 41 . Further study is required examining the effect of CBD and PEA on glycerol uptake in conjunction with AQP3 expression in the human colon.
AQP4 is known not to contribute to water transfer in the gut, as knock-down of the channel does not affect permeability 42 ; however, there is evidence suggesting AQP4 plays a role in the immune response of the colonic mucosa, as AQP4 is upregulated in the colonic mucosa of IL-10 knock-out mice 43 . In Caco-2 cells we found no change in membrane-bound AQP4 expression in response to inflammation, but upregulation of expression caused by PEA and CBD treatment. In comparison with experimentally inflamed human colonic tissue, inflammation increased the presence of AQP4, which was not affected by PEA or CBD, although in the absence of inflammation these levels were reduced compared to vehicle alone. The absence of an immune cell-mediated response to inflammation in Caco-2 cells may explain the difference between the two culture models' response to TNFα and IFNγ. As AQP4 expression was reduced in human tissue compared to vehicle, it may be possible that prophylactic administration of CBD and PEA changes the response to inflammation. We were unable to examine the effect of PEA and CBD on other AQP subtypes in this study; however, this poses a future avenue of experimentation for cannabinoid and cannabinoid-like compounds.
It has been previously demonstrated that the expression of CB1 and CB2 receptors on the gut epithelium, immune cells and enteric nervous system change with inflammation 44,45 .
As PEA has been shown to alter the expression of these receptors in mice, we hypothesized that PEA and CBD might affect their expression (and that of other molecular targets of cannabinoids) in experimentally inflamed human colon 11,28 . Surprisingly, we did not find any effect of PEA or CBD on the expression of these two receptors in either Caco-2 or human explant models. However, in both Caco-2 cultures and human colon we found a significant decrease in the expression of TRPV1 in response to inflammation, in line with previous reports 11 . In both cell culture and explant tissue models CBD prevented these falls, whereas PEA only prevented falls in TRPV1 expression in Caco-2 cultures. We also found that PPARα expression was increased by inflammation, but this was not affected by PEA or CBD treatment. No other receptors were affected by inflammation. The absence of change in CB1 or CB2 transcription is interesting, and contradicts existing evidence that CB1 and CB2 are both upregulated in biopsies from IBD patients 45 . One possible explanation for this difference is the role of the enteric nervous system. Peripherally restricted cannabinoid agonists have been shown not to prevent inflammatory changes in murine colitis, suggesting that cannabinoid action at the central nervous system is crucial to their effect on gut inflammation 46 . This was supported by a study from Esposito et al., demonstrating that PEA may act directly on enteric glial cells rather than on mucosal immunocytes 28 . We may suggest that our normal tissue, which is no longer innervated by the enteric nervous system, did not therefore undergo any nerve-mediated changes in receptor expression. Alternatively, these differences could also be explained by the presence of a secondary immune response in ex vivo tissue, compared to in vitro Caco-2 cultures, which possess no specialised immune cells such as macrophages.
This presents further evidence that the immune response and barrier function of the gut depend on centrally mediated endocannabinoid tone.
Intestinal permeability is not measured clinically; experimentally, however, it may be estimated by the ingestion of probe molecules that undergo urinary excretion.
Administering two sugars which are absorbed and excreted at differential rates avoids confounding factors such as delayed gastric emptying, differing volumes of distribution, intestinal transit time and varying renal clearance 47 . D-mannitol is passively absorbed by the small intestine at a steady rate in health. Lactulose, a larger molecule, is not normally absorbed in health, but is passively absorbed during inflammatory intestinal episodes 48 . As both compounds undergo similar degradation in the gut and regular excretion, the urinary lactulose to mannitol ratio (LMR) gives a measure of intestinal permeability. This ratio has previously been used to calculate intestinal permeability in Crohn's disease, where lactulose absorption increases in proportion to that of D-mannitol 49 . We have shown that aspirin causes an approximately 20-fold increase in LMR, in line with previously published reports 48 . It has been hypothesized that the increased permeability to sugars occurs through inhibition of cyclooxygenase, and therefore a decrease in mucosal prostaglandin production 50 . We found that PEA and CBD both decreased the concentrations of these sugars, and may therefore have achieved this through effects on cyclooxygenase or by modulation of membrane-bound proteins or receptors. Although we have not proven the mechanism in this study, we have shown for the first time in humans that these compounds can reduce intestinal permeability in vivo and may potentially be of clinical use in IBD.
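As a minimal illustration of how the LMR described above is derived from the measurements, the following Python sketch computes the ratio from hypothetical urinary concentrations; the function name and the concentration values are illustrative only, not those reported in the study:

```python
def lactulose_mannitol_ratio(lactulose_mg_l, mannitol_mg_l):
    """Urinary lactulose:mannitol ratio (LMR) from measured concentrations.

    Both probes are assumed to undergo similar degradation and excretion,
    so the simple concentration ratio indexes intestinal permeability.
    """
    if mannitol_mg_l <= 0:
        # Mannitol below the limit of quantification: the ratio is
        # undefined (as for 6 of 10 samples in the PEA cohort).
        return None
    return lactulose_mg_l / mannitol_mg_l

# Hypothetical concentrations (mg/L) before and after aspirin challenge
baseline = lactulose_mannitol_ratio(2.0, 400.0)   # low LMR in health
aspirin = lactulose_mannitol_ratio(40.0, 400.0)   # ~20-fold increase
```

A permeability-increasing insult raises lactulose excretion while mannitol stays roughly constant, so the ratio rises; here the hypothetical values reproduce the approximately 20-fold aspirin effect reported in the text.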
There are several limitations to this study. Although high-dose aspirin was used to increase the absorption and excretion of the ingested probes, this may not be a reliable simulation of gut inflammation, so the effects of PEA and CBD may not translate clinically.
Alternative models have been used to induce a state of hyperpermeability which may resemble clinical disease more closely. One study administered LPS to healthy human subjects, successfully increasing the absorption of orally administered polyethylene glycol 51 . However, all 14 participants developed the systemic inflammatory response syndrome, becoming haemodynamically unstable and requiring medical care. In light of these risks, small phase 2 clinical trials examining the effect of CBD and PEA in IBD may now be considered. Secondly, 6 of 10 samples in the PEA cohort were found to contain mannitol levels below the level of quantification using our LC-MS method. Ratios of lactulose to mannitol could therefore not be calculated for these 6 participants, who were excluded, which may exaggerate group differences. It is not clear whether mannitol levels were undetectable because of PEA administration or due to an error in the administration of the sugar probes or in the quantification of the sugars by LC-MS. If this were a PEA effect, it would have implications for the clinical use of PEA, as it would imply a very strong permeability-reducing action. This will be the subject of further study in our research group.
In conclusion, we have demonstrated for the first time in humans that PEA and CBD prevent increases in permeability in the inflamed gut, and may do so through changes in AQP, TJ and receptor expression. These data add to the growing body of evidence demonstrating the anti-inflammatory and permeability-reducing effects of PEA and CBD in the gastrointestinal tract 19 . This holds significant promise for the development of future intestinal therapies treating disorders of increased intestinal permeability such as IBD.
Their clinical effects should now be assessed in phase 1 and phase 2 clinical trials.

Figure 3: The effects of CBD (apical, A and C) and PEA (basolateral, B and D) on the permeability of Caco-2 monolayers in response to 24 hr exposure to TNFα and IFNγ in the presence of various protein inhibitors (KT5720, L-NAME and LY294002: A and C, and PD88059, SQ22536 and G06983: B and D), measured by transfer of fluorescent dextrans (FD10). Raw data are expressed as the mean fluorescence per group +/- SEM. N=8 per condition. Data were analyzed by two-way ANOVA using Dunnett's post hoc test comparing against the vehicle control (**** p<0.0001). Results are expressed as mean ratios +/- SEM. Time points between groups were compared using two-way ANOVA with Dunnett's multiple comparisons test comparing to placebo at the same time point (*p<0.05, **p<0.01, ***p<0.001).

Table 1. Urinary concentrations of lactulose and mannitol in healthy volunteers receiving aspirin and placebo (n=10), CBD (n=10) and PEA (n=4) therapy. Results are expressed as mean concentrations +/- standard deviation. Concentrations were compared to baseline using repeated measures ANOVA and Dunnett's multiple comparison test (*<0.05, **<0.01, ***<0.001, ****<0.0001). LMR - lactulose:mannitol ratio.
"year": 2019,
"sha1": "652801d07ce2ef5aa15ccbefca0aab7dc9c66662",
"oa_license": "CCBY",
"oa_url": "https://nottingham-repository.worktribe.com/preview/1612243/Accepted%20final%20PDF%20Palmitoylethanolamide.pdf",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0ded34095107a8e01523bbc1c499e7aa0c5d9659",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Enhanced stability of the focus obtained by wavefront optimization in dynamical scattering media
Abstract: Focusing scattered light using wavefront shaping provides interesting perspectives for imaging deep in opaque samples, e.g. in nonlinear fluorescence microscopy. Applying these techniques to in vivo imaging remains challenging due to the short decorrelation time of the speckle in depth, as focusing and imaging have to be achieved within the decorrelation time. In this paper, we experimentally study the focus lifetime after focusing through dynamical scattering media, when iterative wavefront optimization and speckle decorrelation occur over the same timescale. We show experimental situations, corresponding to a broad distribution of decay rates of the scattering paths, where the focus presents significantly higher stability than the surrounding speckle.
Introduction
In recent years, several wavefront shaping techniques have been developed to partially compensate for the scattering induced by a disordered medium and to form a diffraction-limited focus using the scattered light [1], possibly at depth and non-invasively [2]. However, a major limitation to the application of these techniques to the imaging of real biological systems is the temporal decorrelation induced by minute changes of the optical index inhomogeneity: the decorrelation time of biological tissues can be in the millisecond range [3][4]. As a consequence, fast wavefront shaping systems are required for focusing light in these systems [4][5], and the lifetime of the formed focus is also limited by this decorrelation time [6]. Two main approaches have been developed to focus within this decorrelation time. On the one hand, digital optical phase conjugation (DOPC), relying on the phase conjugation of a measured wavefront, is a fast non-iterative technique capable of focusing in the millisecond range [7][8]. On the other hand, iterative optimizations can focus almost as fast through a scattering medium [9][10][11][12]. While DOPC methods are very appealing to tackle fast decorrelating media, optimization methods remain inescapable in many bio-imaging scenarios, particularly those based on two-photon fluorescence [13][14][15][16].
So far, the lifetime of the focus obtained after wavefront shaping has only been studied for DOPC [6][7]. In particular, it has been shown experimentally and theoretically that the temporal correlation function in intensity g2(t), used to quantify the decorrelation dynamics of the speckle, also gives the temporal decay of the focus intensity after phase conjugation. An analogous experiment has also been conducted with iterative optimization in the frequency domain [17], but in a stationary medium, where the decorrelation is induced by tuning the incident wavelength rather than by a physical displacement of the scatterers. In that paper, the authors demonstrated experimentally and theoretically that the focus intensity degradation is proportional to the spectral correlation function in intensity of the speckle after shifting the laser frequency. In all these works [6,7,17], the medium can be considered static during the wavefront shaping procedure, resulting in a proportionality between the timescales of speckle decorrelation and focus degradation. Conversely, if the wavefront correction procedure is much slower than the decorrelation time, focusing of the scattered photons cannot be achieved [9].
However, an interesting question remains: what happens if we perform an iterative optimization through a dynamical scattering medium, where wavefront correction and decorrelation occur over similar timescales? In this paper, we investigate which scattering medium properties (mean decorrelation time, width of the decay rate distribution) impact the characteristic lifetime of the focus obtained in such a scenario. In particular, we show that there are experimental situations where the focus can be significantly more stable than the surrounding speckle pattern, for instance when the medium is heterogeneous. Finally, we experimentally demonstrate that this phenomenon can be observed in acute brain slices.
Analytical model
The output speckle pattern formed after a scattering medium can be seen as a coherent sum of fields resulting from different scattering sequences (i.e. the diffusion paths of photons in the medium) with random phases and amplitudes [18]. Inside a dynamical scattering medium, each scattering sequence dephases, which induces a decorrelation of the output speckle pattern. Each individual scattering sequence decorrelates with a given decay rate Γ (see SI). Considering all scattering sequences provides the distribution of decay rates G(Γ) of the scattering medium [19]. This distribution cannot be directly measured, unlike the temporal intensity correlation function

g2(t) = ⟨I(t')I(t'+t)⟩ / ⟨I(t')⟩²  (1)

with I(t) the intensity of the speckle pattern measured on a camera at a time t [20][21]; g2(t) is used to characterize the temporal decorrelation of these sequences. G(Γ) and g2(t) can be linked by the following equation [19] (see SI):

g2(t) − 1 ∝ |∫ G(Γ) exp(−Γt) dΓ|²  (2)

For example, the inverse of the slope at the origin of this function (up to a prefactor) gives the mean decorrelation time τ, which is the mean time over which the scattering sequences change [19,22]. A full analysis of the autocorrelation function can provide higher moments of the distribution of decay rates.
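As an illustration of how g2(t) can be estimated from a recorded stack of speckle frames, the following minimal Python sketch (not the authors' code) computes the intensity autocorrelation of a (T, H, W) movie, averaged over pixels and start times:

```python
import numpy as np

def g2(stack):
    """Temporal intensity autocorrelation of a speckle movie.

    stack: array of shape (T, H, W), one speckle frame per time step.
    Returns g2 for lags 0..T-1, averaged over pixels and start times:
        g2(t) = <I(t') I(t'+t)> / <I>^2
    """
    T = stack.shape[0]
    I = stack.reshape(T, -1).astype(float)
    norm = I.mean() ** 2
    return np.array([(I[:T - lag] * I[lag:]).mean() / norm
                     for lag in range(T)])
```

For a perfectly static medium every frame is identical, so g2 stays flat at its lag-0 value <I²>/<I>² ≥ 1; a dynamical medium makes g2 decay towards 1 with the characteristic time τ discussed above.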
In the specific case of a monodisperse colloidal solution, the decay rate of a scattering path is proportional to the number of scattering events encountered, so the distribution of decay rates is directly proportional to the path length distribution [18]. For more complex scattering media, for example where some parts are static and others are moving, or when the distribution of scatterer sizes is not known (biological tissue), there is no simple relation between these two quantities.
Performing wavefront shaping with a spatial light modulator (SLM) means adding the appropriate phase to the incident wavefront so that some sequences sum constructively at a desired target to form a sharp focus. Inside a dynamical scattering medium, the scattering sequences dephase, which induces a degradation of the focus after wavefront shaping ends. The normalized degradation of the focus can be expressed similarly to equation 2 (see SI):

Ifocus(t) ∝ |∫ Gm(Γ) exp(−Γt) dΓ|²  (3)

with Gm(Γ) the distribution of decay rates of the focus. If the sequences selected to focus through a dynamical scattering medium present the same decay rate distribution as the full set of sequences forming the speckle, the focus and the speckle should have the same decorrelation dynamics, as was observed in DOPC. Hypothetically, if more stable scattering sequences could be favored during wavefront shaping (by selecting either more stable scatterers or shorter sequences), the focus should present a higher stability than the initial speckle before optimization.
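The role of the decay rate distribution can be illustrated numerically. For a hypothetical discrete distribution (the rates and weights below are illustrative, not measured values), the sketch evaluates the normalized intensity decay and shows that a focus built only from the slowest paths (a narrow Gm concentrated at small Γ) outlives the full speckle:

```python
import numpy as np

def intensity_decay(t, gammas, weights):
    """Normalized intensity decay |sum_i w_i exp(-Γ_i t)|² for a discrete
    decay-rate distribution given by (gammas, weights)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                     # normalize the distribution
    g1 = np.exp(-np.outer(np.asarray(t, dtype=float), gammas)) @ w
    return g1 ** 2

t = np.linspace(0.0, 1.0, 101)          # time axis (arbitrary units)
gammas = np.array([1.0, 5.0, 20.0])     # decay rates of three path classes
speckle = intensity_decay(t, gammas, [1, 1, 1])  # broad G: all paths
focus = intensity_decay(t, gammas, [1, 0, 0])    # narrow Gm: stable paths only
```

Both curves start at 1, but the "focus" curve, weighted entirely on the slowest decay rate, remains higher than the "speckle" curve at all later times, which is exactly the enhanced-stability scenario discussed in this paper.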
For a continuous iterative optimization, if decorrelation and optimization occur over similar timescales, it is not clear which scattering sequences will be used to form a focus. We experimentally investigate this question with a custom fast wavefront shaping system [9]. We use this setup to focus through synthetic homogeneous scattering media of various stabilities and scattering strengths, but also through stratified media with a "static" and a "dynamic" part, as can be the case for biological samples. In this latter case, the width of the decay rate distribution of the different scattering sequences in the medium can be broad. We study the degradation of the focus in these various scenarios and investigate under which conditions the focus could present an enhanced stability.
Experimental setup
Fig. 1 describes the experimental wavefront shaping setup. A phase-only spatial light modulator (Kilo-DM segmented, Boston Micromachines) shapes the incident wavefront of a CW laser at λ = 532 nm (Coherent Sapphire). The SLM is conjugated to the back focal plane of a microscope objective (10x, 0.25), which illuminates a scattering sample. The polarized output speckle is simultaneously imaged onto a CCD camera (Allied Vision Technologies Manta G-046B) and onto a mono-detector (PMT, Hamamatsu H10721-20). A continuous iterative wavefront optimization algorithm is implemented to maximize the intensity of one speckle grain collected by the PMT. In short, the optimization uses the Hadamard input basis. At each iteration, half of the pixels are modulated in phase while the PMT signal is monitored, and the optimal phase is added to the correction mask. We combined a fast acquisition card (NI PXIe-6361) and a fast FPGA board (NI PXIe-7962R) to reach a speed of 4.1 kHz per mode [9]. For all experiments, the full Hadamard basis is successively optimized 5 times in 1.25 s.
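A toy numerical model of this sequential Hadamard-basis optimization can be sketched as follows. A random complex vector stands in for one row of the sample's transmission matrix (a hypothetical stand-in, with no decorrelation), and for each ±1 Hadamard mode the phase step applied to half of the segments is scanned and the best value kept, mimicking the maximization of the PMT signal:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                # number of SLM segments (hypothetical)
# One row of a random transmission matrix: field reaching the target detector
tm = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

def target_intensity(phase):
    """Intensity collected by the mono-detector for a given phase mask."""
    return abs(np.sum(tm * np.exp(1j * phase))) ** 2

# Build the ±1 Hadamard basis by Sylvester's construction
H = np.array([[1]])
while H.shape[0] < N:
    H = np.block([[H, H], [H, -H]])

thetas = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
phase = np.zeros(N)
I0 = target_intensity(phase)
for _ in range(3):                    # a few passes over the full basis
    for h in H:
        mask = (h < 0).astype(float)  # phase-step half of the segments
        best = max(thetas, key=lambda th: target_intensity(phase + th * mask))
        phase += best * mask          # keep the optimal phase step
random_I = np.mean([target_intensity(rng.uniform(0, 2 * np.pi, N))
                    for _ in range(200)])
enhancement = target_intensity(phase) / random_I
```

Because the zero phase step is among the candidates, the target intensity never decreases, and after a few passes the enhancement over the mean random-mask intensity becomes large, in the spirit of the iterative algorithm described here (this sketch ignores measurement noise and speckle decorrelation).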
Our experimental system is capable of measuring successively the temporal intensity correlation function g2(t) and the focus degradation after the optimization ends, Ifocus(t). Indeed, the PMT signal provides Ifocus(t), and g2(t) is measured with the CCD camera [20][21].
Fitting model
Establishing a complete theory describing which dynamic scattering sequences are selected during an iterative optimization to focus remains difficult and outside the scope of this article. Instead, we use a simple analysis of our experimental results. As was done previously to quantify the g2 function [19], we express both decorrelation functions (of the speckle, g2, and of the focus, Ifocus) as a function of the moments of their decay rate distribution:

f(t) = [α + (1 − α) exp(−t/τ) × (1 + σΓ² t² / 2)]²  (4)
with τ the mean decorrelation time, α a correlation term at large times that results from the presence of static sequences, and σΓ the standard deviation of the decay rate distribution. Each of these three quantities is defined respectively for the decorrelation of the speckle and of the focus. Additional moments could be added to this expression but they are, experimentally, harder to extract due to noise.

Therefore, we can compare, for all experiments, the decay rate distribution (through the estimate of its mean value and its standard deviation) for the speckle and the focus. If the focus uses statistically the same scattering sequences as the speckle, these distributions (and therefore their first moments) should be identical. On the other hand, if more stable scattering sequences are favored during wavefront shaping, the mean decay rate of the focus should be lower (i.e. its mean decorrelation time higher) and the width of its distribution should be narrower. Moreover, if the medium contains static scattering sequences, and if they are favored by the wavefront shaping optimization, the correlation term at large times α will be larger for the focus.
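In practice, such a three-parameter model can be fitted to a measured decorrelation curve with a standard least-squares routine. The sketch below uses a cumulant-style form with a static plateau, built from the parameters named here (τ, α, σΓ), on synthetic noise-free data; the exact functional form and units used by the authors may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_model(t, tau, alpha, sigma):
    """Decorrelation model with static plateau alpha, mean decorrelation
    time tau and decay-rate spread sigma (second-cumulant-style term)."""
    return (alpha + (1.0 - alpha) * np.exp(-t / tau)
            * (1.0 + 0.5 * (sigma * t) ** 2)) ** 2

# Hypothetical, noise-free decorrelation curve with known parameters
t = np.linspace(0.0, 0.5, 200)        # time in s (illustrative range)
data = decay_model(t, 0.07, 0.2, 3.0)

popt, _ = curve_fit(decay_model, t, data, p0=[0.05, 0.1, 1.0],
                    bounds=([1e-4, 0.0, 0.0], [1.0, 1.0, 100.0]))
tau_fit, alpha_fit, sigma_fit = popt
```

Fitting the same model separately to the speckle g2 and to the focus degradation then allows the comparison of (τ, α, σΓ) between the two, as done in the analysis that follows.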
Results
We have used different samples in order to understand in which situations a stable focus can be formed. We first studied the case of a colloidal solution in the multiple scattering regime, where differences in sequence decay rates result from differences in their lengths. The width of the path length distribution of such a medium (and therefore its decay rate distribution) is not tunable. To study the impact of the width of the decay rate distribution, we then designed a second category of samples, composed of a thin dynamical layer above a static layer in the multiple scattering regime. In this medium, part of the light travels ballistically through the dynamical layer. Therefore, static scattering sequences exist through the sample. By varying the size of the scatterers inside the colloidal solution, the decay rate distribution of the dynamical scattering sequences could be tuned. For small polystyrene beads, the scattered light was highly unstable. In this situation our optimization scheme was only capable of compensating for the static sequences. On the other hand, for large polystyrene beads, the decorrelation was slower. In this last situation, our system was capable of compensating for both static and dynamical scattering. Finally, we achieved qualitatively the same result through acute brain slices from the brainstem.
Monodisperse colloidal solution
The first sample used was a 500 µm thick solution of TiO2 (Sigma Aldrich 224227) in glycerol with a mass concentration of 20 g/l (ℓs = 70 µm and ℓ* = 200 µm) [9]. Light propagation through the sample is therefore in a regime of multiple scattering. A schematic of a few dynamical sequences is illustrated in fig 2.a. The decorrelation time of a scattering sequence in a monodisperse solution is directly related to the number of scattering events [18]. Furthermore, tuning the temperature modifies the viscosity of the sample, thus allowing its mean decorrelation time to be tuned.
The temperature of the sample was first adjusted to 16 °C to obtain an average speckle decorrelation time of τspeckle = 70 ms. The resulting mean focus degradation and its standard deviation are shown in fig 2.b. For this dynamical sample, a mean focus decorrelation time of τfocus = 70 ms with a standard deviation of 21 ms was measured over 500 realizations.
We then measured the average value of the focus decorrelation time and its standard deviation for different media, obtained by changing the viscosity of the solution via the sample temperature, such that the decorrelation time of the speckle ranged from 50 ms to 150 ms. In fig. 2.c, the ratio τfocus / τspeckle is shown. This ratio is constant and close to unity for all tested stabilities. Interestingly, an individual realization of the focus may have a characteristic decorrelation time different from that of the speckle, but on average they are identical. We also did not measure any significant difference between the standard deviation of the decay rate distribution of the focus and of the speckle for these experiments. For these samples, this value ranged from 1×10⁻³ ms⁻¹ to 3×10⁻³ ms⁻¹. Considering all experiments, the mean R² obtained by fitting g2 was 0.99 +/- 0.01. The mean R² obtained by fitting each individual focus decorrelation was 0.96 +/- 0.01. This lower value originates from the dynamic speckle background that overlaps with the focus. A residual offset α of the order of 1% can be measured in these experiments. It results from imperfections in the measurements: the finite field of view on the camera used to measure g2, and insufficient averaging to remove the speckle background for the focus. This residual offset is of the order of the fit accuracy and was therefore neglected.
So far, using monodisperse colloidal solutions, we did not find any optimization procedure that favors stable scattering sequences. The decay rate distribution might be too narrow to observe different stabilities between the focus and the speckle. To investigate further, we synthesized dynamical scattering media with a wider decay rate distribution of the different scattering sequences.
Combination of layers of static and dynamic scatterers
In a second experiment, we synthesized dynamical scattering media that exhibit a larger decay rate distribution (see figure 3.a for a diagram of the scattering sequences existing in our media). By superimposing two scattering media, a thick static layer and a thin dynamical layer, we were able to control the percentage of dynamical scattering sequences exiting the sample. In this experiment, we wanted to investigate which sequences (fixed or dynamic, and which ones among the dynamic) form a focus after optimization. In a first experimental situation, we designed a sample where the dynamical sequences were decorrelating too fast to be corrected by our wavefront shaping system. In a second situation, we designed a sample where the dynamical sequences were slower and could be compensated by our system [9].

[Fig. 3 caption, fragment] The mean speckle decorrelation time, in the presence of the colloidal solution, is below 1 ms. The optimization process is not fast enough to compensate for this dynamical scattering. Therefore, the sequences contributing to the focus are mostly static sequences. (C) Mean value of the plateau α after decorrelation for the focus (red diamonds) and the speckle (blue squares) for different scattering mean free paths of the colloidal solution. Error bars represent the 95% confidence bounds of the fit. (D) CCD images (320×320 µm²) of the speckle pattern measured before optimization (left) and after a stable focus is obtained for different scattering samples (from left to right: ℓs = ∞, 2.1 mm, 1.4 mm, 1.2 mm).
The first medium was a solution of polystyrene beads (Polybead® Carboxylate Microspheres 0.35 µm) in water positioned over a thick static scattering medium.The thickness of the layer of the dynamical scattering solution was 1 mm.
The percentage of ballistic photons through the scattering solution was controlled by adjusting the polystyrene bead concentration. Using Mie theory, the concentration required to obtain a given scattering mean free path in the dynamical medium can be computed. This percentage ranges from 46% (ℓs = 1.3 mm) to 100% (no scattering solution). The addition of a strongly scattering static layer ensures that we are overall in the multiple scattering regime.
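The relation behind this calculation is ℓs = 1/(ρσsca), with ρ the bead number density and σsca the per-bead scattering cross-section (obtained from Mie theory), combined with the Beer-Lambert law exp(−L/ℓs) for the ballistic fraction. A short sketch, using a placeholder cross-section rather than an actual Mie computation:

```python
import numpy as np

def scattering_mean_free_path(number_density_m3, sigma_sca_m2):
    """ls = 1 / (rho * sigma_sca): mean free path from bead number density
    and the scattering cross-section of a single bead."""
    return 1.0 / (number_density_m3 * sigma_sca_m2)

def ballistic_fraction(thickness_m, ls_m):
    """Beer-Lambert fraction of photons crossing the layer unscattered."""
    return np.exp(-thickness_m / ls_m)

# Illustrative (not Mie-computed) cross-section for a sub-micron bead
sigma = 2e-14                    # m^2, placeholder value only
L = 1e-3                         # 1 mm dynamical layer
rho = 1.0 / (1.3e-3 * sigma)     # density chosen so that ls = 1.3 mm
ls = scattering_mean_free_path(rho, sigma)
frac = ballistic_fraction(L, ls)
```

With ℓs = 1.3 mm and a 1 mm layer, exp(−1/1.3) ≈ 0.46, which recovers the 46% ballistic fraction quoted above; tuning ρ (through the bead concentration) moves this fraction anywhere between 0 and 100%.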
For each solution, the measurement of g2 confirms that the speckle resulting from dynamically scattered photons decorrelates in less than a millisecond (see figure 3.b, dashed lines). We also observe that the g2 function reaches a plateau (αspeckle) at large decorrelation times, indicating that static sequences contribute to the speckle pattern. For each sample, the decorrelation of the focus averaged over 100 realizations is plotted in figure 3.b (solid lines). Yet, for all tested concentrations of colloidal solutions (figure 3.b), the wavefront correction system was able to form a focus, which decorrelated more slowly than the speckle and eventually reached a plateau. As our system is not fast enough to compensate for the dynamically scattered photons, the value of the plateau (αfocus) was larger than the one measured for the speckle (αspeckle), showing that the focus contains a larger proportion of static sequences (figure 3.b). As the scattering mean free path decreases, more and more photons were scattered by the dynamical layer (figure 3.c). Nevertheless, in all cases, the focus obtained by wavefront shaping was mostly formed by static scattering sequences. Interestingly, some slowly decorrelating (on the order of a second) scattering sequences also contribute to the focus. These more stable sequences are probably snake-like sequences that encounter only very few forward scattering events. Finally, we have seen that, through these samples, the distribution of sequences used to form a focus is very different from that of the speckle. Moreover, figure 3.d shows that the enhancement of the focus intensity by optimization (as defined in Vellekoop et al [23]) was larger for smaller bead concentrations in the dynamical samples. Indeed, the dynamical scattering speckle can be seen as extra measurement noise that reduces the enhancement [24].
We then synthesized a similar dynamical medium but with larger colloidal beads (Polybead® Carboxylate Microspheres 1 µm). The larger beads experienced higher viscous forces, which decreased the decay rates of the dynamical scattering sequences. By tuning the concentration (and therefore ℓs), we simultaneously controlled the percentage of fixed sequences exiting the sample and the mean decorrelation time of the dynamical sequences. For all prepared samples, the mean speckle decorrelation times ranged from 50 ms to 250 ms and the proportion of fixed scattering sequences ranged from almost 0 to 80%. Our wavefront shaping system should therefore be capable of optimizing the phase of the wavefront travelling through any of these sequences [9]. The standard deviation of the decay rate distribution ranged from 2×10⁻³ ms⁻¹ to 11×10⁻³ ms⁻¹. For large ℓs, most of the scattering sequences inside the scattering medium are static; there is therefore little room for increasing the number of static scattering sequences contributing to the focus. For small ℓs, the proportion of static sequences increases at the focus. In all cases, the focus exhibits, on average, a two-fold higher mean decorrelation time compared to the speckle, which also results in a narrower width of the decay rate distribution for the focus. In a monodisperse colloidal scattering solution, these more stable sequences should correspond to snake-like sequences that encounter only a few forward scattering events. The intensity enhancement, as defined in Vellekoop et al [23], follows a linear trend similar to the one previously reported [9], ranging from 20 for τspeckle = 25 ms to 120 for τspeckle = 225 ms.
To conclude, the key element in forming a focus with stable sequences seems to be the width of the decay rate distribution of the different scattering sequences. The broader the distribution of the different scattering sequences, the more stable sequences the focus contains and the longer its lifetime.
Biological samples
As a last experiment, we investigated whether our optimization algorithm allows achieving, in biological tissues, a focus more stable than the speckle [25]. This would of course be very beneficial for performing non-linear imaging after wavefront correction, since an increased focus stability would provide additional time for the formation of a fluorescence image.

[Figure caption, fragment] …, more stable degradation (Focus+) and less stable degradation (Focus-). The scattering sequences show a fast decorrelation (~100 ms) followed by a slow one (~5 s). On average, the optimization process promotes the most stable sequences at the focus. In the best case, the focus is formed only with stable sequences. On the contrary, in the worst case, the focus degradation follows the speckle decorrelation.
Our sample was a 300 µm thick acute slice of mouse brain (ℓs ~ 40 µm [26]). To keep the slices alive, a stream of a solution of 125 mM NaCl, 2.5 mM KCl, 2 mM CaCl2, 1 mM MgCl2, 1.25 mM NaH2PO4, 26 mM NaHCO3 and 25 mM glucose, bubbled with 95% O2 and 5% CO2, was imposed around the wafer [27]. Every effort was made to keep the brain slices alive for the duration of the experiment. The scheme of the system used to keep the slice acute is shown in fig. 5. Figure 5.c shows (solid blue line) the temporal correlation function of the speckle. A first rapid decorrelation (slope at the origin of ~100 ms) is followed at long times by a slower decorrelation with a typical timescale on the order of 5 s. We did not study here the microscopic origin of these different decorrelation times. As in the previous experiment, we observed that the focus obtained by optimization (solid red line) is on average (over 500 realizations) more stable than the speckle, and presents an average enhancement of 33 ± 10. The red dotted line and the red crossed line show, respectively, the least stable and the most stable focus. These two optimizations led to identical enhancements of the order of 30. It seems that there are no benefits (or drawbacks) in terms of enhancement to favoring stable sequences. Interestingly, the most stable focus generated here seems to be almost perfectly stable (over tens of seconds).
Discussion and conclusion
We have shown that, in contrast with previous wavefront shaping experiments, the focus does not always have the same decay rate distribution as the surrounding speckle. In particular, we have shown that under specific conditions an increase of the focus mean decorrelation time by a factor of 2 (Fig. 4) and up to several orders of magnitude (Fig. 3) is obtained, compared to the speckle stability.
Our interpretation of these results is the following. At each iteration of the optimization, some stable and some dynamic sequences are corrected to interfere constructively at the focus. Rapidly, the dynamic sequences decorrelate and no longer contribute to the focus, while the stable sequences still do. Therefore, iteration after iteration, more and more stable sequences accumulate, leading to an enhanced stability of the focus. Ultimately, all the available SLM modes may compensate only for the stable sequences (as observed in Fig. 5.c). The key to achieving a more stable focus seems to be the width of the decay rate distribution of the scattering sequences. The wider this distribution, the easier it is to promote stable diffusion sequences by optimization. On the contrary, a narrow distribution (as through a monodisperse solution) did not allow, for an iterative optimization of the wavefront, the selection of more stable sequences, at least for the range of parameters investigated. Another key element in obtaining a more stable focus is the speed at which the optimization is done. If the optimization is too fast compared to the decorrelation time of the medium, this effect does not appear (as in DOPC). In our case, we observed that if the optimization time is of the order of the mean decorrelation time of the medium, a focus more stable than the speckle can be formed. The influence of these two parameters (the width of the decay rate distribution of the scattering sequences and the speed at which the optimization is done) remains difficult to analyze experimentally, and further numerical studies (beyond the scope of this paper) may be required to fully describe their respective roles. Additional studies of the impact of the optimization algorithm on the stability of the focus could highlight optimization strategies that further promote the emergence of a more stable focus.
We believe these results are of great interest, particularly for biomedical imaging. For instance, during in vivo imaging of a mouse brain (the skull having been removed and replaced by a glass coverslip), part of the light propagates through or around blood vessels, thus imposing a very rapid decorrelation of the speckle. Despite this, a wavefront correction system should be able to focus the photons scattered by static structures or by structures having a slow dynamic (cells, myelinated axons, …), if the fraction of dynamically scattered light remains low. Another important scenario is imaging through the skull. The skull would then act as a nearly static scatterer and the brain tissue as the dynamical one. In this case, we expect the wavefront correction system to preferentially correct the scattering by the skull. An interesting case is the correction of the wavefront in the presence of a ballistic but aberrated wavefront inside a tissue. The scattered light rapidly decorrelates whereas the aberrated light remains relatively stable. A correction of the wavefront would then preferentially correct aberrations. One last perspective could be to extend this study to the broadband regime. Mounaix et al demonstrated a selection of short scattering sequences by exploiting the short coherence length of a pulsed laser through a homogeneous scattering medium [28]. By exploiting this effect, one could obtain a further increase in the stability of the focus.
Fig. 1. (A) Experimental setup. P: polarizer; A: aperture; L: lens (focal length = 150 mm); BS: beamsplitter; MEMS-SLM: MEMS-based spatial light modulator. The wavefront of a collimated laser beam (532 nm) is modulated by a phase-only spatial light modulator. The phase mask is imaged on the back aperture of a microscope objective and focused into a scattering sample. A second microscope objective images the output speckle, via a beamsplitter, on a CCD camera and on a PMT. The PMT collects the intensity of one speckle grain through an optical fiber. An iris controls the aperture size to match the speckle grain size with the diameter of the fiber. A polarizer selects one polarization state of the output speckle. The PMT signal is acquired by a DAQ board and sent to an FPGA board. During the optimization algorithm, the FPGA board computes the optimal phase for a given Hadamard mode, adds it to the current phase mask of the SLM and applies the new mask to the SLM. Optimization of one mode takes 243 µs. (B) A stack of speckle patterns is recorded over time to characterize the decorrelation dynamics of the speckle. (C) After ending the optimization, the focus degrades in time due to the speckle decorrelation.
Fig. 2. Focus stability through a monodisperse colloidal solution. (A) Scheme of the scattering process. When propagating inside a scattering medium, light travels through many scattering sequences, which then interfere to form a speckle pattern. For a dynamic medium, the sequences change in time, leading to the decorrelation of the speckle. (B) Evolution in time of the focus after ending the optimization through a medium with a mean decorrelation time of 70 ms: average (solid line) and standard deviation (blue region) over 500 realizations. Dotted line: intensity correlation function (g2) of the speckle. Averaged over a large number of realizations, the focus presents the same stability as the speckle, even if each individual realization does not. Inset: residuals of the fit of g2 (dark line) and of an individual focus degradation (blue). (C) Ratio of the focus mean decorrelation time and of the speckle decorrelation time for different speckle
Fig. 3. Focusing through two layers of scattering media (a static scattering medium and a fast dynamic scattering medium). (A) Scheme of the scattering sample. Light is first multiply scattered by a fixed scattering layer (in grey). Then light undergoes only a few scattering events while propagating through a dynamic scattering layer. Part of the light propagates ballistically through this dynamic layer (black arrows); the other sequences (in red) decorrelate due to the motion of the scatterers. (B) Comparison between focus degradation (solid lines) and speckle decorrelation (dotted lines) for different scattering mean free paths of the colloidal solution.
Fig. 4. Focusing through two layers of scattering media (a static medium and a slow dynamic scattering medium). The scheme of the experiment is similar to the one shown in Fig. 3. The dynamic scattering medium is an aqueous colloidal solution of polystyrene beads. Due to the larger polystyrene beads, the mean scattering decorrelation time is slower, ranging from 55 ms to 405 ms, compared to the case shown in Fig. 3. Here, the optimization procedure is fast enough to compensate for the dynamic scattering. (A) Example of a focus degradation (red line) and speckle decorrelation (blue line) for a colloidal solution with ℓs = 0.6 mm. Inset: residuals of the fits. (B) Mean decorrelation time of the focus (red diamonds) and of the speckle (blue squares) for different scattering mean free paths of the colloidal solution. Error bars represent the 95% confidence bounds of the fit. On average, the focus is two times more stable than the speckle. (C) Mean position of the plateau a after decorrelation for the speckle (blue squares) and for the focus (red diamonds), for different scattering mean free paths of the colloidal solution. Error bars represent the 95% confidence bounds of the fit. (D) Standard deviation of the decay rate distribution for the speckle (blue squares) and for the focus (red diamonds). The scattering sequences contributing to the focus are the most stable ones. Their distribution is also narrower compared to the initial speckle decay rate distribution.

The results are presented in figure 4 for ℓs ranging from 0.6 mm to 2.3 mm. Examples of focus and speckle decorrelation are presented in figure 4.a for ℓs = 0.6 mm, with their fits and residuals. Considering all experiments, the mean R² obtained was 0.97 ± 0.01. The blue squares and the red diamonds indicate the average proportion of static sequences (fig. 4.b), the mean decorrelation time (fig. 4.c) and the standard deviation of the decay rate distribution (fig. 4.d), respectively, for the speckle and the focus. The data obtained for the focus were averaged over 100 realizations. For large ℓs, most of the scattering sequences inside the scattering medium are static. Therefore, there is not much possibility of increasing the number of static scattering sequences contributing to the focus. For small ℓs, the proportion of static sequences increases at the focus. In all cases, the focus exhibits, on average, a two-fold higher mean decorrelation time compared to the speckle, which also results in a narrower width of the decay-rate distribution.
Fig. 5. (A) Scheme of the setup to maintain acute brain slices. A slice is immersed in an oxygenated buffer which is renewed by a flux. (B) Oblique wide-field image (4.6 × 4.6 mm²) of a typical acute brain slice (cerebellum). (C) Focusing through an acute mouse brain slice (brainstem) of 300 µm. Intensity correlation of the speckle: blue; average focus degradation (red, <Focus>), more stable degradation (Focus+) and less stable degradation (Focus−). The scattering sequences show a fast decorrelation (~100 ms) followed by a slow one (~5 s). On average, the optimization process promotes the most stable sequences at the focus. In the best case, the focus is formed only with stable sequences. On the contrary, in the worst case, the focus degradation follows the speckle decorrelation.
The scheme of the setup is shown in fig. 5.a and a typical widefield image of a slice is shown in fig. 5.b.
Creating Virtual Hematoxylin and Eosin Images using Samples Imaged on a Commercial CODEX Platform
Multiparametric fluorescence imaging through CODEX allows the simultaneous imaging of many biomarkers in a single tissue section. While the digital fluorescence data thus obtained can provide highly specific characterizations of individual cells and microenvironments, the images obtained are different from those usually interpreted by pathologists (i.e., hematoxylin and eosin [H&E] slides and 3,3′-diaminobenzidine-stained immunohistochemistry slides). Having the fluorescence data plus coregistered H&E or similar data could facilitate the adoption of multiparametric imaging into regular workflows, as well as facilitate the transfer of algorithms and machine learning previously developed around H&E slides. Since commercial CODEX instruments do not produce H&E-like images by themselves, we developed a staining protocol and associated image processing to make “virtual H&E” images that can be incorporated into the CODEX workflow. While there are many ways to achieve virtual H&E images, including the use of a fluorescent nuclear stain and tissue autofluorescence to simulate eosin staining, we opted to combine fluorescent nuclear staining (through 4′,6-diamidino-2-phenylindole) with actual eosin staining. We also output images derived from fluorescent nuclear staining and autofluorescence images for additional evaluation.
than a microscope slide. The coverslip is sandwiched between two gaskets (or between a gasket and microscope stage adapter in the updated version) and serves as the bottom of a well, through which fluids are passed during the cyclical staining and imaging process that makes highly multiparametric imaging possible. A coverslip is required since the tissue is imaged on an inverted microscope, and microscope slides would be too thick for the optical objectives used in the usual system. In our experiments, roughly half of the coverslips crack in the process of removing them from the gaskets. Other investigators have reported staining and imaging the coverslips with eosin at the end of the regular CODEX imaging process, while the coverslips are still in place (i.e., still between the gaskets). [3] However, their report lacked further details regarding their approach.
We, therefore, developed our own protocol for creating images using CODEX coverslips that closely resemble traditional H&E-stained tissue sections ("virtual H&E" images; see [5-11] for examples). Similar to another report, [3] we leave the coverslip in place on the instrument and use the fluid well created by the coverslip, gaskets, and coverslip holder to stain with eosin. Since eosin is fluorescent and compatible with our Cy3 filter set, [12] it is simple to image within the existing system. Unfortunately, hematoxylin does not have similar properties. However, since hematoxylin is primarily used to stain the nuclei, we felt that substituting 4′,6-diamidino-2-phenylindole (DAPI) staining for hematoxylin would be sufficient for our purposes (and compatible with our existing DAPI filter set). We then apply relatively straightforward mathematical transformations, similar to those reported elsewhere, [5,8,13] to the fluorescence images of the eosin and DAPI staining to create our virtual H&E images.
Methods
Eosin and 4′,6-diamidino-2-phenylindole staining protocol

An 8.2% eosin Y working solution was prepared using 390 ml 95% ethanol, 50 ml 1% eosin Y, 5 ml 1% phloxine, and 2 ml glacial acetic acid, from which 300 ml were then mixed with 100 ml water and 2 ml glacial acetic acid to arrive at the final eosin Y working solution. A 50% solution of ethanol diluted in purified water was also prepared before staining. After completion of a normal CODEX imaging run and with the coverslip still in place on the microscope stage, the solution within the coverslip well was removed and replaced three times with 1 ml of 190 proof ethanol, waiting 1 min per wash, and being careful to pipet gently so as to not cause tissue to separate from the glass coverslip. This was then replaced with 500 µl of 190 proof ethanol solution with 0.5 µl of DAPI solution (Akoya Nuclear Reagent; Akoya cat. no. 7000003) and 0.5 µl of eosin working solution added. After incubating for 3 minutes, the solution within the well was then replaced three times with 1 ml of 190 proof ethanol. The well solution was then replaced twice with 1 ml of CODEX buffer (Akoya cat no. 7000001, diluted 10x in water), then replaced with 700 µl of CODEX buffer. The sample was imaged soon thereafter on a Keyence BZ-X800 microscope using DAPI and Cy3 filter sets (Chroma cat. no. 49000 and Chroma cat. no. 49004, respectively) and 0.75 NA Nikon 20x and 0.95 NA Nikon 40x Plan Apo lambda objectives.
Imaging
Imaging was performed on a Keyence BZ-X800 microscope (with built-in camera) with a Nikon 20x Plan Apo lambda NA 0.75 objective using "high resolution" camera mode. Alternatively, a Nikon 40x Plan Apo lambda NA 0.95 objective using "high sensitivity" or "high resolution" camera mode could be used. Using "high sensitivity" camera mode bins four pixels (2 × 2) into one, which reduces image resolution through pixelation. In a prior iteration of our method, we created virtual H&E images using 20x magnification and "high sensitivity" camera mode, but the results suffered due to (1) pixelation effects and (2) overly sensitive imaging (i.e., 4x more sensitive due to binning), which made it necessary to use very short camera acquisition times (due to the brightness of the dyes' fluorescence) and unnecessarily increased sensitivity to possible variations in reagent dilutions.
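The 2 × 2 binning described above is performed in camera hardware; the following toy sketch only illustrates the arithmetic (the 4 × 4 image is a made-up example):

```python
import numpy as np

img = np.arange(16.0).reshape(4, 4)          # toy 4x4 sensor image
# 2x2 binning: partition the image into 2x2 blocks and sum each block.
# Signal per output pixel quadruples while resolution halves along each axis.
binned = img.reshape(2, 2, 2, 2).sum(axis=(1, 3))
```

The total signal is conserved, which is why binned acquisition is 4x more sensitive but coarser.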
Conversion of raw fluorescence images to virtual hematoxylin and eosin images
Fluorescence images demonstrate signal intensity that is generally linear with respect to the number of fluorescent molecules present, while normal brightfield imaging of true H&E-stained slides is essentially a measurement of light absorbance by the dye molecules and follows a logarithmic relationship with respect to concentration of dye molecules. [13] We, therefore, modeled the conversion of fluorescence images to absorbance images using the following equations: where R, G, and B correspond to the red, green, and blue channels of the resulting virtual H&E image; i and j are pixel indices; c values correspond to a postulated expected minimum amount of light (brightfield out of focus light, etc.) that is transmitted for a maximally stained specimen for the given channels; a and b vectors correspond to the expected decrease in brightfield light intensity for increasing brightness seen by fluorescence microscopy for DAPI and eosin staining, respectively; k D and k E are scalars that are added for convenience for adjusting impact on image output based on DAPI and eosin input images, respectively, and to avoid modifying component values of a and b; D is the DAPI monochromatic fluorescence image; and E is the monochromatic eosin fluorescence image. Constants were adjusted until there was satisfactory correspondence with corresponding H&E images, after which only k D and k E were modified from sample to sample. The resulting RGB channels form a final virtual H&E image. By omitting the eosin image (E), a virtual hematoxylin-only stained image is seen, similar to what is used for conventional immunohistochemistry.
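The description above amounts to a Beer-Lambert-style mapping from fluorescence intensity to transmitted brightfield light. The following sketch implements one such mapping; the attenuation vectors `a` and `b`, the floor `c`, and the gains `k_d` and `k_e` are illustrative assumptions, not the constants used by the authors.

```python
import numpy as np

def virtual_he(dapi, eosin, k_d=1.5, k_e=1.0,
               a=(0.80, 0.95, 0.30),    # assumed per-channel attenuation of DAPI signal (R, G, B)
               b=(0.10, 0.85, 0.60),    # assumed per-channel attenuation of eosin signal
               c=(0.02, 0.02, 0.02)):   # assumed minimum transmitted light per channel
    """Map two normalized fluorescence images to a brightfield-like RGB image."""
    d = np.clip(dapi, 0.0, 1.0)
    e = np.clip(eosin, 0.0, 1.0)
    rgb = np.empty(d.shape + (3,))
    for ch in range(3):
        # Beer-Lambert: transmitted light falls off exponentially with dye amount.
        transmitted = np.exp(-(k_d * a[ch] * d + k_e * b[ch] * e))
        rgb[..., ch] = c[ch] + (1.0 - c[ch]) * transmitted
    return rgb
```

Pixels with no fluorescence map to (nearly) white, strong DAPI to blue-purple, and strong eosin to pink, qualitatively matching H&E; passing zeros for the eosin image yields a hematoxylin-only rendering as mentioned above.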
Software for the conversion was developed in Python using a Jupyter notebook and a minimal set of modules that included NumPy, pandas, tifffile, and matplotlib. Updated source code and associated example data are available for download at https://github.com/SimonsonLab/VirtualHE_examples.
In addition to the conversion of images obtained through eosin and DAPI combination staining, image conversion was also performed on DAPI and autofluorescence images. This was performed for comparison with the DAPI/eosin staining images. The DAPI/autofluorescence images were collected using an Atto 550 filter set (Cy3 filter set) on the first cycle of the regular CODEX imaging (when autofluorescence is expected to be highest) with equivalent mathematical transformations, albeit with different chosen constants k D and k E .
Results
Virtual H&E images were considered qualitatively good surrogates for regular H&E-stained images, as determined by two board-certified pathologists (PDS and JRF), appearing very similar to what is expected from regular H&E staining. An example application is shown in Figure 1, which demonstrates the utility of the virtual H&E staining in helping to identify eosinophilic inclusions/nucleoli in Hodgkin and Reed-Sternberg cells in a case of classic Hodgkin lymphoma [see also Supplementary Figures 1-3]. The protocol was time-sensitive, with gradual loss of eosin staining over time as it dissociated from the tissue specimens and increased generalized background fluorescence from the imaging buffer. Hence, imaging was performed directly after completion of the staining protocol. After imaging with eosin and DAPI, we experimented with tissue clearing by the CODEX instrument, followed by reimaging. Reimaging demonstrated that virtually all of the eosin was removed by the CODEX tissue clearing cycle.
Virtual H&E images created using DAPI and autofluorescence (captured through an Atto 550 filter set using the first CODEX imaging cycle data) were qualitatively very similar to those captured with our post-imaging DAPI-eosin staining protocol, but the autofluorescence images showed significant raster-scanning photobleaching artifacts that were evident in the final images, more than those seen for eosin staining [see Supplementary Figure 4].
Discussion
As brightfield H&E staining is predicted to remain the gold standard by which pathologists are trained to interpret histology images, presenting imaging data acquired through other methods in a virtual H&E representation is desirable. [5][6][7][8][9][10]13] Here, we have presented our method for use with a commercial CODEX imaging system [3,4] that allows us to use the same tissue section to create a virtual H&E image for direct comparison with the fluorescence imaging of many tissue markers. A similar process could be (and is often) followed for other new imaging systems when the samples cannot be (or cannot conveniently be) stained using H&E and brightfield imaging.
There are multiple methods that could be applied to create virtual H&E images from tissue imaged on a CODEX system. In addition to the method we have presented, another option includes using images of the tissue autofluorescence (collected, for example, using a GFP or Atto 550 filter set) for virtual eosin staining and DAPI fluorescence for virtual hematoxylin staining. [5] This approach eliminates manual staining at the risk of a mismatch between the autofluorescence signal and what would be the true eosin staining pattern. Despite this risk, for many applications, the difference will be small and not affect tissue interpretation. Nevertheless, we determined that actual eosin staining would be preferable, which prompted the development of a staining procedure and associated software. When both imaging approaches were directly compared, we found the eosin-stained imaging to be of higher quality, primarily due to the strong effects of photobleaching seen in the autofluorescence images, which were less pronounced in the eosin images (which is to be expected given the generally short photobleaching lifetime for most autofluorescence).
Given that the transformation to virtual H&E images involves applying exponential-decay transforms, some image detail can actually appear to be lost after transformation, which was particularly noticeable when comparing transformed and nontransformed images of cell nuclei. By making the option available to switch between the exponential-transformed and linear-transformed images, it is likely that pathologists will quickly adapt to, and possibly even prefer, having the nontransformed DAPI images, despite these being less similar to true H&E images.

[Figure 1 caption: The virtual H&E image, created using the DAPI and eosin fluorescence images, helps demonstrate eosinophilic nuclear inclusions, which by CODEX imaging costain with CD20. CODEX image: blue = DAPI, magenta = CD30, yellow = MUM1, cyan = CD20, and white = CD68. Images were captured on a Keyence BZ-X800 microscope with a 20x Nikon Plan Apo lambda 0.75 NA objective and the "high resolution" camera setting.]

An important limitation of this approach is the lack of visualization, or altered visualization, of pigments present in the specimen, since these will be displayed as some mix of H&E coloring. Examples include hemoglobin (appearing as magenta rather than red) and melanin. For evaluation of pigments intrinsic to the sample, additional tissue sections with traditional H&E staining will likely be required. Furthermore, as eosin might have different fluorescence properties in different molecular environments (e.g., a shifted emission spectrum when bound to protein, self-quenching effects at high molecule densities, etc.), we also recognize that, even when using the same molecule, the correlation between brightfield and fluorescence imaging may be more complex than our simple transformation allows, and more sophisticated transformations (e.g., nonlinear or deep-learning GAN models) might ultimately yield better virtual H&E images.
Finally, the images produced through fluorescence are a combination of eosin fluorescence and autofluorescence, so care must be taken to ensure sufficiently strong eosin staining is present to ensure the autofluorescence contribution is negligible.
In summary, we have presented a straightforward staining protocol and analysis algorithm that should be of interest to investigators who are using CODEX systems and wish to create virtual H&E images of the same tissue section used for imaging biomarkers. The images will be helpful for trained pathologists who are asked to interpret the images, whether for research questions or eventual clinical use.
Financial support and sponsorship
Financial support was provided through the Department of Laboratory Medicine and Pathology at the University of Washington and the Department of Pathology and Laboratory Medicine at Weill Cornell Medicine.
Conflicts of interest
There are no conflicts of interest.
SUPPLEMENTARY MATERIAL
Updated source code and associated example data are available for download at https://github.com/SimonsonLab/VirtualHE_examples. Figure S4: Virtual H&E images of a tonsil created using the autofluorescence image instead of the eosin fluorescence image, ×20, "high sensitivity" camera mode. (a-c) 4′,6-diamidino-2-phenylindole fluorescence, eosin fluorescence, and virtual H&E brightfield images, respectively. (d) Photobleaching of the autofluorescence signal contributes to raster-scanning artifacts often present in the final H&E images.
Event-Triggered Consensus for Linear Continuous-time Multi-agent Systems Based on a Predictor
In this paper, the problem of event-triggered consensus for linear continuous-time multi-agent systems is investigated. A new event-triggered consensus protocol based on a predictor is proposed to achieve consensus without continuous communication among agents. In the proposed consensus protocol, each agent only needs to monitor its own state to determine its event-triggered instants. When an event is triggered, the agent updates its consensus protocol and sends its state information to its neighbors. In addition, the agent also updates its consensus protocol and the predictor when it receives state information from its neighbors. A necessary and sufficient condition under which the consensus problem can be solved is derived. Moreover, it is proved that Zeno behavior does not occur. Finally, a numerical example is given to illustrate that the protocol proposed in this paper can make multi-agent systems achieve consensus with much fewer event-triggered instants.
Introduction
In the 1970s, the definition of an agent was proposed in the field of artificial intelligence [1]. Since then, more and more researchers have paid attention to agents and rich results have been obtained. To mention a few, the consensus problem of multi-agent systems with directed communication topology and first-order integrator dynamics was investigated, and a theoretical framework for the consensus problem of multi-agent systems was built, in [2]. The consensus problem of multi-agent systems with first-order integrator dynamics, active leaders and variable interconnection topologies was considered in [3]. For multi-agent systems with second-order integrator dynamics, a necessary and sufficient condition for consensus was proposed in [4]. The leader-following consensus problem of second-order nonlinear multi-agent systems with general topologies was studied in [5] without assuming that the interaction digraph was strongly connected or contained a directed spanning tree. For multi-agent systems with high-order integrator dynamics, a necessary and sufficient condition was proposed for the consensus problem in [6]. The consensus of high-order linear multi-agent systems with time delays in both the communication channel and the control inputs was investigated in [7]. The consensus problem of multi-agent systems with fixed/switching communication topology was investigated in [8] using the Lyapunov method. The existence of consensus protocols for linear continuous-time/discrete-time multi-agent systems with fixed communication topology was proved in [9] and [10]. Other results on multi-agent systems can be found in [11,12] and the references therein.
It should be noticed that all the above publications assumed continuous communication between agents to implement the consensus protocol. However, it is well known that continuous communication between agents is impossible in practice, since the network bandwidth and the energy of agents are limited. Continuous communication between agents would also result in a waste of communication resources [13-16]. In order to avoid continuous communication and save communication resources, the event-triggered strategy has received more and more attention. A consensus protocol was designed for multi-agent systems with first-order integrator dynamics based on a self-triggered strategy in [17]. Event-triggered consensus protocols were designed for multi-agent systems with first-order/second-order integrator dynamics in [18]. Two event-triggered consensus protocols were designed for multi-agent systems with general linear dynamics in [19], but both protocols were only effective for undirected communication topologies. For multi-agent systems with general linear dynamics and directed communication topology, the event-triggered consensus problem was investigated in [20]. The consensus protocol in [20] could make multi-agent systems achieve consensus without continuous communication, but the state differences between agents would merely converge to a neighbourhood of 0. In [21], a distributed consensus protocol was designed to make the state differences between agents ultimately converge to 0 based on an event-triggered strategy, and a necessary and sufficient condition was proposed for consensus.
In this paper, a new event-triggered consensus protocol is proposed for multi-agent systems with general linear continuous-time dynamics based on a predictor. The communication topology among agents is assumed to be a general directed graph. Under the consensus protocol and the triggering function proposed in this paper, the multi-agent systems can achieve consensus without continuous communication. Then, Zeno behavior is proved to be nonexistent. In addition, the method proposed in this paper can make the multi-agent systems achieve consensus with much fewer event-triggering times than the existing methods.
The rest of this paper is organized as follows. Some useful notation and graph theory are introduced in Section 2. The design of the consensus protocol based on the event-triggered strategy is given in Section 3. In Section 4, the analysis of the consensus protocol is presented. A numerical example is given in Section 5 to illustrate the efficiency and the advantage of the event-triggered consensus protocol presented in this paper. Finally, Section 6 concludes the paper.
Notation and graph theory

The notation and graph theory used in this paper are introduced in this section. Let $\mathbb{R}^{m \times n}$ denote the set of $m \times n$ real matrices. $0_{m \times n}$ denotes the $m \times n$ matrix of all zeros. $I_{m \times n}$ and $I_n$ denote the $m \times n$ and $n \times n$ identity matrices, respectively. $\mathbf{1}_n$ denotes the $n \times 1$ column vector of all ones. A diagonal matrix with entries $x_i$ $(i = 1, 2, \cdots, n)$ is denoted by $\mathrm{diag}(x_1, x_2, \cdots, x_n)$. $A \otimes B$ denotes the Kronecker product of matrices $A$ and $B$. Let $\|\cdot\|$ denote the Euclidean norm for vectors and the induced 2-norm for matrices, respectively. $\mathrm{Re}(\cdot)$ denotes the real part of a complex number and $\lambda_i(\cdot)$ denotes the $i$th eigenvalue of a matrix.
The communication topology among the $N$ agents is represented by a weighted graph $\mathcal{G} = (\mathcal{V}, \varepsilon, \mathcal{A})$. The $N$ agents in a multi-agent system are regarded as the nodes $\mathcal{V} = \{1, 2, \cdots, N\}$ of the graph $\mathcal{G}$. A directed graph contains a directed spanning tree if there are directed paths from one node to all other nodes. The adjacency matrix $\mathcal{A} = [a_{ij}] \in \mathbb{R}^{N \times N}$ is associated with the directed graph $\mathcal{G}$. Assume that for all $i \in \mathcal{V}$, $a_{ii} = 0$, $a_{ij} > 0$ if $e_{ij} \in \varepsilon$ and $a_{ij} = 0$ otherwise. The directed edge $e_{ij} \in \varepsilon$ denotes that agent $i$ can receive information from agent $j$; thus agent $j$ can be called an in-neighbor of agent $i$, and agent $i$ an out-neighbor of agent $j$. $L = [l_{ij}] \in \mathbb{R}^{N \times N}$ denotes the Laplacian matrix of the directed graph $\mathcal{G}$, where $l_{ii} = \sum_{j=1}^{N} a_{ij}$ and $l_{ij} = -a_{ij}$ $(i \neq j)$.
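These definitions can be checked numerically on a small example. The four-agent directed cycle below is a hypothetical graph (it contains a directed spanning tree); the script builds its Laplacian and verifies the zero row sums and the eigenvalue structure stated in Lemma 1:

```python
import numpy as np

# Hypothetical four-agent directed cycle: each agent receives from exactly
# one other agent, so the graph contains a directed spanning tree.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)   # adjacency: a_ij > 0 iff e_ij exists
L = np.diag(A.sum(axis=1)) - A              # Laplacian: l_ii = sum_j a_ij, l_ij = -a_ij
eigvals = np.linalg.eigvals(L)
```

For this graph the eigenvalues are 0, 2, and 1 ± i: a simple zero eigenvalue with all the others in the open right half-plane.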
Design of the event-triggered consensus protocol
A linear continuous-time multi-agent system consists of $N$ agents, where the dynamics of agent $i$ is described by
$$\dot{x}_i(t) = A x_i(t) + B u_i(t), \qquad (1)$$
where $x_i(t) \in \mathbb{R}^{n \times 1}$ and $u_i(t) \in \mathbb{R}^{m \times 1}$ are the state and the control input, respectively, and $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$ are constant matrices. The communication topology among the $N$ agents can be described by a directed weighted graph $\mathcal{G}$. Assumption 1 is necessary to obtain the main result.
Assumption 1 The matrix pair (A, B) in (1) is stabilizable and the graph G contains a directed spanning tree.
The well-known consensus protocol for the multi-agent system (1) is
$$u_i(t) = K \sum_{j=1}^{N} a_{ij}\big(x_i(t) - x_j(t)\big). \qquad (2)$$
In order to apply the protocol (2), continuous communication between agents $i$ and $j$ is needed. For the purpose of saving communication costs among agents, the event-triggered strategy is applied to design the consensus protocol. Under the event-triggered strategy, an event is designed for each agent in the multi-agent system, and the agent broadcasts its current information to its out-neighbor agents only when its event is triggered. The following consensus protocol (3) is designed, where $K \in \mathbb{R}^{m \times n}$ is the feedback gain matrix to be determined, $t^i_{k_i}$ is the most recent triggering instant of agent $i$, $k_i = 1, 2, 3, \cdots$ represents the sequence number of the triggering instants of agent $i$, $x_i(t^i_{k_i})$ is the last broadcast state of agent $i$, and $\hat{x}_i(t)$ and $\hat{u}_i(t)$ represent the estimates of the state and the control input of agent $i$, respectively.
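As a sketch of how a diffusively coupled protocol of this kind drives states together, here is the simplest single-integrator special case ($A = 0$, $B = 1$, scalar states, stabilizing unit gain) over a hypothetical four-agent directed cycle; the graph, initial states, and step size are illustrative choices, not the paper's example.

```python
import numpy as np

# Hypothetical four-agent directed cycle and its Laplacian.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

x = np.array([1.0, 2.0, 3.0, 4.0])   # initial scalar states
dt = 0.01
for _ in range(5000):                # Euler integration of dx/dt = -L x,
    x = x + dt * (-L @ x)            # i.e. u_i = sum_j a_ij (x_j - x_i)
```

Because this cycle is weight-balanced, the agents converge to the average of the initial states (2.5 here); the event-triggered protocol below aims for the same limit behavior without the continuous exchange of $x_j(t)$ this simulation assumes.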
Then the measurement error $e_i(t)$ is defined as in (4), and the triggering function is defined as in (5), where $c_1 > 0$, $0 < \alpha < -\max_i \mathrm{Re}(\lambda_i(\Pi))$ and $\Pi$ is defined after (20). From the triggering function (5), it can be seen that when $f_i(t) \geq 0$, agent $i$'s event is triggered. Agent $i$ then sends its current information, including its state and the state differences between agent $i$ and its in-neighbor agents, to its out-neighbor agents, and updates its consensus protocol. At the same time, the measurement error $e_i(t)$ is reset to 0. If the triggering function satisfies $f_i(t) < 0$, communication from agent $i$ to its out-neighbor agents is unnecessary until the next event is triggered. On the other hand, agent $i$ updates its consensus protocol as soon as it receives information from its in-neighbor agents. The events of all agents are assumed to be triggered at the initial instant.
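To illustrate the flavor of an exponentially decaying trigger threshold, the following scalar toy problem (not the protocol of this paper) holds the last broadcast value and triggers whenever the measurement error reaches $c_1 e^{-\alpha t}$; the dynamics, gains, and constants are illustrative.

```python
import numpy as np

# Scalar toy problem: the agent holds its last broadcast value x_held and
# triggers an event when |x_held - x| reaches the threshold c1 * exp(-alpha*t).
c1, alpha = 0.5, 0.3
dt, T = 1e-3, 10.0
x, x_held = 1.0, 1.0
events = [0.0]          # events assumed triggered at the initial instant
t = 0.0
while t < T:
    u = -x_held                      # control computed from the last broadcast state
    x += dt * u                      # stable scalar dynamics dx/dt = u
    t += dt
    if abs(x_held - x) >= c1 * np.exp(-alpha * t):
        x_held = x                   # event: broadcast the current state
        events.append(t)             # and record the triggering instant
```

Between events the error grows at a bounded rate while the threshold stays strictly positive on any finite horizon, so inter-event times are bounded below (no Zeno behavior in this toy), and the state settles within the shrinking threshold of the origin.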
From (3), it can be seen that the main challenge of the eventtriggered consensus protocol proposed in this paper is how to obtain the estimation of the control input. Next, a method of estimating the control input is presented.
Define the state-difference vector $\theta_i(t)$ as in (6); then (2) can be rewritten accordingly. If the protocol (2) is applied to system (1), it is clear that (7) holds, where $a_i^* = [a_{i1}, a_{i2}, \cdots, a_{i(i-1)}, a_{i(i+1)}, \cdots, a_{iN}]$. Then (7) can be rewritten as (8). From (6), it can be seen that the estimation problem of agent $i$'s control input can be transformed into the estimation problem of the state differences between agent $i$ and its neighbor agents. On the basis of (8), the following predictor (9) is designed to estimate the state differences between agent $i$ and its neighbor agents.
Remark 1 It should be noted that (9) utilizes the artificial closed-loop system (7) to predict the future state. Such a predictor was first proposed in our previous work [22] and was further studied in [23,24].
From (3) and (9), it can be seen that if agent j is triggered, then agent j sends its current state x_j(t^j_{k_j}) and state difference θ̂_j(t^j_{k_j}) to agent i at the triggering instant t^j_{k_j}. At the same time, agent i updates the state difference between itself and agent j using the state information x_j(t^j_{k_j}), so θ̂_i(t) (t ≥ t^j_{k_j}) can be obtained based on the updated θ̂_i(t^j_{k_j}) and the triggering instant t^j_{k_j}. The estimates of the control inputs û_i(t) and û_j(t) in (3) can then be obtained, where t^i_{k_i} and t^j_{k_j} are the most recent triggering instants of agent i and agent j, respectively.
Definition 1 For the linear continuous-time multi-agent system (1), if lim_{t→∞} ‖x_i(t) − x_j(t)‖ = 0 holds, then the protocol (3) is said to solve the consensus problem, or equivalently, the multi-agent system (1) achieves consensus under the protocol (3).
Lemma 1 [25] If the graph G contains a directed spanning tree, then zero is a simple eigenvalue of the Laplacian matrix L and all the other eigenvalues have positive real parts. Moreover, 1_N is a right eigenvector associated with the zero eigenvalue.
Lemma 2 [26] For a Hurwitz matrix M ∈ R^{n×n} and t ≥ 0, there exists a c_M > 0 such that ‖e^{Mt}‖ ≤ c_M e^{μ_M t} holds, where max{Re(λ_i(M))} < μ_M < 0.
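Lemma 2 can be checked numerically for a concrete Hurwitz matrix. The sketch below uses a hypothetical M (not from the paper) and computes the matrix exponential by eigendecomposition; it verifies that the ratio ‖e^{Mt}‖ / e^{μ_M t} stays bounded for a μ_M chosen between max Re(λ_i(M)) and 0:

```python
import numpy as np

def expm(M, t):
    # Matrix exponential via eigendecomposition (valid for diagonalizable M).
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real

M = np.array([[-1.0, 2.0],
              [0.0, -2.0]])   # eigenvalues -1 and -2, so M is Hurwitz
mu = -0.5                     # max Re(lambda_i(M)) = -1 < mu < 0, as in the lemma

ts = np.linspace(0.0, 10.0, 101)
ratios = [np.linalg.norm(expm(M, t), 2) / np.exp(mu * t) for t in ts]
c_M = max(ratios)             # finite, so ||e^{Mt}|| <= c_M * e^{mu t} here
print(c_M < 5.0, ratios[-1] < ratios[0])
```

Because μ_M sits strictly above the slowest eigenvalue, the ratio decays for large t, so its supremum over t ≥ 0 is a valid c_M.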
Lemma 3 For the linear continuous-time multi-agent system (1) with the event-triggered consensus protocol (3) and the triggering function (5), if all the matrices A + λ_s(L)BK (s = 2, 3, …, N) are Hurwitz, then all the matrices Ω_i are Hurwitz.

Proof An invertible matrix can be taken whose (N − 1) × N block is derived by inserting −1_{N−1} before the ith column, or after the (i − 1)th column, of the identity matrix I_{N−1}, i.e.
Therefore, the following equation can be obtained, in which I_{N−1} ⊗ A + J_i ⊗ BK is an upper triangular block matrix.
According to the properties of the Kronecker product [27], the eigenvalues of I_{N−1} ⊗ A + J_i ⊗ BK are given by the eigenvalues of A + λ_s(L)BK (s = 2, 3, …, N); i.e., the eigenvalues of the matrix Ω_i are the same as those of A + λ_s(L)BK (s = 2, 3, …, N). As a result, if all the matrices A + λ_s(L)BK (s = 2, 3, …, N) are Hurwitz, the matrix Ω_i is surely Hurwitz. The proof is completed.
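The Kronecker-product eigenvalue argument can be illustrated with small hypothetical matrices: for an upper triangular J, the matrix I ⊗ A + J ⊗ BK is block upper triangular with diagonal blocks A + J_{ss}BK, so its spectrum is the union of the blocks' spectra:

```python
import numpy as np

# Hypothetical A, B, K, and an upper triangular J with diagonal {2, 3}
# (playing the role of the J_i block carrying nonzero Laplacian eigenvalues).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-2.0, -3.0]])
J = np.array([[2.0, 1.0], [0.0, 3.0]])

# I (x) A + J (x) BK is block upper triangular with diagonal blocks A + J_ss*BK,
# so its eigenvalues are those of A + 2BK together with those of A + 3BK.
big = np.kron(np.eye(2), A) + np.kron(J, B @ K)
eigs_big = np.sort_complex(np.linalg.eigvals(big))
eigs_blocks = np.sort_complex(np.concatenate(
    [np.linalg.eigvals(A + lam * (B @ K)) for lam in (2.0, 3.0)]))
print(np.allclose(eigs_big, eigs_blocks))  # True
```

This is exactly why Hurwitzness of every A + λ_s(L)BK transfers to Ω_i.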
Analysis of the event-triggered consensus protocol
The following theorem presents the main results of this paper.
Theorem 1 Under the event-triggered consensus protocol (3) and the triggering function (5), the consensus problem of the linear continuous-time multi-agent system (1) with a directed topology G can be solved without continuous communication if and only if all the matrices A + λ_i(L)BK (i = 2, 3, …, N) are Hurwitz, where λ_1(L) = 0. In addition, Zeno behavior does not occur.
Proof (Sufficiency) From the measurement error (4), it is clear that (16) holds, where l_i = [l_{i1}, l_{i2}, …, l_{iN}] represents the ith row of the Laplacian matrix L and e(t) = [e_1^T(t), e_2^T(t), …, e_N^T(t)]^T. Then substituting (16) into (1) yields (17). Define δ_i(t) = x_i(t) − x_1(t); then the multi-agent system (1) achieves consensus when lim_{t→∞} δ_i(t) = 0 holds. On the basis of (17), one can obtain (18), which can be transformed into (19) and then rewritten as (20). If agent i is triggered, i.e. f_i(t) ≥ 0, its measurement error e_i(t) is reset to 0. This means that f_i(t) will not cross 0, and the measurement error satisfies ‖e_i(t)‖ ≤ c_1 e^{−αt} before agent i is triggered. Clearly, ‖e(t)‖ ≤ √N c_1 e^{−αt} and lim_{t→∞} e(t) = 0 holds. Therefore, if the matrix Π is Hurwitz, then the system (20) asymptotically converges to 0 as t → ∞, i.e. the multi-agent system (1) achieves consensus under the consensus protocol (3) and the triggering function (5).
Following Lemma 3, an invertible matrix can be taken similarly, and it can then be proved, as in Lemma 3, that the eigenvalues of the matrix Π are the same as those of A + λ_i(L)BK (i = 2, 3, …, N). As a result, if all the matrices A + λ_i(L)BK (i = 2, 3, …, N) are Hurwitz, the matrix Π is surely Hurwitz. Then the system (20) asymptotically converges to 0 as t → ∞, i.e. the multi-agent system (1) achieves consensus under the consensus protocol (3) and the triggering function (5).
(Necessity) Suppose that not all the matrices A + λ_i(L)BK (i = 2, 3, …, N) are Hurwitz; then the matrix Π is not Hurwitz. If the initial value of δ(t) is not 0, then δ(t) will go to infinity as t → ∞, so the multi-agent system (1) cannot achieve consensus under the consensus protocol (3) and the triggering function (5).
Next, the nonexistence of Zeno behavior in the control process is proved. From (4), (21) can be derived, and (16) can be rewritten as (22), where l*_i = [l_{i2}, l_{i3}, …, l_{iN}]. Then, substituting (22) and noting from the triggering function (5) that ‖e_i(t)‖ ≤ c_1 e^{−αt}, one obtains (24). From Lemma 3, if all the matrices A + λ_i(L)BK (i = 2, 3, …, N) are Hurwitz, then all the matrices Ω_i (i = 1, 2, …, N) and Π are also Hurwitz. It then follows from Lemma 2 that ‖e^{Π(t−s)}‖ ≤ c_Π e^{μ_Π(t−s)} and ‖e^{Ω_i(t−t^i_{k_i})}‖ ≤ c_{Ω_i} e^{μ_{Ω_i}(t−t^i_{k_i})}. The solution of (20) can then be obtained.
According to Lemma 2, ‖e^{Π(t−s)} W e(s)‖ ≤ β_2 e^{μ_Π(t−s)} e^{−αs} (26), where β_2 = c_1 c_Π √N ‖W‖. So it can be derived that (27) holds, where β_1 = c_Π ‖δ(0)‖, η_1 = β_1 + β_2/|μ_Π + α| and η_2 = β_2/|μ_Π + α|. Substituting (27) into (24) yields (28). From the triggering function (5), it can be seen that when ∫_{t^i_{k_i}}^{t} ‖ė_i(s)‖ ds = c_1 e^{−αt} holds, the event is triggered. From (28) it can be known that (29) holds, where 0 < t^i_{k_i} < t, so the event of agent i will not be triggered before this bound is reached. Suppose t^i_1 and t^i_2 are two neighbouring triggering instants of agent i satisfying 0 < t^i_1 < t^i_2, and let τ = t^i_2 − t^i_1 denote the interval between them. Since −α, μ_Π, μ_{Ω_i} < 0, we have e^{−αs}, e^{μ_Π s}, e^{μ_{Ω_i}(s−t^i_1)} ≤ 1, so that (30) holds. From the triggering function it is known that τ is the solution of ∫_{t^i_1}^{t^i_1+τ} ‖ė_i(s)‖ ds = c_1 e^{−α(t^i_1+τ)}. Therefore, the value of τ must be greater than or equal to the solution τ* of the following equation, i.e. τ ≥ τ*.
Thus there is a positive lower bound on the interval between any two neighbouring triggering instants, so Zeno behavior cannot occur. The proof is completed.
For the linear continuous-time multi-agent system (1) with the event-triggered consensus protocol (3) and the triggering function (5), an appropriate K can be chosen to ensure that all the matrices A + λ i (L)BK (i = 2, 3, · · · , N ) are Hurwitz by the following steps.
Step 1 Since (A, B) in (1) is assumed to be stabilizable (Assumption 1), the Riccati equation A^T P + P A − P B B^T P + I_n = 0 has a unique nonnegative definite solution P, and all the eigenvalues of A − B B^T P lie in the open left half plane [28].
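As an illustration of Step 1, the sketch below solves the Riccati equation for a hypothetical stabilizable pair (A, B). Instead of a library Riccati solver, it uses the standard stable-invariant-subspace construction from the associated Hamiltonian matrix (the paper itself cites [28] for the underlying result), then confirms that A − BB^T P is Hurwitz:

```python
import numpy as np

# Hypothetical stabilizable pair (A, B); A itself is unstable (eigenvalues +-1).
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Solve A^T P + P A - P B B^T P + I = 0 via the stable invariant subspace of
# the Hamiltonian matrix H = [[A, -B B^T], [-I, -A^T]].
H = np.block([[A, -B @ B.T], [-np.eye(2), -A.T]])
w, V = np.linalg.eig(H)
stable = V[:, w.real < 0]          # eigenvectors of the two stable eigenvalues
P = (stable[2:, :] @ np.linalg.inv(stable[:2, :])).real

closed_loop = A - B @ B.T @ P      # Step 1 guarantees this is Hurwitz
print(np.all(np.linalg.eigvals(closed_loop).real < 0))  # True
```

The resulting P is the stabilizing solution, and a feedback gain built from B^T P places all closed-loop eigenvalues in the open left half plane.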
Simulation
In this section, a numerical example is given to illustrate the effectiveness and the advantages of the method proposed in this paper. Consider a linear continuous-time multi-agent system consisting of six agents, where the dynamics of agent i are described by the system (1). The communication topology among the six agents is described by a weighted graph, as shown in Figure 1. With the chosen initial state, it can be seen that the linear continuous-time multi-agent system achieves consensus, which means that the event-triggered consensus protocol proposed in this paper can solve the consensus problem of multi-agent systems effectively. In Fig 3, the measurement error of each agent and the error threshold are presented. It can be seen that when the measurement error reaches the threshold, the event is triggered and the measurement error is reset to zero. Table 1 compares the methods in [20] and [21] with the method in this paper in terms of the number of triggering events for each agent; the method in this paper triggers far fewer events than the methods in [20] and [21].

Table 1. Number of triggering events per agent.

Agent   [20]   [21]   This paper
1        25     38        18
2        12     16         3
3        19     24         5
4        12     11         3
5        12     12         2
6        11     16         5
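The qualitative behaviour reported in this section can be reproduced with a simplified sketch. The code below simulates event-triggered consensus for single-integrator agents on an undirected ring with the exponential threshold of (5); it is not the paper's predictor-based protocol for general linear dynamics, and all constants and the topology are illustrative, but it shows both convergence and an event count far below the number of time steps:

```python
import numpy as np

# Simplified sketch: event-triggered consensus for six single-integrator agents
# on an undirected ring with a decaying trigger threshold (zero-order hold of
# the last broadcast state instead of the paper's predictor).
N, dt, T = 6, 1e-3, 20.0
c1, alpha = 0.5, 0.5

Lap = 2.0 * np.eye(N)               # ring Laplacian
for i in range(N):
    Lap[i, (i - 1) % N] -= 1.0
    Lap[i, (i + 1) % N] -= 1.0

rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, N)       # states
xhat = x.copy()                     # last broadcast states (events at t = 0)
triggers = np.zeros(N, dtype=int)

steps = int(T / dt)
for k in range(steps):
    t = k * dt
    e = xhat - x                    # measurement errors
    fired = np.abs(e) >= c1 * np.exp(-alpha * t)
    xhat[fired] = x[fired]          # broadcast and reset the error to 0
    triggers += fired
    x += dt * (-Lap @ xhat)         # protocol driven by broadcast states only

print(np.ptp(x) < 1e-2)             # True: the states have (nearly) agreed
print(triggers.sum() < steps * N)   # True: far fewer events than time steps
```

Each agent monitors only its own error against the threshold, mirroring the communication-saving mechanism the table quantifies.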
Conclusion
This paper has investigated event-triggered consensus for linear continuous-time multi-agent systems under a directed communication topology, based on a predictor. A new event-triggered protocol has been designed based on a state predictor so that linear continuous-time multi-agent systems achieve consensus without continuous communication. The consensus protocol provided in this paper only requires each agent to monitor its own state to determine the event-triggered instants, and Zeno behavior has been proved not to occur. A further advantage of the proposed method is that it achieves consensus with far fewer triggering events, so it reduces unnecessary communication among agents more effectively and saves more communication costs.
"year": 2017,
"sha1": "05d8cab794701360beebfadba0ed7bf3c0b729e3",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/1702.07536",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "05d8cab794701360beebfadba0ed7bf3c0b729e3",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Assessing the Quality of Decision Support Technologies Using the International Patient Decision Aid Standards instrument (IPDASi)
Objectives: To describe the development, validation and inter-rater reliability of an instrument to measure the quality of patient decision support technologies (decision aids).

Design: Scale development study, involving construct, item and scale development, validation and reliability testing.

Setting: There has been increasing use of decision support technologies – adjuncts to the discussions clinicians have with patients about difficult decisions. A global interest in developing these interventions exists among both for-profit and not-for-profit organisations. It is therefore essential to have internationally accepted standards to assess the quality of their development, process, content, potential bias and method of field testing and evaluation.

Participants: Twenty-five researcher-members of the International Patient Decision Aid Standards Collaboration worked together to develop the instrument (IPDASi). In the fourth stage (reliability study), eight raters assessed thirty randomly selected decision support technologies.

Results: IPDASi measures quality in 10 dimensions, using 47 items, and provides an overall quality score (scaled from 0 to 100) for each intervention. Overall IPDASi scores ranged from 33 to 82 across the decision support technologies sampled (n = 30), enabling discrimination. The inter-rater intraclass correlation for the overall quality score was 0.80. Correlations of dimension scores with the overall score were all positive (0.31 to 0.68). Cronbach's alpha values for the 8 raters ranged from 0.72 to 0.93. Cronbach's alphas based on the dimension means ranged from 0.50 to 0.81, indicating that the dimensions, although well correlated, measure different aspects of decision support technology quality. A short version (19 items) was also developed that had very similar mean scores to IPDASi and a high correlation (0.87, CI 0.79 to 0.92) between the short score and the overall score.

Conclusions: This work demonstrates that IPDASi has the ability to assess the quality of decision support technologies. The existing IPDASi provides an assessment of the quality of a DST's components and will be used as a tool to provide formative advice to DST developers and summative assessments for those who want to compare their tools against an existing benchmark.
Introduction
There has been increasing interest in the use of 'decision aids' [1], defined as adjuncts to the discussions clinicians have with patients during deliberations about decisions: these aids provide information about options and help clarify personal values [2]. These adjuncts range from leaflets through face-to-face methods such as coaching or counselling to interactive multimedia websites. To describe this generic family of clinician-patient interventions we will use the term decision support technologies (DSTs) [3], corresponding with the internationally recognised need to assess the impact of 'health technologies' [4]. DSTs are complex interventions which require detailed assessment to ensure safe use in healthcare contexts [3], because they help make options explicit, provide information about harms and benefits, clarify patients' values and provide structured means to help people deliberate when making decisions. Although there are published methods to assess the quality of clinical practice guidelines [5], DSTs go further and address issues of equipoise, for which patients need to deliberate about difficult choices [6]. However, as yet, there are no reliable methods to assure the quality of DSTs' development process, content, potential bias, and method of field testing and evaluation – a gap which we address in this study. We did not intend to develop methods to assess how DSTs are used in practice, in the clinical encounter, although we recognise that this is an important area that requires further work.
There are reports that DSTs have achieved a 'tipping point' in the US and are widely accessed by increasing numbers of patients [1]. The ability of DSTs to improve the quality of decisions and enable reductions in discretionary surgery and invasive procedures without adverse effects on health outcomes has been demonstrated in clinical trials [2,7]. The central role that these technologies will play in future healthcare systems is increasingly recognised [1,[8][9][10]. Over the last decade, the interest in developing DSTs has moved beyond research groups and has entered the commercial world. A global interest in developing DSTs has emerged among both for-profit and not-for-profit organisations. It is therefore essential to have a set of internationally accepted standards to assess their quality, to assess whether interests are declared and whether they are unduly biased [8,9].
The International Patient Decision Aid Standards (IPDAS) Collaboration produced a checklist for the assessment of DSTs [11]. The checklist was rigorously developed in a two-stage web-based Delphi process, using an online rating process to enable international collaboration. A total of 122 individuals from four stakeholder groups (researchers, practitioners, patients, policy makers) representing 14 countries reviewed background evidence summaries and rated the importance of 80 criteria in 12 quality dimensions. Second-round participants received feedback from the first round and repeated their assessment of the 80 criteria plus three new ones. The IPDAS checklist enabled broad assessments in 12 dimensions: systematic development process; providing information about options; presentation of probabilities; clarification of values; use of patient stories; information about guiding or coaching; disclosure of interests; providing internet access; balanced presentation of options; use of plain language; use of up-to-date evidence; and effectiveness. The IPDAS checklist allows users, developers and others to assess whether these technologies contain the suggested components and judge whether they underwent rigorous development and evaluation. It has been used in updating the Cochrane systematic review of DSTs and to guide the development of DSTs [12,13].
However, the checklist was not designed to provide precise, quantitative assessments, such that judgements could be made about the quality of DSTs, either at item, dimension or global levels. In addition, because not all checklist items were applicable to every DST, comparability, even at the checklist level, was not possible. Given interest in being able to assess these DSTs at a more precise level of detail -in terms of how they were developed and field tested, whether their content was valid and whether effectiveness had been evaluated with patients facing relevant decisions -the IPDAS Collaboration agreed that achieving this objective would require an instrument capable of quantitatively assessing the quality of DSTs. The aim of this article is to describe the development, validation and inter-rater reliability of an IPDAS instrument (IPDASi), built on the existing framework.
Methods
IPDASi was developed in four stages.
Stage 1 Refinement and preparation of instrument (IPDASi v1)
The published IPDAS checklist required transformation into a quantitative instrument, although we agreed to adopt the dimension-item framework. As part of this preparation, a group of researchers (GE, DS, RT, CB, SB, TW) used the existing checklist and dimension-item framework to score three purposefully selected DSTs, representing different design approaches and for which our prior overall assessments indicated variable quality. These were Healthwise's Breast Cancer Surgery (BCS), web-based information; Bastian and McBride's Hormone Replacement Therapy (HRT), an illustrated booklet; and Wolf et al.'s Prostate Specific Antigen (PSA) screening, a brief text-based script. A binary (yes/no) and 'not applicable' scale was proposed; comments were collected on item applicability. Tabulations and qualitative analyses were performed but inter-rater correlations were not calculated.
Stage 2 IPDASi Confirmation of items (IPDASi v2)
On the basis of the results of Stage 1, a refined version of the IPDAS instrument (IPDASi v2) was designed and used in Stage 2. The non-applicable option was removed, and in this and all subsequent versions a 4-point rating scale was used for each item, with possible responses as follows: strongly agree = score 4 (the issue is addressed clearly and comprehensively); agree = score 3 (the issue is addressed but with room for improvement); disagree = score 2 (the DST fails to clearly address the issue); strongly disagree = score 1 (the DST totally fails to address the issue). In common with the binary (yes/no) scale it replaced, the scale intentionally does not include a midpoint expressing neutrality. Items in the 'balance' dimension were integrated into the 'information' dimension. The web dimension was not applicable to all DSTs and was therefore removed. A website was created for data collection (http://www.ipdasi.org/). Scale anchor point descriptions were developed for all items.
Five raters, two in the UK (MA-D and SS, Cardiff) and three in North America (ED and SK in Ottawa and MP in Providence) were familiarised with IPDASi v2, prior to using it to score the three previously selected DSTs, and asked to comment on item phrasing. Members of the IPDASi development group were asked to view the IPDASi instrument online and comment on item phrasing. For IPDASi v2 and subsequent versions, item scores were rescaled to be 0 to 100. At Stage 2, only an unweighted average of all items was calculated, as our focus was not on dimension scores. Analysis included inter-rater reliability using intraclass correlations for two way random effects at item and global score levels [14].
Stage 3 IPDASi Validation Study
Based on the results of Stage 2, a third version, IPDASi v3 was designed. This retained the majority of items from Stage 2, albeit with changes to phrasing. It comprised 47 items representing 10 dimensions. 9 dimensions applicable to all DSTs relate to Information (8 items); Probabilities (8 items); Values (4 items); Decision Guidance (2 items); Development (6 items); Evidence (5 items); Disclosure (2 items); Plain language (1 item); Evaluation (2 items). One additional dimension (9 items) relates to decisions based around tests or screening. Feedback from the comments resulted in more detailed anchor scale descriptions and standardization of descriptions.
IPDASi v3 was then used in a validation study to assess the quality of a sample of DSTs. Two approaches were used to achieve a sample of DSTs. First, five major producers of publicly available DSTs were identified (The Foundation for Informed Medical Decision Making, Healthwise, Mayo Clinic, Midwives Information and Resource Service (MIDIRS) and Ottawa Health Decision Centre (OHDeC)). Three DSTs from each producer were chosen at random, giving a total of 15. Second, 66 English-language DSTs, for which contact details were available, were chosen at random from the Cochrane inventory maintained by the University of Ottawa (http://decisionaid.ohri.ca/cochinvent.php), and their developers were approached and asked: 1) whether the DST was in current use and free of charge to clients; 2) for consent to assess the DST using IPDASi; and 3) for copies or information about documentation (published reports or peer-reviewed articles) about the development or evaluation of the DST.
Each DST included in the sample was prepared for assessment in a standardised way. Background documents (relevant publications, reports) and all DST content were made available online (either in pdf or html formats; videos were converted into Windows Media Video format) for raters to assess. Table 1 provides details of the DSTs that were included in the sample, and the results of the IPDASi assessments.
Eight raters with diverse backgrounds and training were trained to undertake independent ratings: four in the UK (MA-D, MS, NJ, SS in Cardiff) and four in North America (SK, ED, AS in Ottawa; MP in Providence). Each DST was scored by two raters, one chosen randomly from each location, such that one rating was done in UK and the other in North America. New raters were asked to pilot the instrument on a 'test' DST and new raters also had access to raters who had completed the Stage 2 assessment if they required advice on item interpretation.
As in Stage 2, each item was scored on a 4-point scale, rescaled from 0 to 100, and dimension means were calculated. Two overall scores were calculated, scaled 0 to 100: the unweighted mean of all items (38 or 47, depending on whether the DST addressed a treatment or a test/screening decision) and the weighted mean score, a mean of the 9 or 10 dimension-specific means. The latter score upweights items belonging to dimensions comprising few items and downweights items from dimensions with many, so that each dimension contributes equal weight to the final score.
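The difference between the two overall scores can be illustrated with hypothetical ratings for one DST (only four of the dimensions are shown for brevity, and the linear rescaling of the 4-point scores to 0-100 is an assumption, since the exact mapping is not stated here):

```python
# Hypothetical item ratings (1-4) for one DST, grouped by dimension.
ratings = {
    "Information":   [4, 3, 3, 2, 3, 3, 4, 2],  # 8 items
    "Probabilities": [2, 2, 3, 1, 2, 2, 3, 2],  # 8 items
    "Values":        [3, 3, 2, 3],              # 4 items
    "Disclosure":    [4, 4],                    # 2 items
}

def rescale(r):
    return (r - 1) / 3 * 100        # assumed mapping: 1 -> 0, 4 -> 100

items = [rescale(r) for rs in ratings.values() for r in rs]
unweighted = sum(items) / len(items)            # every item counts equally

dim_means = [sum(map(rescale, rs)) / len(rs) for rs in ratings.values()]
weighted = sum(dim_means) / len(dim_means)      # every dimension counts equally

# The 2-item Disclosure dimension pulls the weighted score up here, because
# its items are upweighted relative to the 8-item dimensions.
print(round(unweighted, 1), round(weighted, 1))  # 57.6 65.6
```

The gap between the two numbers shows exactly the upweighting/downweighting effect described above.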
Summary statistics were calculated for dimension scores and unweighted and weighted overall means. Weighted means were modelled by rater and tool in a two-way balanced incomplete ANOVA model. Intraclass correlations and Cronbach's alpha, by each rater and by dimension means, were also calculated. The quality of each DST was then characterised by the average of the weighted mean scores from the two raters, adjusted by the model to take account of their personal propensity to give higher or lower scores. We wanted to predict the degree of accuracy if others used IPDASi in the future, considering one or two raters, known to us (i.e. one of the existing eight raters) or unknown to us. To achieve this, components of variation were determined by Bayesian modelling (Markov chain Monte Carlo) using WinBUGS software [15], to arrive at estimated confidence interval half-widths for differing future rating situations. The raters' qualitative comments were summarised.
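Cronbach's alpha, used above for inter-rater consistency, follows the standard formula α = k/(k−1) · (1 − Σ item variances / variance of totals). A sketch with hypothetical rater scores (not data from the study):

```python
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    for an (n_subjects x k) score matrix (here: DSTs x raters)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical overall scores for 5 DSTs from 3 well-agreeing raters.
scores = [[60, 62, 58],
          [35, 40, 37],
          [80, 78, 82],
          [50, 49, 55],
          [70, 72, 69]]
print(round(cronbach_alpha(scores), 2))  # 0.99
```

Values near 1 indicate that the raters (or dimensions) move together across DSTs, which is how the 0.72-0.93 per-rater range reported later should be read.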
Stage 4 Agreement on IPDASi-SF (short form)
A core set of items was also chosen to develop a 'short form' (IPDASi-SF), aiming to test whether a 'minimum' quality threshold could be established. By agreement in the development group, these criteria were chosen based on having an equimedian score of 9 (i.e. maximum agreement) in the IPDAS consensus process [11]. The equimedian is designed to represent the cumulative distribution function for a population with equal numbers in each of the four stakeholder groups [11]. In addition, core-set items represented key concepts for each dimension. The 19 items selected for the IPDASi-SF consisted of 3 items for tests/screening and 16 others for all DSTs, including: Information (4 items: options available, positive features, negative features, and fair comparison); Probabilities (3 items: reference class, event rates, compare probabilities); Values (1 item: personal importance); Development (3 items: patients' needs, impartial review, tested with patients); Disclosure (1 item: information about funding); Evaluation (2 items: knowledge, improved decision quality); Evidence (2 items: citations to studies, production date). The three items selected for the test/screening dimension included: next steps, chances of detection, non-symptomatic. These SF items were not highlighted for special attention during the rating process. Unweighted mean scores were calculated (i.e. over all SF items, not the means of their respective dimensions), together with correlations (Pearson) with the IPDASi overall mean adjusted weighted score (Table 2). Table 2 provides a synopsis of the different versions, detailed in the four stages.
Stage 1 Refinement and preparation of instrument (IPDASi v1)
Results of the seven raters were compared. The number of comments made at the interpretation level and the wide variation in scoring indicated a need for further item development. In addition, some items had double criteria. In October 2006, five researchers met (AC, AOC, DS, CB and GE) and, using the results of this stage, judged each item against two criteria: clarity and feasibility of measurement. All item phrasings were modified and it was decided to base the development of IPDASi on the following assumptions.
1. All items should be applicable to the assessment of all DSTs. This enables the computation of a standard quality score per DST with no adjustment for specific content. An exception was made for DSTs designed to guide deliberations about undertaking diagnostic or screening tests. This type of DST would be subject to an additional dimension of items relating specifically to information on test characteristics.
Stage 2 Refinement and preparation of instrument (IPDASi v2)
Mean scores on a 0-100 scale for the three DSTs were as follows, with SDs reflecting inter-rater variation: HRT 68.7 (6.9); BCS 46.0 (6.5); PSA 38.5 (6.4). The intraclass correlation coefficient was 0.89. These results provided sufficient confidence to refine the instrument for a larger reliability study (Stage 3). Qualitative comments revealed where more specific item anchor descriptors were required, achieved collaboratively using a shared online spreadsheet. Discussions regarding dimension weighting led to agreement that the mean of each dimension should contribute equally to the total score.

Stage 3 Dual rater assessments of 30 DSTs (IPDASi v3)

Table 1 describes the sample of DSTs and provides the results. Table 3 lists the items used in IPDASi v3. Three DSTs were assessed from each of the five selected major producers. The other 15 were obtained by approaching 36 developers (representing 47 DSTs). Eighteen developers did not respond and we found that five of the DSTs were no longer in use. After repeated contacts, 13 developers (representing 15 DSTs) agreed to participate in the study, resulting in an overall sample of 30 DSTs.
The time taken to assess a DST varies considerably, depending on its complexity. A simple DST comprising a leaflet could be completed in two hours, but assessing a multimedia web-based DST required at least 8 hours. A weighted overall score (scaled from 0 to 100) for each DST is shown, averaged over two raters and then adjusted for the pair of raters. Adjusted IPDASi scores ranged widely from 33 to 82 (Table 2). The intraclass correlation for the weighted overall score was 0.80. Correlations of dimension scores with the weighted overall score were all positive (0.31 to 0.68). Cronbach's alpha values for the 8 raters ranged from 0.72 to 0.93. Cronbach's alphas based on the means in the 9 dimensions ranged from 0.50 to 0.81, indicating that the dimensions, although relatively well correlated, measure different aspects of DST quality. Calculations of the standard deviation (SD) representing imprecision, using a Bayesian model based on the existing eight raters and projected for different numbers of known raters (one of the existing eight) and unknown raters, for whom we have no information about their scoring tendencies, resulted in the following estimates: two known raters, 6.6; one known rater, 9.4; two unknown raters, 9.3; one unknown rater, 13.1. Qualitative comments were received on some items, requesting clarifications. This was achieved by adding examples and more descriptive elements to the anchor statements.
Stage 4 Agreement on IPDASi short form
The mean unweighted score for the short-form 16-item IPDASi was 56.1, similar to 56.3 for all items. The correlation of the unweighted IPDASi-SF score with the overall mean weighted score (IPDASi score in Table 2) is 0.87 (CI 0.79-0.92). The rankings of the DSTs according to the SF version are very similar, with adjusted scores ranging from 34.5 to 83.1. DST number 32 still ranks highest, but the order shifts at the lower end of the scale. However, the aim of the IPDASi-SF was not to rank DSTs in order of quality but to determine whether or not a limited set of IPDASi items may be useful in determining minimal levels of quality.
Principal Findings
This work demonstrates that IPDASi has the potential to assess the quality of DSTs. The four-stage process revealed the need to make significant changes to the IPDAS checklist and modifications to the set of assumptions so that a measurement tool could be applied across the range of all possible DSTs. Having undertaken this work, we also suggest that IPDASi could provide formative feedback about dimensions in which DST developers could make improvements to subsequent versions. A short form may also support the development of rapidly applicable quality standards.
In addition, the study demonstrated the high correlation between IPDASi and IPDASi-SF, indicating that the detailed dimension-level assessment and the shorter version focused on fewer items identify quality consistently.
The study also displayed the levels of measurement imprecision when two raters assess each tool, and points to the need to ensure rater calibration and training in the use of IPDASi prior to assessment. We propose that IPDASi ratings should therefore be undertaken by raters who are familiar with DST development and use and who have undergone calibration training.
Strengths and weaknesses
The instrument design is based on prior international consensus, which provided a framework in which to assess DST quality and, in addition, a set of criterion-based 'items' for a new instrument. Secondly, the work was planned by researchers who followed a detailed protocol and met regularly. Thirdly, a staged approach was used, adopting the principles of instrument development [16]. Limitations of the study included the limited size of the sample and our focus only on DSTs developed in English, a constraint imposed by resource availability. There are also further opportunities to examine the validity of IPDASi, for example by examining whether low IPDASi scores for the 'probability information' dimension are associated with low patient knowledge about probabilities, when measured in controlled trials. Additionally, the raters used in the second and third stages were all researchers in the DST field and had some content expertise, so it is likely that raters with more diverse backgrounds may not perform as well. There was no opportunity in this study to provide intensive group training to all raters to ensure tight calibration and standardisation of item interpretation. To mitigate this weakness, a detailed online manual that provided details about scale anchor definitions was available. Nonetheless, the results indicate that there is room to improve inter-rater reliability.
Results in context
Two other studies have used the IPDAS checklist. Coulter et al undertook a detailed assessment of 40 information materials to support people in making decisions about their health and health care [17]. They found that the overall quality of information was poor and no systematic processes were adopted to give attention to presentational issues, such as readability or to ensure the validity of evidence. O'Connor et al used the checklist to assess the registered trials and found that several IPDAS process measures had not been used [13]. Williams used IPDASi v2 to assess DSTs for genetic testing for breast cancer [18]. We are not aware of any other work that has developed a quantitative measure of DST quality.
Implications
IPDASi, and IPDASi-SF, will be available as a quality assessment method to developers, researchers and purchasers, and given a recognised need to set standards and achieve benchmarks, will be subject to further development. The existing IPDASi provides an assessment of the quality of a DST's components, and in the absence of any other method, will be used as a tool to provide formative advice to DST developers and as a summative assessment for those who want to compare their tools against existing benchmarks (http://www.ipdasi.org). In due course, data from these assessments might form a platform for potential certification, but questions remain. There is, for instance, only one dimension on evaluation outcomes. The items in this dimension cannot be scored unless the developers have actually conducted an evaluation. It is likely that developers may assert that not all DSTs require evaluation, provided they meet other requirements. However, we contend that research in this field is at an early stage. There is no agreement as yet on the essential 'active' components of DSTs [19]; moreover, the theoretical underpinning for their mode of action, measurement models and implementation strategies needs strengthening [20,21]. Further work is needed to assess which DST designs are superior to one another. Prospective studies that compare theoretically derived DST components and deliberation tools are required to help explore these areas. The IPDAS collaboration and the resulting instruments (IPDASi and IPDASi-SF) need to meet the following challenges: How can new dimensions and items be considered? How are valid 'option menus' in DSTs derived and agreed when there are complex debates about equity, economics and evidence? Should there be items that assess the use of theory in the development of these methods, given that these are examples of 'complex interventions' and deserve attention to frameworks of design and mode of action [22]? These challenges provide an agenda for future research.

Checklist items on tests (from an accompanying table):
6. If the test detects the condition or problem, the decision support technology describes the next steps typically taken.
7. The decision support technology describes the next steps if the condition or problem is not detected.
8. The decision support technology describes the chances that the disease is detected with and without the use of the test.
9. The decision support technology has information about the consequences of detecting the condition or disease that would never have caused problems if screening had not been done (lead time bias).
What this paper adds
What is already known on this subject. Interest in decision support technologies is rapidly increasing and they are being accessed by ever larger numbers of patients, especially in the United States.
A quality checklist for decision support technologies has been published by the International Patient Decision Aid Standards Collaboration.
The checklist was not designed to provide precise, quantitative assessments about the quality of these interventions.
What this study adds. Describes the development of an instrument which can assess the quality of decision support technologies, thereby enabling formative and summative feedback to developers and purchasers. | 2016-01-11T18:29:14.669Z | 2009-03-04T00:00:00.000 | {
"year": 2009,
"sha1": "6186b8d90d39981e6e634086ea10dc99c7978884",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0004705&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "abec622cb6cd211190dd16aaaf59f4b84fb3a3c7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
208463144 | pes2o/s2orc | v3-fos-license | Synthetic Apparent Diffusion Coefficient for High b-Value Diffusion-Weighted MRI in Prostate
Purpose. It has been reported that diffusion-weighted imaging (DWI) with ultrahigh b-values (>1000 s/mm2) increases the diagnostic power of prostate cancer, but acquiring such images is challenging. The purpose of this study was to generate synthetic apparent diffusion coefficient (sADC) maps for higher b-values from lower b-value acquisitions. Materials and Methods. Fifteen patients (7 malignant and 8 benign) were included in this study retrospectively with the institutional ethical committee approval. All images were acquired at a 3T MR scanner. The ADC values were calculated using a monoexponential model. Synthetic ADC for higher b-values (b = 1500 and 2000 s/mm2) was extrapolated from a log-linear relationship fitted to the lower b-value ADCs (b ≤ 1000 s/mm2). Results. No significant difference was observed between actual ADC and sADC in the prostate lesions. The contrast ratio between lesion and normal tissue increased significantly (p = 0.002, paired t-test) in synthetic DWI as compared to acquired DWI. Malignant lesions showed significantly lower sADC as compared to benign lesions. Discussion/Conclusion. Our initial investigation suggests that the ADC values corresponding to higher b-values can be computed using a log-linear relationship derived from lower b-values (b ≤ 1000). Our method might help clinicians to decide the optimal b-value for prostate lesion identification.
Introduction
In the past few years, the use of diffusion-weighted magnetic resonance imaging (DWI-MRI) for disease detection and characterization has increased substantially. For instance, several studies have assessed the importance of DWI-derived apparent diffusion coefficient (ADC) in characterization of prostate cancer aggressiveness [1][2][3][4]. Quantification of ADC is based on at least two diffusion-weighted (DW) images with different b-values. In general, a monoexponential fit between the natural logarithm of the signal intensity against the b-value yields the ADC. In the literature, various other mathematical models have been suggested for ADC quantification, such as stretched-exponential, Gaussian, and Kurtosis [5,6]. However, in the prostate, a monoexponential fit for ADC calculation is sufficient to discriminate prostate cancer from normal tissue [5]. Moreover, different ADC values can be found in the literature due to the variation in the b-value used to compute the ADC [7].
Deciding the optimal b-value for prostate cancer characterization is an active area of research [8][9][10][11]. In most DWI studies, b-values of 1000 sec/mm 2 or less are used for prostate cancer detection or evaluation [4,6,7]. Normal parenchyma can show higher signal intensity in DWI with b-values of 1000 sec/mm 2 or less, which can make it difficult to distinguish normal tissue from cancer tissue. It has been reported that use of higher b-values improves disease visualization and detection by increasing contrast between cancerous and noncancerous lesions [10,12,13]. Although the use of higher b-values (>1000 sec/mm 2 ) is desirable, obtaining higher b-value DW images is challenging as it leads to decreased signal-to-noise ratio (SNR), increased distortion, susceptibility artifact, and increased scan time. Computed DWI techniques have been proposed to overcome these difficulties [14][15][16][17][18].
Computed DWI is a mathematical technique which generates images of higher b-values by using at least two different lower b-value (b ≤ 1000) images. It involves computing the ADC map from the lower b-value DW images by using the monoexponential relation

S_b = S_0 exp(-b ADC),    (1)

where S_0 is the signal intensity at b ≈ 0 s/mm2. Once the ADC for the lower b-values is known, computed DW images of a higher b-value can be extrapolated by solving equation (1) for S_b at the desired b-value:

S_b,high = S_0 exp(-b_high ADC).    (2)

The underlying assumption of the computed DWI method is that the ADC is independent of b-values, which contradicts the observation that ADC can vary significantly with the b-value, as reported in the literature [19,20]. Using this technique, DW images for higher b-values can be generated, but the ADC value for the higher b-value cannot be obtained. The computed DWI technique might be useful for visualization purposes; however, for quantitative DW image analysis, it might not be sufficient. Therefore, there is a need for methods of generating synthetic ADC maps for higher b-values. To the best of our knowledge, methods for creating synthetic ADC maps have not been reported.
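As a concrete illustration, the computed-DWI extrapolation just described can be sketched in a few lines of NumPy. The b-values and voxel intensities below are invented for the example and are not study data:

```python
import numpy as np

# Acquired b-values (s/mm^2) and a target high b-value (illustrative choices)
b1, b2 = 400.0, 1000.0
b_high = 2000.0

# Synthetic voxel intensities at the two acquired b-values
S_b1 = np.array([180.0, 150.0])
S_b2 = np.array([110.0, 60.0])

# Monoexponential ADC from the two acquired images:
# S_b = S_0 * exp(-b * ADC)  =>  ADC = ln(S_b1 / S_b2) / (b2 - b1)
adc = np.log(S_b1 / S_b2) / (b2 - b1)

# Recover S_0, then extrapolate the computed DW image at b_high
S_0 = S_b1 * np.exp(b1 * adc)
S_computed = S_0 * np.exp(-b_high * adc)
print(adc, S_computed)
```

Because the same ADC is reused at b_high, this sketch reproduces the technique's core assumption that ADC is independent of the b-value.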
The primary objective of this study was to explore the relationship between ADC and b-values and use that relationship to extrapolate synthetic ADC corresponding to higher b-values. A secondary objective was to investigate the feasibility of this technique to improve visualization of lesions in prostate cancer cases for which higher b-value DWI may be desirable.
Theory.
Diffusion of water through biological tissue is often quantified using the apparent diffusion coefficient calculated from pairs of b-value DW images using the monoexponential model (equation (1)). However, as many studies have demonstrated, the ADC follows a multiexponential law with respect to higher b-value DWI signal intensity; moreover, this multiexponential behavior is not only related to the perfusion artifact [6,7,15,21,22]. The multiexponential behavior depends upon the intravoxel proton pools that contribute to the signal decay. To overcome the difficulty of making assumptions about the number of intravoxel proton pools with different diffusion coefficients in biological tissue, Bennett et al. [6] introduced the stretched-exponential model, described as

S_b / S_0 = exp[-(b DDC)^α],    (3)

where α represents intravoxel heterogeneity and DDC is the distributed diffusion coefficient representing the mean intravoxel diffusion rate; α = 1 is equivalent to monoexponential signal decay. Comparing equations (1) and (3), the ADC computed from the monoexponential model can be written as a function of b:

ln ADC(b) = P_1 + P_2 ln(b),    (4)

where P_1 and P_2 are constants (for a purely stretched-exponential decay, P_1 = α ln DDC and P_2 = α - 1). Therefore, we hypothesized a log-linear relationship between the ADC derived from the monoexponential model and the b-value. The purpose of this study was to derive the log-linear relation for lower b-value ADCs and use that relationship to extrapolate ADCs for higher b-values.
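The proposed log-linear extrapolation can be sketched as follows. Here the "measured" lower b-value ADCs are generated from an assumed stretched-exponential tissue (α = 0.8 and DDC = 1.0e-3 mm2/s are illustrative values, not fitted results); with real data the fit would be performed voxel-wise on the ADC maps:

```python
import numpy as np

# Illustrative stretched-exponential tissue parameters
alpha, ddc = 0.8, 1.0e-3

# Lower b-values used for the fit (s/mm^2) and the ADCs they imply via eq. (3):
# -b*ADC = -(b*DDC)^alpha  =>  ADC(b) = DDC^alpha * b^(alpha - 1)
b_low = np.array([400.0, 700.0, 1000.0])
adc_low = ddc**alpha * b_low**(alpha - 1.0)

# Log-linear fit: ln(ADC) = P1 + P2 * ln(b)   (polyfit returns slope first)
p2, p1 = np.polyfit(np.log(b_low), np.log(adc_low), 1)

# Extrapolate the synthetic ADC to a higher b-value and compare
b_high = 2000.0
sadc = np.exp(p1 + p2 * np.log(b_high))
adc_true = ddc**alpha * b_high**(alpha - 1.0)
print(sadc, adc_true)
```

For this idealized input the extrapolation is exact; the (10 ± 5)% relative error reported later reflects real tissue deviating from a single stretched exponential.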
Patient Selection.
A total of 15 patients with a median age of 62.5 years suspected to have prostate cancer were included in this retrospective study with the institutional ethical committee approval. All patients were treatment naïve and from a single center. Image-guided biopsy was performed after the imaging. The diffusion images were fused to USG images, and the biopsy of the abnormal diffusion lesion was taken under image guidance. The Gleason scores (GS) for the biopsies of the malignant tissue were recorded [23]. Out of 15 cases, only two patients had GS 7 and 5 patients had GS 6. The remaining 8 patients were reported as benign. Henceforth, we have considered GS 6 and 7 as malignant (N = 7) and the rest as benign (N = 8). All benign lesions had benign hypertrophy of the prostate with no evidence of malignancy, and all biopsy-positive malignant lesions had PI-RADS 4 (n = 3) or PI-RADS 5 (n = 4).

Imaging

Equation (4) was fitted voxel-wise to the lower b-value ADCs (ADC 0-400, ADC 0-700, ADC 0-1000) to estimate the model parameters P_1 and P_2. Synthetic ADC (sADC) maps were then calculated from equation (4) for the higher b-values. Synthetic DWI (sDWI) images for b-1500 and b-2000 were generated from the b0 DWI and the sADC using the monoexponential model and compared with the original DWI 1500 and DWI 2000. The contrast ratio (CR) between normal tissue and lesion for DWI and sDWI was computed as CR = (S cancer - S normal tissue)/(S cancer + S normal tissue). CR for original DWI and sDWI at b-1500 and b-2000, and sADC values of malignant and benign lesions, were assessed by a paired t-test. p values < 0.05 were considered statistically significant. Statistical analysis was performed using Prism (GraphPad Software, Version 7.0).
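The contrast ratio defined above can be computed directly; the intensities below are illustrative numbers, not measurements from the study:

```python
# Contrast ratio between lesion and normal tissue, as defined in the text
def contrast_ratio(s_cancer, s_normal):
    return (s_cancer - s_normal) / (s_cancer + s_normal)

# Hypothetical intensities: at the higher b-value the lesion stays relatively
# bright while normal tissue decays faster, so CR increases.
cr_b1000 = contrast_ratio(120.0, 80.0)
cr_b2000 = contrast_ratio(90.0, 30.0)
print(cr_b1000, cr_b2000)
```

Because CR is normalized by the total signal, it is insensitive to overall scaling of the image intensities.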
Regions of Interest.
Regions of interest (ROIs) were placed at a normal-appearing muscle area and at the lesion on the original DWI image and the computed DWI image. Two radiologists, one with 10 years of experience and another with more than 20 years of experience, blinded to each other and to the histological findings, placed the ROIs. Overlap of the ROIs between the two radiologists was 95%. For cases with an area suspicious for tumor, ROIs were placed on axial high b-value diffusion-weighted images (b = 2000 s/mm2) on a hyperintense area suspicious for tumor and a normal intensity area within the gland on the same image. For cases in which the area suspicious for tumor was in the peripheral zone of the gland, the normal intensity region of interest was selected from a location in the peripheral zone on the same image. For cases with no area suspicious for tumor, regions of interest were placed in the relatively hyperintense peripheral zone and in the transition zone (which is normally hypointense to the peripheral zone) on the same image.
Results
In the one-way ANOVA test, ADC shows a highly significant change (p < 0.0001) with respect to the b-value, both in the transitional zone (TZ) and the peripheral zone (PZ) of the prostate (Figure 1), in all the patient data. This observation supports our initial assumption that the ADC is not constant with respect to b-values.
The log-linear model gives the best fit to the data (R^2 ∼ 0.9) from the prostate tissue (Figure 2).
No significant difference was observed in the paired t-test between sADC and actual ADC in the prostate lesions; however, the change was significant in the normal tissue (p < 0.001) at b-2000. The contrast ratio increased significantly between original DWI images and sDWI images (p = 0.002) (Figure 3).
Mean sADC of prostate lesions was significantly lower than that of surrounding normal tissue (p < 0.001) for b-2000 when considering all data (N = 15). A significantly lower sADC was observed, using an independent t-test, in malignant lesions (GS 6, 7) as compared to benign lesions (GS < 6) (Figure 4). In addition, sADC at b-1000, b-1500, and b-2000 was found to significantly distinguish lesions with GS < 6 from lesions with GS ≥ 6. The mean sADC values, confidence intervals (CI), and p values are given in Table 1.
Discussion and Conclusion
Choice of b-values can significantly influence ADC estimation using the monoexponential diffusion model in the prostate, in agreement with the variations in ADC found in the literature [7,19,20]. Our study shows a log-linear relationship between ADC and b-values. Using the log-linear relationship derived from the ADCs of the lower b-values (b = 400, 700, and 1000), ADCs for higher b-values (b = 1500 and 2000) can be extrapolated with a small relative error (10 ± 5)%. The contrast ratio between lesion and normal tissue increases significantly in synthetic DW images. The technique of generating synthetic ADC gives clinicians extra degrees of freedom in the choice of b-values. The optimal b-value for disease detection depends upon image contrast, which is likely to change with tissue type and histological findings. Rather than deciding the optimal b-value prior to imaging to get optimal contrast between normal and cancer tissue, the use of synthetic ADC makes it possible to modify the b-value and obtain the optimal image contrast even after imaging. Furthermore, the technique allows extrapolation of ADC values for higher b-values, which cannot be obtained by the computed DWI method. However, this technique may not reduce the overall scan time; in our scanning protocol, the scanning time for the three lower b-values (b-400, 700, and 1000) is 1 min 39 sec, and the scanning time for one high b-value (b-2000) is 1 min 5 sec. This technique provides a method to obtain DW images and ADC values for a wide range of b-values.
According to the diffusion equation, the b-value has a [time]^3 dependency; thus, a very high b-value can be achieved in a clinical scanner with a moderate increase in the echo time (TE). However, the signal loss due to diffusion is a limiting factor at high b-values. The initial signal-to-noise ratio (SNR) and the tissue diffusion determine how quickly the signal goes below the noise level. As tissue diffusivity is higher in normal tissue as compared to cancer tissue, the normal-region signal decay reaches the noise level at a relatively faster rate. Hence, the observed signal at high b-values is dominated by the noise and appears to decay at a slower rate. This explains the significant difference between ADC and sADC values in normal regions. As DWI signal attenuation depends exponentially on ADC, small changes in ADC can make a significant change in DWI contrast; this results in the significant increase of CR in sDWI images as compared to DWI. The present study demonstrates that, although the higher b-value sDWI increases the contrast between lesion and normal tissue, the sADC shows similar contrast for b-1000, b-1500, and b-2000. This could be due to the small cohort size of patients with different Gleason scores, consistent with results in other studies [12,24]. ADC computed from high b-value DWI has been shown to be more accurate in distinguishing prostate lesions from benign and normal tissues [25,26]. Further investigation could be done on the clinical application of sDWI with larger patient populations. One limitation of our study was that MRI examinations were not compared with a radical prostatectomy specimen. However, image-guided MR-overlaid biopsy could be a good alternative to radical prostatectomy where the patient refuses to undergo prostatectomy.
Our initial investigation suggests that the ADC values corresponding to higher b-value DWI can be computed using a log-linear relationship derived from lower b-values (b ≤ 1000). Moreover, this computational method can also be manipulated to determine optimized b-values to create ADC maps. e synthetic ADC technique could be a useful tool to provide optimized image contrast for quantitative DW-MR imaging applications in oncology where ADC is routinely used in clinical practice.
Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.
Disclosure
Partial results of this manuscript have been presented at the European Society for Magnetic Resonance in Medicine and Biology (ESMRMB), 2017, Barcelona, Spain, with the abstract titled, "Synthetic Apparent Diffusion Coefficient for Ultra High b-value Diffusion Weighted Imaging in Prostate" (abstract number: esmrmb2017.58233ce). | 2019-10-24T09:11:48.224Z | 2019-10-16T00:00:00.000 | {
"year": 2020,
"sha1": "c87390c7d901643160aa9623f8d6553eb2cf254f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2020/5091218",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ffbf43954f8a4b714b11b82731a0ee3c68aa7b4b",
"s2fieldsofstudy": [
"Medicine",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118524714 | pes2o/s2orc | v3-fos-license | Reconstructed primary fragments and symmetry energy, temperature and density of the fragmenting source in $^{64}$Zn + $^{112}$Sn at 40 MeV/nucleon
Symmetry energy, temperature and density at the time of the intermediate mass fragment formation are determined in a self-consistent manner, using the experimentally reconstructed primary hot isotope yields and anti-symmetrized molecular dynamics (AMD) simulations. The yields of primary hot fragments are experimentally reconstructed for multifragmentation events in the reaction system $^{64}$Zn + $^{112}$Sn at 40 MeV/nucleon. Using the reconstructed hot isotope yields and an improved method, based on the modified Fisher model, symmetry energy values relative to the apparent temperature, $a_{sym}/T$, are extracted. The extracted values are compared with those of the AMD simulations, extracted in the same way as that for the experiment, with the Gogny interaction with three different density-dependent symmetry energy terms. $a_{sym}/T$ values change according to the density-dependent symmetry energy terms used. Using this relation, the density of the fragmenting system is extracted first. Then symmetry energy and apparent temperature are determined in a self consistent manner in the AMD model simulations. Comparing the calculated $a_{sym}/T$ values and those of the experimental values from the reconstructed yields, $\rho /\rho_{0} = 0.65 \pm 0.02 $, $a_{sym} = 23.1 \pm 0.6$ MeV and $T= 5.0 \pm 0.4$ MeV are evaluated for the fragmenting system experimentally observed in the reaction studied.
I. Introduction
Nuclear symmetry energy, a part of the equation of state (EoS) in the nuclear matter equation, has been extensively studied in the last three decades. The symmetry energy relates to many subjects such as in nuclear astrophysics, nuclear structure, and nuclear reactions. Its property determination is a key objective in laboratory experiments [1,2]. Investigations of the symmetry energy, especially focusing on its density dependence, have been conducted using many observables such as isotopic yield ratios [3], isospin diffusion [4], neutron-proton emission ratios [5], giant monopole resonances [6], pygmy dipole resonances [7], giant dipole resonances [8], collective flows [9] and isoscaling [10][11][12]. Different observables may probe the properties of the symmetry energy at different densities and temperatures.
In a theoretical work on the EoS, Wiringa et al. [13] pointed out that the density dependence of the symmetry energy may have different slope parameters in different high-density regions. When a three-body interaction is taken into account, the symmetry energy shows a significant softening at ρ/ρ 0 ∼ 2 − 3, hardens again at ρ/ρ 0 ∼ 5, and then shows an asymptotically soft trend at higher densities. Therefore it is important to know not only the values of the symmetry energy and slope parameter, or the exponent of the density-dependent terms, but also the density and temperature of the system at which the values are evaluated.
In one of our previous works, the density dependence of the symmetry energy at low densities was experimentally studied in several heavy ion reactions at 47 MeV/nucleon, using the light particles (Z = 1, 2) from the intermediate velocity source as the probe [14].
In that study the temperature in the region 5−10 MeV was evaluated from the double-ratio thermometer, and the density of 0.03 ≤ ρ/ρ 0 ≤ 0.2 was extracted from the coalescence technique. In the sampled density and temperature intervals, symmetry energies were derived and nonzero symmetry energies were obtained at low densities. However, in quasiparticle approaches, such as Skyrme Hartree-Fock and relativistic mean field models or Dirac-Brueckner Hartree-Fock calculations, the symmetry energy tends to zero at low densities [2,15,16]. This significant, experimentally observed deviation of the symmetry energy at low densities from the quasiparticle predictions can be attributed to cluster formation, which dominates the structure of low-density symmetric matter at low temperatures, in accordance with the mass action law.
In violent heavy ion collisions in the intermediate energy regime (20 ≤ E inc ≤ a few hundred MeV/nucleon), intermediate mass fragments (IMFs) are copiously produced through multifragmentation processes. Nuclear multifragmentation, which, in general, can be divided into two stages, i.e., the dynamical compression and expansion of the fragmenting source and the formation of primary hot fragments, was predicted a long time ago [17] and has been studied extensively following the advent of 4π detectors [18][19][20]. Nuclear multifragmentation occurs when a large amount of energy is deposited in a finite nucleus, and thus it provides important information on the properties of the hot nuclear matter equation of state.
To model the multifragmentation process, a number of different models have been developed in two distinct scenarios. One is based on a transport model, in which nucleon propagation in a mean field and nucleon-nucleon collisions under Pauli blocking are the two main physical ingredients. Various transport models have been coded since the Boltzmann-Uehling-Uhlenbeck (BUU) model [21], a test-particle-based Monte Carlo transport model, was first proposed in the 1980s. The Vlasov-Uehling-Uhlenbeck model (VUU) [22] and the Boltzmann-Nordheim-Vlasov model (BNV) [23] are formulated slightly differently with the same concept. The stochastic mean field (SMF) model [24][25][26] is also a test-particle-based model, but with fluctuations in the multifragmentation process. Instead of using test particles, Gaussian wave packets are introduced to describe the nucleons, as in the quantum molecular dynamics (QMD) model [27][28][29]. The constrained molecular dynamics (CoMD) model [30][31][32][33] and the improved quantum molecular dynamics (ImQMD) model [34][35][36][37][38] are based on QMD, but with an improved treatment of the Pauli blocking during the time evolution of the reaction. Fermionic molecular dynamics (FMD) [39] and anti-symmetrized molecular dynamics (AMD) [40][41][42] are the most sophisticated models, in which the Pauli principle is taken into account in an exact manner in the time evolution of the wave packets and in nucleon-nucleon collisions. Most of these models can account reasonably well for many experimentally observed characteristic properties. On the other hand, statistical multifragmentation models, such as the microcanonical Metropolis Monte Carlo model (MMMC) [43,44] and the statistical multifragmentation model (SMM) [44][45][46][47][48][49][50][51][52], based on quite different assumptions from the transport models, can also describe many experimental observables well. The statistical models use a freeze-out concept.
The multifragmentation is assumed to take place in equilibrated nuclear matter described by parameters such as size, neutron/proton ratio, density and temperature. In recent analyses the parameters are optimized to reproduce the experimental observables of the final state. In contrast, the transport models do not assume any chemical or thermal equilibration. Nucleons travel in a mean field, experiencing nucleon-nucleon collisions subject to the Pauli principle. Fragmentation mechanisms are determined by the evolution of the wave packets or nucleons in phase space, which also differs from that of the statistical models.
One of the complications one has to face when comparing the experimental observables to the model predictions, in either dynamical or statistical models, is the secondary decay process. When fragments are formed in a multifragmentation process, many of them can be in excited states and cool down by evaporation processes before they are detected experimentally [53][54][55][56]58]. Here the fragments at the time of formation are called "primary" fragments; those observed after the cooling process are called the "secondary" or "final" fragments. Multifragmentation is a very fast process, occurring on the order of 50-100 fm/c in intermediate energy heavy ion collisions, whereas the secondary decay process is much slower. Therefore the secondary cooling process may significantly alter the yield distributions of the primary isotopes [59][60][61]. Even though the statistical decay process itself is rather well understood and well coded, it is not a trivial task to combine it with a dynamical code. That is because the statistical evaporation codes assume nuclei at thermal equilibrium with normal nuclear density and shapes, and these conditions are not guaranteed for fragments when they are formed in the multifragmentation process.
In order to avoid this complication and make the comparisons between results from the experimental data and different models more straight forward, we proposed a method in which the primary hot fragment yields are reconstructed experimentally. The method utilizes a kinematic focusing of the evaporated particles along the precursors of IMFs. In Fermi energy heavy ion collisions, light particles are emitted at different stages of the reaction and from different sources during the evolution of the collisions. Those from an excited isotope are kinematically focused into a cone centered along the isotope direction. The kinematical focusing technique uses this nature. Details of the experiment, the kinematical focusing technique and the results are presented in Refs. [55,56].
In that work, the events triggered by IMFs in the experiment are "inclusive", but they belong to a certain class of events. In order to determine the event class sampled in the experiment, AMD simulations are used to evaluate the impact parameter range covered. First, the impact parameter distributions corresponding to violent, semi-violent, semi-peripheral and peripheral collisions are calculated. The violence of the reaction for each event in the AMD simulation is determined in the same way as in our previous work [57]. Then the impact parameter distribution of the events triggered by the IMFs at 20 • is calculated and compared to those corresponding to the different degrees of violence. The distribution is very similar to that of the semi-violent collisions, in which the majority of the events originates from the impact parameter range of 0 − 8 fm. Therefore, in the following analyses, the comparisons of the parameters extracted from the experimentally reconstructed isotope yields are made with those of the AMD simulations in the impact parameter range of 0 − 8 fm. In Fig. 1, the results for the multiplicity distributions of the experimental cold and reconstructed hot isotopes are shown, together with those of the primary isotopes simulated by the AMD calculations.
The reconstructed isotope multiplicities are reasonably well reproduced by the primary isotope distribution of the AMD simulation. In Refs. [56,58], we studied the properties of the fragmenting system through the symmetry energy coefficient relative to the temperature, a sym /T . In the study the a sym /T values were extracted in a simpler formalism, utilizing three isobars of the reconstructed primary hot fragments with I = N − Z = −1, 1 and 3.
This article presents an improved method to calculate the a sym /T values, in which the mass dependence of the temperature is taken into account as an apparent temperature. This method has been applied recently to the simulated AMD events of the very central collisions for 40 Ca + 40 Ca at 35 MeV/nucleon [62]. A self-consistent determination of density, symmetry energy and temperature described in Refs. [56,58] was also employed there. In this work the same procedure following Ref. [62] is applied to the experimentally reconstructed isotope yields of 64 Zn + 112 Sn at 40 MeV/nucleon to study the characteristic properties of the hot nuclear matter in the multifragmenting system. This article is organized as follows. In Sec.II we describe the improved method to determine the symmetry energy coefficient relative to the temperature, a sym /T , utilizing all isotope yields. In Sec.III, a self-consistent determination of density, symmetry energy and temperature is discussed. In Sec.IV, the mass dependent apparent temperature is studied.
Finally, a summary is given in Sec. V.
II. Extraction of a_sym/T_0 values
In order to make a connection between the symmetry energy in a model and the experimentally reconstructed primary hot isotope yields in Fig. 1, the Modified Fisher Model (MFM) is employed [63-66]. MFM has been used to study the characteristic properties of hot nuclear matter in previous works [56, 58-60, 62, 66, 67]. In the framework of MFM, the yield of an isotope with I = N − Z and mass A (N neutrons and Z protons) produced in a multifragmentation reaction can be given as

Y(I, A) = Y_0 A^(−τ) exp{[W(I, A) + μ_n N + μ_p Z]/T + N ln(N/A) + Z ln(Z/A)}.   (1)

Using the generalized Weizsäcker-Bethe semiclassical mass formula [68,69], W(I, A) can be approximated as

W(I, A) = a_v A − a_s A^(2/3) − a_c Z(Z − 1)/A^(1/3) − a_sym I²/A − a_p δ(N, Z),   (2)

where δ(N, Z) is the pairing term. In Eq. (1), μ_n (μ_p) is the neutron (proton) chemical potential and τ is the critical exponent. In this work, the value τ = 2.3 is adopted from the previous studies [66]. Since we apply this formulation to the primary hot fragments, the coefficients a_v, a_s, a_sym, a_p and the chemical potentials are generally temperature and density dependent, even though these dependencies are not shown explicitly.
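As a numerical illustration of the MFM yield expression, the sketch below evaluates the yield for a toy isotope. The coefficient values, chemical potentials and temperature are illustrative placeholders, not the fitted values of this work, and the pairing term is omitted.

```python
import math

# Illustrative Weizsacker-Bethe coefficients in MeV (placeholders, not fitted values)
A_V, A_S, A_C, A_SYM = 15.8, 18.3, 0.71, 23.2

def W(I, A):
    """Binding energy W(I, A); the pairing term is omitted for simplicity."""
    Z = (A - I) / 2                       # from I = N - Z and A = N + Z
    return (A_V * A - A_S * A**(2 / 3)
            - A_C * Z * (Z - 1) / A**(1 / 3)
            - A_SYM * I**2 / A)

def yield_mfm(I, A, T, mu_n, mu_p, Y0=1.0, tau=2.3):
    """MFM yield: Y0 A^-tau exp{[W + mu_n N + mu_p Z]/T + N ln(N/A) + Z ln(Z/A)}."""
    N, Z = (A + I) / 2, (A - I) / 2
    s_mix = N * math.log(N / A) + Z * math.log(Z / A)   # mixing-entropy term
    return Y0 * A**(-tau) * math.exp((W(I, A) + mu_n * N + mu_p * Z) / T + s_mix)

# With mu_n = mu_p, the symmetry term suppresses the neutron-rich isobar:
r = yield_mfm(2, 12, T=5.0, mu_n=-8.0, mu_p=-8.0) / yield_mfm(0, 12, T=5.0, mu_n=-8.0, mu_p=-8.0)
print(0.0 < r < 1.0)
```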
In this formulation a constant-volume process at equilibrium is assumed in the free energy, and therefore the term "symmetry energy" is used throughout this work, following Ref. [70]. If one assumes a constant pressure during the equilibrium process [71], the term "symmetry enthalpy" should be used instead. Experimentally, whether the equilibrium process takes place at constant pressure or constant volume cannot be determined, and thus we use "symmetry energy" throughout the paper, keeping this ambiguity in mind [70].
In the previous analyses [56, 58-61], the temperature in Eq. (1) was assumed to be identical to the temperature of the fragmenting source and was treated as a constant for all isotopes.
However, as seen in Ref. [62], this temperature turns out to be fragment-mass dependent.
This mass dependence of the temperature was not recognized in those previous analyses because it was masked by the larger error bars; with the improved method the error bars become small and the mass dependence becomes evident. In order to take this mass dependence into account in Eq. (1), the temperature T is replaced by an apparent temperature T(A) = T_0(1 − kA), where T_0 is the temperature of the fragmenting source and k is a constant. As discussed in Ref. [62], this mass dependence of the apparent temperature is attributed to a system size effect.
In order to study the density, temperature and symmetry energy of the fragmenting source, the improved MFM of Eq. (1) is utilized to calculate the a_sym/T_0 value, which is extracted from the available isotope yields. Since the a_sym/T_0 value in Eqs. (1) and (2) depends on five parameters, a_v, a_s, a_c, a_p and Δμ (Δμ = μ_n − μ_p), the optimization of these parameters is divided into the following three steps to minimize the ambiguity of each parameter. For a given k value:
1. Optimize the Δμ/T_0 and a_c/T_0 values from mirror isobars.
2. Optimize the a_v/T_0, a_s/T_0 and a_p/T_0 values from N = Z isotopes.
3. Using the parameters extracted in steps (1) and (2), extract the a_sym/T_0 values from all available isotopes. Comparing the a_sym/T_0 values extracted from the AMD simulations with three different interactions, the density of the fragmenting source is obtained. Using this density, the value of the symmetry energy coefficient, a_sym, is determined for each interaction. The temperature is then extracted from the relation T_0 = a_sym/(a_sym/T_0).
If the k value is properly chosen, that is, if the mass dependence is well accounted for, a constant T_0 is obtained. Since the k value is small, as seen below, the parameter k is optimized iteratively: in the first round, k = k_1 = 0 is set in T(A) = T_0(1 − kA) and the temperature is calculated as a function of A using steps (1)-(3). From this plot a new value k'_1 is extracted from the slope. In the second round, k = k_2 = k_1 + (1/2)k'_1 is used in steps (1)-(3) and a new value k'_2 is extracted. If the new k'_2 value is 0 within a given error range, the iteration stops, the k_2 value is fixed as the mass-dependence parameter of the apparent temperature, and the T_0 value is determined. Otherwise the iteration continues.
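The damped iteration on k can be sketched as follows. The mock "measurement" simply embeds an apparent temperature T(A) = T_0(1 − kA) with assumed values T_0 = 5 MeV and k = 0.0022, and steps (1)-(3) are replaced by a stand-in that undoes a trial k and fits the residual slope.

```python
import numpy as np

T0_TRUE, K_TRUE = 5.0, 0.0022       # assumed values, for illustration only
A = np.arange(6, 40, dtype=float)   # fragment mass range

def apparent_T(A):
    """Mock 'measured' apparent temperature T(A) = T0 (1 - k A)."""
    return T0_TRUE * (1.0 - K_TRUE * A)

def estimate(k):
    """Stand-in for steps (1)-(3): undo an assumed k, fit the residual slope."""
    T_est = apparent_T(A) / (1.0 - k * A)
    slope, intercept = np.polyfit(A, T_est, 1)
    return intercept, -slope / intercept      # (T0 estimate, residual k')

k = 0.0                                       # first round: k1 = 0
for _ in range(30):
    T0_est, k_resid = estimate(k)
    if abs(k_resid) < 1e-6:                   # residual slope consistent with zero
        break
    k += 0.5 * k_resid                        # damped update k_{n+1} = k_n + k'_n / 2

print(round(k, 4), round(T0_est, 2))          # converges to ~0.0022 and ~5.0
```

The damping factor of 1/2 mirrors the text's update rule and makes the residual slope shrink geometrically from round to round.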
These procedures are applied separately to the reconstructed isotope yields and to the AMD events simulated with interactions having different density dependencies of the symmetry energy term: the standard Gogny interaction with an asymptotically soft symmetry energy (g0), the Gogny interaction with an asymptotically stiff symmetry energy (g0AS) and the Gogny interaction with an asymptotically super-stiff symmetry energy (g0ASS) [41,72]. To be consistent with the experimental isotope selection, an approximate window is applied to the AMD primary hot fragments, in which the multiplicity of the IV source component is calculated by integrating the energy spectra over E > 5 MeV/nucleon and 5° < θ < 25° in the laboratory frame, in order to minimize the contributions from the projectile-like and target-like sources, based on the moving source analysis [56,58].
Details of each step are described below. In step (1), following Ref. [59], the isotope yield ratio between isobars with I + 2 and I, R(I + 2, I, A) = Y(I + 2, A)/Y(I, A), is utilized. The parameters extracted in this way are given in Table I for the first round (k = 0) and the final round (k = 0.0022).
where ΔF(N, Z)/T_0 is the free energy relative to the temperature, F(N, Z)/T_0, with the calculated contributions of the volume, surface, Coulomb and pairing terms subtracted, using the parameters in Table I. The resultant ΔF(N, Z)/T_0 values are shown by symbols in Fig. 3(b). They exhibit quadratic shapes with minimum values close to zero, indicating that the N/Z of the fragmenting source is close to 1. The fluctuation around zero for the N = Z isotopes reflects the deviations between the data and the fit points in Fig. 3(a).
In this step, the a_sym/T_0 and Δμ/T_0 values are optimized. Since the Δμ/T_0 values are extracted in step (1), the optimization is made for each isotope around the values in the fifth column of Table I. In Fig. 4(b) the a_sym values are shown as a function of the density for the three interactions used in the calculations, and in Fig. 4(c) their ratios, R_sym = a_sym(g0)/a_sym(g0AS) and R_sym = a_sym(g0)/a_sym(g0ASS), are plotted.
Using the ratio values determined from Fig. 4(a) and the density dependence of the R_sym values in Fig. 4(c), the implied densities of the fragmenting sources are indicated by the shaded vertical areas in Fig. 4(c). The extracted density values for each case are given in the second column of Table II. Assuming that the nucleon density should be the same for the three different interactions used, the nucleon density of the fragmenting source is determined from the overlap of the extracted values. This assumption is reasonable for violent collisions because the nucleon density is mainly determined by the stiffness of the EOS and not by the density dependence of the symmetry energy term. From the overlapping density area in Fig. 4(c), ρ/ρ_0 = 0.65 ± 0.02 is extracted as the density at the time of fragment formation. This density value is also assigned to the experiment [56,58]. The corresponding symmetry energy values at that density are extracted for the three different interactions from Fig. 4(b). The experimental symmetry energy, a_sym(Exp), is calculated from the average value of R_sym(Exp), shown by the full line in Fig. 4(a), and a_sym(g0) at the density obtained from the AMD events, ρ/ρ_0 = 0.65 ± 0.02, as a_sym(Exp) = a_sym(g0)/R_sym(Exp). This operation assumes that the system temperatures from the AMD events and from the experimentally reconstructed isotope yields are almost identical [56,58]. The a_sym values are given in the third column of Table II. Once the symmetry energy value is determined for each case, the temperature follows from T_0 = a_sym/(a_sym/T_0); the values are given in Table II for the first round.
The iteration is repeated three times in this work. The same plots as in Fig. 3, but with the k value of the final (third) round, k = 0.0022, are shown in Fig. 6; the extracted parameters are also given in Table I and shown in Fig. 5. The extracted density and symmetry energy values in the different iteration rounds are very similar, as seen in Table II, even though the parameter values in Table I differ by 5 to 10% in some cases. This indicates that the density, symmetry energy and temperature values in Table II are quite stable under the iteration procedure. All parameters extracted in this work are also consistent with those of the previous works [56,58], in which a simpler method was employed to evaluate the a_sym/T_0 values.
IV. Discussion
In order to study the observed slope of the apparent temperature, a simple Monte Carlo model is employed, following Ref. [62]. Under a thermal equilibrium condition, the thermal motion with velocity v^th_i (i = x, y, z) is expressed by a Maxwell-Boltzmann distribution, P(v^th_i) ∝ exp[−m(v^th_i)²/(2T_0)], where T_0 is the input parameter of the model and m is the fragment mass. Fragments are generated by a percolation model for a system with mass 180 (6 × 6 × 5) [73].
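The thermal-motion sampling can be sketched as below (the percolation fragment generation itself is omitted). T_0 = 5 MeV is an illustrative input, and the check simply recovers the input temperature from the sampled velocity components via equipartition, <m v_i²> = T_0.

```python
import numpy as np

rng = np.random.default_rng(0)
T0 = 5.0       # MeV, illustrative input temperature of the model
M_N = 938.0    # MeV/c^2, nucleon mass

def sample_thermal_velocity(A, n=200_000):
    """Each Cartesian component i follows P(v_i) ~ exp(-m v_i^2 / 2 T0), m = A m_N."""
    sigma = np.sqrt(T0 / (A * M_N))          # velocity in units of c
    return rng.normal(0.0, sigma, size=(n, 3))

# Equipartition check: <m v_i^2> = T0 for each of the three components
v = sample_thermal_velocity(A=10)
T_recovered = (10 * M_N) * (v**2).mean(axis=0)
print(np.round(T_recovered, 2))
```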
V. SUMMARY
An improved method to extract the symmetry energy coefficient relative to the temperature, a_sym/T_0, and a self-consistent determination of the density, temperature and symmetry energy of the fragmenting system are presented. Using the improved method based on the
FINANCIAL INCLUSION INDEX: PROPOSAL OF A MULTIDIMENSIONAL MEASURE FOR MEXICO
The main purpose of this paper is to introduce an index that allows for a general overview of Mexico's municipalities in terms of financial inclusion. The index is calculated using principal component analysis on variables related to the measurement of the levels of access and usage of financial services, financial education, consumer protection and social development. Subsequently, all municipalities are ranked by an estimated degree of inclusion through hierarchical cluster analysis. Results show that 36% of Mexico's municipalities possess a high level of financial inclusion while 29% have the lowest. Municipalities with higher income and better education benefit from the services that financial institutions offer, yet millions are still excluded from the financial system.
Introduction
Financial inclusion is defined as the access to and usage of financial services under appropriate regulations that ensure consumer protection schemes and promote financial education, such that it improves the financial capabilities of all segments of the population.¹ Although there is no global consensus on its definition, it is clear that financial inclusion is a multidimensional phenomenon, and the literature of recent years has been dedicated to defining new ways of measuring, understanding and expanding its study worldwide, especially in developing countries.
As of today, there are numerous indicators that describe the different dimensions of financial inclusion individually; however, there is currently no measure designed to rank the situation of the municipalities or states of Mexico in terms of financial inclusion. Studying the number of bank branches in a specific location, or the number of adults with a formal (regulated-institution) bank account or credit card, describes the dimension to which each variable pertains (access and usage, respectively) but does not, in any way, combine both dimensions to tell us whether a municipality with a given number of bank branches and accounts is more included or excluded than the rest of the country's administrative units.² From this, the need to capture the different dimensions of this phenomenon in a financial inclusion index is born. Subsequently, this paper presents, with the aid of the index, a classification of the municipalities according to their degree of financial inclusion, detecting pronounced differences and similarities between certain groups of municipalities, states and regions.
It is also expected that this document will serve as a future reference for financial institutions and regulatory entities in the financial sector seeking to propose alternatives and policies that help provide more, and more innovative, financial services to all segments of the population who do not have them and can benefit from their correct use.
The study is divided into the following sections: section 2 describes previous efforts from other countries to measure financial inclusion as a multidimensional phenomenon; section 3 defines the dimensions of financial inclusion used in this paper and the institutions that are included as part of the study; section 4 describes the methodology applied to calculate the index and classify Mexico's municipalities according to their degree of financial inclusion; section 5 presents the advantages and disadvantages of the proposed index; section 6 describes a general overview of financial inclusion in Mexico as an introduction to the results of the index; section 7, the main results at a municipal, state, and regional level; finally, section 8 concludes.
2 As of 2012, Mexico has 2,456 municipalities (administrative units) distributed in 32 states.
Previous Findings
Proposals of a financial inclusion index that allow comparisons between geographical divisions or countries around the world have been developed previously in India and Brazil. In the first example, an index of financial inclusion was constructed to find differences between countries.
Developed by Mandira Sarma,³ this index brings forward three dimensions of an inclusive financial system: banking penetration, availability of banking services and usage of the banking system.⁴ These dimensions were motivated by the availability of relevant and consistent data for a large number of countries. Sarma's index is computed as the normalized inverse Euclidean distance of weighted observations from an ideal point, with an upper limit set at the empirically observed 94th quantile. The limitations of this proposal include that it only uses banking information, that the dimensions are weighted subjectively, and that values greater than the 94th quantile are set equal to it.
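The distance-based formula described above can be sketched as follows. The dimension values, bounds and weights below are illustrative placeholders (in Sarma's index the weights are chosen subjectively and the upper bounds come from the empirical 94th quantile):

```python
import numpy as np

def inclusion_index(values, minima, maxima, weights):
    """Normalized inverse Euclidean distance from the ideal point.
    Values above the caps are clipped, mirroring the 94th-quantile rule."""
    v = np.clip(np.asarray(values, dtype=float), minima, maxima)
    d = weights * (v - minima) / (maxima - minima)    # each d_i lies in [0, w_i]
    return 1.0 - np.linalg.norm(weights - d) / np.linalg.norm(weights)

w = np.array([1.0, 0.5, 0.5])     # penetration, availability, usage (toy weights)
lo, hi = np.zeros(3), np.ones(3)

print(round(inclusion_index([0.8, 0.5, 0.2], lo, hi, w), 3))  # partial inclusion
print(inclusion_index([1.0, 1.0, 1.0], lo, hi, w))            # ideal point -> 1.0
print(inclusion_index([0.0, 0.0, 0.0], lo, hi, w))            # worst case -> 0.0
```

By construction the index lies in [0, 1]: it equals 1 at the ideal point and 0 when every dimension sits at its minimum.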
A second effort toward a financial inclusion index was presented by Brazil. This index is computed using the same formula and dimensions that Sarma determined, but includes 18 different variables. Nonetheless, even with these proposals, there is as yet no tool to understand the phenomenon as a whole for the particular case of Mexico and each of its geographical divisions.
Dimensions of Financial Inclusion and Institutions Studied
Constructing a financial inclusion index requires defining two areas: the financial institutions that will be part of the analysis, and the dimensions selected to describe the phenomenon.
According to the information gathered, the universe of financial institutions includes: banking institutions, correspondent banking agents (which serve as an extension of banking services) and the Cooperative and Microfinance institutions supervised by the National Banking and Securities Commission (CNBV, in Spanish), Mexico's regulatory entity for this kind of institution. When more detailed information becomes available at a municipal level, the universe of these financial institutions can be broadened⁵ to carry out a more comprehensive study taking advantage of the data.
The main dimensions used to describe the phenomenon must be those that are measurable and quantifiable. For this study, financial inclusion is considered to be composed of four basic dimensions: access (infrastructure), usage (financial products), financial education and consumer protection. A fifth, additional dimension has also been included: social development. The reason for including such a dimension is that financial inclusion should not only take into account the number of bank branches or financial products in a country, but also study the relationship between these variables and other social barriers, to further understand the limitations individuals face in accessing the financial system.
There are various methods for calculating an index: averages, intervals, distance measures, among others. The previous examples (India's and Brazil's) are computed using the latter. In this paper, however, a different methodology is applied, which not only preserves as much information from the variables as possible, but also helps create a new, easy-to-interpret variable (the index). Once the indicators were computed, principal component analysis was used to calculate the financial inclusion index. Such analysis reduces a number of highly correlated variables to a smaller number of uncorrelated variables that explain the phenomenon with the same information as the original indicators (retaining the variability of the data) but in a synthetic manner. Additionally, the results of this methodology provide the opportunity to find linear relationships between the original indicators and to determine whether they are correlated negatively, positively, or not closely related at all. An important feature to take into account when performing this type of analysis is the scale of the variables: if the scale of measurement is not consistent, it is advised to perform principal component analysis on standardized variables.
Methodology
In other words, principal component analysis seeks to build a new set of synthetic variables called principal components, such that they are linear combinations of the original indicators under certain conditions: (1) The new variables must be uncorrelated: each principal component will describe a different characteristic of the dataset and not redundant information.
(2) The new variables must have maximum variance: principal components account for most of the variance of the observed variables. Therefore, each synthetic variable c is a linear combination of the observed variables such that c = Xa, where X is the matrix of the observed standardized variables and a is the weight vector associated with these variables. We are looking to maximize the variance, so:

max Var(c) subject to the constraint a^T a = 1.

To assess the variance of c it is necessary to obtain the function to be maximized: Var(c) = a^T V a, where V is the covariance matrix of the standardized variables. This matrix is invertible, symmetric and positive definite, V = X^T D X, and for standardized data it corresponds to a correlation matrix. Naming c_1 the first principal component and a_1 the weight vector associated with it, we use the Lagrange multiplier technique to find the maximum of the objective function subject to the previously mentioned constraint⁸ and obtain the first solution:

V a_1 = λ_1 a_1,

showing that a_1 should be chosen to be an eigenvector of V with eigenvalue λ_1. Since the expression to maximize is Var(c_1) = a_1^T V a_1 = λ_1, a_1 should be chosen as the eigenvector corresponding to the largest eigenvalue λ_1 of V. The elements a_1j of vector a_1 are called weights, and they measure the importance of variable j in principal component 1. This definition applies to the rest of the principal components as well. Now, in order to extract the second and remaining principal components, the function is set to maximize the sum of the variances of those two (or more) components, subject to the constraints a_1^T a_1 = 1 and a_2^T a_2 = 1. Differentiating the corresponding Lagrangian with respect to the weight vectors, the solutions obtained are V a_1 = λ_1 a_1 and V a_2 = λ_2 a_2, where λ_1 and λ_2 are the eigenvalues corresponding to the eigenvectors a_1 and a_2. Evaluating the objective function at this solution implies
that the variance will be the highest, with value λ_1 + λ_2, concluding that these correspond to the highest and second highest eigenvalues possible. Equivalently, matrix V has p distinct eigenvalues (p is also the number of original variables), so that λ_1 > λ_2 > ... > λ_p.
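The whole derivation boils down to an eigendecomposition of the correlation matrix. A minimal sketch on toy data follows (a 200×5 matrix standing in for the municipality-by-indicator matrix; the actual dataset is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: 200 "municipalities" x 5 correlated indicators (rank-2 signal + noise)
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(200, 5))

# Standardize; V is then the correlation matrix of the indicators
Z = (X - X.mean(axis=0)) / X.std(axis=0)
V = (Z.T @ Z) / len(Z)

# Columns of 'a' are the weight vectors a_k; eigenvalues are the component variances
eigvals, a = np.linalg.eigh(V)
order = np.argsort(eigvals)[::-1]          # sort so that lambda_1 > lambda_2 > ...
eigvals, a = eigvals[order], a[:, order]

scores = Z @ a                             # principal components c_k = X a_k
explained = eigvals / eigvals.sum()        # proportion of inertia per component
print(np.round(explained.cumsum()[:2], 2))
```

The sum of the eigenvalues equals p (the trace of the correlation matrix), and the component scores are mutually uncorrelated, matching the two conditions stated above.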
After obtaining the results of the principal component analysis, the components make it possible to classify the observations into groups whose members are similar and which have characteristics that distinguish them from other groups. To achieve this, performing a hierarchical cluster analysis using Ward's method is advised. Further on in this paper, the method is used to classify the municipalities into three levels (degrees) of financial inclusion.
Hierarchical cluster analysis is a statistical method for partitioning data into homogeneous classes. In this analysis, each observation starts as its own cluster (i.e., at the beginning there are as many groups as individuals), and clusters are combined sequentially using a particular rule to reduce their number. The clustering method uses the dissimilarities or distances between objects when forming the clusters and, as a result, more and more groups are linked together, creating clusters with increasingly dissimilar elements. As mentioned, a distance measure and a clustering method must be chosen. In the present text, Ward's method is used, computing squared Euclidean distances. Ward's method indicates that two clusters are merged when the merging cost of combining them is minimal, where the merging cost is the increase in the total within-cluster sum of squares produced by the merge.
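A minimal sketch of the clustering step, using SciPy's hierarchical-clustering routines on toy two-dimensional scores that stand in for the component values of the municipalities; three well-separated groups are recovered by cutting the Ward tree at three clusters:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# Toy component scores for three synthetic groups of "municipalities"
low  = rng.normal(-2.0, 0.3, size=(30, 2))
mid  = rng.normal( 0.0, 0.3, size=(30, 2))
high = rng.normal( 2.0, 0.3, size=(30, 2))
X = np.vstack([low, mid, high])

# Ward linkage (merge the pair whose fusion least increases the within-cluster
# sum of squares), then cut the tree into exactly three clusters
tree = linkage(X, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")
print(sorted(np.bincount(labels)[1:].tolist()))   # cluster sizes
```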
Finally, an explanatory data analysis to distinguish the fundamental differences between each of these types of municipalities, the states, and geographic regions of the country can be performed.
Advantages and Limitations of the Index
The financial inclusion index computed in the following sections has the advantage of integrating major dimensions that have not been considered before by other countries when proposing a summary measure. Today, consumer protection and financial literacy represent a fundamental part of the policy agenda to broaden the use of, and access to, financial services. The integration of these dimensions allows a more precise and profound comprehension of the analyzed phenomenon.
However, the index also reflects the need to improve the way data is reported by financial institutions. Hence, there are still a number of limitations that can be worked out in the near future. The first problem facing Mexico in terms of measurement is that the number of accounts cannot be individualized. So far, reports provided by financial institutions at a municipal level do not distinguish the number of people who hold a specific account but only offer data on the number of accounts; very often, a person has multiple accounts for one or more types of products. Moreover, the universe of financial products included in the index can continue to expand. Products such as insurance and microinsurance, pension and investment accounts that are granted exclusively to individuals can be integrated into the calculation of this index, but need to be reported at a municipal level to be studied. To the extent that information on these products becomes available, financial inclusion can be studied fully through an index.
A Picture of Financial Inclusion in Mexico: Exploratory Data Analysis
Prior to the construction of the index, it is convenient to perform an exploratory data analysis that helps provide an overview of financial inclusion in Mexico.
According to 2011 data from the CNBV, approximately 44% of Mexico's municipalities have infrastructure coverage through bank, Microfinance and Cooperative branches. The implementation of banking agents as a business model has increased municipality coverage by 13% and the adult population with possibility of access by 4%. Given the demographic distribution of the country, 90% of the adult population⁹ inhabits these municipalities. While rural municipalities have roughly 0.47 access points per 10,000 adults, metropolitan municipalities are served by 4.37.
Banks, Cooperatives and Microfinance institutions serve the population in Mexico through 14,537 branches, 12,486 banking agents, 36,098 ATMs and 500,294 point-of-sale (POS) terminals. The corresponding demographic indicators allow for a better comprehension when linked to the adult population: there are approximately 1.83 branches, 1.56 banking agents, 4.52 ATMs and 62.68 POS terminals per 10,000 adults. For the purposes of this paper, branches (bank, Cooperative or Microfinance) and banking agents together will be referred to as access points.
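These demographic indicators are plain ratios per 10,000 adults; as a back-of-the-envelope consistency check (not an official figure), the branch numbers above imply an adult population of roughly 79 million:

```python
# indicator = count / adults * 10,000  =>  adults = count / indicator * 10,000
branches, branches_per_10k = 14_537, 1.83
implied_adults = branches / branches_per_10k * 10_000
print(round(implied_adults / 1e6, 1))   # ~79.4 million adults
```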
Out of all branches, 86% belong to banks while 14% belong to Cooperative and Microfinance institutions. Integrating the indicators for the deposit and credit services that these institutions offer, there are 11,742 deposit and 4,125 credit (loan) products per 10,000 adults. Deposit accounts thus represent the most used financial service.
In terms of education, and according to INEGI's 2009 municipal statistics, the Mexican adult population completes an average of 8.1 years of schooling. CONEVAL, on the other hand, in its own study up to 2010, reports that 19.9% of the adult population has not completed primary education, while 6.9% of Mexico's population is illiterate. Complementarily, CONAPO informs that 20.6% of the Mexican population has an educational gap (referred to as lack of education), defined as not having completed compulsory basic education. Schooling is not only a fundamental tool for a person's daily life nowadays; it also has an impact on socioeconomic conditions and quality of life, because it allows people to access new knowledge and information and implies greater opportunities for personal and professional development. Additionally, illiteracy plays its part: it limits an individual's participation in society and increases their economic vulnerability, directly affecting the ability to get a job. The implications of these shortcomings are primarily economic and social.¹⁰ Moreover, in the dimension of consumer protection, CONDUSEF states in its latest statistical yearbook (2010) that, in carrying out its duties, 1,086,999 actions of advisory work, management, conciliation, arbitration and free legal defense for the protection and defense of the users of financial services were executed. Promoting transparency in the formal financial sector is essential, so publishing these results is part of the tools that can be provided to the authorities, institutions and users to continue helping the cause.
Despite increased efforts, the reality of Mexican financial inclusion has yet to reach wider segments of the population. The incidence of poverty¹¹ is still a reality in the country. This large segment of the population, according to CONEVAL in 2010, is represented by 46.2% of Mexicans (52 million). These people, lacking income and essential services, are even further away from accessing or acquiring a formal financial service that would allow them to make better use of the income they receive, however small it may be. New policies and alternatives to serve this excluded segment must be created: financial services represent an opportunity. Now that the national situation of each dimension has been outlined briefly, finding linear relationships between some of the variables provides a first view of the possible results. For this analysis, an indicator of each dimension was chosen: access points per 10,000 adults, deposit accounts per 10,000 adults, number of CONDUSEF's technical and legal advice actions and disputes per 10,000 adults, and the percentage of the population in poverty conditions (incidence of poverty). The results were obtained with SPSS Statistics 20.

Figure 2. Scatterplot Matrix: One Variable per Financial Inclusion Dimension.

10 Consejo Nacional de Población (2010), Índice de marginación por entidad federativa y municipio 2010, Mexico, p. 15.

11 CONEVAL states that the definition of poverty encompasses the living conditions of the population in three areas: economic welfare, social rights and the territorial context. Specifically, it refers to the population with an income below the welfare line, lack of rights related to health, social security, basic services, quality of living, education and food, as well as elements that transcend the individual and are associated with social cohesion.
In the scatterplot and correlation matrices, two important details are spotted: first, a strong linear correlation between adults with incomplete elementary education (NO PRIMARY) and the population in poverty conditions (POVERTY); second, the indicators of access points (ACCESS), CONDUSEF's technical and legal advice actions and disputes (CONDUSEF) and deposit accounts (DEPOSIT) per 10,000 adults are negatively correlated with those of poverty and incomplete elementary education. These results suggest that municipalities with greater access to financial services tend to have a smaller percentage of the population living in poverty, as well as better education. Further analysis will examine the relationships between all 20 indicators and construct a synthetic variable to describe the municipal situation across all dimensions of financial inclusion. Finally, it is appropriate to highlight a characteristic of the database used for this study: the set of observations requires no probabilistic assumptions. This is because, first, the data is not based on a random sample but represents the total population (all the municipalities in the country), and second, the variables used are not random variables but the real observed values, that is, the official data reported for each municipality. Even though these indicators may be subject to errors related to the quality of the data, they are the official measures reported by each institution. Therefore, the variability of the data is caused exclusively by the value of each observation.
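The original screening was done in SPSS; an equivalent computation can be sketched in Python on synthetic data whose generating relationships (assumed here purely for illustration) mimic the signs of the reported correlations:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 300
poverty = rng.uniform(0, 100, n)                     # % population in poverty (toy)
no_primary = 0.8 * poverty + rng.normal(0, 5, n)     # tied to poverty by assumption
access = np.clip(5.0 - 0.04 * poverty + rng.normal(0, 0.5, n), 0, None)  # per 10k adults

df = pd.DataFrame({"POVERTY": poverty, "NO_PRIMARY": no_primary, "ACCESS": access})
print(df.corr().round(2))   # POVERTY/NO_PRIMARY strongly positive, POVERTY/ACCESS negative
```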
Results and Analysis

7.1 Computing the Financial Inclusion Index
The set of units or individuals studied for the creation of this index is composed of the 2,456 municipalities in Mexico, {x_1, x_2, ..., x_2456}. Moreover, the set of variables x_1, x_2, ..., x_20 consists of the 20 indicators chosen to compute the financial inclusion index. From this point on, it is assumed that the variables are centered and reduced for each municipality (i.e., standardized). The solution to max Var(c) with constraint a^T a = 1 is Va = λa, where λ_i ≥ 0 and a_j ⊥ a_k (which, consequently, implies that c_j ⊥ c_k). To find the eigenvectors and eigenvalues of the correlation matrix¹² V = X^T D X, as well as the graphs of the principal component analysis, SPSS Statistics 20 was used. The resulting determinant of the correlation matrix has a value of 7.91 × 10⁻¹². A low determinant for this matrix implies that there is at least one very strong relationship between two variables. What is initially being explained in a space of twenty dimensions (because there are twenty indicators) actually lies in a smaller space, suggesting that the principal components will capture valuable summarized information.
In this analysis, the expression [Var(c_1) + ... + Var(c_k)]/p indicates the proportion of the inertia (variance) explained by the first k principal components. This measure is used to evaluate the quality of the graphs produced with the components. Even though a minimum acceptable value for this measure cannot be bluntly affirmed, when the explained proportion is satisfactory it is considered an adequate approach to the phenomenon.
The table below displays the total inertia (variance) explained by each of the principal components.In total, 20 components can be extracted.
The first two principal components accumulate 57.6% of the total variance, i.e., (λ_1/p + λ_2/p) × 100% = 57.6%. In the practice of multivariate statistics, this is considered an appropriate percentage. Further results confirm that the estimates provided contribute to a satisfactory interpretation of financial inclusion. For ease of graphical explanation in a two-dimensional space, and given the variance accumulation presented in the table (Figure 4), only the first two principal components will be used for the analysis. It is important to take into account that each additional component contributes less variance than the last and that the components are uncorrelated with each other.
Since c_k is an element of the variable space, it can be considered a synthetic variable. Accordingly, it is possible to obtain the correlation between the component c_k and any other element (indicator) of this space. The following table displays the correlation of each of the variables with principal components 1 and 2. These correlations allow us to interpret which of the indicators are better explained by each of the components. On the other hand, the second component is more correlated with the variables related to banks and Cooperative and Microfinance institutions, specifically those that explain the proportion of the products in each municipality.
Additionally, this component associates negative values to the variables that describe certain social and economic deficiencies of the population.
Values close to zero for the elements of the first and second components correspond to municipalities that can be considered average, that is, municipalities that do not have broad access to financial services, but also do not severely lack education or income.
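For standardized variables, the correlation between indicator x_j and component c_k has a closed form, Corr(x_j, c_k) = a_jk √λ_k, so the correlation table above can be reproduced directly from the eigen-decomposition. The function name below is hypothetical:

```python
import numpy as np

def component_correlations(eigvals, eigvecs):
    """Corr(x_j, c_k) for standardized variables: the loading a_jk * sqrt(lambda_k).

    Rows index variables, columns index principal components; these values
    are the coordinates used for the component plot described in the text.
    """
    return eigvecs * np.sqrt(np.clip(eigvals, 0.0, None))
```

Clipping guards against tiny negative eigenvalues produced by floating-point round-off when indicators are nearly collinear.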
With this data it is possible to construct a component plot, where the horizontal axis corresponds to c_1 and the vertical axis to c_2. In it, one can appreciate which variables are more related to each other, positively or negatively, and which are uncorrelated. Also, drawing lines from the origin of the plane to the coordinates (Corr(c_1, x_j), Corr(c_2, x_j)) for each of the variables makes it possible to determine the angle between one indicator and another, visually displaying how correlated they are.
Studying the component plot, it can be seen that the indicators of access and usage of financial services (variables BANK, ATM, POS, AGENTS, MICROFINANCE, CREDIT, DEPOSIT, CONDUSEF) form much smaller angles between them and are thus more strongly and positively correlated, because municipalities with a higher coverage of financial institutions are those with a greater number of products. Also, the incidence of poverty (POVERTY) and the low-education variables (NO EDUC, NO PRIMARY, ILLITERATE) form small angles with each other, which means these variables are also positively correlated: the angle between the lines drawn for them is close to zero, and since the cosine of an angle near 0 tends to 1, this indicates a strong positive correlation.13 Furthermore, variables such as the proportion of deposits and credits (BANK CREDIT, BANK DEPOSIT, MICRO CREDIT, MICRO DEPOSIT) for any of the financial institutions tend to be uncorrelated with the rest of the variables of access, usage, education and social development; angles close to 90 degrees imply cosine values close to 0 and, consequently, little or no correlation between the variables. This is easy to explain: the fact that financial services exist in a municipality generally does not imply that the Bank or the Cooperative and Microfinance institutions have the greatest share of that market. However, the plot does show that the variables for the proportion of deposit and credit products of Cooperative and Microfinance institutions (MICRO DEPOSIT, MICRO CREDIT) are strongly correlated with the municipalities in which the presence of such institutions (variable MICROFINANCE) is higher. The result is equivalent for the banking variables (BANK DEPOSIT and BANK CREDIT with variable BANK). These indicators do explain the reasons why there is a higher proportion of services provided by each of these institutions.
The same graph provides a fundamental result: the variables belonging to the dimensions of access and usage of financial services, greater average years of schooling (AV EDUC), the non-poor and non-vulnerable population (NON POOR) and a higher average income (INCOME) are strongly but inversely correlated with educational backwardness and poverty. This is because the lines of these variables form angles close to 180 degrees, implying that the cosine of the angle tends to −1, that is, a strong negative correlation. This result suggests that the costs and minimum requirements of financial products are an essential barrier for people with insufficient income, along with the lack of education, which is also a barrier to an adequate use of these services. Additionally, the impossibility of filling out the required documentation applies to illiterate adults.
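The geometric reading of the plot (small angle = positive correlation, 90 degrees = uncorrelated, 180 degrees = negative correlation) can be checked numerically. The inputs here are the (Corr(c_1, x_j), Corr(c_2, x_j)) pairs described above; the function name is hypothetical:

```python
import numpy as np

def arrow_cosine(u, v):
    """Cosine of the angle between two variables' arrows on the component plot.

    Near 1: strongly positively correlated; near 0 (angle ~ 90 degrees):
    roughly uncorrelated; near -1 (angle ~ 180 degrees): strongly negatively
    correlated, as described in the text.
    """
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```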
Thus, principal component c_1 will receive the name of financial inclusion index, while the second, c_2, will aid in the understanding of this phenomenon. The rest of the components are not included as part of this study because, on one hand, they naturally accumulate much less of the data's variance and, on the other, their interpretation does not add value to the analysis.14 The municipality with the highest financial inclusion index is the Cuauhtémoc borough in Mexico City,15 the capital's downtown. This municipality is served by a considerable number of financial institutions, given the presence of businesses and government institutions that use these services. This administrative unit's index value is followed by other boroughs of Mexico City and municipalities of Nuevo León such as Monterrey, San Pedro Garza García and San Nicolás de los Garza. Other municipalities with the highest values include Solidaridad in Quintana Roo, whose county seat is Playa del Carmen; Guadalajara in Jalisco; Los Cabos in Baja California Sur; and Tampico in Tamaulipas, among others. The highest values of the index are found in municipalities with higher development and tourism.
In contrast, the lowest value of the financial inclusion index belongs to the San Simón Zahuatlán municipality in Oaxaca. This is a rural municipality, with a small adult population and a high percentage of the population in a poverty situation. Other municipalities in the same state and in states like Chiapas, Guerrero and Veracruz accompany it among the lowest values. These four states are precisely the ones with the most pronounced social deficiencies (including insufficient education, quality of living, social security, health and basic services). They are also the states with the highest incidence of extreme poverty, with Chiapas having the highest percentage (32.8%).16 Once the index has been computed, it is desirable to find a way to classify the municipalities according to their summarized financial inclusion.
7.2 Classification of the Municipalities by their Degree of Financial Inclusion
With the aid of the financial inclusion index c_1 and the auxiliary component c_2, the next objective was to classify the municipalities by a degree of financial inclusion that creates groups whose members are similar to each other and have characteristics that distinguish them from other groups. Thus, three financial inclusion degrees (or levels) were determined using hierarchical cluster analysis: high, medium and low.
Using Ward's method as the criterion for creating clusters and squared Euclidean distance as the measure, the values of components 1 and 2 are used as the inputs over which similarities and dissimilarities are evaluated. Once this algorithm was applied, it was possible to plot the municipalities according to the value of their financial inclusion index and auxiliary component. Using the first component as the horizontal axis and the second as the vertical axis, the plot distinguishes each of the groups resulting from the hierarchical cluster analysis.
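A minimal pure-NumPy sketch of the agglomerative step, applied to points built as np.column_stack([c1, c2]): at each step the pair of clusters whose merge least increases the total within-cluster sum of squares (Ward's criterion) is joined. In practice one would use a library routine such as scipy.cluster.hierarchy.linkage(method="ward"); the function name and data are hypothetical.

```python
import numpy as np

def ward_clusters(points, n_groups=3):
    """Naive agglomerative clustering with Ward's criterion.

    Repeatedly merges the pair of clusters whose merge least increases the
    total within-cluster sum of squared Euclidean distances, until only
    `n_groups` clusters remain. Returns one integer label per point.
    """
    pts = np.asarray(points, dtype=float)
    clusters = [[i] for i in range(len(pts))]
    while len(clusters) > n_groups:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                ca, cb = pts[clusters[a]], pts[clusters[b]]
                na, nb = len(ca), len(cb)
                # Ward cost increase: (na*nb/(na+nb)) * ||mean_a - mean_b||^2
                d = na * nb / (na + nb) * np.sum((ca.mean(axis=0) - cb.mean(axis=0)) ** 2)
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    labels = np.empty(len(pts), dtype=int)
    for k, idxs in enumerate(clusters):
        labels[idxs] = k
    return labels
```

The O(n^3) pairwise search is fine for illustration but far too slow for all 2,456 municipalities; a Lance-Williams update (as in SciPy) is the practical choice at that scale.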
14 The next (third) principal component assigns positive values to variables of both financial services and lag, but with much lower correlations. This component explains about 8% of the variance of the data, so it is not taken into account in the analysis.
15 Mexico City is divided into sixteen boroughs for administrative purposes. Even though these boroughs do not have the regulatory powers of municipalities and are not fully autonomous, each is considered a municipality when reporting data at this level. 16 Consejo Nacional de Evaluación de la Política de Desarrollo Social (2011), Informe de Evaluación de la Política de Desarrollo Social 2011, Mexico, p. 23.
Figure 7. Municipalities by their Degree of Financial Inclusion with Coordinates (c_1i, c_2i). Source: own elaboration.
The groups are now clearly distinguishable. The first group (marked with circles) corresponds to the municipalities with a positive and higher value of the financial inclusion index in comparison with groups 2 (marked with triangles) and 3 (marked with rectangles). For the latter two, the financial inclusion index does not necessarily separate one from the other, but the auxiliary component c_2 explains their differences: the graph suggests that municipalities in group 2 are served more by Cooperative and Microfinance institutions and have fewer education and economic deficiencies than those in group 3. Given this analysis, three degrees of financial inclusion are determined as follows: group 1 corresponds to municipalities with a high level of financial inclusion, group 2 to municipalities with a medium level and group 3 to those with a low level.
7.3 Municipal Analysis
Once the degree of financial inclusion has been assigned to each municipality, an analysis of each of these groups allows comparisons between them according to their financial and social characteristics.
The geographic and demographic distribution in the country is complex (Figure 8).
Although each group has a percentage of municipalities between 29% and 36%, most of the adult population (79.1%) lives in municipalities with a high level of financial inclusion. However, this does not mean that all these adults have access to the financial services they require; it only indicates that they have the possibility of doing so. The rest of the country's municipalities have a medium or low level of financial inclusion, even though only 21% of the adult population inhabits them. These results suggest that there is infrastructure (branches or banking agents) in the municipalities inhabited by the vast majority of the adult population, but that the products are not entirely suitable for their inclusion in the financial system.
Municipalities with a high level of financial inclusion are characterized by having branches, banking agents, ATMs and POS per 10,000 adults in higher proportions than the reported national level,17 a higher presence of banking institutions and a more considerable number of deposit and credit products per 10,000 adults.18 In contrast, municipalities with a low level of financial inclusion have barely one access point,19 and despite having a channel through which they can access the financial system, they can often only perform basic operations such as paying for basic services (electricity, water, telephone), placing deposits or cashing checks, activities which do not make them members of the financial system. A first step for these individuals is to hold an account and maintain it over time.

17 Municipalities with a high level of financial inclusion have 3.81 access points, 5.42 ATMs and 76.7 POS per 10,000 adults; these indicators are higher than the national values (3.29 access points, 4.52 ATMs and 62.7 POS per 10,000 adults).
18 When comparing indicators by level of financial inclusion with the national values, it is observed that municipalities with a high level of financial inclusion have more deposit products (13,166 per 10,000 adults) and credit products (4,755 per 10,000 adults) than the national level (11,742 and 4,125 per 10,000 adults, respectively).

19 Access indicators for the municipalities with a low level of financial inclusion are: 1.01 access points, 0.49 ATMs and 2.44 POS per 10,000 adults.

Cooperative and Microfinance institutions mainly serve municipalities with a medium level of financial inclusion.20 Even though this suggests that their objectives are well on track, these institutions, along with banks, must make reaching wider segments of the population with new and innovative financial services part of their strategy.

Figure 10. Education and Poverty by Degree of Financial Inclusion. Source: own elaboration.

20 In fact, Cooperative and Microfinance institutions have higher values of deposit and credit products per 10,000 adults in municipalities with a medium level of financial inclusion than in those with a high or low level, where banks dominate the market.
Municipalities with a high level of financial inclusion also have a higher level of education (average adult education in years) and less adult illiteracy and incomplete elementary schooling. These are also the only municipalities that have a CONDUSEF attention module that people can visit,21 and, of course, they have a higher average income22 than the rest of the municipalities in the country. In contrast, municipalities with a lower degree of financial inclusion have the largest share of population living in poverty (82.12%, compared to the national level of 46.2%) and a higher number of social deficiencies. The population that lacks access to basic living services and an adequate income uses what income it has to meet these needs, and therefore has neither the money nor the interest to apply for a financial product. Lower incomes are a barrier to some financial services such as credit. Also, some deposit and savings accounts charge high commissions if holders do not keep a minimum balance in their account.
To complement this analysis, the following map of Mexico displays each of the municipalities according to their degree of financial inclusion. The map was produced with INEGI's IRIS 4.2 software.
Presence of Institutions by Degree of Financial Inclusion
Source: own elaboration.
21 Municipalities with a high level of financial inclusion form the only group that shows positive values for the indicator of CONDUSEF's technical and legal advices and disputes (169.38 per 10,000 adults).
22 Municipalities with a low level of financial inclusion have an average income of $870 Mexican pesos, while this indicator for municipalities with a medium level is $1,348.On the other hand, the average income for municipalities with a high level of financial inclusion is $2,289.
7.4 State Analysis
Studying the percentage of municipalities with a high, medium or low level of financial inclusion in each of Mexico's 32 states can suggest which of them appear more excluded from the financial system (Figure 12). When principal component analysis is performed on the observations of the 32 states, Mexico City, Baja California and Baja California Sur turn out to be the states with the highest value of the first principal component c_1,S_i (state level). Oaxaca, followed by Chiapas, Guerrero, Michoacán, Puebla and Guanajuato, are the states with the lowest values. These results are quite consistent with those obtained by merely analyzing the type of municipalities that predominate in each state. Financial exclusion and a lower value of the first principal component are closely related to the states whose social and educational backwardness is much more marked, while those with more access to financial services show smaller values for these indicators. In fact, according to CONAPO, the states of Guerrero, Chiapas and Oaxaca have the highest level of poverty due to the shortages and hardships that most of their population suffers.23

7.5 Regional Analysis

Based on the Mexican government's National Development Plan,24 the country can be divided into five major geographical regions: the Central region, with Mexico City, Hidalgo, the State of Mexico, Morelos, Tlaxcala and Querétaro; the Central-west region, which includes the states of Aguascalientes, Colima, Guanajuato, Jalisco, Michoacán, Nayarit, San Luis Potosí and Zacatecas; the Northeast region, with the states of Coahuila, Chihuahua, Durango, Nuevo León and Tamaulipas; the Northwest region, with the states of Baja California, Baja California Sur, Sinaloa and Sonora; and finally, the Southeast region, with Campeche, Chiapas, Guerrero, Oaxaca, Puebla, Quintana Roo, Tabasco, Veracruz and Yucatán.
Performing a similar analysis for the regions of the country, it is found that municipalities with a high level of financial inclusion predominate in the Northwest, Northeast and Central regions, while municipalities with a medium level predominate in the Central-west part of the country.25 Needless to say, the Southeast region is mostly dominated by a low level of financial inclusion (42% of its municipalities have a low level). This is confirmed by the fact that the Northeast region has the highest value of the access-points indicator (4.70 per 10,000 adults), while the Central region has the highest indicators for financial products (15,290 deposit products and 5,990 credit products per 10,000 adults) and CONDUSEF's technical and legal advices and disputes (306.26 per 10,000 adults). In contrast, the Southeast region has the lowest value for all indicators,26 confirming that this is the geographical region with the most pronounced financial exclusion. Also, Cooperative and Microfinance institutions serve this region and the Central-west in a higher proportion (10% of the access points of these regions belong to these institutions) than the rest (no more than 6% of the access points of the other regions belong to Cooperative or Microfinance institutions).
Financial exclusion in the Southeast region is also evidenced by the fact that only roughly 41% of its municipalities have an access point to financial services, while the rest of the country's regions have an infrastructural coverage of at least 73%. Finally, this region has the most pronounced social and economic deficiencies in the country: the fewest average years of schooling (5.4), the highest percentage of the population living in poverty (76%), the highest illiteracy (19%), severe lack of education (35%) and the most adults without complete elementary school (40%).
Concluding Remarks
Financial inclusion has been identified as a measurable multidimensional phenomenon. In practice, its measurement should not be limited to studying the number of financial institutions and products, but should also include features of social, political and economic scope. This study shows evidence of existing relationships between education, income, poverty and health and financial-inclusion-related data.
The financial inclusion index presented has several advantages over other indices that have been proposed before: first, it is constructed with principal component analysis and does not subjectively establish the ideal or maximum value the variables should have for each of the dimensions; second, it incorporates information from financial institutions (Cooperative and Microfinance) that were not previously considered; third, it covers five major dimensions of the phenomenon, not only infrastructure- and product-related indicators; finally, the index makes it possible to classify the municipalities into groups according to their degree of financial inclusion, revealing which characteristics they share.
According to the results, the adult population is heavily concentrated in municipalities with a high level of financial inclusion. This does not imply that most of these adults are candidates for a financial product of their choice; rather, it is evidence that broadening the financial infrastructure to the rest of the municipalities is not enough to serve the excluded population.
Efforts should be directed to designing new and innovative financial products that meet the needs of these segments of the population and that can directly and positively impact their quality of life. Encouraging the habits of saving and investing allows individuals to prepare better for a more stable future. Additionally, more complex schemes can help address future expenses and needs, such as education or health-related incidentals, while avoiding the loss of all or part of their income.

26 The Southeast region has 2.26 access points, 8,129 deposit products, 2,746 credit products and 50.32 CONDUSEF's technical and legal advices and disputes per 10,000 adults.
It is also important to note that the lack of education is a fundamental barrier to accessing financial services. Accompanied by illiteracy and the other social needs that distinguish the population in conditions of poverty, it marginalizes individuals' opportunities far beyond financial inclusion: these deficiencies also impact their daily development in today's society.
The Southeast region suffers the greatest economic and social deprivation, suggesting that conditions of poverty and insufficient income exclude individuals from access to financial services over and above the absence of financial institutions in a considerable portion of its territory. This indicates that even if the financial system's infrastructure were expanded to this region, most of its population would probably be unable to obtain a financial product that could help improve their quality of life. It is therefore necessary to develop products fitted to their needs which can provide real benefits. Hand in hand with this, education in the country must be improved.
However, the task is not limited to promoting new financial products. A significant fraction of society is not interested in being part of the financial system due to mistrust of the institutions. Thus, the creation of financial education strategies that encourage people to join the system and instruct them in the basic principles of using these mechanisms is the first step toward reducing the self-exclusion of these individuals. Still, informing the voluntarily excluded population must not be the only objective; financial education programs must also teach the appropriate use of the services, especially at the basic levels.
The population with higher income and education is the one that primarily benefits from the services that financial institutions offer, yet millions of people are excluded from the Mexican financial system every day. Proposing indicators and measures to study, and subsequently develop policies to benefit, these segments of the population is a bet on new opportunities: it allows both families and the country as a whole to prosper, toward more equitable growth. Efficient identification of ongoing financial inclusion in the country opens the door not only to improving each individual's quality of life, but also to positioning Mexico as a role model: a country of opportunities, in which great economic disparities are fought through numerous efforts.
Creation of the index is achieved with information from the following sources: the 2011 regulatory reports and databases of the National Banking and Securities Commission6 (CNBV); data from the 2010 Population Census and 2009 municipal statistics reported by the National Institute of Statistics and Geography (INEGI); indicators from the 2010 Statistical Yearbook of the National Commission for the Protection and Defense of Users of Financial Services7 (CONDUSEF); dimensions pertaining to the 2010 poverty measurement of the National Council for Evaluation of the Policy of Social Development (CONEVAL); and finally, education indicators measured by the National Population Council (CONAPO) as part of its 2010 marginalization index. Starting from these sources, 20 indicators were built for each of the 2,456 municipalities of Mexico, its 32 states, and five geographical regions. These twenty indicators cover the five proposed dimensions (access, usage, financial education, consumer protection and social development).
Figure 3. Bivariate Correlation Matrix: One Variable per Financial Inclusion Dimension
Figure 5. Correlations between the Variables and Principal Components c 1 and c 2
Figure 9. Presence of Institutions by Degree of Financial Inclusion. Source: own elaboration.

Total Variance Explained by each Principal Component. Source: own elaboration.
On one hand, the table displays that the first principal component has a correlation higher than 0.4 (in absolute value) with most of the indicators. This component associates positive values with the indicators related to financial access, such as branches, presence of institutions and financial products, while it associates negative values with indicators such as the population lacking education, illiterate, with incomplete elementary education or in a poverty situation.
Municipalities and Adult Population by Degree of Financial Inclusion. Source: own elaboration.
Proportion of Municipalities by Degree of Financial Inclusion and State. Most municipalities in 27 states have a high or medium level of financial inclusion. While 17 states are dominated by municipalities with a high level of financial inclusion, 9 are dominated by municipalities with a medium one. Colima is the only state with half of its municipalities at a high level of financial inclusion and the other half at a medium level. However, Chiapas, Guerrero, Puebla, Veracruz and Yucatán are the five states where most of the municipalities have a low level of financial inclusion, with Chiapas as the state with the highest percentage (80%).
"year": 2013,
"sha1": "faf530816a016c3163c06ddb44e567d44bf96208",
"oa_license": "CCBYNC",
"oa_url": "https://www.remef.org.mx/index.php/remef/article/download/46/77",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "faf530816a016c3163c06ddb44e567d44bf96208",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Geography"
]
} |
A novel statistical fusion rule for image fusion and its comparison in non subsampled contourlet transform domain and wavelet domain
Image fusion produces a single fused image from a set of input images. A new method for image fusion is proposed based on the Weighted Average Merging Method (WAMM) in the Non-Subsampled Contourlet Transform (NSCT) domain. A performance analysis of various statistical fusion rules is also presented, in both the NSCT and wavelet domains. The analysis covers medical images, remote sensing images and multi-focus images. Experimental results show that the proposed WAMM method obtains better results in the NSCT domain than in the wavelet domain, as it preserves more edges and keeps the visual quality intact in the fused image.
INTRODUCTION
Image fusion provides an efficient way to merge the visual information from different images. The fused image contains more complete information for human or machine perception and for computer-processing tasks such as segmentation, feature extraction, and object recognition. Image fusion can be performed at the pixel, signal, or feature level. Traditional image fusion schemes performed the fusion directly on the source images, which often has serious side effects such as reduced contrast. Later, researchers realized the need to perform the fusion in a transform domain, as mathematical transformations expose information in the signal that is not readily available in its raw form.
With the advent of wavelet theory, the concept of wavelet multi-scale decomposition came into use in image fusion [9]. The wavelet transform has been used in many image processing applications such as restoration, noise removal, image edge enhancement and feature extraction; however, wavelets are not very efficient at capturing the two-dimensional geometry found in images [5] and hence cannot efficiently capture edges in natural images. Several transforms incorporating directionality and multiresolution have therefore been proposed for image signals. Do and Vetterli proposed the contourlet transform [8], an efficient directional multiresolution image representation. The contourlet transform achieves better results than the discrete wavelet transform for geometric content in image processing. However, the contourlet transform is shift-variant because of its sampling, while shift invariance is a necessary condition in image processing applications.
The NSCT is a fully shift-invariant, multiscale and multidirection expansion that has a fast implementation [1]. It achieves a similar sub band decomposition as that of contourlets, but without downsamplers and upsamplers in it, thus overcoming the problem of shift variance [2].
NON SUBSAMPLED CONTOURLET TRANSFORM
The Non-Subsampled Contourlet Transform (NSCT) is constructed by combining Non-subsampled Pyramids (NSP) and Non-subsampled Directional Filter Banks (NSDFB) [1]. The former provides multiscale decomposition and the latter provides directional decomposition [3]. A non-subsampled pyramid splits the input into a low-pass subband and a high-pass subband. A non-subsampled directional filter bank then decomposes the high-pass subband into several directional subbands. The scheme is iterated repeatedly on the low-pass subband [11].
IMAGE FUSION SCHEME AND STATISTICAL FUSION RULES
Image fusion of two source images can be considered a step-by-step process. First, the source images are decomposed into coarse scales and fine scales. Coarse scales represent the low-frequency components and fine scales the high-frequency components of the source images. The low-frequency components contain the overall content of the image, while the high-frequency components contain details about edges and textures. Second, the coarse scales and fine scales of the source images are fused separately using statistical fusion rules in the NSCT domain [4][5][6][7]; separate fusion rules applied to these scales yield the fusion coefficients. The fused image is then obtained by the inverse NSCT of these fusion coefficients.
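The decompose, fuse-per-band, and reconstruct steps just described can be sketched with a toy two-band split. A box blur stands in for the NSCT low-pass stage (the NSCT itself has no standard one-line implementation), and the per-band fusion rules are passed in as callables; all names are illustrative.

```python
import numpy as np

def box_blur(img, size=3):
    """Simple mean filter with edge padding: a stand-in low-pass (coarse) stage."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def fuse_two_band(img_a, img_b, fuse_low, fuse_high, size=3):
    """Decompose each image into a coarse (low-pass) and fine (residual) band,
    fuse each band with its own rule, and recombine by summation (the
    'inverse transform' of this toy split)."""
    low_a, low_b = box_blur(img_a, size), box_blur(img_b, size)
    high_a, high_b = img_a - low_a, img_b - low_b
    return fuse_low(low_a, low_b) + fuse_high(high_a, high_b)
```

Because the residual band is defined as image minus blur, summing the two bands reconstructs the input exactly, mirroring the perfect-reconstruction property expected of the real transform.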
In this section, three different statistical fusion rules are discussed. These rules are analyzed experimentally, thereby examining the performance of image fusion in both the wavelet and NSCT domains.
Method 1-Fusion based on Entropy
Entropy is the measure of information content in the image. A high value of entropy denotes more information content and vice versa. So this statistical measure could be used in making a decision to select the fusion coefficient.
Entropy is calculated on the low-frequency components of the input images within a 3-by-3 window, and the coefficients with the higher entropy values are selected as the fusion coefficients among the low-frequency components. For the high-frequency components, regional energy is calculated over a 5-by-5 window, and the coefficients with the higher regional energy are selected. Finally, the fused image is reconstructed from the fused coefficients.
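A hedged sketch of this entropy rule for the low-frequency band: the window size follows the text, but the histogram bin count and tie-breaking are illustrative choices not taken from the paper.

```python
import numpy as np

def local_entropy(band, win=3, bins=8):
    """Shannon entropy (bits) of the intensity histogram in each win x win window."""
    pad = win // 2
    padded = np.pad(band, pad, mode="edge")
    H = np.zeros(band.shape, dtype=float)
    lo, hi = float(band.min()), float(band.max()) + 1e-12
    for i in range(band.shape[0]):
        for j in range(band.shape[1]):
            counts, _ = np.histogram(padded[i:i + win, j:j + win],
                                     bins=bins, range=(lo, hi))
            p = counts[counts > 0] / counts.sum()
            H[i, j] = -np.sum(p * np.log2(p))
    return H

def fuse_low_by_entropy(low_a, low_b, win=3):
    """Method 1 low-band rule: keep the coefficient whose neighbourhood has the
    higher local entropy (more information content); ties keep image A."""
    return np.where(local_entropy(low_a, win) >= local_entropy(low_b, win),
                    low_a, low_b)
```

A flat region has zero local entropy, so a textured region in the other image always wins the selection there, which matches the intuition that higher entropy denotes more information content.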
Method 2-Fusion based on Mean
Mean is the representative value of a dataset that describes its center or middle value: the amount each of n contributors would contribute if all contributed equal amounts. Mean is calculated on the low-frequency components of the input images within a 3-by-3 window, and the coefficients with the higher mean values are selected as the fusion coefficients among the low-frequency components.
For the high frequency components, regional energy is calculated over a 5-by-5 window using a weighting mask W that gives more weight to the central coefficient, and the coefficients with the higher regional energy are selected. Finally, the fused image is reconstructed from the fused coefficients.
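The regional-energy rule for the high frequency components can be sketched as below. The exact weighting mask W was lost in extraction, so a symmetric, center-weighted 5-by-5 mask is assumed here purely for illustration.

```python
import numpy as np

# A center-weighted 5x5 mask; the paper's exact W did not survive
# extraction, so this symmetric choice is only illustrative.
_w1d = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
W = np.outer(_w1d, _w1d)
W /= W.sum()

def regional_energy(coeffs, weights=W):
    """Weighted sum of squared coefficients over each 5x5 region."""
    k = weights.shape[0]
    pad = k // 2
    padded = np.pad(coeffs.astype(float), pad)
    h, w = coeffs.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            block = padded[i:i + k, j:j + k]
            out[i, j] = (weights * block ** 2).sum()
    return out

def fuse_high_by_energy(a_high, b_high):
    """Per position, keep the high-frequency coefficient from the
    source with the larger regional energy."""
    ea, eb = regional_energy(a_high), regional_energy(b_high)
    return np.where(ea >= eb, a_high, b_high)
```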
Method 3 - Fusion based on Standard Deviation
Standard Deviation provides a way to distinguish regions that are clear from those that are vague. Standard Deviation is calculated on the low frequency components of the input images within a 3-by-3 window, and the coefficients with the higher standard deviation values are selected as the fusion coefficients among the low frequency components.
For the high frequency components, regional energy is calculated over a 5-by-5 window using a weighting mask W that gives more weight to the central coefficient, and the coefficients with the higher regional energy are selected. Finally, the fused image is reconstructed from the fused coefficients.
4. IMAGE FUSION BASED ON WEIGHTED AVERAGE MERGING METHOD (WAMM) - PROPOSED APPROACH
In this section, we discuss fusion based on WAMM in the NSCT domain. WAMM is used on the high frequency components to obtain the fused coefficients, whereas Standard Deviation is calculated on the low frequency components of the input images within a 3-by-3 window. An average of the low frequency components is also calculated, and the coefficients with the higher average values are selected as the fusion coefficients among the low frequency components.
The main features of this new method are that it preserves the image quality and the edge details of the fused image. The visual quality of the fused image is better in NSCT domain.
The Weighted Average Merging Method (WAMM) forms each fused coefficient as a weighted average of the corresponding source coefficients.
The weights are estimated from a threshold T, where T ∈ (0, 0.5).
When one of the weights is zero, the coefficient of one image is simply substituted by that of the other.
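The WAMM weight-estimation formula did not survive extraction. The sketch below therefore follows the classic Burt-Kolczynski weighted-average rule, which matches the behavior described here: a threshold T in (0, 0.5), and a zero weight meaning pure substitution of one image's coefficient by the other's. Function names and the normalized match measure are assumptions.

```python
import numpy as np

def wamm_weights(match, T=0.25):
    """Weighted-average merging weights for a normalized match
    measure `match` in [0, 1] between the two source regions.
    Follows the Burt-Kolczynski-style rule, with T in (0, 0.5)
    acting as the selection/averaging threshold (illustrative)."""
    if match <= T:
        # Regions disagree: pure selection; one weight drops to 0,
        # i.e. one image's coefficient substitutes the other's.
        return 0.0, 1.0
    w_min = 0.5 - 0.5 * (1.0 - match) / (1.0 - T)
    return w_min, 1.0 - w_min

def wamm_merge(ca, cb, match, T=0.25):
    """Merge two coefficients: the one with larger magnitude
    receives the larger weight."""
    w_min, w_max = wamm_weights(match, T)
    if abs(ca) >= abs(cb):
        return w_max * ca + w_min * cb
    return w_min * ca + w_max * cb
```

With a perfect match (match = 1) the weights become 0.5 each, a plain average; with a poor match the rule degenerates to selecting the stronger coefficient.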
Fusion Based on Proposed Method in NSCT Domain
Standard Deviation is calculated on the low frequency components of the input images within a 3-by-3 window, and the coefficients with the higher values are selected as the fusion coefficients among the low frequency components.
WAMM is used on the high frequency components to obtain the fused coefficients. The high frequency components carry the crucial information within the images, such as texture, brightness and contrast. WAMM preserves these details much better than the fusion schemes discussed before. The use of standard deviation enhances the fusion scheme by preserving the edges in the images, while WAMM helps to preserve the texture and other detailed information by combining the values without much bias, i.e., a low value is compensated by an appropriate weight so that it still makes a considerable contribution relative to a high value.
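The overall proposed scheme (Standard Deviation on the coarse scales, WAMM on the fine scales) can be illustrated end to end. Since NSCT has no standard library implementation, the toy two-band decomposition below (box blur plus residual) merely stands in for the transform, and a max-magnitude rule stands in for the full WAMM merge; all names are illustrative.

```python
import numpy as np

def split_bands(img, k=3):
    """Toy stand-in for a multiscale transform: a k x k box blur
    gives the coarse (low frequency) band, the residual the fine
    band. This only illustrates the decompose -> fuse -> reconstruct
    flow, not NSCT itself."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    low = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            low[i, j] = p[i:i + k, j:j + k].mean()
    return low, img - low

def local_std(band, k=3):
    """Standard deviation in a k x k neighborhood of each position."""
    pad = k // 2
    p = np.pad(band, pad, mode="edge")
    h, w = band.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].std()
    return out

def fuse(img_a, img_b):
    la, ha = split_bands(img_a)
    lb, hb = split_bands(img_b)
    # Coarse band: pick the coefficient whose 3x3 neighborhood has
    # the larger standard deviation (the clearer region).
    low = np.where(local_std(la) >= local_std(lb), la, lb)
    # Fine band: max-magnitude stand-in for the full WAMM merge.
    high = np.where(np.abs(ha) >= np.abs(hb), ha, hb)
    return low + high   # inverse of the toy transform
```

Fusing an image with itself returns the image, which checks that the toy decompose/reconstruct pair is consistent.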
PERFORMANCE MEASURES
An issue in the performance evaluation of an image fusion algorithm is the unavailability of reference images. In addition, relevant research shows that a single measurement cannot effectively evaluate the performance of different fusion algorithms and is not always consistent with human visual perception.
(1) Qualitative approaches: involves visual comparison of the input images and the output image.
(2) Quantitative approaches: involves a set of pre-defined quality indicators for measuring the spectral and spatial similarities between the fused image and the original images.
Because qualitative approaches and visual evaluations may contain subjective factors and may be influenced by personal preference, quantitative approaches are often required to confirm the correctness of the visual evaluation. In our work, objective analysis of the proposed method is done using the performance metrics. Even though these metrics do not provide a foolproof estimate of the performance of the method, they can be used in comparative analysis as a performance indicator. The following are the metrics used for performance evaluation in this research work.
Entropy
Entropy is a measure of the information content of an image. It helps to quantify the information content of the source images and of the output fused image. The entropy is computed as S = -Σ_i p_i log2 p_i, where p_i is the probability of gray level i in the image. An increased value of entropy of the fused image implies a better fusion scheme.
A higher value of S would indicate a better fusion scheme.
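A minimal implementation of this metric, assuming the standard Shannon entropy over the gray-level histogram of an 8-bit image:

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy S = -sum(p_i log2 p_i) over the gray-level
    histogram of an image quantized to `levels` gray levels."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                     # 0 log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())
```

A constant image has zero entropy, and an image using all 256 gray levels equally often reaches the maximum of 8 bits.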
Piella Metric
Let x = {x_i | i = 1, 2, …, N} and y = {y_i | i = 1, 2, …, N} be the original and the test image signals, respectively, and let Q0(x, y) denote the Universal Image Quality index proposed by Zhou Wang [13]. The Piella Metric [13] is a quality measure derived from this index that puts much more focus on the locality of reference of the images: it uses region-based measurements to estimate how well the important information in the source images is represented by the fused image. The Piella Metric of the fused image f of the input images a and b is evaluated on a, b, and f as well as on their edge images. The Canny operator is selected to detect the edge information; it detects edges by searching for local maxima of the image gradient, and it finds strong and weak edges using two automatically selected thresholds. The Canny operator is not sensitive to noise and can detect true weak edges.
In order to compute the metric, a sliding window is employed: starting from the top-left corner of the two images, a window of fixed size traverses the entire image until the bottom-right corner is reached. For each window the local quality index is computed; the weighted fusion quality index combines the local indices using λ(w) = s(a|w) / (s(a|w) + s(b|w)), where s(a|w) is some saliency of image a in window w. The energy is selected as the salient feature, and the window size is 3 by 3. Finally, the overall image quality index is computed by averaging all local quality indices.
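A simplified sketch of the sliding-window evaluation follows: the Wang-Bovik index is computed per window and combined with saliency weights. Local variance stands in for the saliency, the window steps in non-overlapping blocks, and the edge-image terms of the full Piella metric are omitted, so this is only an approximation of the metric used in the paper.

```python
import numpy as np

def uiqi(x, y, eps=1e-12):
    """Wang-Bovik Universal Image Quality Index for two patches."""
    x, y = x.ravel().astype(float), y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4 * cxy * mx * my / ((vx + vy) * (mx**2 + my**2) + eps)

def piella_q(a, b, f, win=8):
    """Sliding-window fusion quality in the spirit of Piella's
    metric: per window, the local UIQI of each source against the
    fused image, weighted by the source's saliency (here, its
    local variance)."""
    h, w = a.shape
    vals = []
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            wa, wb, wf = (m[i:i+win, j:j+win] for m in (a, b, f))
            sa, sb = wa.var(), wb.var()
            lam = sa / (sa + sb) if sa + sb > 0 else 0.5
            vals.append(lam * uiqi(wa, wf) + (1 - lam) * uiqi(wb, wf))
    return float(np.mean(vals))
```

The index of an image against itself is 1, so a fused image identical to both sources scores 1 under this sketch.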
RESULT ANALYSIS
Image fusion techniques require registered images for testing. Image registration [4] is the determination of a geometrical transformation that aligns points in one view of an object with the corresponding points in another view of that object or of another object. The experiments are carried out with registered images.
The various statistical rules have been analyzed and the proposed statistical fusion rule (WAMM) is tested in both wavelet domain and NSCT domain on Medical images, Remote Sensing images and Multi Focus images.
Experiment 1 on Medical Images
Different medical imaging techniques may provide scans with complementary and occasionally conflicting information. The combination of images can often lead to additional clinical information not apparent in the separate images. The goal of image fusion is to impose a structural anatomical framework in functional images. Often a single functional image may not contain enough anatomical details to determine the position of a tumour or other lesion.
Aim:
To fuse two greyscale medical images, of which one is a CT image and the other is an MR image.
Experimental Setup:
Input images: 256 x 256 greyscale CT and MR images of the brain (Figure 5.1 (a-b)). Here EN1 and EN2 represent the entropy of the original images to be fused in the wavelet and NSCT domain, respectively. In the fused image, WAMM performs better in the NSCT domain than in the wavelet domain, and it preserves more details. The artifacts and inconsistencies present in the wavelet domain are removed in the NSCT domain using the WAMM method. In the above table, it is seen that the proposed fusion gives better results in terms of SD, Similarity and PM.
Experiment 2 on Multifocus images
Due to the limited depth-of-focus of optical lenses (especially those with long focal lengths) it is often not possible to get an image that contains all relevant objects "in focus". One possibility to overcome this problem is to take several pictures with different focus points and combine them together into a single frame that finally contains the focused regions of all input images.
Aim:
To fuse two greyscale multifocus images using the existing methods and the proposed method.
Experimental Setup:
Input images: 256 x 256 greyscale clock images (Figure 5). Here EN1 and EN2 represent the entropy of the original images to be fused in the wavelet and NSCT domain, respectively. In the fused image, WAMM performs better in the NSCT domain than in the wavelet domain, and it preserves more details. The artifacts and inconsistencies present in the wavelet domain are removed in the NSCT domain using the WAMM method. In the above table, it is seen that the proposed fusion gives better results in terms of SD, Similarity and PM.
CONCLUSIONS
In this work, a new statistical fusion rule, the Weighted Average Merging Method (WAMM), is proposed in the NSCT domain. A review of the different statistical fusion rules, based on Entropy, Mean and Standard Deviation, is given. In the fusion rule using Standard Deviation, the edge information is preserved successfully within the image, but the fused image lacked the visual quality we expected. So, in order to overcome the limitations of the existing statistical fusion rules, the combination of Standard Deviation in the coarse scales and the Weighted Average Merging Method (WAMM) in the fine scales is proposed.
A new statistical fusion rule, WAMM, is proposed in the NSCT domain. Experimental results show that the WAMM method for image fusion obtained better results in the NSCT domain when tested with the performance measures SD, Similarity and the Piella Metric. It preserves the edge details and the visual quality of the fused image. The proposed scheme was tested in both the NSCT and the wavelet domain, and the comparison shows that WAMM yields better results in the NSCT domain. As future work, an Adaptive Weighted Average Merging Method can be suggested.
"year": 2012,
"sha1": "82eb2f1c0ffc616bc22d97fe2a68d9c49f181e36",
"oa_license": null,
"oa_url": "https://doi.org/10.5121/ijma.2012.4206",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "82eb2f1c0ffc616bc22d97fe2a68d9c49f181e36",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
Near-ultraviolet to visible spectroscopy of the Themis and Polana-Eulalia complex families
Context. Spectrophotometric data of asteroids obtained in the 1980s showed that there are large variations in their near-ultraviolet (NUV) reflectance spectra. Reflectance spectra at NUV wavelengths are important because they help detect the presence of hydrated minerals and organics on the asteroid surfaces. However, the NUV wavelength region has not been fully investigated yet using spectroscopic data. Aims. The aim of our study is to obtain the near-ultraviolet to visible (NUV-VIS, 0.35 - 0.95 µm) reflectance spectra of primitive asteroids with a focus on members of the Themis and Polana-Eulalia complex families. This characterization allows us to discuss the origin of two recent sample return mission target asteroids, (162173) Ryugu and (101955) Bennu. Methods. We obtain low-resolution visible spectra of target asteroids down to 0.35 µm using the telescopes located at the Roque de los Muchachos Observatory (La Palma, Spain) and revisit spectroscopic data that have already been published. Using new spectroscopic and already published spectrophotometric and spectroscopic data, we study the characteristics of the NUV-VIS reflectance spectra of primitive asteroids, focusing on data of the Themis family and the Polana-Eulalia family complex. Finally, we compare the NUV characteristics of these families with (162173) Ryugu and (101955) Bennu. In this work, we also study systematic effects due to the use of the five commonly used stars in Landolt's catalog as solar analogs to obtain the asteroid reflectance in the NUV wavelength range. We compare the spectra of five G-stars in Landolt's catalog with the spectrum of the well-studied solar analog Hyades 64, also observed on the same nights. Results. We find that many widely used Landolt's G-type stars are not solar analogs in the NUV wavelength spectral region and thus are not suitable for obtaining the reflectance spectra of asteroids.
We also find that, even though the Themis family and the Polana-Eulalia family complex show a similar blueness at visible wavelengths, the NUV absorption of the Themis family is much deeper than that of the Polana-Eulalia family complex. We did not find significant differences between the New Polana and Eulalia families in terms of the NUV-VIS slope. (162173) Ryugu's and (101955) Bennu's spectral characteristics in the NUV-VIS overlap with those of the Polana-Eulalia family complex, which is consistent with an origin in this family complex.
Introduction
Photometric studies of asteroids in the NUV (0.35 - 0.50 µm) started in the 1950s using photomultiplier tubes with UBV broadband filters (e.g., Groeneveld & Kuiper 1954; Wood & Kuiper 1963), mainly because the photoelectric response of the CsSb detectors used at that time was better at those wavelengths. These studies found that asteroids showed variation in U-B and/or B-V colors. The first large asteroid survey with wide wavelength coverage from the NUV to the near infrared was done using 24 narrowband filters (the so-called 24 color asteroid survey; Chapman & Gaffey 1979; McFadden et al. 1984). The spectral reflectance curves were found to be indicative of silicate-rich (S) to carbonaceous-rich (C) compositions, which were related to certain classes of meteorites (McCord & Gaffey 1974). Around the same time, a study in the NUV region using the UBV system was expanded to a larger number of objects and found that it is possible to distinguish S, C, and 'unclassified' groups by UBV color only (Zellner et al. 1975; Bowell & Lumme 1979).
Send offprint requests to: E. Tatsumi, e-mail: etatsumi-ext@iac.es The significance of the NUV was understood in terms of broad ultraviolet charge exchange absorptions due to transition metal ions, principally Fe 2+ in a silicate lattice (Gaffey 1976). Shortly after that, Zellner et al. (1985) expanded the wavelength range to 0.34 to 1.04 µm using an indium-gallium-arsenide-phosphide (InGaAsP) photomultiplier of high quantum efficiency and eight filters, known as the Eight Color Asteroid Survey (ECAS).
The introduction of charge-coupled devices (CCDs) greatly increased the ability to obtain higher wavelength resolution spectroscopy of fainter objects. However, CCDs' quantum efficiency decreases drastically in the NUV. This led most of the spectroscopic surveys, such as Small Main Belt Asteroid Spectroscopic Survey (SMASS) I, II, and Small Solar System Objects Spectroscopic Survey (S 3 OS 2 ), to stay in the visible (VIS) wavelength range, in other words, at wavelengths > 0.45 µm (Xu et al. 1995;Bus & Binzel 2002b,a;Lazzaro et al. 2004). In this study, we expand the wavelength range down to 0.35 µm to study the diagnostics in the NUV with a focus on carbonaceous asteroids.
Among carbonaceous asteroids, those with a negative visible spectral slope (i.e., spectrally blue) are gaining a great deal of attention because of several ongoing and planned missions to these types of objects, such as NASA OSIRIS-REx, JAXA Hayabusa2, and DESTINY+ (Lauretta et al. 2019; Watanabe et al. 2019; Sarli et al. 2018). The relation between the target asteroids of these missions, (101955) Bennu, (162173) Ryugu, and (3200) Phaethon, respectively, and their precursor bodies in the main asteroid belt is an important question still to be addressed. We focus on two large, low-albedo asteroid families, the Themis family and the Polana-Eulalia complex family, in which the largest members are spectrally blue (Xu et al. 1995; Bus & Binzel 2002b; de León et al. 2016; Tatsumi et al. in prep.). The Polana-Eulalia family complex is located in the inner main belt and is considered to be the most likely origin of both Ryugu and Bennu (Campins et al. 2010a, 2013; Bottke et al. 2015). This was the starting point of our PRIMitive Asteroids Spectroscopic Survey (PRIMASS; Pinilla-Alonso et al. 2021), in which we focused on acquiring visible and near-infrared spectra (and NUV spectra to a minor extent) of primitive, carbonaceous-like asteroids in the main belt. We gave particular attention to members of collisional families and/or dynamical groups (Pinilla-Alonso et al. 2016; Morate et al. 2016, 2018, 2019; De Prá et al. 2018; Arredondo et al. 2020, 2021a).
In the 1980s, the great variation in reflectance in the NUV was pointed out (e.g., Tholen 1984). Based on NUV and near infrared observations, Feierberg et al. (1985) suggested that this variation is due to the correlation between the NUV absorption and the 3-µm band depth, which is mainly caused by the presence of hydroxyl in phyllosilicates. Moreover, Hiroi et al. (1993, 1996) found a similar correlation in hydrated meteorites (CM, CI) through heating experiments. More recently, it was pointed out that the carbon or magnetite formed by the space weathering process on asteroid surfaces might also affect the NUV behavior (Izawa et al. 2019; Hendrix & Vilas 2019). Thus, the NUV region can potentially be used as a diagnostic for finding hydrated minerals or carbon compounds, although this possibility has not been fully investigated so far. This study opens a new door to ground-based spectroscopy in the NUV region.
In this work, we investigate the NUV-VIS spectra of these families in the frame of the PRIMASS survey, and we discuss their composition to further study the role of phyllosilicates and explain the presence of NUV absorption. In addition, we discuss the importance of using well-characterized solar analog stars in the NUV region, and the problems we have found with many of the most commonly used stars in the planetary science community. The paper is organized as follows: spectroscopic observations (including solar analogs), data reduction, and asteroid classification are described in Section 2; spectral slope calculations, solar analog correction, and comparison with spectrophotometry from the previous survey ECAS are presented in Section 3; finally, we discuss the results and summarize conclusions in Sections 4 and 5, respectively.
Asteroid observations at TNG
The NUV-VIS spectra at the 3.58-m Telescopio Nazionale Galileo (TNG), located at the Roque de Los Muchachos Observatory (ORM) on the island of La Palma (Spain), were obtained with the Device Optimized for the LOw RESolution (DOLORES) spectrograph. The instrument is equipped with a 2048 × 2048 pixel detector and a 0.25 "/pixel plate scale, which yields an 8.6' × 8.6' field of view. The low-resolution LR-B (blue) and LR-R (red) grisms were used, covering the 0.34 - 0.80 µm and the 0.45 - 1.01 µm spectral ranges with a dispersion of 2.5 and 2.6 Å/pixel, respectively. We used 1.5" or 2.0" slits oriented at the parallactic angle and set the tracking of the telescope to the asteroids' proper motion. The observations on the nights of February 6, 7, and 8, 2012 were done in the framework of the B-type asteroid study by de León et al. (2012), in which they analyzed the spectral behavior of a sample of 45 B-type asteroids in the near-infrared. The idea was to expand that study to the NUV region, observing those B-types and also some members of the Themis collisional family. Observations on the nights of October 30 and 31 and November 1, 11, and 12, 2010, were originally published by de León et al., who studied the visible spectra of members of the Polana-Eulalia family complex and presented NUV spectra for some of them. For this paper, we downloaded the corresponding raw spectra (of the asteroids and the solar analog stars) from the TNG archive and did a new data reduction to account for identified problems in the behavior of the solar analogs in the NUV, which we explain in Sec. 3.1. To enlarge our NUV spectral sample, we did a further search in the TNG archive. We looked for any asteroid spectra obtained with the LR-B grism that were observed only on nights when the solar analog star Hyades 64 was also observed (see Sec. 2.3). The enlarged sample includes asteroids from other collisional families, rocky asteroids (non-carbonaceous), and even a couple of Trojans.
This dataset includes the observations by Cellino et al. (2020), but we applied a different data reduction procedure. It is important to note here that, although we do not use some of these spectra for our scientific discussions, we have decided to keep them in the study, as they are a valuable and trustworthy set of NUV-VIS asteroidal data that could be useful for future studies. Table A.1 shows the observation conditions of the asteroids, including the date and UTC start time, the apparent visual magnitude of the asteroid at the time of observation (m V ), the exposure time for each of the grisms (LR-B and LR-R), the airmass (AM), and the phase angle.
Observation of (162173) Ryugu at GTC
On December 6, 2020, the Japanese spacecraft Hayabusa2 successfully returned samples from the carbonaceous-like asteroid (162173) Ryugu to Earth. When the spacecraft dropped the sample container, Ryugu was approaching Earth, which made observations from the ground very favorable. We therefore obtained low-resolution NUV-VIS spectra of Ryugu using the 10.4-m Gran Telescopio Canarias (GTC), also located at ORM, under program GTC75-20B. The spectra were obtained using the Optical System for Imaging and Low Resolution Integrated Spectroscopy (OSIRIS) camera spectrograph (Cepa et al. 2000; Cepa 2010) installed at the GTC. The optical spectrometer OSIRIS is equipped with two 2048 × 4096 pixel detectors and a total unvignetted field of view of 7.8' × 7.8'. We used the 1.2" slit and the R300B grism with a dispersion of 5 Å/pixel, which covers 0.36 - 0.85 µm. The observations were conducted by orienting the slit along the parallactic angle to minimize the effects of atmospheric differential refraction, and the telescope tracking was set to the asteroid's proper motion. A series of three spectra was obtained, with an offset of 10" in the slit direction in between individual spectra. We applied the same procedure to the stars. Observational details are shown in Table 1. To obtain the asteroid's reflectance spectrum, we observed the solar analog stars SA 93-101 and SA 98-978 at a similar AM. In the following section, we further describe the importance of properly selecting these stars.
Star observations
In asteroid spectroscopy, we need to remove the solar contribution from the observed asteroid spectra. To do this, solar analog stars are commonly used instead of the Sun, because the Sun is too bright for telescope observations. Historically, planetary scientists have broadly used G-type stars as solar analogs, as they are known to be spectrally very close to the Sun at visible wavelengths (it should be noted that, after new observations, some of them were recently reclassified from F to G type; see Table 2). Several G-type stars listed in Landolt (1973, 1983, 1992) are commonly used as solar analogs based on their photometric colors and temperatures. However, as a consequence of our interest in the NUV region, we have discovered that many of these widely used stars are either not well characterized below 0.45 - 0.5 µm or do not have a spectral behavior in the NUV region similar to that of the Sun. It is widely acknowledged that it is hard to find a solar analog in the NUV wavelength range (Hardorp 1978). This is because small variations in the CN and CH abundances and in the metallicity of G-type stars introduce significantly large differences in the flux around 0.387 µm and 0.43 µm, and in the photon flux below 0.5 µm, respectively (Hardorp 1978; Porto de Mello et al. 2014). To minimize this problem, previous photometric surveys avoided the use of solar analogs. They instead observed well-characterized standard stars and computed their flux relative to the Sun (Chapman & Salisbury 1973; DeMeo & Carry 2013), or they used only the well-characterized solar analogs of Hardorp (1980) to define the zero point of the color system (Tedesco et al. 1982). Another way to avoid the problem is to use solar analogs that are well characterized in the NUV. Hardorp (1978, 1980) found several solar spectral analogs in the NUV: Hyades 64 (HD 28099), Hyades 106 (HD 29461), Hyades 142 (HD 30246), 16 Cyg B (HD 186427), HD 44594, and HD 191854.
Later, Neckel (1986) confirmed that Hyades 64, 16 Cyg B (HD 186427), and HD 44594 are very close to the Sun in UBV color, and Farnham et al. (2000) confirmed that Hyades 64 (HD 28099), Hyades 106 (HD 29461), 16 Cyg B (HD 186427), and HD 191854 behave similarly to the Sun when observed with the HB narrowband filter designed for comet observations. Among them, Hyades 64 (HD 28099) and 16 Cyg B (HD 186427) are commonly acknowledged as the best-matched solar analogs down to the NUV. ECAS adopted the mean color of four stars from Hardorp's solar analogs as the zero point of its photometric system (Tedesco et al. 1982). This means that ECAS's photometric colors have carefully taken into account the NUV color of the Sun. Thus, the photometric surveys (the 24 color survey and ECAS) are trustworthy data for studying NUV reflectance.
Hyades 64 was observed every night with the TNG under the same conditions as those described in Sec. 2.1. We also observed five commonly used Landolt (1973) G-type stars (Table 2) and checked whether they are spectrally good solar analogs in the NUV.
Additionally, we collected data on these stars from previous observations done by authors who used different telescopes, such as the 2.56-m Nordic Optical Telescope (NOT) and the 2.54-m Isaac Newton Telescope (INT) at ORM (Table 2). In the case of the INT, we obtained the spectra using the Intermediate Dispersion Spectrograph (IDS) spectrograph together with the low resolution grism R150V and a wide slit (3"). At NOT, we used the Alhambra Faint Object Spectrograph (ALFOSC) and the low resolution grism #4, with a 5" slit. The date and time of observation, the AM, and the telescope used for each solar analog are shown in Table 3. We use Hyades 64 as a reference for a good solar analog in the NUV. The subsequent analysis of the other stars compared to Hyades 64 is presented in Sec. 3.1.
Data reduction
We applied the standard procedures to the obtained images, such as bias subtraction, flat-field correction, wavelength calibration, and extraction of one-dimensional spectra from two-dimensional images. The wavelength calibration and extraction of spectra were conducted using "apall" and "identify" functions in the Image Reduction and Analysis Facility (IRAF) (Tody 1986). Atmospheric extinction correction was applied using the standard extinction coefficients for ORM 2 .
The asteroids' reflectance spectra were obtained by dividing the observed asteroid flux by a spectrum of a solar analog. As we explain in Sec. 3.1, we used Hyades 64 for the TNG observations. When both the LR-B and LR-R spectra were available, we joined the blue and red parts of the spectra using the common wavelength interval of 0.6 -0.7 µm. For asteroids (6698) and (13100), which were observed on two different nights, we averaged the two spectra together. Finally, the spectra were binned every 3 Å. All the observed spectra are shown in Fig. A.1. We also show smoothed spectra obtained by running the median filter using a window of ∼ 30 nm for a better visualization. We describe the procedure to obtain the reflectance spectra of Ryugu in Sec. 3.4.
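The core reduction step described here (dividing the asteroid flux by a solar analog spectrum and binning the result) can be sketched as follows. Function names are illustrative, both spectra are assumed to be resampled onto a common wavelength grid (in microns), and the normalization wavelength is an optional convenience rather than part of the paper's procedure.

```python
import numpy as np

def reflectance(wl, asteroid_flux, analog_flux, norm_wl=0.55):
    """Divide the asteroid spectrum by the solar-analog spectrum
    (same wavelength grid assumed) and normalize at norm_wl."""
    r = asteroid_flux / analog_flux
    return r / r[np.argmin(np.abs(wl - norm_wl))]

def rebin(wl, r, bin_width=3e-4):
    """Bin a reflectance spectrum every bin_width microns
    (3e-4 micron = 3 Angstrom, as used for the TNG spectra)."""
    edges = np.arange(wl.min(), wl.max() + bin_width, bin_width)
    idx = np.digitize(wl, edges)
    wl_b = np.array([wl[idx == k].mean() for k in np.unique(idx)])
    r_b = np.array([r[idx == k].mean() for k in np.unique(idx)])
    return wl_b, r_b
```

If the asteroid flux is simply a scaled copy of the analog flux, the normalized reflectance comes out flat at 1, which is a useful sanity check.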
Solar analogs
In this sub-section, we provide a description of our study of the spectral behavior of the observed solar analogs in the NUV. As we mentioned in Sec. 2.3, we consider Hyades 64 to be a reference star for the NUV. For each night of observation, we therefore divided the spectra of Landolt's stars by that of Hyades 64 and normalized the obtained ratio at 0.55 µm in order to show their spectral variation in the NUV (Fig. 1). The results are consistent even when using different telescopes and instruments. The first remarkable result is that the spectra of Landolt's stars at wavelengths above 0.55 µm are very similar to that of the Sun. On the other hand, below 0.55 µm the spectral differences are very important. We observe a strong excess in the CN band region. Even when the spectral type is similar to that of our Sun, if the iron abundance is less than the Sun's (< 0 dex), the flux in the UV can be significantly higher (Buser & Kurucz 1992). We should note that if one uses SA 93-101, SA 98-978, SA 107-684, or SA 112-1333 as a solar analog to derive the reflectance spectra of asteroids, the flux excess will artificially create a fake absorption in the NUV, in other words, a drop in reflectance. Among the stars analyzed here, the one that presents the closest spectral slope to that of the Sun in the NUV-VIS is SA 102-1081, although it still has a significant deficiency in its NUV flux compared with the Sun.
Our results show that only one out of the five commonly used solar analog stars studied here can be used to obtain reflectance spectra of asteroids in the NUV, and even then there is still up to a 10% difference at the shortest wavelengths. This should be kept in mind when interpreting the results of previous studies that used Landolt's solar analogs in the NUV. A good example of this case is presented in Sec. 4.2, where we revisit the results obtained in de León et al. for the Polana-Eulalia complex family. We remark that these stars can be used to obtain reflectance spectra of asteroids in the visible (0.5 - 0.9 µm) and the near-infrared (0.8 - 2.4 µm). Spectral slopes for these Landolt stars were investigated precisely and statistically by Marsset et al. (2020), who found them to be consistent with the Sun within an uncertainty of 4.2% µm−1.
Thus, in this study, we use Hyades 64 as the solar analog to derive asteroid reflectance spectra down to the NUV. In what follows, we also devote some effort to finding more solar analogs in the NUV region for further reflectance spectroscopy studies.
Comparison of our observations with ECAS
Observations of spectral reflectance in the wavelength range between 0.34 and 1.04 µm by ECAS greatly advanced our understanding of the compositional distribution of asteroids (Tholen 1984; Zellner et al. 1985). Zellner et al. (1985) obtained more than 900 photometric reflectance spectra using eight broadband filters covering this wavelength interval, from s (0.34 µm) to z (1.04 µm). They carefully treated the NUV reflectance photometric spectra using solar analogs that were well characterized in the NUV. A total of 18 out of the 67 asteroids presented in this paper were also observed in the frame of the ECAS survey. Thus, we can make a comparison and validate our methodology. Figure 2 shows the spectra obtained with the TNG, together with the spectrophotometric observations by ECAS (Zellner et al. 1985), as well as SMASS II (Bus & Binzel 2002b) and S 3 OS 2 (Lazzaro et al. 2004) spectra. We note that spectra from SMASS II and S 3 OS 2 cover only visible wavelengths.
In general, we obtained results consistent with the ECAS spectrophotometry in the NUV-VIS, except maybe for asteroids (246) Asporina, (268) Adorea, and (588) Achilles. Some of our targets show in their NUV spectra a clear turn-off point, in other words, a position in wavelength where the slope changes its value drastically. That is the case for asteroids (47) Aglaja, (62) Erato, (88) Thisbe, and (229) Adelinda, which have a turn-off point in the NUV at around 0.4 µm. These turning points were not clearly observed in their ECAS spectra because of the low wavelength resolution. We also note that some ECAS spectra have an excess in the b filter that our spectra do not show. This may be because the zero point of the ECAS color index was defined by four solar analogs, which might have some systematic error (Tedesco et al. 1982). The central wavelength of the b filter (0.44 µm) is located very close to the CH absorption band; thus, this band needs to be carefully interpreted. Our spectra show good agreement with other surveys at visible wavelengths, considering the range of spectral variations between surveys. Only (246) Asporina shows a much redder spectrum than in the other three surveys. Differences in the phase angle (α) could be invoked to explain the observed difference in spectral slope (phase reddening): our spectrum was obtained at a phase angle of ∼22°, while the SMASS II and S 3 OS 2 spectra were obtained at a phase angle of ∼8°. This corresponds to a change in spectral slope of 1%/10³ Å/° for 8° < α < 22°, computed in the range 0.48 - 0.72 µm, following the same procedure as that described in Luu & Jewitt (1990). They obtained a change of 0.18%/10³ Å/° for 0° < α < 40° for a sample of near-Earth and main belt asteroids. Our change in slope is five times larger than the one in Luu & Jewitt (1990), suggesting that phase reddening cannot be the sole explanation for the difference in spectral slope.
In addition, the ECAS data are in good agreement with both SMASS II and S 3 OS 2 spectra, but were obtained at a phase angle of ∼16.6°.
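The phase-reddening estimate discussed above can be sketched numerically. This is only a minimal illustration of the slope-change arithmetic, not the authors' pipeline, and the input slopes are hypothetical:

```python
def phase_reddening_coeff(slope_a, alpha_a, slope_b, alpha_b):
    """Change in spectral slope per degree of phase angle, in %/10^3 A/deg,
    given slopes (in %/10^3 A) measured at two phase angles (in degrees),
    following the approach of Luu & Jewitt (1990)."""
    return (slope_b - slope_a) / (alpha_b - alpha_a)

# Hypothetical slopes for the two observing geometries of (246) Asporina:
# a 14 %/10^3 A slope difference over the 8-22 deg phase-angle span
# reproduces the ~1 %/10^3 A/deg change quoted in the text.
coeff = phase_reddening_coeff(3.0, 8.0, 17.0, 22.0)
```

Against Luu & Jewitt's coefficient of 0.18%/10³ Å/°, a value of ∼1 is indeed about five times larger, as stated in the text.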
NUV-VIS spectra of Themis, Polana, and Eulalia families
Members of the Themis collisional family in our sample were identified using the list from Nesvorný et al. (2015), available in the Planetary Data System (PDS). As explained in Sec. 2.1, members of the Polana-Eulalia family complex were taken from de León et al. (2016). In that paper, the authors searched for spectral differences between the members of the Eulalia family and the so-called "New Polana" family, identified by Walsh et al. (2013). We also collected spectrophotometric data from ECAS of asteroids belonging to the Themis family and the Polana-Eulalia family complex. We computed the NUV and VIS slopes by linear least-squares fitting for the wavelength ranges 0.36-0.55 µm and 0.55-0.85 µm, respectively. For the ECAS photometric spectra, the errors in the slope were calculated from 100 samples created by the bootstrap method according to the deviation given for each ECAS filter. The lists of Themis, Polana, and Eulalia family members with TNG spectra and ECAS photometric data are shown in Tables 4 and 5, respectively. The asteroid (5924) Teruo was initially classified as belonging to the Nysa-Polana family by Nesvorný et al. (2015), but it was classified as belonging to neither the New Polana nor the Eulalia family by Walsh et al. (2013). Thus, because of its low albedo, we decided to list (5924) Teruo as an uncategorized Polana-Eulalia family member in Table 4. We also include in Table 4 other dark, carbonaceous-like asteroids and our taxonomical classification (see Sec. A.1). Other information we included are the Bus and Tholen taxonomies (Bus & Binzel 2002a; Tholen 1984) from ECAS, SMASS II, and S 3 OS 2 spectra, the albedo and diameter from the AKARI survey (Usui et al. 2012), and the NUV and VIS slopes.
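The slope computation and its bootstrap uncertainty, as described above, can be sketched as follows. This assumes NumPy, a hypothetical example spectrum, and that the NUV absorption metric is the slope difference S_NUV minus S_VIS (an assumed sign convention):

```python
import numpy as np

def spectral_slope(wl, refl, lo, hi):
    """Linear least-squares slope (µm^-1) of a reflectance spectrum over
    [lo, hi], with the spectrum normalized to its mean in that window."""
    m = (wl >= lo) & (wl <= hi)
    x, y = wl[m], refl[m] / refl[m].mean()
    return np.polyfit(x, y, 1)[0]

def bootstrap_slope_err(wl, refl, err, lo, hi, n=100, seed=0):
    """Slope uncertainty from n resampled spectra, perturbing each filter
    point by its quoted 1-sigma deviation (as done for the ECAS data)."""
    rng = np.random.default_rng(seed)
    slopes = [spectral_slope(wl, refl + rng.normal(0.0, err), lo, hi)
              for _ in range(n)]
    return float(np.std(slopes))

# Hypothetical 5-point photometric spectrum (wavelengths in µm)
wl = np.array([0.36, 0.44, 0.55, 0.70, 0.85])
refl = np.array([0.85, 0.93, 1.00, 0.99, 0.98])
err = np.full_like(refl, 0.02)

s_nuv = spectral_slope(wl, refl, 0.36, 0.55)   # NUV slope
s_vis = spectral_slope(wl, refl, 0.55, 0.85)   # VIS slope
a_nuv = s_nuv - s_vis   # NUV absorption metric (assumed sign convention)
```

For this hypothetical spectrum the reflectance drops toward the NUV, so the NUV slope is strongly positive and the absorption metric is positive, as for the Themis family members discussed below.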
Computing a reliable NUV-VIS spectrum of (162173) Ryugu
We followed the same data reduction procedure for the GTC spectra as that described in Sec. 2.4 for the TNG, up to the extraction of one-dimensional spectra of both Ryugu and the G-type stars SA 93-101 and SA 98-978. The R300B grism at OSIRIS-GTC provides a full spectrum from 0.36 to 0.85 µm. The solar analog star Hyades 64 could not be observed because it is too bright for a 10-m class telescope (m V = 8.1) and would saturate even with sub-second exposures. As concluded in Sec. 3.1, the observed Landolt stars showed a significant variation from a solar-like spectral behavior in the NUV. To correct for such variation in SA 93-101 and SA 98-978, we calculated Ryugu's reflectance spectrum, R Ryugu , as follows:

R Ryugu = (F GTC Ryugu / F GTC SA ) × (F TNG SA / F TNG H64 ),

where F GTC Ryugu is the Ryugu spectrum from the GTC, F GTC SA is the solar analog spectrum from the GTC (with SA being SA 93-101 or SA 98-978), F TNG SA is the solar analog spectrum from the TNG, and F TNG H64 is the Hyades 64 spectrum from the TNG. The ratio F TNG SA /F TNG H64 for SA 93-101 was obtained on three different nights, while for SA 98-978 it was obtained on two different nights (see Table 2). We used an average of all the ratios for each solar analog star. The final reflectance spectrum of Ryugu was binned by 10 Å (Fig. 3). The two spectra obtained against each Landolt star show good agreement, exhibiting a flat or possibly upturned slope in the NUV region. These spectra are also consistent with what was observed by Hayabusa2 (Sugita et al. 2019; Tatsumi et al. 2020).
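The two-telescope correction described above, dividing by a non-ideal solar analog and rescaling by that star's averaged TNG ratio to Hyades 64, can be sketched as follows. Array values and the binning width are illustrative assumptions:

```python
import numpy as np

def corrected_reflectance(f_gtc_target, f_gtc_sa, f_tng_sa, f_tng_h64):
    """R = (F_target / F_SA)_GTC * (F_SA / F_H64)_TNG: divide the target by
    a (non-ideal) solar analog observed at the GTC, then rescale by that
    star's ratio to the true solar analog Hyades 64, measured at the TNG.
    All inputs are flux arrays on a common wavelength grid."""
    return (f_gtc_target / f_gtc_sa) * (f_tng_sa / f_tng_h64)

def bin_spectrum(spec, n):
    """Average every n consecutive samples (e.g., 10 A binning),
    dropping any trailing remainder."""
    m = len(spec) // n * n
    return spec[:m].reshape(-1, n).mean(axis=1)
```

If the analog were a perfect solar match (its TNG ratio to Hyades 64 equal to unity), the correction reduces to the usual target-over-analog division.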
Discussion
In this section, we discuss the NUV-VIS characteristics of the Themis family and the Polana-Eulalia family complex, and of the two sample-return mission targets, (162173) Ryugu and (101955) Bennu. We compare the members of the Themis, New Polana, and Eulalia families in the NUV-VIS space obtained from our observations and ECAS data, with the NUV and VIS slopes computed in the wavelength ranges 0.36-0.55 µm and 0.55-0.85 µm, respectively. To evaluate the NUV absorption, we used the difference of spectral slopes between the NUV and the VIS:

A NUV = S NUV − S VIS .
Themis family
The Themis collisional family consists of about 2,400 to 4,300 members and is located in the outer main belt at 3.1 au (Nesvorný et al. 2005; Spoto et al. 2015). The age of this collisional family can be estimated based on the relation between the objects' semimajor axis and size among the family members, which was mainly shaped by thermal forces via the Yarkovsky effect (Farinella & Vokrouhlicky 1999; Bottke Jr et al. 2006). The collisional age of the Themis family was estimated to be 2 Gyr by Marzari et al. (1995) and 2.4-3.8 Gyr by Spoto et al. (2015), which puts it among the oldest families in the main belt.
The diameter of the parent body of the Themis family has been estimated to be 390-450 km (Marzari et al. 1995). The largest asteroid in the Themis family is (24) Themis, with a diameter of 198 km. Free water ice or NH 3 -bearing phyllosilicates, indicated by a 3.1 µm band, have been detected on the surface of Themis (Campins et al. 2010a; Rivkin & Emery 2010). Later, Takir & Emery (2012) confirmed the 3.1 µm band and classified Themis as part of the rounded group, one of the four groups they identified based on the shape of the 3-µm absorption band. Usui et al. (2019) showed an absorption of 10.7% at 2.76 µm associated with OH in hydrated minerals and a broad 3.07 µm absorption of 11.9% from the AKARI data. The density has been estimated at 1.31 ± 0.62 g/cm³ (Vernazza et al. 2021). This density is comparable to the bulk density of CI chondrites, 1.57 g/cm³, and CM chondrites, 2.27 g/cm³ (Flynn et al. 2018). Considering that only 13% of Themis family members show the 0.7 µm feature (De Prá et al. 2020), the majority of the members might be composed of CI-like rather than CM-like material. However, we cannot rule out the possibility that two lithologies, CI-like and CM-like, coexist inside the parent body of the Themis family and in Themis itself, considering that some members do show the 0.7-µm band and that the peak wavelength of the OH band is located at a rather long wavelength.
The thermal properties of eight Themis family members were investigated by Licandro et al. (2012). Emissivity spectra from 5-14 µm exhibit a plateau at about 9 to 12 µm for five members. This plateau feature is similar to that of comets and P- and D-type asteroids (Vernazza et al. 2015), but the emissivity strength is smaller for the Themis family (Licandro et al. 2012). This feature may indicate the presence of small-grain olivine and/or pyroxene (Emery et al. 2006; Vernazza et al. 2015).
From ECAS, about half of the Themis family members were classified as B types (Zellner et al. 1985). Later taxonomic classifications based on complementary spectroscopic works came to a consistent result, with a mean visible spectral slope S VIS = −0.02 ± 0.16 µm⁻¹, and it was found that ∼13% of the asteroids in the Themis family show the 0.7-µm band absorption (Mothé-Diniz et al. 2005; De Prá et al. 2020). Our analysis provides a mean VIS slope of S VIS = −0.04 ± 0.23 µm⁻¹ and a mean NUV absorption of A NUV = 0.60 ± 0.31 µm⁻¹. Our visible slope is consistent with previous studies.
Even though a significant fraction of the Themis family members have negative visible spectral slopes, their near-infrared spectral slopes tend to be positive, and thus they have concave spectral shapes.
Polana-Eulalia family complex
Previous studies found that the primitive near-Earth asteroids Ryugu and Bennu, targets of the sample return missions Hayabusa2 and OSIRIS-REx, respectively, are almost certainly (>90%) delivered from the inner main belt (Campins et al. 2010b, 2013; Bottke et al. 2015). The Polana-Eulalia family complex is the largest low-albedo family in that region. This family is also known to overlap in proper-elements space with an S-type asteroid family, Nysa (Cellino et al. 2001). Moreover, the peculiar spectra of the two biggest asteroids, the E-type (44) Myr, respectively. The diameter of the parent body of Eulalia was estimated to be ∼100 km based on the size frequency distribution of the family members, which is consistent with smoothed particle hydrodynamics simulations (Walsh et al. 2013).
Even though (43962) 1997 EX13 and (14112) 1998 QZ25 are dynamically classified as members of the New Polana and the Eulalia families, respectively, they are much brighter (with albedos of 0.17 and 0.22, respectively) than the rest of the family members and are taxonomically classified as S-complex asteroids (see Table A.2). Thus, we consider them to be outliers and exclude them from the analysis. (8424) Toshitsumita could also be an outlier because of its high albedo of 0.17, although it was taxonomically classified as a C type. We cannot discard the possibility of uncertainty in the albedo measurement, and therefore we still keep it in the analysis.

Although asteroid (142) Polana is classified as an F type in the Tholen taxonomy, de León et al. (2016) found a minor fraction of F types among the Polana-Eulalia family complex. On the contrary, and using the same TNG data that they used, we found that most asteroids in the Polana-Eulalia family complex are classified as F types (Table 4). The main reason for this discrepancy is the use of solar analog stars. While de León et al. (2016) divided the spectrum of the asteroid by the spectra of each solar analog and then averaged these ratios to get the final reflectance spectrum, we only used Hyades 64, which we knew had a solar-like spectral behavior in the NUV region. Our result is quite consistent with what Tholen (1984) found in the ECAS data, and demonstrates the importance of properly selecting solar analogs for NUV studies. We also found that most members of the Polana-Eulalia family complex have very shallow or no NUV absorption down to 0.35 µm.
Regarding the visible wavelengths, we reached the same conclusion as de León et al. (2016). The authors found similar visible spectral slopes for New Polana and Eulalia family members. We also found that the New Polana members and the Eulalia members cannot be distinguished in the NUV-VIS space, although we found that the NUV part of the spectra is flatter than in de León et al. (2016) when using Hyades 64 as the solar analog. The average VIS slope is S VIS = −0.00 ± 0.16 µm⁻¹ for the New Polana family and S VIS = −0.01 ± 0.34 µm⁻¹ for the Eulalia family. The average NUV absorption is A NUV = −0.06 ± 0.23 µm⁻¹ for the New Polana family and A NUV = −0.06 ± 0.46 µm⁻¹ for the Eulalia family.
Furthermore, near-infrared spectroscopic investigations of the Polana-Eulalia family complex suggest that both families show a concave shape, with a spectral slope of 6.8 ± 6.8%/µm from 0.9 to 2.2 µm, and that there is no significant difference between the two families (Pinilla-Alonso et al. 2016). Our NUV investigations consistently come to the same result. There is, however, a significant difference in the NUV absorption, A NUV , between the Themis family and the Polana-Eulalia family complex (Fig. 4, lower panel). Except for three objects, members of the Polana-Eulalia family complex are well separated from members of the Themis family in this space: the Themis family shows higher NUV absorptions than the Polana-Eulalia family complex, even though the visible spectral slopes are distributed over a similar range of values. Both the Themis family and the Polana-Eulalia family complex show a trend extending from redder VIS slopes with less NUV absorption to bluer VIS slopes with more NUV absorption. From the upper panel of Fig. 4, we see no apparent difference between the two families in the albedo versus VIS slope space.
From our observation with the GTC (Fig. 3), Ryugu is classified as an F type rather than as a C type in Tholen's taxonomy. The NUV-VIS reflectance spectrum of Ryugu does not show any significant absorption in the NUV down to 0.36 µm, which is more similar to the characteristics of the Polana-Eulalia family complex than to those of the Themis family. Although many spectroscopic observations have been made of Ryugu (see Tatsumi et al. 2020), only Binzel et al. (2001), Vilas (2008), and Perna et al. (2017) obtained reflectance spectra extending down to the NUV. The spectrum obtained by Binzel et al. (2001) shows a concave shape, which is different from the spectra obtained by the Hayabusa2 spacecraft. This might be because of the high-airmass (AM) conditions of those observations. The spectra obtained by Vilas (2008) and Perna et al. (2017) show good agreement with the spacecraft-based observations in the VIS. While Vilas (2008) shows quite flat spectra down to 0.39 µm, Perna et al. (2017) shows slight downturns at the shorter wavelengths down to 0.35 µm, which is contrary to what we observed. The solar analogs they used are not among those studied in this paper; these stars also need to be evaluated in the NUV before further interpretations can be carried out. The spectral slopes of Ryugu in the NUV and VIS are S NUV = −0.17 ± 0.07 µm⁻¹ and S VIS = 0.11 ± 0.02 µm⁻¹, respectively, overlapping with the Polana-Eulalia family complex (Fig. 4). This suggests that Ryugu might originate from the Polana-Eulalia complex, which is located in the inner main asteroid belt, where the majority of near-Earth asteroids come from (Bottke Jr et al. 2002).
Another target of a sample return mission (OSIRIS-REx), asteroid (101955) Bennu, is also a dark carbonaceous near-Earth asteroid (Lauretta et al. 2019). Bennu was observed using ECAS-equivalent color filters from a ground-based telescope (Hergenrother et al. 2013). The visible wavelengths >0.44 µm were found to be consistent with the observations by the OSIRIS-REx Visible and IR Spectrometer (OVIRS) and the multiband camera MapCam on board OSIRIS-REx (Hamilton et al. 2019; DellaGiustina et al. 2020). When we compare Bennu's color with those of the Themis and Polana-Eulalia families, it is found to overlap with the Polana-Eulalia family. Based on the dynamical evolution of Bennu, it was hypothesized that Bennu originated from the Polana-Eulalia family complex (Campins et al. 2010b; Bottke et al. 2015). Additionally, in situ observations by the OSIRIS-REx spacecraft revealed fragments possibly from (4) Vesta on Bennu's surface (DellaGiustina et al. 2021; Tatsumi et al. 2021a). This finding also strongly suggests that Bennu originated in the inner main asteroid belt, at 2.1-2.5 au. Our observations consistently point toward the conclusion that Bennu has NUV-VIS characteristics similar to those of the Polana-Eulalia family complex.
Although both Ryugu and Bennu could originate from the Polana-Eulalia family complex in terms of their similarity in the NUV-VIS spectroscopy, the spectra of these asteroids in the 3-µm region are different. While remote-sensing observations of Ryugu by Hayabusa2 showed a sharp OH band centered at 2.72 µm (Kitazato et al. 2019), which was confirmed by the Ryugu sample analysis that showed a sharp and deep OH band centered at 2.71 µm (Yada et al. 2021; Pilorget et al. 2022), the remote-sensing observations of Bennu by OSIRIS-REx showed a broad OH band centered at 2.74 µm (Hamilton et al. 2019). This difference in the central wavelengths and the OH band shapes might reflect the presence of different phyllosilicates, for example, whether they are Mg-bearing or Fe-bearing. Thus, if the two asteroids originate from the same parent body, there should be layers with varying temperature or water-rock conditions inside the parent body. This will be revealed by the analyses of the samples from both Ryugu and Bennu. Alternatively, based on the different compositions of the exogenic fragments found on the two asteroids (Tatsumi et al. 2021b), if they are not from the same parent body, it is more plausible that Bennu comes from the Polana-Eulalia family complex and that Ryugu comes from a different parent body. It should be noted that the near-Earth environment has a much higher temperature, and more photon and ion irradiation from the Sun, than the main asteroid belt, and that this may cause the different reflectance spectra in the NUV region (Hendrix & Vilas 2019). Further investigations are needed to constrain the origin of F-type asteroids such as Ryugu and Bennu.
Asteroids with the 0.7-µm absorption bands
The asteroids in our sample that have both the blue and the red part of the spectrum, namely (106) Dione, (175) Andromache, (207) Hedda, and (1534) Nasi, show the 0.7-µm absorption band. We measured the band depth (in percent) by removing the slope computed between the two local maxima around 0.55 µm and 0.90 µm (Table 6). These asteroids are classified as G or C types according to Tholen's taxonomy. All their spectra exhibit an absorption in the NUV, with the turning point around 0.52 to 0.56 µm, which is at longer wavelengths than those of B or F types. The NUV absorptions A NUV of these asteroids are in the range 0.6-1.6 µm⁻¹, while the other asteroids, without the 0.7-µm band, show A NUV < 0.4 µm⁻¹. Both the 0.7-µm and the NUV absorption are caused by intervalence charge transfer transitions of iron (Vilas 1994). Thus, the abundance of Fe-rich phyllosilicates on asteroids strongly affects the turning point of the NUV absorption. In other words, B or F types may contain less or no Fe-rich phyllosilicates, resulting in a turn-off at shorter wavelengths. The abundance of Fe-rich phyllosilicates can be more precisely assessed by observations in the 3-µm region; this needs further investigation in the future.
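The band-depth measurement described above, dividing out a linear continuum anchored at the two local maxima, can be sketched as follows. The anchor and center wavelengths are those quoted in the text; the example spectrum is hypothetical:

```python
import numpy as np

def band_depth_percent(wl, refl, c1=0.55, c2=0.90, center=0.70):
    """Depth (in percent) of an absorption band at `center`, after removing
    a linear continuum anchored at the reflectance maxima near c1 and c2.
    Wavelengths in µm; wl must be sorted ascending."""
    r1 = np.interp(c1, wl, refl)
    r2 = np.interp(c2, wl, refl)
    continuum = np.interp(center, [c1, c2], [r1, r2])  # linear continuum
    r_center = np.interp(center, wl, refl)
    return 100.0 * (1.0 - r_center / continuum)

# Hypothetical spectrum: unit reflectance shoulders, 5% dip at 0.70 µm
depth = band_depth_percent(np.array([0.55, 0.70, 0.90]),
                           np.array([1.00, 0.95, 1.00]))
```

For this toy spectrum the continuum at 0.70 µm is 1.0, so the returned depth is 5%.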
Summary
We investigated the NUV-VIS reflectance spectra of dark, carbonaceous asteroids, most of them members of the Themis, New Polana, and Eulalia collisional families, but we also included other dark asteroids, rocky, silicate-rich asteroids, and asteroids belonging to other families. To minimize the problems identified when observing in the NUV (0.35-0.50 µm), we observed the asteroids at low AM and used Hyades 64 as a solar analog to obtain the asteroid reflectance spectra. We presented new data obtained with the DOLORES spectrograph at the TNG, using the LR-B (NUV) and the LR-R (VIS) grisms, and also revisited raw spectra previously published by de León et al. (2016). In addition, we searched the TNG archive for other asteroids observed with the same instrumental configuration and with observations of Hyades 64 on the same night. All in all, we collected data for 67 asteroids. A total of 18 out of the 67 asteroids were observed both at the TNG and by the ECAS survey (Zellner et al. 1985). Their comparison showed a good agreement in the NUV. Their VIS reflectance spectra were also consistent with spectra from other spectroscopic surveys, such as SMASS II (Bus & Binzel 2002b) and S 3 OS 2 (Lazzaro et al. 2004). Our observations and collected data constitute the first systematic spectroscopic survey of asteroids in the NUV-VIS.
To further study the importance of using proper solar analogs in the NUV region, we observed five of the commonly used Landolt G-type stars together with Hyades 64 using three different instruments and telescopes: DOLORES@TNG, ALFOSC@NOT, and IDS@INT. The ratios between the Landolt G-type stars and Hyades 64 showed strong variations in the NUV, even though their VIS spectra were consistent with that of Hyades 64. The CN band exhibited the largest variation among the stars. We find that metallicity plays a large role in increasing or decreasing the relative flux in the NUV. Among the five studied Landolt stars, SA 102-1081 was the closest to the solar spectrum, but it still showed a depletion in the CN band and in the NUV flux. Thus, solar analogs need to be carefully selected to derive the NUV-VIS reflectance spectra of asteroids.
The Themis family and the Polana-Eulalia family complex are known to have neutral to blue spectra at visible wavelengths. Our analysis showed that the NUV spectra exhibit differences between these families: the Themis family has a deeper NUV absorption than the Polana-Eulalia family complex. Although de León et al. (2016) found that most of the members of the Polana-Eulalia family complex were classified as B types in the Tholen taxonomy, we find that they are indeed mostly classified as F types, showing a neutral reflectance spectrum from the NUV to the VIS. This is because de León et al. (2016) used multiple solar analogs, including Landolt stars, which are not representative of the solar spectrum in the NUV, while we used only Hyades 64, which is known to have a spectral behavior in the NUV very similar to that of the Sun. On the other hand, we reached the same conclusion that the sub-families of the complex, the New Polana and the Eulalia families, are not spectrally distinguishable. Thus, they might originate from the same parent body. In an upcoming paper, we study carbonaceous asteroids in the NUV using spectrophotometric surveys (Tatsumi et al. in prep.), suggesting that the NUV absorption observed in asteroids belonging to the Polana, Eulalia, and Themis families might be related to Fe-rich phyllosilicates.
We successfully observed (162173) Ryugu, the target of the Hayabusa2 sample return mission, down to 0.36 µm using the GTC in 2020. We find that the reflectance spectrum of Ryugu is flat or slightly increasing in the NUV, which is consistent with the spacecraft observations (Sugita et al. 2019; Tatsumi et al. 2020). Thus, Ryugu is classified as an F type rather than a C type in Tholen's taxonomy. Based on our observations, we conclude that Ryugu's spectrum is quite consistent with the reflectance spectra of the Polana-Eulalia family complex. Moreover, the spectrophotometric observations of (101955) Bennu by Hergenrother et al. (2013) suggest that Bennu is also consistent with the Polana-Eulalia family complex rather than the Themis family.
Appendix A: Observations of asteroids with the TNG
In this section, we show all the observations presented in this paper made with the TNG telescope. We note that all the asteroid reflectance spectra were derived by dividing by the spectrum of the solar analog Hyades 64. Tables A.1 and A.2 describe the observational conditions and the physical properties of the target asteroids, respectively. The procedure of taxonomic classification is described in Sec. A.1. Figure A.1 shows all the asteroid reflectance spectra presented in this study.
Appendix A.1: ECAS taxonomy
Using the spectrophotometric data obtained by ECAS and based on principal component analysis, Tholen (1984) introduced 12 asteroid spectral types or classes. This taxonomy is so far the only one that takes the NUV region into account. Tholen (1984) found that dark asteroids in particular have a large variation in the NUV and classified them into the C, D, T, P, B, F, and G classes. Thus, we classify our spectra based on Tholen's taxonomy.
To classify the asteroids, we computed discrete spectra through the ECAS filters by convolving the reflectance spectra from the TNG with the transmission curves of the ECAS filters that cover a common wavelength range: u, b, v, and w for the blue part; x and p for the red part. We used the entire spectral range for the classification, in other words, the joined blue (LR-B) and red (LR-R) parts of the spectra. Sometimes there was a mismatch between the slopes of the red and blue parts and they could not be joined. We evaluated the difference between the blue and red slopes computed in the common wavelength region (0.6-0.7 µm), and if the slope difference fell outside 1.5 times the interquartile range, we considered the spectrum an outlier. If the red part was not available, or if the slope difference was in the outlier range, we proceeded to classify the asteroid using only the blue part of the reflectance spectrum.
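The two steps above, synthesizing discrete filter points from a spectrum and rejecting unjoinable blue/red pairs by the 1.5 × IQR rule, can be sketched as follows. This assumes NumPy and a simple transmission-weighted mean as the convolution (a common simplification, not necessarily the authors' exact weighting):

```python
import numpy as np

def synthetic_filter_point(wl, refl, filt_wl, filt_trans):
    """Transmission-weighted mean reflectance through one filter, with the
    transmission curve interpolated onto the spectrum's wavelength grid."""
    t = np.interp(wl, filt_wl, filt_trans, left=0.0, right=0.0)
    return np.sum(refl * t) / np.sum(t)

def is_slope_outlier(diff, all_diffs):
    """Flag a blue-red slope mismatch lying outside 1.5 times the
    interquartile range of the sample of slope differences."""
    q1, q3 = np.percentile(all_diffs, [25, 75])
    iqr = q3 - q1
    return bool(diff < q1 - 1.5 * iqr or diff > q3 + 1.5 * iqr)
```

A spectrum flagged by `is_slope_outlier` would then be classified using its blue part only, as described in the text.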
The next step was to compare these with the reference spectra of the ECAS taxonomy available on the PDS 3 . We used χ² to assess the differences between the reference spectra and the observed asteroid spectra, giving three possible taxonomic classes as a first approximation. To discern which taxonomic class was the correct one, we carried out a visual inspection of the spectra, looking for specific features such as the wavelength position of the maximum in reflectance or the presence of absorption bands. We also used the albedo information from the AKARI survey (Usui et al. 2012) to discern between some S (albedo > 0.1) and T (albedo < 0.1) candidates and between E, M, and P candidates. Finally, for those asteroids with a χ² larger than the χ² between taxonomies, with very similar χ² values for different taxonomies, or with high dispersion at key wavelengths, such as u and b, we decided to keep all possible taxonomic classes. The results of our classification are shown in the third column of Table A.2. We also show in this table previous taxonomical classifications, when available, from ECAS, SMASS II, or S 3 OS 2 spectra (based on both the Bus and Tholen taxonomies). The table also includes the asteroid H magnitude, diameter, and albedo (from the AKARI survey), the proper orbital elements semimajor axis (a), eccentricity (e), and inclination (i), extracted from the Lowell Minor Planet Service webpage 4 , and family membership from Nesvorný et al. (2015) (except for Polana-Eulalia family members, extracted from Walsh et al. 2013). For subsequent compositional analyses (slope computation and comparison to Ryugu and Bennu), we do not use asteroids with a rocky or silicate-rich classification (S, Q, A, V, K, R, or L types).
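The χ² ranking against the reference class spectra can be sketched as follows. The reference values below are hypothetical placeholders, not the actual PDS templates, and the error model (one uncertainty per filter point) is an illustrative assumption:

```python
import numpy as np

def chi2(obs, ref, err):
    """Chi-square distance between an observed discrete spectrum and a
    reference class spectrum, both sampled on the same filter grid."""
    return float(np.sum(((obs - ref) / err) ** 2))

def best_classes(obs, err, references, k=3):
    """Rank taxonomic classes by chi-square and keep the k best candidates,
    to be disambiguated later by visual inspection and albedo."""
    scores = {name: chi2(obs, np.asarray(ref, dtype=float), err)
              for name, ref in references.items()}
    return sorted(scores, key=scores.get)[:k]

# Hypothetical reference spectra on a 3-filter grid
refs = {"B": [1.05, 1.00, 0.97], "F": [1.00, 1.00, 1.00], "C": [0.90, 1.00, 1.05]}
candidates = best_classes(np.array([1.0, 1.0, 1.0]), np.full(3, 0.02), refs)
```

A perfectly flat observed spectrum is closest to the flat "F" template here, mirroring how F types were identified in the text.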
Red foxes (Vulpes vulpes) and coyotes (Canis latrans) in an urban landscape: prevalence and risk factors for disease
Urbanized areas contain fragmented landscapes and abundant resources, resulting in concentrated and increased wildlife populations in relatively close contact with other wildlife species, humans, and their domestic pets, thereby posing novel disease risks and facilitating inter-specific disease transmission. We trapped and radio-collared 15 red foxes (Vulpes vulpes) and 14 coyotes (Canis latrans) in the urban landscape of Madison, Wisconsin, to determine the prevalence of disease among these canids and to examine how these canids were using the landscape. Using Fisher's exact probability tests, we found that coyotes had a significantly higher seroprevalence of Lyme disease (P = 0.002) and a higher prevalence of canine heartworm disease (P = 0.02) than foxes. Red foxes did not select specific habitat types in the urban landscape, but coyotes selected for forest and grass cover types, and avoided developed sites. Understanding the prevalence of disease in urban canid populations is important because diseases affecting urban canids cause morbidity and mortality and are transmissible to domestic dogs, and vice versa. Additionally, urban canids may serve as sentinels for zoonotic diseases such as Lyme disease and leptospirosis.
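The Fisher's exact probability test used for the comparisons above can be computed directly from a 2×2 contingency table with the standard library alone. The sketch below uses hypothetical counts, since the per-species positive/negative tallies are not restated in this excerpt:

```python
from math import comb

def fisher_exact_two_tailed(a, b, c, d):
    """Two-tailed Fisher's exact P for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one."""
    n = a + b + c + d
    r1, c1 = a + b, a + c        # first row and first column totals
    denom = comb(n, c1)

    def p(x):                    # P(top-left cell == x) under fixed margins
        return comb(r1, x) * comb(n - r1, c1 - x) / denom

    p_obs = p(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    # Small tolerance guards against floating-point ties
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-12))

# Hypothetical example: 2 of 15 foxes positive vs 10 of 14 coyotes positive
p_value = fisher_exact_two_tailed(2, 13, 10, 4)
```

With the actual positive counts from the study, the same call would reproduce the reported P values.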
Introduction
Urbanization has intensified in recent decades, with 85% of the human population in the United States living in urban landscapes (McCleery et al. 2014). In addition to a dense concentration of people in human-modified landscapes, dense concentrations of wildlife species share the same areas. Two such species are the red fox (Vulpes vulpes) and the coyote (Canis latrans). Both species exist in sizeable populations within and throughout most urban areas in North America (Gehrt et al. 2010;Bateman and Fleming 2012). Recent increases in urban canid populations have occurred because many canid species are generalists and can thrive by utilizing a variety of habitats and food resources, thus allowing them to colonize humandominated environments (Gehrt et al. 2010;McCleery et al. 2014).
One consequence of urbanization is increased interactions between humans, domestic animals, and wildlife. Veterinarians, wildlife managers, and the general public in urban areas are concerned about interactions between wild canids and domestic pets, including direct and indirect disease transmission across species (Grinder and Krausman 2001; Malmlov et al. 2014). Urban wildlife is in close proximity to humans, and is therefore viewed as a risk factor for zoonotic disease transmission (McCleery et al. 2014). This is especially relevant considering that over 60% of emerging infectious diseases are zoonotic, with over 75% of these originating in wildlife (Jones et al. 2008). Additionally, fragmentation of natural areas in urbanized landscapes and the pattern of use of these habitats by wildlife may lead to increased densities of vectors and hosts, possibly resulting in a higher prevalence of disease in urban areas (Bradley and Altizer 2007).
Most studies to determine the seroprevalence of diseases in red foxes and coyotes have been conducted in rural areas, finding varying prevalence rates of canine distemper virus (CDV), canine adenovirus (CAV), canine heartworm, canine parvovirus (CPV), Lyme disease (borreliosis), and leptospirosis. Given this known disease portfolio, Akerstedt et al. (2010) suggested that wild canids should be viewed as potential sources of infection to unvaccinated dogs. Similarly, Frölich et al. (2000) suggested that free-ranging foxes may become infected with certain diseases from domestic dogs. Additionally, several of the diseases carried by urban canids, such as leptospirosis and Lyme disease, can be directly or indirectly spread to humans (Leighton and Kuiken 2001).
Identifying and understanding the prevalence of disease in red foxes and coyotes is important because disease may cause significant morbidity and mortality in urban canid populations but has received relatively little research attention. Furthermore, understanding disease prevalence in wild canids in urban areas is especially important because these canids may act as disease reservoirs, increasing risk of infection for domestic pets, and because domestic pets might transmit disease to urban canids. Urban canids also can be used as sentinels to predict the prevalence of zoonotic diseases such as Lyme disease and leptospirosis. Therefore, the objectives of our study were as follows: (i) identify disease exposure in urban red foxes and coyotes and (ii) determine if specific land cover characteristics and the presence of domestic dogs could explain the presence or absence of select diseases in these wild canids.
We hypothesized that disease prevalence and exposure would be higher for canids that spent more time in land covers conducive to disease transmission and with increased potential interactions with domestic dogs.
Capture and monitoring
Using cable restraints, we live-captured red foxes and adult coyotes from January 2015 to April 2018. For full capture details, see Mueller et al. (2018). All ethical capture procedures and trapping regulations for cable restraints were followed (Association of Fish and Wildlife Agencies 2017), and all animal handling methods were approved by the University of Wisconsin Animal Care Use Committee (Protocol A01559), and the Wisconsin Department of Natural Resources (WDNR) (Permit # SCP-SOD-001-2014).
We fitted each animal with ear tags and a very high frequency (VHF) radio collar (Advanced Telemetry Systems, Isanti, MN; Model # M1950 for red fox and M2220B for coyote) or Global Positioning System (GPS) collar (Lotek Wireless Fish & Wildlife Monitoring, Newmarket, ON; Model #G5C175C). Once an animal was radio-collared and released, it was located 2-3 times within the first 5 days to ensure it was moving and alive. Following the initial 5-day period, each VHF-collared animal was located at least once per week for the entire duration that the radio collar functioned and remained on the animal. Each location was triangulated based on the intersections of at least three telemetry bearings taken within a maximum of 15 minutes of each other to reduce error based on animal movement (Schmutz and White 1990). Animals also were located using Global Positioning System readings if individuals were visually observed. To ensure the accuracy of triangulations, telemetry bearings were plotted and the estimated location of the animal was shown on a laptop computer to proof locations in the field (unpublished data, Radio-Tracker, John Cary, University of Wisconsin, Madison, WI). During the weekly location of each animal, each animal was tracked for a 5-hour period, where it was located once per hour during that period. Weekly tracking periods were systematically rotated around the 24-hour clock to ensure that temporal variation in activity was captured. All GPS collars collected locations at hourly intervals and followed the same data collection schedule as VHF radio collars.
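Estimating a location from three or more bearings, as described above, is commonly done with a least-squares line-intersection fit. The sketch below is a generic illustration of that approach, not necessarily the estimator implemented in the Radio-Tracker software used by the authors; coordinates and azimuths are hypothetical:

```python
import numpy as np

def triangulate(stations, bearings_deg):
    """Least-squares location from >=2 bearings. Each bearing, taken from
    a station at (x, y) with azimuth measured in degrees clockwise from
    north, defines a line; solve for the point minimizing the summed
    squared perpendicular distances to all lines."""
    A, b = [], []
    for (x, y), az in zip(stations, bearings_deg):
        t = np.radians(az)
        n = np.array([np.cos(t), -np.sin(t)])  # unit normal to the bearing line
        A.append(n)
        b.append(n @ np.array([x, y]))
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

# Three hypothetical stations whose bearings all cross at (1, 1)
est = triangulate([(0.0, 0.0), (2.0, 0.0), (1.0, -1.0)], [45.0, 315.0, 0.0])
```

With noisy field bearings the three lines rarely meet at a point, which is why a least-squares solution (rather than a pairwise intersection) is the usual choice.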
Sample collection and processing
For each trapped canid, we collected up to 6.0 ml of blood from the cephalic vein. We placed 1.5 ml in an EDTA tube and the remainder in a serum separator tube. Testing was completed at the University of Wisconsin-Madison School of Veterinary Medicine Clinical Pathology Laboratory, the Companion Animal Vaccine and Immuno Diagnostic Service Laboratory (CAVIDS), and the Wisconsin Veterinary Diagnostic Laboratory (WVDL).
Vector-borne disease screening
We used the SNAP 4Dx Plus (IDEXX Laboratories, Westbrook, ME) test, a rapid qualitative ELISA test that uses EDTA whole blood, to simultaneously detect heartworm (Dirofilaria immitis) antigen and antibodies against Lyme disease (Borrelia burgdorferi), Anaplasma phagocytophilum, A. platys, Ehrlichia canis and E. ewingii (Bowman et al. 2009). Additionally, Dirofilaria immitis microfilariae were noted when visualized in manual slide examination.
Serology
Samples with limited serum were first screened with the Canine VacciCheck kit (Spectrum Labs, Phoenix, AZ), a semiquantitative modified ELISA enzyme labeled assay for CAV, CDV, and CPV. This assay measures on a scale from 0 to 6, with a score of three corresponding to a protective antibody titer of 1:16 for CAV, 1:32 for CDV, and 1:80 for CPV (Mazar et al. 2009). If a sample had any result greater than 0 on this assay, it proceeded to a gold-standard quantitative confirmatory test, as described below. Based on data from a field and experimental trial of this test compared to gold-standard quantitative confirmatory tests, using a score of 1 or greater as positive for exposure and using our titer cutoffs of ≥1:16 for CDV and ≥1:20 for CAV and CPV, there were very few false-negatives, with sensitivity of 99.2% (CDV), 98.2% (CAV), and 99.4% (CPV) (Mazar et al. 2009).
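The two-stage workflow above (VacciCheck screen first, gold-standard confirmation for any score above 0) can be expressed as a small triage routine. This is a hypothetical sketch of the decision rule, not the laboratories' software:

```python
def triage_vaccicheck(scores):
    """Route semiquantitative VacciCheck scores (0-6, one per pathogen)
    through the screening workflow described in the text: any score
    greater than 0 sends the sample on to the gold-standard
    quantitative confirmatory test."""
    needs_confirmation = {}
    for pathogen, score in scores.items():
        if not 0 <= score <= 6:
            raise ValueError(f"invalid VacciCheck score for {pathogen}: {score}")
        needs_confirmation[pathogen] = score > 0
    return needs_confirmation
```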
Hemagglutination inhibition assay for CPV 2 antibody
We washed porcine red blood cells prepared in Alsever's solution (Sigma A3551) four times with a buffer of 2M NaH2PO4 and 9% NaCl at pH 6.8 prepared in Milli-Q water. Packed red blood cells were placed in buffer with 0.1% Bovine Serum Albumin (Sigma A9647) to make a 1% pRBC solution. Sera were 2-fold serially diluted in a 96-well plate with buffer. The prepared 1% pRBCs were added to all wells, and the assay was incubated at 4 °C for 4-24 hours. The dilution of the last well where hemagglutination was inhibited was the reported titer. Duplicate positive and negative controls were run along with samples. We considered a titer of ≥1:20 for parvovirus to indicate previous exposure (see Table 2).
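Reading the endpoint titer from a 2-fold serial-dilution row, as described above, reduces to finding the last inhibited well. A minimal sketch; the 1:10 starting dilution is an assumption for illustration (the text does not state it):

```python
def read_titer(inhibition, start_dilution=10, factor=2):
    """Return the reciprocal endpoint titer from a serial-dilution row.

    inhibition: booleans ordered from lowest to highest dilution,
    True where hemagglutination was inhibited.
    The reported titer is the last inhibited well, per the text.
    start_dilution=10 and factor=2 (a 2-fold series) are illustrative
    assumptions. Returns None when no well shows inhibition."""
    titer = None
    dilution = start_dilution
    for inhibited in inhibition:
        if inhibited:
            titer = dilution
        dilution *= factor
    return titer
```

A row inhibited through the third well then reads as a 1:40 titer, which meets the ≥1:20 exposure cutoff used for parvovirus.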
Serum virus neutralization for CAV 1 and 2 and CDV antibody
Serum samples were 2-fold serially diluted in a 96-well plate with appropriate cell culture medium (CAV: DMEM, CDV: MEM). A control plate with positive and negative reference sera was run concurrently. The appropriate viral stock was diluted to 100 TCID50 and added to all test and control sera wells. A viral back-titer was included on the control plate by serially diluting virus with culture medium as a check for appropriate dilution of stock. The remaining rows of the control plate were cell controls, without added virus. The plates were incubated for 1 hour at 37 °C. Cells from appropriate cell lines (CAV: MDCK, CDV: VERO) were added to control and test plates. Plates were incubated at 37 °C with 5% CO2 for 4-5 days and were examined for the presence of cytopathic effects. The antibody titer was the highest dilution of serum that neutralized the virus, preventing infection of cells and the resulting cytopathic effects. Titers ≥1:20 were considered seropositive for CAV, and ≥1:16 seropositive for CDV (see Table 2).
6-Serovar microscopic agglutination test for Leptospira
The microscopic agglutination test for the detection of Leptospira antibodies in animal sera was run at WVDL using a protocol developed and distributed by the National Veterinary Services Laboratory (NVSL) (NVSL 1987). This assay undergoes annual proficiency testing distributed by the World Organization for Animal Health (OIE). The serovars tested are the six most commonly found within canids in the midwestern United States: Leptospira interrogans bratislava, L. i. icterohemorrhagiae, L. i. canicola, L. i. pomona, L. i. grippotyphosa, and L. i. autumnalis. Titer results >1:100 were interpreted as consistent with current or past exposure based on previous research (Table 1), with titers >1:1600 suggestive of recent infection based on currently accepted cutoffs for a single titer in domestic dogs (Miller et al. 2011).
Red foxes (Vulpes vulpes) and coyotes (Canis latrans) in an urban landscape
Spatial data variables
Land cover types were aggregated into four categories: water (i.e. open bodies of water, woody wetlands, and emergent herbaceous wetlands), forest (i.e. deciduous, evergreen, mixed forest, and shrub and scrub), developed (i.e. lawn grasses and impervious surfaces accounting for 20-100% of cover), and non-woody undeveloped land cover (i.e. grassland and herbaceous, pasture and hay, and cultivated crops, hereafter referred to as grass).
Statistical analysis
We used a Fisher's exact probability test to determine if exposure to CPV, CDV, CAV, Lyme borreliosis, and leptospirosis varied significantly between species. We also used a Fisher's exact test to determine if infection with heartworm differed by species. We used two approaches to assess the potential impacts of habitat selection on disease exposure. First, we calculated selection ratios for each aggregated land cover type in the study area (McDonald et al. 2012). We compared the proportion of the study area in each aggregated land cover type to the proportion of telemetry locations within each land cover type for each animal. We used a t-test to determine if selection ratios differed significantly from a mean of 1, which would represent habitat use in exact proportion to its availability. Second, we developed resource selection functions for each species under a use-availability framework (Johnson et al. 2006). In order to account for potential error in our telemetry data due to animal movements, we created a buffer around each telemetry location by using the average distance each sex of each species moved between 1-hour telemetry locations (Mueller 2017). The buffer distance for foxes was 508.66 m for males and 272.77 m for females, and the buffer distance for coyotes was 346.49 m for males and 390.07 m for females (Mueller 2017). Within each individual buffer, we quantified the abundance of domestic dogs and the proportion of each aggregated land cover type. We generated random locations throughout the study area in equal proportion to the number of locations gathered for each species by sex, resulting in a 1:1 ratio of used and available locations. All covariates were centered and standardized prior to analyses. We did not include telemetry data gathered during the winter months (November-February) as we assumed the active vectors for disease transmission (mosquitoes and ticks) were inactive during these periods.
We further limited our analyses to animals for which we had at least 10 locations.
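The selection-ratio approach above (used proportion of each cover type divided by its availability, tested against a mean of 1 with a t-test) can be sketched with the standard library. The ratios below are made-up illustration values, not the study's data:

```python
import statistics

def selection_ratio(prop_used, prop_available):
    """Selection ratio for one animal and one cover type: the proportion
    of telemetry locations in the type divided by the proportion of the
    study area in that type (>1 = selection, <1 = avoidance)."""
    return prop_used / prop_available

def t_vs_one(ratios):
    """One-sample t statistic testing whether the mean selection ratio
    differs from 1 (i.e., use in exact proportion to availability).
    Returns (t, df); compare |t| against a t table for the p-value."""
    n = len(ratios)
    m = statistics.mean(ratios)
    s = statistics.stdev(ratios)              # sample standard deviation
    return (m - 1.0) / (s / n ** 0.5), n - 1

# Illustration only: five animals' ratios for a cover type that makes up
# 25% of the study area (hypothetical numbers)
ratios = [selection_ratio(u, 0.25) for u in (0.45, 0.525, 0.375, 0.6, 0.475)]
t, df = t_vs_one(ratios)
```

Here every animal uses the cover type more than its availability, so the mean ratio sits well above 1 and the t statistic is large and positive.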
Prevalence of disease
We captured and radio-collared 15 red foxes (10 males and 5 females) and 14 coyotes (7 males and 7 females) in our study area. Of these canids, none of the foxes were infected with heartworm, while five (35.7%) coyotes were infected. Only one (6.7%) fox had exposure to Lyme borreliosis. Nine of the fourteen (64.3%) radio-collared coyotes tested positive for Lyme exposure. All 14 coyotes and 15 foxes were seronegative for ehrlichiosis, and only 1 of the 15 (6.7%) foxes was seropositive for Anaplasma antibody. Not all blood samples taken from the canids at capture were of sufficient volume to run every disease test, so some canids could not be tested for exposure to parvovirus, adenovirus, distemper, or leptospirosis. Of the 15 foxes and 11 coyotes that were tested for CPV antibody, 8 foxes (53.3%) and 5 coyotes (45.5%) tested positive. For CAV, 1 of 10 (10%) coyotes and 2 of 14 (14.3%) foxes were seropositive. Three of eleven (27.3%) coyotes and two of thirteen (13.3%) foxes were seropositive for CDV antibody. Lastly, 1 of 14 (7.1%) coyotes and two of fifteen (13.3%) foxes had exposure to leptospirosis, with one fox having a titer of 1:3200 against L. i. grippotyphosa and 1:1600 against L. i. autumnalis, titers consistent with active or recent infection. Exposure to CPV (P = 0.50), CAV (P = 0.63), CDV (P = 0.35), and leptospirosis (P = 0.53) did not differ between the two species. However, coyotes had a higher prevalence of both Lyme antibody (P = 0.002) and canine heartworm antigen (P = 0.02) than red foxes.
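The species comparisons above use Fisher's exact test. As a check, a from-scratch two-sided implementation (standard library only) reproduces the reported Lyme and heartworm P-values from the counts in this paragraph:

```python
from math import comb

def fisher_exact_2sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    r1, r2 = a + b, c + d              # row totals (e.g. foxes, coyotes)
    c1 = a + c                         # positives column total
    denom = comb(r1 + r2, c1)

    def prob(k):                       # P(first cell equals k)
        return comb(r1, k) * comb(r2, c1 - k) / denom

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(k) for k in range(lo, hi + 1) if prob(k) <= p_obs + 1e-12)

# Counts from this paragraph: Lyme exposure 1/15 foxes vs 9/14 coyotes;
# heartworm infection 0/15 foxes vs 5/14 coyotes.
p_lyme = fisher_exact_2sided(1, 14, 9, 5)
p_heartworm = fisher_exact_2sided(0, 15, 5, 9)
```

Both values round to the reported P = 0.002 and P = 0.02.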
Habitat selection
Red foxes appeared to exhibit random selection for land cover across the study area, with no selection ratios differing significantly from 1 (Fig. 2). Coyotes exhibited significant selection for both forest and grass cover types and avoided developed areas (Fig. 2). When including the distribution of domestic dogs and accounting for animal movements, we found that foxes exhibited random selection of all cover types while coyotes exhibited marginally significant (P = 0.068) avoidance of developed cover types (Table 1).
Canine heartworm
Canine heartworm is a mosquito-borne disease caused by the nematode Dirofilaria immitis. Pulmonary and respiratory function can be affected when the parasites inhabit the heart and arteries of the canid (Anderson 2001; Nelson et al. 2003). Thirty-six percent of the coyotes trapped in Madison, WI, were infected with heartworm, and heartworm prevalence in coyotes was significantly greater than in red foxes (0%). The abundance of the parasite may differ by physical location, amount of precipitation, and density of mosquito vectors (Wixsom et al. 1991). Previous studies have reported that canids living near wooded wetlands and rivers have the highest prevalence of heartworm (Pappas and Lunzman 1985; Gortazar et al. 1994; Nelson et al. 2003). Holzman (1992) found that 47% of coyotes in farmland and forests in southern Georgia were infected. In urban areas, canine heartworm may also increase in prevalence because of an increased abundance of definitive hosts such as domestic dogs (Paras et al. 2014). Although we saw an increased prevalence of heartworm in our urban study area, that was not true in urban Tucson, AZ, where coyotes were unaffected by heartworm (Table 2; Grinder and Krausman 2001). The arid environment in Tucson may not support mosquito habitat, thereby accounting for the lack of heartworm in coyotes. We found that none of the foxes in our study were infected with heartworm. The trend of heartworm being less prevalent in foxes has been found in other studies as well (Stuht and Youatt 1972; King and Bohning 1984; Pappas and Lunzman 1985; Wixsom et al. 1991). Overall, foxes tend to show lower levels of heartworm infection compared to coyotes or domestic dogs (Wixsom et al. 1991). Why prevalence differs between canid species is largely unknown, but it is speculated that the red fox is not an effective host for the parasite, and the parasite is unable to reach sexual maturity and will not persist by infecting only foxes (Stuht and Youatt 1972; Pappas and Lunzman 1985).
Stuht and Youatt (1972) speculated that the parasite requires other canid hosts, such as coyotes and domestic dogs, to infect and produce microfilaria which then can be transmitted to foxes. Perhaps screening the blood of foxes for both microfilariae and for antibody against heartworm is warranted to better understand the role foxes play in transmission.
Lyme borreliosis
Lyme borreliosis is caused by the bacterium Borrelia burgdorferi carried in ixodid ticks and is the most prevalent vector-borne disease in the United States (Schwartz et al. 2017). In the United States, Lyme disease is most prevalent in the Northeast and in the upper Midwest, but it also has been reported along the Pacific coast (Kugeler et al. 2015;Schwartz et al. 2017). We found that coyotes (64%) were more likely to be exposed to Lyme borreliosis than foxes (7%).
Urban canids are important sentinels for this zoonotic disease because the prevalence of Lyme disease for these canids has been found to predict the prevalence of Lyme disease in humans (Olson et al. 2000). Domestic dogs also have been used as a sentinel species (Lindenmayer et al. 1991;Olson et al. 2000). Olson et al. (2000) found coyotes to have lower seropositivity than domestic dogs, despite coyotes seeming to have a greater risk of exposure to Lyme disease.
The lower prevalence of Lyme disease in red foxes may be due to foxes being resistant to the bacterium. Heidrich et al. (1999) concluded that the red fox is a minor reservoir for the bacterium because research in Germany showed that only seven of 100 red fox skin samples tested positive for B. burgdorferi. Dumitrache et al. (2015) and Kahl and Geue (1998) posited that red foxes likely play a minor role in the persistence of Lyme disease.
Canine parvovirus
CPV is a highly contagious viral disease that was first reported in domestic dogs in the late 1970s (Goddard and Leisewitz 2010; Zourkas et al. 2015). Transmission among wild canids and domestic dogs is facilitated because CPV persists in the environment and is easily acquired through direct contact with contaminated feces, urine, and other bodily secretions or through contact with infected fomites or environments (Osterhaus et al. 1980; Zourkas et al. 2015). We found that approximately 50% of the coyotes and foxes tested were exposed to CPV, although many other studies have identified much higher prevalences in the United States (Table 2). Canuti et al. (2017) found that CPV was more prevalent in coyotes than in foxes; however, we found no difference in exposure by species. Transmission may also increase in urban areas through increased contact between domestic dogs and other urban wildlife (Truyen et al. 1998). However, wild canid-domestic dog interaction was not a significant variable in any of our models, so contact between wild canids and domestic dogs is likely uncommon within our study area; in addition, domestic dogs in Madison, WI, are commonly vaccinated, decreasing the risk for urban wildlife to contract parvovirus from indirect or direct contact with domestic dogs.
Leptospirosis
Leptospira interrogans is a bacterial pathogen that is most commonly transmitted to wildlife and humans through contaminated drinking water (Leighton and Kuiken 2001). We found low prevalence of exposure to leptospirosis in red foxes (13%) and coyotes (7%). Prevalence was similar to many other studies (Table 2), but lower than that reported by Amundson and Yuill (1981) for red foxes (47%) trapped in southwestern Wisconsin. Amundson and Yuill (1981) showed that juvenile foxes were more commonly exposed, potentially explaining why our results showed lower exposure in adult foxes. The infective titer documented in one red fox indicates that red foxes in urban environments may pose a risk for human leptospirosis, since foxes regularly used developed areas in Madison, WI.
Canine distemper
CDV is transmitted primarily through contact with oral, respiratory, or ocular fluids, and the infective virus can be shed up to 90 days after exposure (Williams 2001). We found relatively low exposure to CDV in urban red foxes (13%) and coyotes (27%), and prevalence did not vary significantly with species. Prevalence for red foxes was similar to that reported in previous studies in Wisconsin (Table 2, Amundson and Yuill 1981). Prevalence in coyotes was similar to previous studies of urban coyotes ( Table 2, Grinder and Krausman 2001). Low levels of canid exposure to CDV may be explained by the widespread, routine vaccination of domestic dogs against the virus.
Canine adenovirus
CAV, also known as infectious canine hepatitis, is caused by a double-stranded DNA virus and is transmitted via urine, nasal and conjunctival secretions, and feces (Woods 2001). Prevalence for CAV was low in coyotes (10%) and red foxes (14%) in our study. In the western United States, prevalences above 50% are common in adult coyotes (Table 2, Cypher et al. 1998). Only 3% of red foxes tested positive for CAV antibody in Wisconsin (Amundson and Yuill 1981), and our results reflect similarly low prevalence. From infected foxes, the virus may be transmitted to unvaccinated domestic dogs and other wild canids (Woods 2001).
Comparison to other studies
In general, coyotes and foxes in Madison, WI, were not exposed to many of the pathogens we evaluated. When compared to other urban canids, seroprevalence was lower for CAV and CPV (Table 2). Coyotes trapped in Madison had similar exposure to CDV as coyotes trapped in Tucson, AZ, and red fox showed similar seroprevalence to CDV as foxes sampled previously in Wisconsin and in Norway (Table 2). Canids trapped in Madison, WI, were not exposed to L. interrogans as often as foxes sampled in Wisconsin previously (Table 2). Coyotes sampled in Madison, WI, were similar to rural Texas coyotes in exposure to Lyme borreliosis ( Table 2). The disease prevalences found in the urban canids in Madison, WI, were not always similar to other urban canid disease prevalences and, in some situations, more closely aligned with prevalences found in rural environments. Madison, WI, may not be a typical urban environment because urban green space is relatively common throughout the city. In 2016, 13.4% of Madison's adjusted city area was parkland (Harnick et al. 2017). In comparison, Milwaukee, WI (the largest city in Wisconsin) had 10.3% parkland and Tucson, AZ, had 3.2% parkland in the adjusted city area (Harnick et al. 2017). The 100 largest urban cities in the United States averaged 3.7 parks per 10 000 residents in 2016; but Madison, WI, has about 11.6 parks per 10 000 residents (Harnick et al. 2017). Additionally, Madison, WI, hosts three relatively large natural areas (the UW Arboretum, UW Lakeshore Nature Preserve, and the Owen Conservation Park) which add considerable acreage of contiguous green space in comparison to other urban cities. Disease prevalences found in our study may be similar to disease prevalences in rural coyotes because many of our coyotes were trapped in and remained near the conservation parks and used the urban green spaces in Madison, WI. 
These urban green spaces may mimic rural areas in that they may support similar mosquito or tick abundances, and they likely serve as transmission sites for vector-borne diseases within the developed land cover of an urbanized landscape.
Habitat relationships with disease exposure
We hypothesized that disease prevalence and exposure would be higher for canids that spent more time in land covers conducive to disease transmission. The spatial partitioning employed by our study species seemed to affect the disease risk for urban coyotes and red foxes. We found radio-tracked coyotes in the Madison area selected for natural areas and avoided areas with moderate to high development. The habitat selection exhibited by these coyotes may help explain why coyotes had a higher prevalence of vector-borne diseases (canine heartworm and Lyme disease). The natural areas that coyotes selected for overlapped with habitat conducive to supporting the tick and mosquito vectors for these diseases. Additionally, vectors may become more concentrated in the fragmented natural areas of urban landscapes (Bradley and Altizer 2007). Tick abundance has been found to be positively associated with deciduous forests and negatively associated with grasslands (Guerra et al. 2002), but in an urbanized landscape, forest and grassland land cover are often fragmented. The fragmentation of these habitats may cause these areas to have high densities of ticks and an abundance of white-footed mice (Peromyscus leucopus), which are reservoirs for Lyme disease, thereby resulting in a higher prevalence of Lyme disease in urban areas (Bradley and Altizer 2007). For example, five out of the six (83%) coyotes trapped in the UW-Madison Arboretum, a 485-hectare natural area consisting of wooded wetlands, deciduous forest, and grassland habitats, tested positive for heartworm antigen and Lyme antibody. Tick abundance has been found to be associated with forested land cover (Guerra et al. 2002; Wood and Lafferty 2013), and, in our study, coyotes were found to use natural areas more than the red foxes in the study area (Mueller 2017).
In contrast, red foxes selected for areas with low development and avoided natural areas. More developed areas (where red foxes were commonly found and where humans live and work) are typically treated with pesticides and, with mowed vegetation and little standing water, offer less suitable habitat for ticks and mosquitoes.
Contrary to our hypothesis that disease prevalence and exposure in urban coyotes and red foxes would increase with increased potential interaction with domestic dogs, we did not find a relationship between potential contact with domestic dogs and disease exposure in urban canids. First, domestic dogs tend to be more active during the day and reside inside at night, whereas urban coyotes and foxes are active at night and to a lesser degree around crepuscular hours, possibly limiting physical proximity between dogs and other canids. Second, coyotes tend to use urban green spaces, such as the UW Arboretum and Owen Park, while foxes more often use developed sites. We would expect foxes to interact more often with domestic dogs, but our analysis did not identify domestic dog contact as an important factor.
Conclusions
When compared to coyotes and foxes studied in other locations, urban red foxes and coyotes in Madison appear to be in relatively good health (Table 2). Coyotes, which selected for urban forest and grassland habitats, had higher Lyme seroprevalence and heartworm infection prevalence than red foxes. Both of these diseases are vector-borne, and the habitats used by coyotes are also adequate habitat for ticks and mosquitoes. The developed areas more often used by foxes are likely treated for mosquitoes and ticks either by municipal governments or by homeowners. Additionally, red foxes may be poor definitive hosts for heartworm, so coyotes are more likely to influence the transmission cycle in Madison. Overall, coyotes in urban Madison were found to have a low pathogen load. Coyotes were more likely to be exposed to Lyme disease; therefore, they may contribute to the natural Lyme disease transmission cycle in urban areas. However, coyotes were less likely to inhabit developed urban environments, so domestic animals and humans have relatively limited opportunity for interaction with coyotes in Madison. Red fox exposure to Leptospira interrogans could pose possible threats to domestic animal and human health because foxes select for urban developed habitats in closer proximity to people and their pets. Additionally, one fox had a titer indicative of recent exposure to L. interrogans. Further research into modes of leptospirosis and Lyme disease transmission among hosts in urban areas is warranted.
Data Availability
Data will be made available upon request.
Genomic and evolutionary relationships among wild and cultivated blueberry species
Background
Blueberries (Vaccinium section Cyanococcus) are an economically important fruit crop in the United States. Understanding genetic structure and relationships in blueberries is essential to advance the genetic improvement of horticulturally important traits. In the present study, we investigated the genomic and evolutionary relationships in 195 blueberry accessions from five species (comprising 33 V. corymbosum, 14 V. boreale, 81 V. darrowii, 29 V. myrsinites, and 38 V. tenellum) using single nucleotide polymorphisms (SNPs) mined from genotyping-by-sequencing (GBS) data.
Results
GBS generated ~751 million raw reads, of which 79.7% were mapped to the reference genome V. corymbosum cv. Draper v1.0. After filtering (read depth > 3, minor allele frequency > 0.05, and call rate > 0.9), 60,518 SNPs were identified and used in further analyses. The 195 blueberry accessions formed three major clusters on the principal component (PC) analysis plot, in which the first two PCs accounted for 29.2% of the total genetic variance. Nucleotide diversity (π) was highest for V. tenellum and V. boreale (0.023 each) and lowest for V. darrowii (0.012). Using TreeMix analysis, we identified four migration events and deciphered gene flow among the selected species. In addition, we detected a strong V. boreale lineage in cultivated blueberry species. Pairwise SweeD analysis identified a wide selective sweep (encompassing 32 genes) as a strong signature of domestication on the scaffold VaccDscaff 12. In this region, five genes encoded topoisomerases, six genes encoded CAP-Gly domain-containing linker proteins (which regulate the dynamics of the microtubule cytoskeleton), and three genes coded for GSL8 (involved in the synthesis of the cell wall component callose). One of the genes, augustus_masked-VaccDscaff12-processed-gene-172.10, is a homolog of Arabidopsis AT2G25010 and encodes a MAINTENANCE OF MERISTEMS-like protein involved in root and shoot growth.
Additional genomic stratification by admixture analysis identified genetic lineages and species boundaries in blueberry accessions. The results from this study indicate that V. boreale is a genetically distant outgroup, while V. darrowii, V. myrsinites, and V. tenellum are closely related.
Conclusion
Our study provides new insights into the evolution and genetic architecture of cultivated blueberries.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12870-023-04124-y.
V. corymbosum is the prime source of germplasm in cultivated highbush blueberry [3]. Domesticated V. corymbosum collections have been used extensively in breeding programs. During the last 50 years, several southern species were hybridized with V. corymbosum to develop the southern highbush (SHB) cultivars, specifically to lower the number of chilling hours required to flower and to adapt to warmer climates [4][5][6]. Such breeding efforts produced a range of interspecific hybrids, thus reducing the genetic contribution of V. corymbosum from an average of 97% in the original northern highbush (NHB) blueberry accessions to 72% in the average SHB cultivar [7]. Commercial blueberry varieties have also been derivatives of wide hybridization for developing homoploid hybrids with desirable traits. Recently developed hybrids result from crosses between domestic cultivars and wild species, broadening the genetic variance of these plants in the hope that they inherit superior traits [8]. Heteroploid crosses are partially fertile, which offers the possibility of introgression into cultivated germplasm. Blueberries are recently domesticated perennial fruit-bearing plants, and their popularity has been growing because of their palatability and the health benefits associated with their consumption [9]. Blueberries possess several essential vitamins, secondary metabolites, and antioxidants (e.g., anthocyanins, flavonols, hydroxycinnamic and hydroxybenzoic acids, and proanthocyanins) [10].
Blueberry cultivars can be grouped according to their low-temperature requirement as high, moderate, and low chilling. Subjecting perennial plants to chilling conditions for an extended period stimulates post-winter flowering initiation [9]. When first domesticated, blueberries were predominantly grown in the northern parts of the United States, possibly because of the suitable edaphoclimatic conditions that promoted the growth of plants, such as pH 4.8 and temperatures of 0-7 °C [11]. Increasing temperatures, droughts, and adverse weather conditions significantly affect blueberry production [12]. Drought is a major factor in decreasing yields, and high temperatures can negatively affect pollination and fruit development. High atmospheric UV levels also can negatively affect blueberry production and fruit quality [4,13,14].
Interspecific hybridizations with wild southern lowbush species led to varieties (SHB blueberries) that are tolerant to diverse climates [4], thus allowing them to maintain their average yield and fruit quality. The consistent increase in atmospheric temperatures over the past few decades makes enhanced adaptation to heat stress during the fruit development phase an absolute necessity to maintain or increase the world's blueberry production. These traits can be introgressed from diverse species, such as V. darrowii or V. tenellum, native to neotropical regions, or V. myrsinites, which is a tetraploid [1].
Blueberry was not cultivated until the early 1900s, so it is one of the most recent berry crops to be domesticated. In 1908, Dr. F.V. Coville, a US Department of Agriculture researcher, studied wild blueberries and selected plants bearing large-sized berries for breeding. Later, he made crosses among the best selections to develop the first 15 commercial varieties of blueberries. The "Bluecrop," "Blueray," and "Earliblue" varieties that he developed are still widely grown and highly popular cultivars today. Thus, he revolutionized blueberry production. Several NHB and SHB cultivars are currently cultivated across the United States, Canada, and many other countries. Several researchers have established genetic relationships among the wild and cultivated blueberry species using morphological characteristics and intercrossability.
Since the development of advanced molecular markers, the population genetic structure of diploid and tetraploid blueberries has been addressed using various DNA marker systems [15][16][17][18][19][20][21]. Understanding blueberry evolution, migration events, and species boundaries between wild blueberry species is essential to continue breeding new cultivars for environmental adaptability while maintaining genetic diversity [22]. These boundaries are governed by gene flow and speciation. Shared polymorphisms define species boundaries that could be sympatric and allopatric [23]. Vaccinium species from the Cyanococcus section are widespread throughout eastern North America, thriving in diverse environments, thus widening the genetic divergence among the species [5,24]. Wild species in the neotropics and subtropics generally tolerate heat and drought [8]. These wild species could be a repository of favorable alleles for future use in introgression programs. Recently, we have shown that the southern species V. darrowii exhibits a differential response of morpho-physiological and molecular mechanisms compared to V. corymbosum plants under heat stress [25]. Thus, analyzing the genetic structure of species can help identify diverse lineages and plays an important role in breeding.
Reduced representation libraries combined with next-generation sequencing unite SNP discovery and scoring in a single process, making diversity-targeted studies efficient [26]. Genotyping-by-sequencing (GBS) resolves genomic-level differences that can be converted into markers usable for several downstream applications [26,27]. Large genomic datasets can now be used to analyze the genetic structure of a population, species evolution, genetic lineages, and selection signals. In the present study, we selected 33 highbush (northern and southern) V. corymbosum accessions and 162 clones representing four wild blueberry species (V. boreale from a northern region; V. darrowii, V. tenellum, and V. myrsinites from a southern region) to understand their genetic relationships and level of inter-species admixture. The selected blueberry accessions/species were sequenced using GBS, and high-quality SNPs were used to detect the genetic lineages and identify species boundaries among the selected blueberry accessions. In addition, we revealed migration events and identified strong selection signals related to domestication in blueberry.
GBS summary
GBS of 195 blueberry accessions yielded ~751 million raw reads of 75 bp (Table 1). The average number of raw reads per sample was 3.7 million. Barcode tags with a minimum of three read counts were used for SNP calling. The V. corymbosum cv. Draper v1.0 reference genome was used for read mapping [28]. Specifically, we used the 12 scaffolds representing the longest haplotype of each blueberry chromosome. About 588 million reads (an average of 2.9 million reads per sample) were mapped to the reference genome sequence. The overall mapping percentage to the reference genome was 79.7% (Table 1).
The SNPs were filtered using 1) read depth, DP > 3, 2) MAF > 0.05, and 3) call rate > 0.9. After quality filtering and trimming, 60,518 SNPs were obtained from the sequencing data. The average number of SNPs per kilobase of genome length ranged from seven to 10. The scaffold VaccDscaff 1 had the highest number of SNPs (6200), and the scaffold VaccDscaff 12 had the lowest (4091) (Table 2).
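The three quality filters described above can be illustrated with a short sketch. This is not the pipeline used in the study (SNP calling was done with GB-eaSy); it is a toy example, with invented marker names and genotype data, that applies the stated thresholds (DP > 3, MAF > 0.05, call rate > 0.9) to diploid genotypes coded as alt-allele dosage (0/1/2, with None for missing calls).

```python
# Toy SNP-filtering sketch using the thresholds quoted in the text.
# Genotypes: 0/1/2 alt-allele dosage per diploid sample; None = missing call.

def call_rate(genos):
    return sum(g is not None for g in genos) / len(genos)

def maf(genos):
    called = [g for g in genos if g is not None]
    if not called:
        return 0.0
    p = sum(called) / (2 * len(called))   # alt-allele frequency (diploid coding)
    return min(p, 1 - p)                  # minor allele frequency

def passes_filters(mean_dp, genos, min_dp=3, min_maf=0.05, min_call=0.9):
    return (mean_dp > min_dp
            and maf(genos) > min_maf
            and call_rate(genos) > min_call)

snps = {
    "chr1:100": (12.0, [0, 1, 2, 1, 0, 1, 0, 2, 1, 1]),       # passes all filters
    "chr1:200": (15.0, [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]),       # MAF = 0.05, fails
    "chr1:300": (2.0,  [0, 1, 1, 0, 1, 1, 0, 1, 1, 0]),       # low depth, fails
    "chr1:400": (10.0, [0, 1, None, None, 1, 0, 1, 1, 0, 1]), # call rate 0.8, fails
}
kept = [k for k, (dp, g) in snps.items() if passes_filters(dp, g)]
print(kept)  # ['chr1:100']
```

Note that the MAF filter is strict (`> 0.05`), so a site at exactly 0.05 is removed, as in the second toy record.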
Principal component analysis
A total of 60,518 SNPs were used in PCA for genetic differentiation of the selected blueberry accessions. The first two principal components (PCs) accounted for 29.2% of the total genetic variation (axis X = 21% and axis Y = 6.9%). All 195 blueberry accessions formed three major clusters on the PCA plot (Fig. 1A). Group I included all 14 V. boreale accessions, whereas group II included all 33 V. corymbosum accessions, which represent the cultivated pool used in the study. The remaining 148 accessions belonging to V. darrowii, V. myrsinites, and V. tenellum clustered into group III. Group I, comprising the V. boreale accessions, was the most distant. In contrast, groups II and III were closer to each other than to group I. Within the V. corymbosum group, the accessions were tetraploid except for NJOPB-8, which was diploid and was separated from the tetraploid V. corymbosum cluster (Fig. 1A).
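The per-axis percentages quoted above come from the standard PCA decomposition of a centered genotype matrix. The following minimal sketch (an assumed workflow, not the authors' code) shows how the fraction of variance explained by the first two PCs is obtained from a samples-by-SNPs dosage matrix; the data here are random stand-ins, so the printed percentage will not match the study's 29.2%.

```python
# Minimal PCA-by-SVD sketch for a genotype dosage matrix (samples x SNPs).
import numpy as np

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(20, 100)).astype(float)  # toy 0/1/2 genotypes

X = G - G.mean(axis=0)                  # center each SNP column
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)         # per-PC fraction of total variance
print(f"PC1 + PC2 explain {100 * explained[:2].sum():.1f}% of the variance")
```

In practice, scores for plotting (as in Fig. 1A) are the projections of the centered matrix onto the right singular vectors.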
To understand how V. boreale, V. darrowii, V. myrsinites, and V. tenellum are genetically related to V. corymbosum, we performed a set of focused PCAs, each including only the SNPs polymorphic within the corresponding pair of taxa. The first two PCs for V. corymbosum vs V. boreale, V. corymbosum vs V. darrowii, V. corymbosum vs V. myrsinites, and V. corymbosum vs V. tenellum accounted for 6.5, 15.5, 8.9, and 9.2% of the genetic variation, respectively (Fig. S1). PCA separated the individual species and formed two major clusters, one with cultivated V. corymbosum and the other with sub-clusters of the wild species. Because the objective of this study was to identify bridge accessions for use in introgression, we note that NJ87-29-23, a known V. corymbosum accession, was located between the clusters of V. boreale and V. corymbosum (Fig. S1). In addition, three V. corymbosum cultivars, 'Biloxi', 'Sunshine blue', and 'Pink Lemonade', were distant from the remaining V. corymbosum cultivars.
Genetic diversity and population differentiation
We estimated nucleotide diversity to assess genetic variation among the accessions of each species (Table 3). Genome-wide nucleotide diversity was highest for V. tenellum and V. boreale (0.023 each) and lowest for V. darrowii (0.012). To understand the degree of genetic differentiation between the selected blueberry accessions, we used F ST analysis with 95% confidence intervals. The highest pairwise genetic differentiation was between V. myrsinites and V. boreale (F ST = 0.42); the remaining pairwise values, including the lowest, are given in Table S1. Highly differentiated SNPs could be helpful for locating negative selection footprints.
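The paper does not state which F ST estimator was used, so as a hedged illustration the sketch below implements one common choice, Hudson's per-site estimator, combined across sites as a ratio of averages. Inputs per site are the alt-allele frequencies (p1, p2) and the number of sampled allele copies (n1, n2) in the two populations; the data are invented.

```python
# Hudson's per-site FST estimator (one common choice; an assumption here),
# combined across sites as a ratio of averages.

def hudson_fst(sites):
    num = den = 0.0
    for p1, n1, p2, n2 in sites:
        num += ((p1 - p2) ** 2
                - p1 * (1 - p1) / (n1 - 1)
                - p2 * (1 - p2) / (n2 - 1))
        den += p1 * (1 - p2) + p2 * (1 - p1)
    return num / den

# Toy data: strongly differentiated sites give high FST, shared ones near zero.
diverged = [(0.9, 40, 0.1, 40), (0.85, 40, 0.05, 40)]
similar  = [(0.5, 40, 0.45, 40), (0.3, 40, 0.35, 40)]
print(round(hudson_fst(diverged), 2), round(hudson_fst(similar), 2))
```

Small negative estimates can occur by sampling noise for undifferentiated populations and are conventionally truncated to zero when reported.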
Phylogenetic analysis
The genetic relationships of the selected 195 blueberry accessions are illustrated in the neighbor-joining (NJ) tree in Fig. 1B. The 195 accessions clustered into three separate groups: 1) V. corymbosum, 2) V. boreale, and 3) a group with a mixture of V. darrowii, V. myrsinites, and V. tenellum accessions. The southern wild species formed a single cluster regardless of their ploidy. The phylogenetic clustering indicated that V. darrowii, V. myrsinites, and V. tenellum do not have clear species boundaries, revealing reticulate evolution. The results of the NJ tree were highly consistent with those of PCA (Fig. 1A).
To identify sub-populations within species and estimate genetic relationships within them, NJ trees were developed separately for each species (Fig. S3). V. darrowii, V. myrsinites, and V. tenellum accessions were each divided into three sub-clusters, whereas V. boreale and V. corymbosum accessions were each grouped into two clusters. In V. corymbosum, the northern highbush (NHB) accessions formed a separate group, but six NHB accessions grouped with the southern highbush (SHB) accessions. Thus, there was no clear separation of the SHB accessions from all the NHB accessions.
Admixture analysis
Principal component analysis (PCA) and NJ trees singled out V. corymbosum and V. boreale accessions as separate groups and differentiated them from V. tenellum, V. darrowii, and V. myrsinites accessions. For genetic differentiation among V. tenellum, V. darrowii, and V. myrsinites accessions, we used admixture analysis with the Landscape and Ecological Association model [29]. We ran ten replicates for each number of clusters (K = 1 to 10) and examined the results for K between 2 and 6 (Fig. 2A). At K = 2, the accessions comprising V. corymbosum and V. boreale were first separated (with one exception) from the southern wild accessions and formed subgroup I (Fig. 2B). At K = 3, V. boreale accessions were separated from subgroup I and formed subgroup III. Therefore, the V. boreale genome seems to be distinct from the other species included here. At K = 4, subgroup II was further divided into subgroup V (comprising V. tenellum accessions) and subgroup VI (comprising V. darrowii and V. myrsinites accessions), with some degree of admixture. At K = 5 and K = 6, the V. darrowii and V. myrsinites accessions did not form separate subgroups despite differences in ploidy levels, which indicates that these two species are highly admixed. The cross-entropy curve (Fig. 2A) plateaued at K = 4, indicating a statistically supported lineage pattern; thus, the population can be grouped into four subpopulations. These results are largely consistent with the phylogenetic and PCA analyses. Further, the study suggested that very few accessions have a highly homogenous genetic background and that some admixture exists in several accessions, which may be due to the high frequency of interspecific hybridization that is common in the Vaccinium genus [21].
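The plateau-based choice of K described above can be sketched as a simple rule: accept the smallest K beyond which the drop in cross-entropy becomes negligible. The cross-entropy values below are invented for illustration (they are not the study's scores), but the logic mirrors the criterion applied to the sNMF output.

```python
# Sketch of plateau-based K selection on sNMF-style cross-entropy scores.
# Assumption: "plateau" means the next increment of K improves cross-entropy
# by less than a small tolerance.

def best_k(ks, cross_entropy, tol=0.005):
    for i in range(1, len(ks)):
        if cross_entropy[i - 1] - cross_entropy[i] < tol:
            return ks[i - 1]          # the previous K started the plateau
    return ks[-1]

ks = [1, 2, 3, 4, 5, 6]
ce = [0.62, 0.55, 0.50, 0.47, 0.468, 0.467]   # hypothetical mean scores
print(best_k(ks, ce))  # -> 4
```

In practice the mean (or minimum) cross-entropy over the ten replicates per K would be fed into such a rule, with the curve inspected visually as in Fig. 2A.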
TreeMix analysis
Lineage sorting is an evolutionary process that shapes relationships among species and is an essential mechanism for domestication and adaptation. To model lineage sorting between the species, TreeMix analysis was conducted by inferring ancestry via maximum likelihood (ML) analysis, which uses genome-wide allele frequencies and then identifies potential gene flow from a residual covariance matrix. The ML tree (Fig. 3A) suggested four migration events between the selected populations. One event occurred from V. boreale to V. corymbosum, indicating lateral gene flow involving their common ancestor. This event had the highest migration weight, 0.59 (Fig. 3A). Another migration event occurred from V. boreale to V. tenellum, with a migration weight of 0.36. These two migration events indicated gene flow from V. boreale to the other two species (V. corymbosum and V. tenellum), and V. boreale appears to be ancestral to these two species. The third migration event occurred from V. tenellum to V. darrowii, with a migration weight of 0.14.
The fourth migration event with a migration weight of 0.18 occurred between two subtrees, as shown in the Fig. 3A.
The residual fit of the maximum likelihood tree in Fig. 3A is plotted in Fig. 3B. These plots suggested that V. darrowii and V. myrsinites are closer to each other than any other pair and are hence candidates for the admixture events observed in our study. Further, V. corymbosum was closer to V. myrsinites than to V. darrowii and V. tenellum.
Pairwise selective sweep analysis
To identify genomic loci underlying the domestication of blueberries, we used pairwise selective sweep analysis and detected signatures of selective sweeps for each species in comparison with V. corymbosum. The analysis revealed several genomic regions for all the species (Fig. S4). The most prominent selective sweep detected across all four wild species was the genomic region on the scaffold VaccDscaff 12 (Fig. 4), spanning 17,197,565 to 17,912,802 bp. This region harbored 32 annotated genes. Details of these genes, including Arabidopsis homologs and Gene Ontology terms, are given in Table 5. A closer look at their functions suggests that most of these genes are related to primary metabolic pathways, including biosynthetic and signaling processes. Six of the 32 genes encoded DNA topoisomerase 2, which causes the topological changes needed to resolve meiotic recombination. Four genes encoded interaptin, and three encoded callose synthase (Table 5). The gene augustus_masked-VaccDscaff12-processed-gene-172.10, encoding the protein MAINTENANCE OF MERISTEMS-like (Arabidopsis homolog AT2G25010.1), is required to maintain genome stability and cell division activity in meristematic cells [30]. These genes may underlie the molecular genetic basis of domestication or favorable horticultural traits that may have contributed to blueberry improvement.
Discussion
Introgression with bridge accessions is needed to widen genetic diversity among blueberries. We performed this study to identify species boundaries and bridge accessions for widening the genetic diversity of the cultivated pool, and to compare species pairwise to resolve reticulation and gene flow. We used genome-wide SNPs in collections of five key species historically used for introgression and breeding. The current study determined genetic relatedness and admixture in the selected accessions, which will be helpful for understanding the role of introgression and wide hybridization in US blueberry breeding history. This study allowed us to track lineages in present-day domesticated V. corymbosum cultivars, which are admixed owing to interspecific hybridizations.
Genomic relationships in SHB and NHB accessions
Southern highbush (SHB) cultivars have been developed by introgressing V. darrowii into the northern highbush (NHB) V. corymbosum background. The primary goal was to introduce adaptation to lower chill requirements [31]. Phylogenetic analysis based on nuclear genome-wide SNP data indicated that SHB and NHB are closely related to each other [21,32]. In PCA [33] and phylogenetic analysis (Fig. S3E), we observed that several of the NHB accessions clustered along with SHB cultivars, which could be due to recurrent backcrosses during the development of SHB cultivars. Thus, our study resolved the relationship between SHB and NHB [21].
Vaccinium boreale is a genetically distant outgroup
V. boreale is a small, deciduous shrub growing 1-9 cm tall, spreading by shallow rhizomes to form dense colonies of many individuals. It is native to the northern part of the United States and parts of southeastern Canada. It is mainly restricted to alpine and subalpine (non-forested) habitats. The TreeMix analysis indicated gene flow from V. boreale to V. corymbosum and V. tenellum, and V. boreale appears to be ancestral to these species. This observation was further strengthened by the current admixture analysis, revealing that V. boreale and V. corymbosum share a single clade at K = 2 but were separated at K = 3 [33]. Interestingly, V. boreale, V. angustifolium, and V. myrtilloides cohabitate across the Allegheny range of Canada, indicating that this region may be the primary center of diversity. In this study, admixture analysis revealed that V. corymbosum accessions are genetically related to V. boreale and might have shared lineages. Furthermore, the detection of shared ancestry of V. boreale in V. corymbosum in the TreeMix analysis can be considered further evidence for genetic relatedness between these two species. Thus, the migration events noted in this study from V. boreale to V. corymbosum, and in the subtrees involving V. corymbosum and V. myrsinites, possibly indicate genetic reticulation from northern to southern species. A more extensive set of collections from each of these species will be needed to place V. corymbosum accurately in the phylogenetic network.
Genetic differentiation within southern blueberry species
An extensive geographic distribution range and high outcrossing rates may have significantly contributed to the high genetic diversity observed in the southern species V. darrowii, V. myrsinites, and V. tenellum [34]. V. darrowii and V. myrsinites were highly diverse in this study, whereas V. tenellum accessions had a relatively narrow genetic background. The latter, however, might be admixed with V. pallidum/V. vaccillans in coastal North Carolina, and hence further admixture analysis including these two species will be necessary. The TreeMix analysis indicated gene flow from V. tenellum to V. darrowii, implying that V. tenellum might be ancestral to V. darrowii.
Southern blueberry species represent a rich genetic resource for selecting accessions with valuable horticultural traits [34]. Admixed populations resulting from the wide hybridization of V. darrowii and V. myrsinites are evergreen, xerophytic, fire-adapted plants [4,7,8] that occupy wide ranges [5]. V. myrsinites is probably an autotetraploid of V. darrowii; thus, other than chromosome counts or hybridization experiments, the grey glaucescence on new growth flushes and the fruit of V. darrowii can be used to separate the two species. New growth flushes on V. myrsinites are shiny green, while immature berries are shiny green or greenish red [35]. Our analysis provides valuable information about genetic reticulation among the selected accessions. Seven Vaccinium species, including V. darrowii, V. corymbosum, and V. tenellum, are known ancestors of the cultivated SHB and NHB genomes [7]. Also, two more species, V. myrtilloides and V. pallidum, have been reported to contribute partially to the current gene pool of highbush cultivars [21,36]. Further research involving wild V. corymbosum accessions will be needed to further understand gene flow among northern and southern species.
Strong selective sweeps of domestication identified on scaffold 12
Domestication is a complex evolutionary process by which cultivars are selected that differ from their wild progenitors in quality, yield, or adaptation [37]. Although the selection is primarily based on a preferred phenotype of interest, such as fruit taste and size, it involves the presence and interactions of genes associated with the desired phenotype selected at the genetic level. Thus, identifying strong selection signatures can lead to the discovery of candidate genes involved in the domestication process. The positive selection signatures between wild and cultivated blueberry species must therefore be identified to reveal the genes underpinning the evolution of domesticated V. corymbosum. Positive selection accumulates beneficial alleles, shifts allele frequencies in the population, and leaves a signature in the genome over time [38]. Such patterns of advantageous mutations fixed or selected during domestication can be revealed by analyzing extensive genomic data. In this study, a 715-kb region that harbored 32 genes of primary metabolic pathways was a common selective-sweep signature in all the pairwise comparisons. This selected region might relate to domestication, including adaptation to a particular climate and favorable fruit traits. Many candidate genes in this region were involved in primary metabolism. Thus, further functional genomic analysis of these genes would have great significance for understanding the domestication of cultivated blueberries.
Conclusion
The present study demonstrated that GBS is reliable for identifying high-quality SNPs for investigating the genetic relatedness of blueberry species. The identified genome-wide SNPs in the selected blueberry accessions were successfully mapped to the tetraploid Draper reference genome and used to elucidate the genetic structure. We also showed that V. boreale is genetically distinct from the other species. We further identified migration events that provided insights into the evolutionary trajectories important for domestication and adaptation. The genomic region of the selective sweep identified on the scaffold VaccDscaff 12 comprised 32 genes, which could be crucial for the domestication of cultivated blueberries. Furthermore, PCA, phylogenetic, and admixture analyses resolved shared genetic lineages, revealing that the collections of V. myrsinites and V. darrowii are highly admixed and do not exhibit distinct species boundaries. The observations made in this study may help in understanding the genetic relationships among the related species and enhance the breeding of horticultural traits in cultivated blueberries.
Plant materials
A total of 195 blueberry accessions were used in this study (Tables S2, S3).
DNA extraction
The homogenization of the leaf tissue and DNA extraction were performed as described in [33]. Leaf tissue ranging from 100 to 120 mg was placed in a 2-ml tube containing a metallic bead. The tissue samples were crushed by using TissueLyzer-II (Qiagen, USA) and used in DNA extraction, performed using the DNeasy mini-plant and DNA plant pro kits (Qiagen, USA) following the manufacturer's instructions. The DNA purity was verified with a Nanodrop spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) and quantified with both Nanodrop and Qubit (Thermo Fisher Scientific, Waltham, MA, USA). The quality of each DNA sample was also verified in 1% agarose gels, stained with ethidium bromide, and visualized under UV light. Samples were stored at − 20 °C.
Library preparation, sequencing, and data analysis
DNA samples were normalized to a final concentration of 10 ng/μl, and GBS was performed with the ApeKI restriction enzyme [39]. The library preparation and post-sequencing analysis were performed according to established protocols. The library's quality and quantification were validated using the Bioanalyzer 2100 (Thermo Fisher Scientific, Waltham, MA, USA) and Qubit 4 fluorimeter (Thermo Fisher Scientific, Waltham, MA, USA). The library was sequenced on the NextSeq500 platform with paired-end sequencing chemistry. Sequencing reads were aligned to the Vaccinium corymbosum cv. Draper (tetraploid) v1.0 genome sequence ( [28]; http://gigadb.org/dataset/100537). The first 12 scaffolds were used as a reference to align the reads. SNP calling was performed using GB-eaSy (https://github.com/dpwickland/GB-eaSy). The SNPs obtained were filtered with minor allele frequency (MAF) > 0.05, call rate > 90%, and read depth (DP) > 3 as criteria before analysis.
Admixture analysis
The admixture analysis used a least-squares optimization approach implemented in the sNMF function of the R package LEA [29,43]. The number of clusters was determined using the cross-entropy criterion, based on prediction of a fraction of masked genotypes (matrix completion) and a cross-validation approach. The number of K populations was evaluated from 1 to 6 clusters, with ten replications performed for each run. The best K value was chosen where the cross-entropy curves exhibited a plateau.
Detection of pairwise selective sweeps
We used the open-source tool SweeD v4.0.0 [44] to analyze the site frequency spectrum (the distribution of the expected number of polymorphic sites) for pairwise selective sweep analysis. The selective sweeps between V. corymbosum and each of the wild blueberry species were identified based on composite likelihood ratio tests. The genes in the selective sweep regions were retrieved from the Draper genome gff file (http://gigadb.org/dataset/100537) using the coding sequence coordinates [28]. The obtained FASTA sequences were annotated using the NCBI and TAIR databases to obtain protein information and Gene Ontology annotation terms (biological process, molecular function, and cellular component). Arabidopsis homologs were added to aid the annotation of genes from the selective sweep regions.
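The gene-retrieval step can be sketched as a simple interval overlap against the GFF. The sweep coordinates below are the ones reported in Results (VaccDscaff 12, 17,197,565-17,912,802 bp), but the GFF records themselves are invented placeholders, not entries from the Draper annotation.

```python
# Toy sketch: keep GFF3 'gene' records on the sweep scaffold that overlap the
# sweep interval. Record contents are hypothetical.

SWEEP = ("VaccDscaff12", 17_197_565, 17_912_802)

gff_lines = [
    "VaccDscaff12\tmaker\tgene\t17200000\t17205000\t.\t+\t.\tID=geneA",
    "VaccDscaff12\tmaker\tgene\t16900000\t16950000\t.\t-\t.\tID=geneB",
    "VaccDscaff1\tmaker\tgene\t17300000\t17310000\t.\t+\t.\tID=geneC",
    "VaccDscaff12\tmaker\tmRNA\t17400000\t17410000\t.\t+\t.\tID=mrnaD",
]

def genes_in_sweep(lines, region):
    scaf, lo, hi = region
    hits = []
    for line in lines:
        seqid, _source, feature, start, end, *rest = line.split("\t")
        if feature == "gene" and seqid == scaf:
            if int(start) <= hi and int(end) >= lo:   # half-open overlap test
                hits.append(rest[-1])                 # attribute column
    return hits

print(genes_in_sweep(gff_lines, SWEEP))  # ['ID=geneA']
```

The same overlap test, applied to the real annotation, would yield the 32 genes listed in Table 5.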
TreeMix analysis
To model gene flow and identify migration events between the selected blueberry species, we used TreeMix v.1.12 [45]. The program infers population splitting and mixing patterns from genome-wide allele frequency data. For a given set of allele frequencies, it returns the maximum likelihood tree for the collection of populations and attempts to infer potential gene flow from the residual covariance matrix. In this study, we allowed five migration events in the model.
"year": 2023,
"sha1": "856d5b320f3b23f2301448b3ada6b16d12ba50bc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "856d5b320f3b23f2301448b3ada6b16d12ba50bc",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Aerosols, airflow, and airspace contamination during laparoscopy
Lay summary

Laparoscopic surgery has been undermined throughout the COVID-19 pandemic by concerns that it may generate an infectious risk to the operating team through aerosolization of peritoneal particles. There is, in any case, a need for increased awareness and understanding of the occupational hazard for surgical teams posed by the unfiltered escape of pollutants generated by surgical smoke and other microbials. Here, the aerosol-generating nature of this access modality was confirmed through repeatable real-time methodology, both qualitatively and quantitatively, to inform best practice and additional engineering solutions to optimize the operating room environment.
Introduction
The COVID-19 pandemic has focused attention on the infectious transmission risk of laparoscopy for operating room (OR) staff 1,2 . In addition, there needs to be increased understanding of the potential dangers from surgical smoke pollution in surgical theatres despite positive-pressure room ventilation; this issue is now being prioritized by the Joint Commission 3 among other groups 4,5 . Better empirical understanding of aerosols, airflow impact, and airspace contamination of laparoscopy would inform best practice as well as its appropriateness for classification among aerosol-generating procedures, while promoting and inspiring methods for hazard mitigation. Here, the methodology for such advancement is described, along with early findings from such evaluations including during surgery 6,7 .
Methods
With institutional ethics approval (AEROSOLVE study, institutional review board reference 1/378/2172) and individual participant consent (from both OR team members and, when involved, patients), flow visualization studies were performed in the OR before and during elective laparoscopic operations. First, formal smoke studies were used to detail room ventilation dynamics around the operating table during surgical simulation scenarios with and without positive-pressure room ventilation (25 room air exchanges per h). For this, an Air-Trace smoke generator (Concept Engineering, Maidenhead, UK) created low levels of isokinetic, isothermal smoke via a 25-mm duct. The scenarios replicated personnel and equipment conditions for surgical procedures with varying complexity of set-up (open inguinal hernia repair, laparotomy, laparoscopic appendicectomy, laparoscopic cholecystectomy, and laparoscopic anterior resection) with the smoke generator hose positioned at the simulated operative site.
Thereafter, flow studies were performed during actual elective operations with varying degrees of intraoperative electrocautery (6 procedures: 3 laparoscopic cholecystectomies, 2 laparoscopic appendicectomies, 1 laparoscopic parastomal hernia repair; standard pneumoperitoneal pressure setting 12 mmHg). For this, a light sheet generated by a galvanometer optical laser scanner was used to illuminate a two-dimensional (2D) slice of the surgical airspace during surgery and imaged with an 8K Ultra-High Definition camera (Canon EOS R5 with RF 35 mm f/1.8 lens; Canon, Ohta-ku, Tokyo, Japan), whose absence of diffraction limitation enabled resolution of droplets larger than 2 μm (visible as scintillations within the laser sheet). Simultaneous extracorporeal airspace sampling was performed during these operations, after investigator training, using a particle counter (model 8306; Particles Plus, Stoughton, Massachusetts, USA) to measure 30-s periods both at baseline (before surgical incision) and then episodically during the procedure by positioning the device's isokinetic probe inlet 10 cm from the target area 8 . This device cumulatively measures particles by laser diode with differential counting (0.3-25 μm) at a flow of 0.1 cubic feet per minute (2.83 litres per minute).
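Converting the raw counts from such a sampler to an airspace concentration is a matter of dividing by the volume of air drawn through the probe. The sketch below (a back-of-envelope illustration, assuming a simple count-per-sampled-volume scaling with no correction factors) uses the stated 2.83 L/min flow and 30-s sampling window; the example count of 1,415 is invented to show the arithmetic.

```python
# Convert a raw particle count to particles per cubic metre, given the
# sampler flow rate (2.83 L/min) and sampling duration (30 s) stated above.

def particles_per_m3(raw_count, flow_l_per_min=2.83, sample_s=30):
    sampled_litres = flow_l_per_min * sample_s / 60   # 1.415 L per 30-s sample
    return raw_count / (sampled_litres / 1000)        # litres -> cubic metres

print(f"{particles_per_m3(1415):.2e} particles/m^3")
```

Under this scaling, a 30-s sample draws 1.415 L of air, so a raw count of about 1,400 already corresponds to the order of 10⁶ particles per m³ reported in the Results.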
Results
Smoke studies revealed effective dissipation of smoke by positive-pressure room ventilation with the OR fully empty, as expected. However, during simulated operative scenarios, smoke behaviour was significantly different, with evident upwards drift from the operating site enveloping members of the surgical team (Video 1 and Appendix S1). Increased crowding of the operating table with people and equipment caused increased local air stagnation. Intraoperative footage during patient operations showed smoke and particles (evidenced as scintillations) moving similarly during surgery, notable even with the instruments in situ, as well as during manoeuvres including trocar instrumentation, and venting and specimen removal ( Fig. 1 and Video 1). Aerosol and particle leakage into the OR airspace was most evident during the operative phase of intra-abdominal dissection using hook cautery.
Particle counts confirmed increasing particulate concentration after initiation of the operation, reaching extracorporeal airspace levels in excess of 1 × 10⁶ particles per m³, the majority being 0.3-0.5 μm and, during cholecystectomy, 5-10 μm (Table 1). Counts were particularly increased during electrocautery dissection of the gallbladder from the liver bed during cholecystectomy compared with dissection of the mesoappendix during appendicectomy, which in turn was associated with higher counts than were observed during intra-abdominal reduction of a parastomal hernia (done without cautery dissection). Trocar venting caused the highest concentration of particulate effluvium.
Discussion
This study focused on establishing methods and indicative data regarding the operative airspace particulate contamination occurring during laparoscopy, using open procedures as control in a simulation study as well as in two common general surgical laparoscopic operations that employ electrocautery to different extents (versus a laparoscopic operation without electrocautery). This is important as many surgical teams feel any such occupational hazard to be either theoretical or mitigated anyway by room ventilation and perhaps standard surgical masks 9 .
However, OR ventilation standards, and indeed commissioning, assume empty theatres, while surgical masks are loose fitting and protect primarily against inhalation of large fluid droplets.
The evaluations in this study reflect actual workspace conditions of OR teams, corroborating simulation data with live intraoperative flow visualization and sensitive particle counting (necessary as the laser sheet provides only a 2D slice and so underestimates total particle concentration). Together, these show that the surgical team is exposed to considerable amounts of particles and pollutants during laparoscopy. Indeed, aerosol (containing gas and particles) leaks continuously from the patient during laparoscopic operations, with such flue comprising the constituents of the pneumoperitoneal gas including any noxious components present. The local OR airspace pollution is particularly marked during the cautery dissection phase of the operation, and occurs constantly rather than just at the time of instrument insertion and removal 10 . Particle counts increase during the operation even without cautery, and trocar venting causes the greatest effluvium stream. A substantial proportion of the aerosolized particles are less than 5 μm in size and so may remain airborne indefinitely unless removed 11 . All airborne particles smaller than 10 μm can be inhaled, with those greater than 2.5 μm depositing within the nose, pharynx, trachea, and bronchi, whereas smaller particles reach the bronchioles and alveoli, where those smaller than 0.1 μm are absorbed into the circulatory system 9 . The smoke simulation studies show that modern OR positive-pressure ventilation sufficient to meet official requirements (https://www.gov.uk/government/collections/health-technical-memorandum-disinfection-and-sterilization) is not powerful enough to counteract the local airspace environment created by surgical teams carrying out their work. This means that there is relative stagnation of haze in the operative airspace above the abdomen during laparoscopic operation, with entrainment towards surgical team members likely induced by movement, body heat, and electrostatic effects 12 .
Although investigation of COVID-19 infectivity owing to laparoscopic access is ongoing, the pandemic has already prompted considerable reflection regarding the aerosol-generating capability of laparoscopy and encouraged re-examination of practice and equipment from this new perspective 13,14 . Most attention has focused on surgical smoke extraction 15,16 , although to date most recommendations have been based on theoretical extrapolations 17,18 from expert groups and industry, without independent, empirical data to guide evolved thinking and practice regarding laparoscopic care 19 . Importantly, the pandemic has also educated and equipped OR teams, and familiarized them with better respiratory protection (such as N95/FFP2/FFP3 masks) and smoke extraction principles and systems, including laparoscopic devices. The opportunity therefore presents itself to continue to protect surgical personnel in these and other ways (including considering OR redesign). Further work is of course needed to extend this work to additional operations (including more major resectional operations of longer duration, greater energy device use, and larger-diameter instrumentation, including trocars and staplers, which likely exacerbate the levels of airspace contamination 10 ) and indeed other surgical specialties.
The implications of the present study regarding aerosolization at laparoscopy extend beyond the present pandemic. The results provide insight into both mechanism and degree as well as assessment methodology for future evaluations, including those of mitigation strategies. Although practice advice 1 and personal protective equipment 20 have a positive impact, there should also be much confidence that improved awareness and smart engineering innovations can ameliorate current laparoscopic access equipment and environmental standards, ensuring better occupational hygiene for OR teams.

Table 1 footnote: Values in parentheses are ratio of counts in operative phase to background counts (BKD, counts after patient positioning, preparation, and draping but before incision). Dissection during appendicectomy and cholecystectomy was performed using hook monopolar point diathermy.
"year": 2021,
"sha1": "d4b8e93feb1168e6f43b8b569e74f97c5f68a865",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/bjs/article-pdf/108/9/1022/40664806/znab114.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a886aaef40b987d31184a0f31c05e200beb2c1ea",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The slow decline of the Galactic recurrent novae T Pyxidis, IM Normae, and CI Aquilae
A distinguishing trait of the three known Galactic recurrent novae with the shortest orbital periods, T Pyx, IM Nor, and CI Aql, is that their optical decline time-scales are significantly longer than those of the other recurrent systems. On the other hand, some estimates of the mass of the ejecta, the velocity of the ejecta, and the duration of the soft X-rays emission of these systems are of the order of those of the other recurrent systems and the fast classical novae. We put forth a tentative explanation of this phenomenon. We propose that in these systems part of the material transferred from the companion during the first few days of the eruption remains within the Roche lobe of the white dwarf, preventing the radiation from ionizing the ejecta of the system and increasing the optical decline time-scale. We explain why this phenomenon is more likely in systems with a high mass transfer rate and a short orbital period. Finally, we present a schematic model that shows that the material transferred from the companion is sufficient to absorb the radiation from the white dwarf in these systems, ultimately supporting this scenario as quantitatively realistic.
INTRODUCTION
Classical novae are the result of a thermonuclear runaway induced in the envelope of a mass-accreting white dwarf. The white dwarf is not destroyed by the process and the system may undergo subsequent eruptions. Recurrent novae, the rarest subclass, are those systems that have been observed to experience more than one historical eruption, with a recurrence time-scale of a few decades. These are characterised by a high mass transfer rate as well as a high white dwarf mass (see e.g. Anupama 2008). Only ten Galactic recurrent novae have been directly observed during at least two of their eruptions; the statistics of these observations are reported by Schaefer (2010) and are partially summarised here in table 1.
In this paper we highlight an interesting feature of the data in table 1: the three recurrent novae with the shortest orbital periods, T Pyx, IM Nor, and CI Aql, have decline time-scales t_3 significantly longer than all the other systems. In section 2 we argue that there is some evidence for the other observed properties of these systems to be in line with the other recurrent novae. We discuss the mass of their ejecta in sections 2.1 and 2.2, and the duration of their soft X-rays emission phases and the velocity of the ejecta in section 2.3. In section 3 we put forward a tentative explanation for this phenomenon. Finally, in section 4 we summarize our findings.

E-mail: andrea.caleo@astro.ox.ac.uk
T PYX, IM NOR, CI AQL, AND THE OTHER NOVAE
The latest outburst of the recurrent nova T Pyx occurred in 2011, and the development of its light curve and spectrum has been followed more thoroughly than for any other recurrent nova to date (e.g. see Shore et al. 2012 and Shore et al. 2013 for the optical and UV spectra, De Gennaro Aquino et al. 2014 and Chomiuk et al. 2014 for the X-rays, Nelson et al. 2014 for the radio, and Patterson et al. 2014 for the time-evolution of the orbital period). This allows astronomers to compare the features of T Pyx with those of other classical and recurrent novae.
The mass of the ejecta
It is difficult to accurately determine the mass of the ejecta of a nova. It has usually been estimated based on photoionization models alone, which require panchromatic observations (at least including the ultraviolet and optical resonance and principal recombination lines). Recently, a new technique has been applied to several bright novae including T Pyx, to obtain density, filling factor, and mass estimates based on electron density determinations using forbidden lines from the nebular stage spectra (see Shore et al. 2012 and Shore et al. 2013). In brief, either or both of the isoelectronic forbidden lines of [N II] and [O III] provide determinations of the density and electron temperature. The electron density map is obtained from line profile ratios instead of integrated fluxes, assuming that the nebular spectra are (roughly) isothermal. The line profiles are modeled adopting a bipolar structure with a linear velocity law (Ribeiro et al. 2013) to obtain the geometry of the isothermal ejecta. Knowledge of the distance to the object allows us to convert the Hβ and Hα fluxes to emission measures that are then used to obtain the filling factor using the independently obtained electron densities and volumes. This is then used to obtain the total mass of the ejecta.

© 2014 RAS. arXiv:1502.06763v1 [astro-ph.SR] 24 Feb 2015

Table 1. The Galactic recurrent novae from Schaefer (2010) with the addition of the latest eruptions of U Sco (2010) and T Pyx (2011). The systems are listed in order of orbital period. We have not included the eruption of T Pyx close to 1866, inferred by Schaefer, Pagnotta & Shara (2010).

RN        | V_peak (mag) | V_min (mag) | t_3 (d) | P_orb (d) | Eruption years
T Pyx     | 6.4          | 15.5        | 62      | 0.076     | 1890, 1902, 1920, 1944, 1967, 2011
V745 Sco  | 9.4          | 18.6        | 9       | 510       | 1937, 1989
V3890 Sgr | 8.1          | 15.5        | 14      | 519.7     | 1962, 1990
(the remaining rows are garbled in this copy)
The temporal development of the electron density can also be determined using multi-epoch modeling (Schwarz 2014). Studies have been conducted for CI Aql by Iijima (2012) and for T Pyx by Shore et al. (2013), resulting in the value M_ej ≈ 2 · 10^−6 M_⊙ for both systems. This number is on the lower end of the scale for ejecta masses of classical novae. Shore et al. (2013) also quote a value for the filling factor of T Pyx, f ≈ 3 · 10^−2. To our knowledge, there is no estimate of the mass of the ejecta of IM Nor.
We note that the value of the mass of the ejecta of T Pyx is controversial. While the spectroscopic value is low, there are indications that support a higher value, the most suggestive of which is based on the orbital period change undergone by the system during the eruption (Patterson et al. 2014) and gives a mass of the ejecta M_ej ≥ 3 · 10^−5 M_⊙. We argue that the dynamics of the binary system during the eruption is complex and deserves further consideration; we discuss this issue in appendix A.
Another indication of a possibly high value for the mass of the ejecta of T Pyx is provided by radio observations (Nelson et al. 2014). The peculiarity of T Pyx as a nova is evident in this spectral range, as the radio flux began rising surprisingly late (∼ 50 days after the outburst), and no simple model of an instantaneous, homologous explosion is able to satisfactorily fit the data and provide an estimate for M_ej. However, a model designed to fit the later part of the light curve gives M_ej ≈ 4 · 10^−4 M_⊙, and more elaborate models for the whole light curve suggest a range of (1–30) · 10^−5 M_⊙.
M_ej − t_2 correlation
Low-mass ejecta have lower density than high-mass ones, and they also expand at higher speed, so their density drops faster. The ionization of the ejecta will therefore be fast if the mass of the ejecta is low, and the decline time will be accordingly short, so a correlation between these quantities can be expected. Despite the uncertainties in the masses, Della Valle et al. (2002) found a correlation using a sample of 18 novae, with more than one independent source for the mass of the ejecta of most systems. These results are shown in figure 1.

Figure 1. Optical decline time-scale t_2 and mass of the ejecta for a set of novae; M_ej is measured in units of 10^−5 M_⊙ and t_2 in days. The approximate positions of the recurrent systems T Pyx, CI Aql, and U Sco, based on estimates of t_2 from the light curve of T Pyx, on the light curves reported by Schaefer (2010), and on the masses of the ejecta estimated by Shore et al. (2013), Iijima (2012), and Diaz et al. (2012) (M_ej > 3 · 10^−6 M_⊙ for U Sco), have been marked in red. V382 Vel is highlighted because that system was one of the main objects of study of Della Valle et al. (2002). The dashed curve is given by equation (1) and is the best linear fit for the Log(M_ej) − Log(t_2) relation.

Della Valle et al. (2002) provide the fitting relation, equation (1), with a 95% confidence level, with M_ej measured in 10^−5 M_⊙ and t_2 in days. Applying equation (1) with M_ej = 2 · 10^−6 M_⊙ gives a very short predicted decline time-scale: the highest value in the interval permitted by equation (1) is t_2 ≲ 1 d, while the observed decline time-scales can be estimated from the light curves as t_2 ≈ 20 d for CI Aql and t_2 ≈ 30 d for T Pyx. This places CI Aql and T Pyx significantly off the curve of equation (1). We conclude that the optical decline time-scales of these systems are significantly longer than those of systems with a similar value of M_ej.
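Equation (1) is a linear fit in the Log(M_ej)–Log(t_2) plane; inverting such a relation to predict t_2 from an ejecta mass is a one-liner. A minimal sketch (the published coefficients of equation 1 are not legible in this copy, so a and b below are placeholders, NOT the Della Valle et al. values):

```python
import math

def predicted_t2(m_ej, a, b):
    """Invert a log-log fit log10(M_ej) = a * log10(t2) + b for the
    decline time-scale t2 (days); M_ej is in units of 1e-5 Msun."""
    return 10.0 ** ((math.log10(m_ej) - b) / a)

# Placeholder coefficients, for illustration only (the numerical values
# of equation 1 are not legible in this extraction):
a, b = 2.0, -2.5
t2_pred = predicted_t2(0.2, a, b)   # M_ej = 2e-6 Msun, i.e. 0.2 in 1e-5 Msun units
```

With any monotonically increasing fit of this form, a low M_ej forces a short predicted t_2, which is the content of the comparison above.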
X-rays emission
Classical novae, including T Pyx, IM Nor, CI Aql, and other recurrent systems, show an extended strong soft X-rays emission after the outburst, which is modeled as continuing nuclear reactions in the white dwarf envelope. The peak photon energy is E_peak < 1 keV (hence the often used term supersoft), and the FUV/X-rays spectrum resembles a blackbody distribution only at low resolution (see Orio, Covington & Ögelman 2001 and Schwarz 2011). Since the ejecta of the system are also expelled from the white dwarf at the beginning of the eruption, a correlation is expected between the duration of the supersoft emission, the mass of the ejecta, and the decline time-scale of the system. Schwarz (2011) performed a correlation study of the velocity of the ejecta, which is related to their mass and can be directly obtained spectroscopically, and the duration of the X-rays emission. We show their results in figure 2, where we have marked the positions of T Pyx (see section 2.3.1), CI Aql, and IM Nor. In the sense of this correlation, the supersoft turn-off times of these three systems are not especially anomalous. Schwarz (2011) have also studied the correlation between the decline time-scale t_2 and the turn-off of the X-rays emission. Their results are shown in figure 3, where we have again marked the positions of T Pyx, CI Aql, and IM Nor. Although the scatter is significant and there is a scarcity of systems with long decline time-scales, some correlation is evident. T Pyx, IM Nor, and CI Aql are clearly deviant in this correlation. We conclude from the analysis of figures 2 and 3 that while the X-rays emission and ejecta velocities of these systems are similar to those of the other recurrent novae, their decline time-scales t_2 are anomalous.
A word of caution: emission from T Pyx
The X-rays emission from T Pyx has been studied with the Swift and Suzaku facilities. The results are discussed by De Gennaro Aquino et al. (2014) and Chomiuk et al. (2014). The X-rays behaviour of T Pyx is anomalous, including the time evolution of the hard-to-soft X-rays emission ratio, which is used by Schwarz (2011) to define the turn-off time of the phase of soft X-rays emission. To include T Pyx in the correlation studies of the other novae, we adopted the turn-off time t_off ≈ 140 d given by De Gennaro Aquino et al. (2014). The previous arguments would still hold for the other two systems if T Pyx were to be excluded for its peculiarity.
REPROCESSING OF THE RADIATION DURING THE OUTBURST
In light of these observations, we propose the following scenario for the peculiar behaviour of T Pyx, IM Nor, and CI Aql:

• Mass transfer from the companion star onto the white dwarf resumes immediately after the eruption with transfer rate Ṁ.
• A fraction f of the transferred material orbits in the Roche lobe of the white dwarf, with density distribution ρ(r, t).
• In most systems, after a time ∆t that is possibly as short as a few days, radiation from the white dwarf ionizes the ejecta and the optical luminosity declines.
• In a few systems, whose particular properties we will discuss, the accreted material of mass f Ṁ ∆t absorbs the radiation through photoionization before it reaches the ejecta, impeding their ionization. The energy is then re-emitted at a much lower temperature and is less effective in ionizing the ejecta.
We illustrate this scenario schematically in figure 4, which shows the state of the system at the start of the optical emission (on the left) and after a time ∆t (on the right). For convenience, we have not drawn the figure to scale: the radius of the ejecta a few days after the eruption in a scaled figure would be considerably larger.
This screening effect is favoured in systems with a high mass transfer rate, which implies a higher density within the Roche lobe of the white dwarf. The mass transfer rates of T Pyx and CI Aql are known to be high. The conventionally accepted values are Ṁ_T Pyx = (1–5) · 10^−8 M_⊙ yr^−1 (Selvelli et al. 2008) and Ṁ_CI Aql ≈ 10^−7 M_⊙ yr^−1 (Hachisu, Kato & Schaefer 2003). There are no estimates of Ṁ for IM Nor. A recent analysis by Godon et al. (2014), based on the fit of disc models to far-ultraviolet spectroscopy, proposes even higher values of Ṁ_T Pyx, of order 10^−6 M_⊙ yr^−1.
We propose that this effect is also favoured in systems with a short orbital period, for the following reason. A short orbital period implies a small separation, and hence a small volume of the Roche lobe of the white dwarf. Thus, for a given accumulated mass, the region around the white dwarf should be at higher density ρ(r, t). Since the effectiveness of the orbiting material in absorbing the radiation depends on the recombination rate of its ionized atoms, which is proportional to ρ^2, a high density is a key element for the occurrence of this screening effect.
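This scaling can be made concrete with Kepler's third law and Eggleton's (1983) approximation for the Roche-lobe radius (the same formula used in section 3.1). A sketch, with the fiducial masses of section 3.1 (values illustrative only):

```python
import math

G = 6.674e-8            # gravitational constant, cgs
MSUN = 1.989e33         # g
DAY = 86400.0           # s

def roche_lobe_radius(m1, m2, p_orb_days):
    """Roche-lobe radius (cm) of star 1: separation from Kepler's third
    law, then Eggleton's (1983) fitting formula. Masses in solar units."""
    a = (G * (m1 + m2) * MSUN * (p_orb_days * DAY) ** 2
         / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
    q = m1 / m2
    return a * 0.49 * q ** (2.0 / 3.0) / (0.6 * q ** (2.0 / 3.0)
                                          + math.log(1.0 + q ** (1.0 / 3.0)))

# Fiducial system: M_WD = 1 Msun, M_2 = 0.2 Msun
r_tpyx = roche_lobe_radius(1.0, 0.2, 0.076)   # T Pyx-like period
r_long = roche_lobe_radius(1.0, 0.2, 1.0)     # one-day period
# For a fixed accumulated mass, mean density scales as 1/R_RL^3 ~ 1/P_orb^2:
density_ratio = (r_long / r_tpyx) ** 3
```

The T Pyx-like lobe comes out at a few 10^10 cm, and a system with a one-day period dilutes the same accumulated mass by roughly two orders of magnitude in density.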
It is very difficult to devise a comprehensive model of the irradiation of the ejecta taking into account the transfer of material into the Roche lobe of the white dwarf, because the hydrodynamics of the flow of matter during the peak of the irradiation is very complex. Such a model is beyond the scope of the present paper. Instead, we present a schematic model for estimating the amount of material required for the shielding of the ejecta to occur.
The model
We assume that the mass transfer rate after the eruption is the same as in the quiescent state (see appendix B for discussion), and that the radiation from the white dwarf is time independent and has a blackbody distribution. We assume that material around the white dwarf has a spherically symmetric distribution ρ(r) and uniform temperature Tm. Our aim is to determine the minimum value of the fraction of material f that is needed to absorb the radiation incoming from the white dwarf after a time ∆t since the eruption. Any result f > 1 would imply that the effect we are describing is unrealistic.
There are few a priori constraints on the value of f from hydrodynamics. To our knowledge, there are no models of accretion disc formation during the peak of the irradiation of a nova. General simulations, such as those by Lanzafame, Belvedere & Molteni (2006) (see their figure 2), still allow a great range of values for f, which varies from effectively 0 for high-viscosity material to about 10^−1 for very low viscosity values.
We adopt the following fixed numerical values for the system:

• For the white dwarf, a mass of 1 M_⊙ and a radius of 0.01 R_⊙. For the secondary, a mass of 0.2 M_⊙ (these are reasonable but debatable values for the T Pyx system; see Uthas, Knigge & Steeghs (2010)).
• Time that would be required for the ionization of the ejecta without the intervening matter: ∆t = 5 d, which is a typical decline time-scale of very fast classical novae (Strope, Schaefer & Henden 2010).
• Temperature of the material in the Roche lobe of the white dwarf: T_m = 10^4 K. This value is consistent with energy balance in the Roche lobe; see appendix C1 for discussion.
• For the opacity, we neglect the metals in the chemical composition of the transferred material and adopt for the hydrogen and helium abundances X = 0.75, Y = 0.25. This simplifies the calculations while adopting a lower limit for the total opacity. See appendix C2 for a discussion.
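With these fixed values, the screening mass budget and its mean density follow directly. A sketch, assuming Ṁ = 3 · 10^−8 M_⊙ yr^−1 (the T Pyx rate used for table 4), f = 2.5 · 10^−2 (the uniform-density f_c found for T Pyx in table 2), and an illustrative Roche-lobe radius of 3 · 10^10 cm:

```python
import math

MSUN = 1.989e33      # g
YEAR = 3.156e7       # s
DAY = 86400.0        # s

mdot = 3e-8 * MSUN / YEAR     # mass transfer rate (g/s), ~T Pyx
dt = 5.0 * DAY                # Delta t before the ejecta would be ionized
f = 2.5e-2                    # fraction retained in the Roche lobe

m_lobe = f * mdot * dt        # screening mass accumulated in Delta t (g)

r_rl = 3e10                   # assumed Roche-lobe radius (cm), illustrative
rho_bar = m_lobe / (4.0 / 3.0 * math.pi * r_rl ** 3)   # mean density (g/cm^3)
```

The resulting mean density is of order 10^−10 g cm^−3, the same order as the ρ_RL quoted in appendix C1.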
We label as N_ν(r) the number of photons per unit frequency at a distance r from the white dwarf, and proceed to determine N_ν(r) as the number of photons emitted from the surface of the white dwarf minus those absorbed by the material in the Roche lobe. The photons are absorbed by processes of photoionization when they strike the atoms; we neglect the scattering processes because they are approximately elastic and have a smaller effect on N_ν(r).
At a temperature T_m = 10^4 K, part of the hydrogen is ionized because of the collisions and does not contribute to the absorption. The amount of ionized atoms depends on the density of the material. Our model takes this effect into account by considering a reduced number of H atoms, given by the fraction of atoms that are not collisionally ionized, as derived by solving the Saha equation. We label such a fraction as F(ρ). At T_m = 10^4 K, for the densities relevant to our scenario, only a negligible fraction of the He atoms are ionized by collisions. We acknowledge that our description constitutes an oversimplified view of the complex interaction between the radiation from the white dwarf, the collisions between the atoms and those between the atoms and the electrons, and the recombination processes, but we proceed in this way as our aim is to obtain an order-of-magnitude estimate of f rather than an exact value.
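The collisional correction F(ρ) can be sketched directly for a pure-hydrogen gas (a simplification relative to the model above: He and the radiation field are ignored here). For hydrogen the statistical-weight factor 2U_+/U_0 in the Saha equation is unity; constants are cgs:

```python
import math

# cgs constants
ME, K_B, H, EV = 9.109e-28, 1.381e-16, 6.626e-27, 1.602e-12
M_H = 1.673e-24
CHI_H = 13.6 * EV   # hydrogen ionization energy

def neutral_fraction(rho, T):
    """Fraction F(rho) of H atoms left neutral by collisional (Saha)
    ionization in a pure-hydrogen gas of mass density rho (g/cm^3).
    The statistical-weight factor 2*U+/U0 is 1 for hydrogen."""
    n = rho / M_H                                   # total H nuclei per cm^3
    s = ((2.0 * math.pi * ME * K_B * T / H ** 2) ** 1.5
         * math.exp(-CHI_H / (K_B * T)))            # Saha RHS (cm^-3)
    # Solve x^2 n / (1 - x) = s for the ionized fraction x (n_e = n_p):
    x = (-s + math.sqrt(s * s + 4.0 * s * n)) / (2.0 * n)
    return 1.0 - x

f_neutral = neutral_fraction(1e-10, 1e4)
```

At T_m = 10^4 K and lobe densities of order 10^−10 g cm^−3, most of the hydrogen is collisionally ionized, so F(ρ) is well below unity; F grows toward 1 at higher density.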
We label as B_ν(T_WD) the blackbody spectrum at the white dwarf surface temperature, as σ_i(ν) the photoionization cross section for photons of frequency ν striking atoms of species i, as α_i(T) the case B recombination coefficient of species i, and as n_P, n_α and n_e the number densities of H nuclei, He nuclei (i.e. protons and α particles) and electrons. In what follows, n̄_i(r) is the number density of the species i that contributes to the recombination-photoionization balance, so that n̄_P = F(ρ) n_P and n̄_α = n_α. We denote as R_RL the radius of the Roche lobe and compute it by means of Eggleton's formula (Eggleton 1983).
Analytical formulae for α_H, α_He and for the cross section σ_H→H+(ν) are given in Draine (2011). The recombination coefficient α_He++ is derived from the hydrogen recombination coefficient as described in section 4.2 of Osterbrock & Ferland (2006). The He photoionization cross section σ_He→He+(ν) is interpolated from the data by Yan, Sadeghpour & Dalgarno (1998). The photoionization cross section of He^+ is given in section 5.4 of Cox (2000).
The number densities of protons (either free or in H atoms) and α particles (either free or in He atoms) are given by

n_P(r) = X ρ(r) / m_P ,    n_α(r) = Y ρ(r) / m_α ,

where m_P and m_α are the masses of the H and He nuclei respectively.
The equations that give the number of photons radiated from the white dwarf and their absorption by the material in the Roche lobe, together with those that give the balance between the recombination and photoionization processes, complete the model.

Figure 4. Schematic illustration of the scenario discussed in section 3 for T Pyx, IM Nor, and CI Aql. The optical emission starts at the end of the fireball phase. The material in the Roche lobe of the white dwarf has been blown away by the eruption, but the mass transfer from the companion through the L_1 point is uninterrupted. After a few days, the expansion of the ejecta has caused their density to drop. However, the material in the Roche lobe of the white dwarf suffices to absorb most of the radiation (illustrated by arrows in the figure) and re-emit it at a much lower temperature, decreasing its effectiveness. The ejecta are not ionized and the optical emission continues. The figure is not to scale: the radius of the ejecta a few days after the eruption in a scaled figure would be considerably larger.
Solving equations (5)–(11) in the variables N_ν, n̄_H, n̄_H+, n̄_He, n̄_He+, n̄_He++, n_e with the boundary condition (4), we determine N_ν(r). We thus obtain β, the fraction of the radiated energy that is free to escape the Roche lobe:

β = ∫ hν N_ν(R_RL) dν / ∫ hν N_ν(R_WD) dν .
Results
To solve equations (5)–(11), a density distribution law ρ(r) is required, for a total mass in the Roche lobe f Ṁ ∆t. Since the recombination rate depends on the square of the density, the required value of f will depend strongly on the density profile. Unfortunately, the distribution of the material depends on the hydrodynamics of the accretion process and is therefore difficult to predict in advance.
We discuss the simplest case of a uniform density distribution, ρ(r) = ρ̄. It is important to note that this is the least favourable choice, since the effectiveness of recombination grows strongly with ρ and the radiation is therefore more easily absorbed if there are regions of high density and regions of low density rather than a uniform profile. As an illustrative example, we also present results for the case ρ(r) ∝ r^−3/2. The r^−3/2 profile has physical significance because it is the density distribution that would result in the case of steady, free-fall, spherical accretion onto the white dwarf, and represents an extreme case worth studying.
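The claim that a non-uniform profile absorbs more easily can be checked numerically: the recombination rate scales as ρ², so at fixed mass one can compare ∫ρ² dV for the two profiles (radii, mass, and the inner cutoff at the white-dwarf radius below are illustrative; the cutoff is needed because ∫ρ² dV diverges at the centre for ρ ∝ r^−3/2):

```python
import numpy as np

R_IN, R_OUT = 7e8, 3e10   # assumed white-dwarf and Roche-lobe radii (cm)
M = 2e22                  # illustrative mass of material in the lobe (g)

r = np.logspace(np.log10(R_IN), np.log10(R_OUT), 200_000)

def integral(y, x):
    # explicit trapezoidal rule, to avoid depending on np.trapz/np.trapezoid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def emission_measure(rho):
    """Integral of rho^2 over the volume; the recombination rate, and so
    the absorbing power of the material, scales with this quantity."""
    return integral(rho ** 2 * 4.0 * np.pi * r ** 2, r)

volume = integral(4.0 * np.pi * r ** 2, r)
rho_uniform = np.full_like(r, M / volume)          # uniform profile
shape = r ** -1.5                                   # free-fall-like profile
rho_ff = (M / integral(4.0 * np.pi * r ** 2 * shape, r)) * shape

# At equal total mass, the concentrated profile has the larger rho^2 integral:
ratio = emission_measure(rho_ff) / emission_measure(rho_uniform)
```

The centrally concentrated profile wins by a factor of a few, consistent with the smaller f_c found for ρ(r) ∝ r^−3/2 in table 2.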
We consider the material in the Roche lobe to be effective in absorbing the radiated photons if 90% of the emitted energy is absorbed, i.e. if β = 0.1. We denote as f_c the fraction of transferred material that has to orbit in the Roche lobe for this to occur, and give values of f_c for T Pyx and CI Aql in the uniform density and ρ(r) ∝ r^−3/2 cases in table 2.

Table 2. Fraction f_c of transferred material that needs to stay in the Roche lobe of the white dwarf of T Pyx and CI Aql in order to absorb 90% of the radiated energy.

         | uniform density    | ρ(r) ∝ r^−3/2
T Pyx    | f_c = 2.5 · 10^−2  | f_c = 1.1 · 10^−2
CI Aql   | f_c = 6.0 · 10^−2  | f_c = 2.6 · 10^−2
In all cases, a fraction f_c < 0.1 is sufficient to absorb the outgoing radiation. For ρ(r) ∝ r^−3/2, f_c is of order 10^−2. If the density distribution differs greatly from the ones we studied with our model, even smaller values of f might suffice. Thus, the transferred material seems to be sufficient to shield the ejecta. In light of these results, we argue that the scenario we described is realistic.
We show in table 3 the f − β correspondence for some values around f_c for both T Pyx and CI Aql, in the case of a uniform density distribution. β depends strongly on f, so that a moderate variation in the amount of absorbing material can have a large effect.
We argued that a longer orbital period P_orb should lead to a higher value of f_c. We show some values of P_orb − f_c in table 4, assuming a mass transfer rate Ṁ = 3 · 10^−8 M_⊙ yr^−1, as for T Pyx, and uniform density. As we expected, f_c grows with P_orb. For systems with an orbital period of several days, not even all of the transferred material would suffice to shield the ejecta.
The results for β depend uniquely on the product f Ṁ, not on the individual values of f and Ṁ. For the value Ṁ_T Pyx ∼ 10^−6 M_⊙ yr^−1 proposed by Godon et al. (2014), lower values of f are required to shield the ejecta, with f_c of order 10^−3 for the uniform density model and 10^−4 for the ρ ∝ r^−3/2 model.
CONCLUSION
In this paper, we discussed some properties of the three recurrent novae with the lowest orbital periods, T Pyx, IM Nor, and CI Aql, noting that they have the longest optical decline time-scales while the mass and velocity of their ejecta and the duration of their X-rays emission are similar to those of systems with more rapid optical declines.

Table 4. Fraction f_c of material corresponding to β = 0.1 for the mass transfer rate of T Pyx, as a function of the orbital period, in the case of uniform density distribution.

P_orb (d) | f_c
0.05      | 1.5 · 10^−2
0.1       | 3.4 · 10^−2
0.2       | 7.6 · 10^−2
0.3       | 1.2 · 10^−1
0.5       | 1.6 · 10^−1
1         | 3.3 · 10^−1
2         | 6.7 · 10^−1
3         | 1.0
We put forth a scenario to explain the reason for this occurrence. We propose that some of the material transferred from the secondary star during the eruption contributes to the absorption of the radiation from the white dwarf and shields the ejecta before their ionization occurs. Quantitative evaluation of this scenario indicates that, for a system with a large mass transfer rate and a short orbital period, the amount of transferred material is indeed sufficient. A more thorough analysis of this problem requires understanding the hydrodynamics of the transfer process. Unfortunately, the situation is so complex, especially given the presence of the ejecta and the irradiation, that the feasibility of any model on this subject is questionable.
The detailed physics of the ejecta has not been addressed in this study, but it plays a key role in determining the photometric properties of the system. While it is clear that the radiation reprocessed by the material in the Roche lobe of the white dwarf is re-emitted at a much lower temperature, and is therefore less effective in ionizing the ejecta, its effect on the ejecta should be studied to gain a deeper understanding of the light curve of the system.
Another key aspect that remains to be investigated is the consequences of the shielding effect on the spectrum of the system. Although the spectrum of T Pyx during various phases of its outburst has been observed (see Shore et al. 2012 and Shore et al. 2013), the major obstacle in understanding the behaviour of this particular system is the complexity of the spectra rather than the scarcity of the data. The analysis of the spectra prompted Shore et al. (2013) to suggest that T Pyx is peculiar with respect to most of the classical novae and that its 'extended opaque phase could be due to the formation of a cool common envelope after the explosion, causing a recombination wave to move outward through the ejecta that also extinguishes the XR emission. It could then slowly clear as the WD settles into a stage of quasi-static nuclear burning and develops a supersoft source'. The scenario we described provides an explanation for how such an envelope could be formed and motivates further thought on the spectra of the system.

APPENDIX A: THE ORBITAL PERIOD CHANGE OF T PYX

Patterson et al. (2014) reported a campaign to track the photometric wave of T Pyx and follow the evolution of its orbital period, gathering data over the 1996–2011 period. They reported a gradual increase of the orbital period during the quiescent phase and a jump of +0.0054(6)% at the eruption. They claim that this implies that the mass of the ejecta of T Pyx is at least 3 · 10^−5 M_⊙. We argue here that the variation of the orbital period does not imply a lower bound on the ejected mass. Patterson et al. (2014) state that 'during the eruption, mass loss should increase P_orb, and angular-momentum loss should decrease it'. A positive change in the period would imply that the mass-loss effect wins and the mass loss is at least 3 · 10^−5 M_⊙. Although no derivation is shown, this result appears to be based on the analysis by Livio (1991), which in turn is based on that by Shara et al. (1986).
However, a tacit but fundamental assumption is at the core of the paper by Shara et al.: the formulas reported, most notably their equation (5), are only valid if the orbit of the system before and after the eruption is circular. While it is possible to imagine a gradual circularization of the orbit after the eruption, so that this assumption might hold for the study of the long-term evolution of the system, the eccentricity right after the eruption is likely non-zero.
In general, the variation of the period of a system in a circular or elliptical orbit depends on the change of the masses of its components and of the mechanical energy of the system. Equation (11) of van den Heuvel (1992) relates the change in the period P of a binary system to the change of its total mass M and its energy per unit reduced mass ε. We express this result in terms of the masses m_1, m_2 of the components of the system and of the total energy E = µε (where µ = (1/m_1 + 1/m_2)^−1 is the reduced mass), and then specialise to mass loss from the primary star, dm_1 = −M_ej, dm_2 = 0, obtaining a relation between δP, the period variation, and δE, the change of mechanical energy of the system. It is very complex to even estimate δE from models of the ejection mechanism (during which nuclear energy is converted to thermal energy through degenerate H burning at the surface of the white dwarf, and then to mechanical energy of the ejecta and possibly the binary system); even the kinetic energy of the ejecta is currently not very well known (see e.g. the discussion in Shara et al. 2010). Without an estimate of δE, equation (A3) cannot provide an estimate of M_ej.
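The role of δE can be illustrated with a toy numerical check under stated assumptions (circular orbit, the fiducial masses of section 3.1, an illustrative separation): removing M_ej from the primary at fixed mechanical energy (δE = 0) actually decreases the Kepler period, so without an estimate of δE neither the sign nor the size of δP constrains M_ej.

```python
import math

G = 6.674e-8                 # cgs
MSUN = 1.989e33              # g

def period(m1, m2, E):
    """Orbital period (s) of a circular binary with component masses
    m1, m2 (g) and total mechanical energy E < 0 (erg):
    a = -G m1 m2 / (2E), then Kepler's third law."""
    a = -G * m1 * m2 / (2.0 * E)
    return 2.0 * math.pi * math.sqrt(a ** 3 / (G * (m1 + m2)))

m1, m2 = 1.0 * MSUN, 0.2 * MSUN       # fiducial masses
a0 = 6e10                             # illustrative separation (cm)
E0 = -G * m1 * m2 / (2.0 * a0)

m_ej = 3e-5 * MSUN                    # the Patterson et al. (2014) lower bound
# Toy case: remove m_ej from the primary at FIXED mechanical energy (dE = 0):
dP_over_P = period(m1 - m_ej, m2, E0) / period(m1, m2, E0) - 1.0
# Logarithmic differential of P ~ (m1 m2)^(3/2) |E|^(-3/2) M^(-1/2):
expected = -1.5 * m_ej / m1 + 0.5 * m_ej / (m1 + m2)
```

The finite-difference result matches the logarithmic differential to second order, and is negative in this δE = 0 case.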
APPENDIX B: MASS TRANSFER DURING THE ERUPTION
In the analysis of section 3, the mass transfer rate from the companion to the white dwarf during the first few days of the eruption was considered to be the same as that at quiescence. We discuss here this assumption. The secondary star is strongly irradiated during the outburst. It could be argued that this would cause an expansion of the star and an increase in the mass transfer rate. The problem of irradiation-induced mass transfer in novae has been widely discussed in the literature; however, the focus has always been on the long-term evolution of the system rather than the first few days after the outburst. The most influential work in the field was conducted by Kovetz, Prialnik & Shara (1988). These authors determined the depth reached by the irradiation, arguing that the main scattering process in the atmosphere of the star is electron scattering. They obtained for the mass of the irradiated layer M_irr ≈ 5 · 10^−8 M, where M is the mass of the companion. The companion would then inflate on a time-scale of ≈ 0.1 yr, overfilling its Roche lobe and causing a significant increase in the mass transfer rate.
In contrast, we are considering the evolution of the system on a very short time-scale, the first few days after the outburst. Since the star would not have had time to fully expand, the scenario proposed by Kovetz, Prialnik & Shara (1988) does not apply to our case. Moreover, we argue that the incoming photons are absorbed and re-emitted by processes of photoionization rather than electron scattering, and we give an independent estimate of M_irr.
We achieve an order-of-magnitude estimate of M_irr by assuming that the atmosphere of the companion is composed of hydrogen, with a density profile similar to that of a non-Roche-lobe-filling main-sequence star of the same mass (we acknowledge that this is inaccurate in a neighbourhood of the L_1 point). We make use of the density profile of a star with mass M ≈ 0.2 M_⊙, effective temperature T_eff = 3200 K, gravity Log(g) = 5, and solar chemical composition, extracted from the NextGen models (see Hauschildt, Allard & Baron (1999)). We evaluate the effect of a monochromatic irradiation at the peak wavelength of the emission from the white dwarf, at T_WD = 7 · 10^5 K as in section 3.1. The resulting peak photon energy is E_peak ≈ 60 eV.
Proceeding as in section 3.1, we label as N_γ(z) the number of photons per unit area and unit time that reach depth z in the atmosphere, as n̄_H(z) and n̄_H+(z) the number densities of H and H^+ that contribute to the recombination-photoionization balance, as σ_ph the photoionization cross section of H atoms at E_peak ≈ 60 eV, as F(ρ) the fraction of H atoms that would not be collisionally ionized at density ρ and temperature T_m, and as α(T_m) the case B recombination coefficient of hydrogen. As in section 3.1, we assume T_m = 10^4 K; see appendix C1 for discussion. We label as a the distance between the white dwarf and the secondary star. The equations that determine N_γ(z) include

n̄_P(z) = F(ρ_NextGen(z)) ρ_NextGen(z) / m_P .
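The order of magnitude can be anticipated with a back-of-the-envelope floor (assumed values, not part of the model above: hydrogenic σ_ph with the approximate (E/13.6 eV)^−3 scaling from threshold, and R_2 ≈ 0.2 R_⊙ for the companion): the neutral hydrogen column needed to absorb 90% of ~60 eV photons, spread over the exposed hemisphere, is already far below the Kovetz, Prialnik & Shara (1988) mass.

```python
import math

M_H = 1.673e-24     # g
MSUN = 1.989e33     # g
RSUN = 6.957e10     # cm

# Hydrogenic photoionization cross section at E ~ 60 eV, using the
# approximate (E / 13.6 eV)^-3 scaling from the threshold value:
sigma_ph = 6.3e-18 * (13.6 / 60.0) ** 3      # cm^2

# Neutral H column that absorbs 90% of the incoming ionizing photons:
n_col = math.log(10.0) / sigma_ph            # cm^-2

# Floor on the irradiated mass: that column over the exposed hemisphere
# of the companion (R_2 ~ 0.2 Rsun assumed, illustrative):
r2 = 0.2 * RSUN
m_irr_floor = 2.0 * math.pi * r2 ** 2 * n_col * M_H / MSUN   # Msun
```

Collisional pre-ionization (the F(ρ) factor) only deepens the penetration, so the true M_irr is larger than this floor, but it remains many orders of magnitude below the Kovetz et al. value.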
We solve equations (B2)–(B5) with the boundary condition (B1), and define the penetration depth z_irr as the depth at which 90% of the incoming energy has been absorbed. The result is z_irr ≈ 10^7 cm. Assuming uniform irradiation of the exposed face of the secondary star, we estimate M_irr; the result is six orders of magnitude lower than that of Kovetz, Prialnik & Shara (1988). The effect of the irradiation on the secondary star is therefore very small, and the mass transfer rate is unlikely to be affected by it.

APPENDIX C1: THE TEMPERATURE OF THE MATERIAL IN THE ROCHE LOBE

In section 3.1, we argued that the temperature T_m of the material in the Roche lobe of the white dwarf is T_m ≈ 10^4 K. We discuss here this assumption.
We use an energy argument to determine an approximate value for Tm. The material in the Roche lobe is heated by the radiation incoming from the white dwarf. It dissipates this energy through photo-recombinations to excited states, which release photons less energetic than those from the white dwarf. The mean free path of these photons in the Roche lobe is lmfp = 1/(ρ κbf(ρ, Tm)).
We use Kramers' formula for the bound-free opacity κbf, assuming metallicity Z = 0.02 and H abundance X = 0.75. For the case of uniform density in the Roche lobe of T Pyx described in section 3.2, with f = fc = 2.5 · 10^−2, this gives lmfp = 3 · 10^9 cm. The radius of the Roche lobe is of order rRL ∼ 10^10 cm. This means that, depending on the position in the Roche lobe, a significant fraction of the re-emitted photons are not re-absorbed and contribute to the cooling of the system; most of them, however, are not lost, so that our scenario is only approximately correct. However, κbf(ρ, Tm) ∝ Tm^−3.5 depends strongly on Tm, so that for a temperature T = 1.2 · 10^4 K, lmfp and rRL are of the same order. To obtain a temperature estimate, we assume transparency for photons from recombinations and that these are the main source of cooling.
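The order-of-magnitude comparison between lmfp and rRL can be checked numerically. The sketch below uses the standard Kramers bound-free opacity scaling with the order-unity Gaunt/guillotine factor omitted, and the Roche-lobe density scale ρ ∼ 10^−10 g cm^−3 quoted in this appendix; the exact prefactor is therefore only indicative.

```python
def kappa_bf(rho, T, X=0.75, Z=0.02):
    """Kramers bound-free opacity in cm^2/g (order-unity Gaunt and
    guillotine factors omitted)."""
    return 4.34e25 * Z * (1.0 + X) * rho * T**-3.5

rho = 1e-10  # g cm^-3, density scale in the Roche lobe of the white dwarf
for T in (1.0e4, 1.2e4):
    l_mfp = 1.0 / (rho * kappa_bf(rho, T))
    print(f"T = {T:.1e} K  ->  l_mfp ≈ {l_mfp:.1e} cm")
```

At T = 10^4 K this gives lmfp of a few 10^9 cm, and at T = 1.2 · 10^4 K it grows to ∼10^10 cm, comparable to rRL, reproducing the T^−3.5 sensitivity noted in the text.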
We denote by εabs(r) the energy per unit time and volume absorbed from the incoming radiation, and by εem(r, T) = nH+(r) ne(r) Λ(T) the energy per unit time and volume emitted by the material, where Λ(T) is the cooling function. We use the values tabulated by Sutherland & Dopita (1993) for solar composition ([Fe/H] = 0). This cooling function was computed under conditions valid for the interstellar medium rather than a stellar environment with considerably higher density (of order ρRL ∼ 10^−10 g cm^−3 in the Roche lobe of the white dwarf); however, in both cases the main contribution to the cooling function at T ∼ 10^4 K is from hydrogen recombination to excited states (see e.g. Dalgarno & McCray 1972), so that the predicted cooling rate is still approximately correct. Figure 8 of Sutherland & Dopita (1993) shows the cooling function for a large range of temperatures and various chemical compositions.
We solve the equation εem(r, T) = εabs(r) in the variable T for every r, using the values of dNν(r)/dr, nH+(r), and ne(r) for the uniform density case for T Pyx with f = fc. This temperature profile is not self-consistent, as εabs has been obtained by assuming a uniform Tm = 10^4 K profile, and it will not reproduce in detail the real profile in the Roche lobe. The temperature maximum in figure C1 is located where the bulk of the radiation is absorbed by the material in the Roche lobe; here a significant number of atoms are neutral and the cooling is most efficient.
Our results show that the temperature in the Roche lobe, though not uniform, does not deviate significantly from 10^4 K. This is due to the fact that the cooling function Λ(T) varies very steeply with T for T ∼ 10^4 K (see again figure 8 of Sutherland & Dopita 1993): a temperature profile that spans a range of a few 10^3 K suffices to describe regions with significantly different rates of emitted energy. On the other hand, the recombination coefficients αi and the fraction F(ρ) of H atoms that are not collisionally ionized do not depend on T in a similar manner, and they can safely be computed at the uniform temperature Tm = 10^4 K. This is why we have not considered the effects of a temperature gradient in the Roche lobe. Solving equations (5)–(11) of section 3.1 with the higher value T = 1.2 · 10^4 K gives very similar results for the value of fc and for the temperature profile of figure C1.

Figure C1. Temperature profile obtained by imposing εem(r, T) = εabs(r) for the case of uniform density for T Pyx with f = fc.
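The insensitivity of T to the local heating rate can be illustrated with a toy steep cooling law. Here Λ(T) ∝ T^p with an assumed slope p = 8 near 10^4 K, a stand-in for the steep rise of the tabulated Sutherland & Dopita function rather than a fit to it: inverting the balance εem = εabs shows that three decades of variation in the heating rate move T by less than a factor of ∼2.5.

```python
# Toy steep cooling law: Lambda(T) = Lambda0 * (T / 1e4 K)**p, p = 8 (assumed).
# The balance n_e * n_H+ * Lambda(T) = eps_abs then inverts analytically.
p = 8.0

def T_balance(eps_ratio):
    """Equilibrium T for a given ratio eps_abs / (n_e n_H+ Lambda0)."""
    return 1e4 * eps_ratio ** (1.0 / p)

for ratio in (0.1, 1.0, 10.0, 100.0):
    print(f"heating ratio {ratio:6.1f} -> T ≈ {T_balance(ratio):7.0f} K")
```

Even a factor 1000 spread in the heating rate keeps T within roughly (0.75–1.8) · 10^4 K, which is why the temperature profile spans only a few 10^3 K.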
Although we have focused the discussion of this appendix on just the case of section 3, a similar argument holds for the assumption Tm = 10^4 K of appendix B. In that case, almost all of the photons originating from recombination to excited states contribute to the cooling of the system, because the depth zirr ∼ 10^7 cm reached by the irradiation lies at low optical depth, as can be seen from the data in the NextGen model (τ ∼ 10^−4). We conclude that for both problems Tm ∼ 10^4 K is an acceptable approximation, and the temperature gradient is not very significant.
C2 Metals in the Roche lobe of the white dwarf
In section 3.1, we neglected the absorption of photons originating at the surface of the white dwarf by elements heavier than He. We discuss this assumption here.
Inspection of the solutions of equations (5)–(11) in the β = 0.1 cases described in section 3.2 shows that, in all of these cases, most of the atoms, whether of H or He, are ionized. In this situation, the number of photons per unit volume and time absorbed by the species X, equal to the number of recombinations that take place, is approximated by nX+ ne αX, where αX is the recombination coefficient of the species X. The relevance of the species X to the absorption therefore depends on the value of the product nX+ αX. Assuming a metallicity of order 10^−2, the total number density of the heavy elements with atomic number Z ≥ 6 is of order 10^−3 times that of the H atoms. Some recombination coefficients αX have been tabulated, including those of the most common hydrogenic atoms (Storey & Hummer 1995). They are generally higher than the H recombination coefficient αH, but most often well within a factor of 10^2 of it. We conclude that the contribution of the heavy elements to the absorption is of order 10^−1, or lower, compared to that of the H atoms. It is therefore possible to neglect it in a schematic model such as that of section 3.1.
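The bookkeeping behind this conclusion is a one-line product of the two ratios just quoted; the sketch below spells it out (both ratios are the order-of-magnitude values from the text, not computed quantities).

```python
# Relative absorption by heavy elements versus hydrogen, using the
# order-of-magnitude ratios quoted in the text.
n_ratio = 1e-3       # n(Z >= 6) / n(H), from metallicity ~ 1e-2
alpha_ratio = 1e2    # generous upper bound on alpha_X / alpha_H
metal_contribution = n_ratio * alpha_ratio
print(metal_contribution)  # at most ~0.1 of the hydrogen absorption
```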
"year": 2015,
"sha1": "bdb5a53213b4dd3e52d126b3c45de4cfab6bce62",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/449/1/25/4132694/stv265.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "bdb5a53213b4dd3e52d126b3c45de4cfab6bce62",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Phosphorylation of Akt and ERK1/2 Is Required for VEGF-A/VEGFR2-Induced Proliferation and Migration of Lymphatic Endothelium
There is growing evidence that vascular endothelial growth factor-A (VEGF-A), a ligand of the receptor tyrosine kinases VEGFR1 and VEGFR2, promotes lymphangiogenesis. However, the underlying mechanisms by which VEGF-A induces the growth of lymphatic vessels remain poorly defined. Here we report that VEGFR2, not VEGFR1, is the primary receptor regulating VEGF-A-induced lymphangiogenesis. We show that specific inhibition of VEGF-A/VEGFR2 signaling with the fully human monoclonal antibody r84 significantly inhibits lymphangiogenesis in MDA-MB-231 tumors. In vitro experiments with primary human dermal lymphatic endothelial cells (LECs) demonstrate that blocking VEGF-A activation of VEGFR2, not VEGFR1, significantly inhibits VEGF-A-induced proliferation and migration of LECs. We show that VEGF-A stimulation of LECs leads to the phosphorylation of VEGFR2 (Tyr 951, 1054, 1059, 1175, and 1214) which subsequently triggers PKC dependent phosphorylation of ERK1/2 and PI3-K dependent phosphorylation of Akt. Additionally, we demonstrate that inhibitors that suppress the phosphorylation of ERK1/2 and Akt significantly block VEGF-A-induced proliferation and migration of LECs. Together, these results shed light on the mechanisms regulating VEGF-A-induced proliferation and migration of LECs, reveal that VEGFR2 is the primary signaling VEGF-A receptor on lymphatic endothelium, and suggest that therapeutic agents targeting the VEGF-A/VEGFR2 axis could be useful in blocking the pathological formation of lymphatic vessels.
Introduction
Lymphatic vessels are required for the absorption of intestinal lipids, transport of immune cells, and return of tissue fluid and macromolecules to the blood vascular system [1]. Impaired function of the lymphatic system or an insufficient number of lymphatic vessels can cause the accumulation of fluid and protein in tissues and result in the debilitating disorder lymphedema [2]. Conversely, new lymphatic vessels form in many pathological settings and participate in the progression of several human diseases [2]. These observations have fueled intense research efforts to identify the molecular mechanisms regulating lymphangiogenesis so that therapies can be developed to promote or inhibit this process.
The study of lymphangiogenesis gained momentum following the discovery of the first lymphatic growth factor, vascular endothelial growth factor (VEGF)-C. VEGF-C is indispensable for the proper development of the lymphatic system in several animal models and induces inflammatory and tumor lymphangiogenesis [3,4,5,6,7,8].
Although VEGF-C is a robust lymphatic growth factor, it does not act alone. Other members of the VEGF family were recently shown to stimulate the growth of lymphatics [7]. The most prominent member of this family is VEGF-A, a ligand of the receptor tyrosine kinases VEGFR1 and VEGFR2 [9].
VEGF-A is a crucial regulator of embryonic and pathological hemangiogenesis. Inactivation of a single allele of VEGF-A in mice leads to lethality around embryonic day 11.5 because of severe defects in blood vessel development [10,11]. VEGF-A is also a major regulator of pathological hemangiogenesis that occurs in inflammatory diseases, diabetic retinopathy, and tumors [9]. VEGFR2 is the primary receptor controlling VEGF-A stimulated growth of blood vessels. Mechanistically, VEGF-A/VEGFR2 signaling induces hemangiogenesis by promoting blood endothelial cell (BEC) proliferation, survival, and migration in part through the activation of the mitogen-activated protein kinase/extracellular-signal-regulated kinase-1/2 (ERK1/2) and phosphatidylinositol 3-kinase (PI3-K)/Akt signal transduction pathways [9]. Other additional pathways regulating these cellular processes have been extensively studied and defined in BECs. In contrast, the mechanisms underlying VEGF-A-induced lymphangiogenesis remain poorly defined and controversial.
Interestingly, the in vivo response to VEGF-A is strikingly different for lymphatic and blood vessels. Adenoviral mediated delivery of VEGF-A to the ear skin of mice leads to the dramatic enlargement of lymphatic vessels and impairment in lymphatic vessel function [12,13]. Transgenic overexpression of VEGF-A in the skin of mice also causes lymphatic vessels to preferentially increase in caliber rather than number during settings of inflammation [14,15]. Conversely, VEGF-A expression in the skin of mice induces sprouting hemangiogenesis resulting in an increase in density of blood vessels [13]. This contrasting effect of VEGF-A on lymphatic and blood vessels raises the possibility that the mechanisms underlying VEGF-A-induced lymphangiogenesis are different than those underlying VEGF-A-induced hemangiogenesis.
It has recently been reported that VEGF-A directly promotes the proliferation and migration of lymphatic endothelial cells (LECs) [16,17,18,19,20,21]. Additionally, VEGF-A stimulates the phosphorylation of PLC-c, Akt and ERK1/2 in LECs [22,23,24]. However, the extent to which VEGFR1 and VEGFR2, both of which are expressed by LECs [12,13,21,25,26,27], contribute to these events has not been fully delineated. Furthermore, experiments with LECs have not included inhibitors of these molecules/pathways to define the functional significance they serve in promoting VEGF-A-induced processes.
The present study explores the function of VEGF-A/VEGFR2 signaling in promoting the proliferation and migration of LECs. To accomplish this, the novel anti-VEGF-A antibody r84 was used. r84 is a fully human monoclonal antibody that specifically binds VEGF-A and prevents it from activating VEGFR2, but not VEGFR1, in a dose-dependent manner [28]. Here we show for the first time that VEGF-A activation of VEGFR2 directly stimulates LEC proliferation and migration through the PI3-K and ERK1/2 signaling pathways. These experiments shed light on the mechanisms underlying VEGF-A-induced proliferation and migration of LECs and reveal that the circuitry of VEGF-A/VEGFR2 signaling is conserved between LECs and BECs.
Blocking VEGF-A activation of VEGFR2 is sufficient to suppress lymphangiogenesis
We previously reported that r84 significantly inhibits hemangiogenesis in MDA-MB-231 tumors [29]. However, we did not examine lymphangiogenesis in this study. To evaluate the effect of r84 on lymphangiogenesis, MDA-MB-231 tumors from our previous study were stained with an antibody against LYVE-1. LYVE-1 positive area was significantly lower in tumors from r84 treated mice (2.23 ± 0.986; n = 5) than in tumors from control IgG treated mice (7.03 ± 1.013; n = 6) (Fig. 1A-C). These data reveal that specifically blocking VEGF-A activation of VEGFR2 with r84 is sufficient to suppress lymphangiogenesis in vivo.
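As a sketch of the statistical comparison reported here, the pooled t statistic can be reconstructed from the summary values alone, assuming the quoted ± values are SEMs (the Statistical analysis section reports results as mean ± SEM):

```python
import math

# Reconstruct the unpaired (pooled) t statistic for the LYVE-1 comparison
# from the reported group summaries; assumes the quoted "±" values are SEMs.
m1, sem1, n1 = 7.03, 1.013, 6   # control IgG tumors
m2, sem2, n2 = 2.23, 0.986, 5   # r84-treated tumors

sd1 = sem1 * math.sqrt(n1)      # SEM -> sample standard deviation
sd2 = sem2 * math.sqrt(n2)
sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)  # pooled variance
t = (m1 - m2) / math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
df = n1 + n2 - 2
print(f"t = {t:.2f}, df = {df}")
```

The resulting t (≈3.4 on 9 degrees of freedom) clears the two-tailed 5% critical value of 2.262, consistent with the significant difference reported above.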
In vitro experiments were then performed with primary human LECs to evaluate the effect of r84 on VEGF-A-induced cellular processes required for lymphangiogenesis. Immunofluorescence staining for the lymphatic marker PROX1 demonstrated that our cultures consisted of a highly pure population of lymphatic endothelium (Fig. 2A-C). Additionally, reverse-transcription PCR confirmed LEC expression of VEGFR1 and VEGFR2 (data not shown).
The Cell Titer Blue assay revealed that VEGF-A significantly induced the proliferation of LECs (P < 0.05; Fig. 2D). VEGF-A-induced proliferation was not inhibited by a functional blocking antibody against VEGFR1 or by a non-specific control IgG (Fig. 2D). Conversely, blockade of VEGF-A activation of VEGFR2 with r84 significantly (P < 0.05) inhibited VEGF-A-induced proliferation of LECs (Fig. 2D).
We next examined the effect of r84 on LEC migration. Transwell migration assays demonstrated that VEGF-A significantly induced the migration of LECs (P < 0.05; Fig. 2E). VEGF-A-induced migration was resistant to a functional blocking antibody against VEGFR1 and to a non-specific control IgG, but was completely blocked by r84 (P < 0.05; Fig. 2E). These data indicate that VEGF-A activation of VEGFR2, not VEGFR1, directly drives LEC proliferation and migration.
VEGFR2 is the primary signaling VEGF-A receptor in LECs
To achieve a better understanding of how the VEGF-A/VEGFR2 axis promotes LEC proliferation and migration, we analyzed VEGF-A-induced signaling in LECs. VEGF-A triggers the auto-phosphorylation of VEGFR2 on several tyrosine residues that regulate its kinase activity and serve as docking sites for adapter proteins that promote specific signal transduction cascades (Fig. 3A). The phospho-tyrosine profile of VEGFR2 has not been examined previously for VEGF-A-stimulated LECs. Failure in the phosphorylation of one of these key tyrosine residues could dramatically impact kinase activity or downstream signaling in LECs. VEGF-A stimulation of LECs resulted in the phosphorylation of Tyr 951, 1054, 1059, 1175, and 1214 (Fig. 3B). These results indicate that the phospho-tyrosine profile of VEGFR2 is similar between VEGF-A stimulated LECs and BECs.

Figure 1. The entire area of each LYVE-1 stained tumor section was examined at low magnification and the percent of LYVE-1 positive area was determined for each field using NIS-Elements imaging software. Ten fields with the highest LYVE-1 positive percent area were averaged together to yield a final score for each tumor and group means were tested for significance by an unpaired Student's t-test. The percent of LYVE-1 positive area of control tumors (7.03 ± 1.013) was significantly greater than r84 treated tumors (2.23 ± 0.986). Asterisk = P = 0.0042. doi:10.1371/journal.pone.0028947.g001
VEGF-A promotes PKC dependent phosphorylation of ERK1/2 in LECs
Rapid activation of ERK1/2 in VEGF-A-stimulated BECs is controlled primarily by Protein Kinase C (PKC) rather than Ras [30]. To determine whether this circuitry was conserved in lymphatics, primary LECs were stimulated with VEGF-A in the presence of the PKC inhibitor GF109203X (GFX). Western blot analysis showed that ERK1/2 activation was completely blocked by PKC inhibition whereas Akt activation was unaffected (Fig. 4). These data indicate that the topology of the network driving early ERK1/2 activation is similar in BECs and LECs. Furthermore, PKC is not required for Akt phosphorylation.
Phosphorylation of ERK1/2 and Akt is required for VEGF-A-mediated LEC proliferation and migration
To determine the function ERK1/2 and Akt phosphorylation serves in VEGF-A-induced processes, LECs were stimulated with VEGF-A in the presence of either PD098059 or LY294002, inhibitors that selectively block MEK1 and PI3-K, respectively. Treatment of LECs with PD098059 (5 μM) completely blocked VEGF-A-mediated activation of ERK1/2 without affecting the phosphorylation of proteins upstream of MEK (Fig. 5A). Likewise, LY294002 (10 μM) specifically inhibited VEGF-A-induced phosphorylation of Akt but not ERK1/2 (Fig. 5B). The effect of PD098059 and LY294002 on VEGF-A-induced proliferation of LECs was then evaluated by the Cell Titer Blue assay. VEGF-A significantly induced LEC proliferation compared to the negative control (P < 0.05) and was not affected by the addition of DMSO (vehicle for both PD098059 and LY294002) to the media (Fig. 5C,D). PD098059 (5 μM) and LY294002 (10 μM) significantly inhibited VEGF-A-induced proliferation of LECs (Fig. 5C,D).

Figure 2 legend (continued): ...reduced-serum media (negative control), or with VEGF-A (100 ng/ml) in the presence or absence of r84 (500 molar excess), a functional blocking antibody against VEGFR1 (500 molar excess), or control IgG (500 molar excess). r84 blocked VEGF-A-induced proliferation/viability of LECs whereas the other antibodies had no effect. E: LECs were seeded in the upper chamber of a transwell insert and allowed to migrate overnight toward EGM-2MV (positive control), reduced-serum media (negative control), or VEGF-A (100 ng/ml) in the presence or absence of r84 (500 molar excess), a functional blocking antibody against VEGFR1 (500 molar excess), or control IgG (500 molar excess). The number of LECs that migrated to the lower chamber was counted and normalized to the positive control. r84 blocked VEGF-A-induced migration whereas the other antibodies had no effect. For panels C and D, significance tested by ANOVA. Asterisk P < 0.05 compared to VEGF-A. ns = not significant compared to VEGF-A. doi:10.1371/journal.pone.0028947.g002
Transwell migration assays were then performed to determine the effect of PD098059 and LY294002 on VEGF-A-induced migration. VEGF-A significantly stimulated LEC migration (P < 0.05) and was not affected by the presence of DMSO in the media. However, VEGF-A-induced migration was suppressed by PD098059 (5 μM) and LY294002 (10 μM; Fig. 5E,F).
Discussion
Exhaustive investigation of the effect of VEGF-A on BECs has helped elucidate the signaling pathways regulating VEGF-Ainduced hemangiogenesis. In contrast, the mechanisms underlying VEGF-A-induced lymphangiogenesis have not been widely examined and are poorly defined. The present study demonstrates for the first time that VEGF-A activation of VEGFR2 directly stimulates ERK1/2 and PI3-K/Akt mediated proliferation and migration of LECs. We propose that these cellular processes function together to drive VEGF-A-induced lymphangiogenesis.
To determine the role VEGF-A activation of VEGFR2 serves in lymphangiogenesis we used the monoclonal anti-VEGF-A antibody r84, which specifically blocks mouse and human VEGF-A activation of VEGFR2, but not VEGFR1. r84 significantly suppressed lymphangiogenesis in vivo, suggesting that blockade of VEGF-A activation of VEGFR2 is sufficient to inhibit lymphangiogenesis. However, it is unclear whether r84 directly inhibits lymphangiogenesis in vivo by preventing VEGF-A from activating VEGFR2 on LECs or indirectly suppresses lymphangiogenesis by affecting other cell types in the tumor microenvironment. Macrophages are reported to promote lymphangiogenesis and their recruitment to the tumor microenvironment is suppressed by r84 [29]. Additionally, fluid lost by leaky blood vessels in the tumor microenvironment may stimulate lymphangiogenesis. Anti-VEGF-A therapy reduces the permeability of blood vessels, thereby silencing this potential trigger of lymphangiogenesis. Unfortunately, uncoupling VEGF-A's effect on multiple cell types is technically vexing. Future experiments using Cre/lox technology to specifically ablate Vegfr2 in LECs may help elucidate the extent to which this receptor directly promotes VEGF-A-induced lymphangiogenesis.
Although VEGF-A was reported to stimulate LEC proliferation and migration, it was previously unclear whether both VEGFR1 and VEGFR2 regulated these processes. We show that specific blockade of VEGF-A's interaction with VEGFR2 inhibits LEC proliferation and migration. Conversely, inhibition of VEGFR1 does not affect VEGF-A-induced proliferation or migration of LECs. To our knowledge, this is the first time the function of VEGFR1 has been examined in cultured LECs. These data reveal that VEGF-A activation of VEGFR2, not VEGFR1, directly promotes cellular processes required for lymphangiogenesis.
To identify the mechanisms by which VEGF-A activation of VEGFR2 stimulated LEC proliferation and migration, we analyzed VEGF-A-induced signaling pathways in LECs. We first focused on VEGF-A-induced auto-phosphorylation of VEGFR2. Tyrosines 951, 1054, 1059, 1175, and 1214 of VEGFR2 are phosphorylated following VEGF-A stimulation of BECs. We show for the first time that the same tyrosine residues become phosphorylated after treating LECs with VEGF-A. Of these tyrosines, Tyr 1175 may be the most important. Tyr 1175 is essential for proper VEGFR2 signaling in BECs. Knock-in mice in which Tyr 1173 (equivalent to Tyr 1175 in human VEGFR2) has been substituted with phenylalanine exhibit vascular defects similar to Vegfr2 null mice [31]. Surprisingly, replacement of Tyr 1212 (equivalent to Tyr 1214 in human VEGFR2) with phenylalanine does not affect vascular development in mice [31]. Phosphorylation of Tyr 1175 leads to the recruitment and activation of PLC-γ in BECs [32]. Subsequently PLC-γ stimulates ERK1/2 activation via PKC [30]. The adaptor molecule Shb also binds to Tyr 1175 and is required for VEGF-A-induced activation of PI3-K signaling in BECs [33]. We show that VEGF-A/VEGFR2 activation in LECs stimulates PKC dependent phosphorylation of ERK1/2 and PI3-K dependent phosphorylation of Akt. These signaling events may be due to signaling initiated from Tyr 1175 of VEGFR2 in LECs.
The mutant phenotypes of several lines of genetically modified mice have recently implicated ERK1/2 as being an important signaling molecule in lymphangiogenesis. Mice expressing a constitutively active form of Hras exhibit lymphatic hyperplasia which is thought to be due to sustained activation of ERK1/2 [23]. Additionally, Spred1/2 double-knockout mice display hyperplastic lymphatics most likely because of dysregulation of ERK1/2 signaling [34]. Although these data suggest that ERK1/2 has a crucial function in the development of the lymphatic system, the precise role ERK1/2 serves in LECs was not previously defined. The phosphorylation of ERK1/2 is required for growth factor-induced proliferation of several cell types. We show that ERK1/2 phosphorylation is required for VEGF-A-induced proliferation of LECs. ERK1/2 can also influence cell migration by phosphorylating myosin light chain kinase [35]. We show that blockade of ERK1/2 phosphorylation inhibits VEGF-A-induced migration of LECs. Interestingly, inhibition of ERK1/2 activation does not block VEGF-A-induced migration of BECs [36]. This discrepancy may reflect an underlying difference between BECs and LECs.

Figure 3. VEGFR2, not VEGFR1, regulates VEGF-A-induced activation of PLC-γ, ERK1/2, and Akt in LECs. A: Diagram adapted from [43] depicting phosphorylation sites of the intracellular domain of VEGFR2. B,C: Lysates of primary human dermal LECs were made after stimulating LECs with recombinant human VEGF-A (100 ng/ml) for 2, 5, or 10 minutes. The activation of VEGFR2, PLC-γ, ERK1/2 and Akt was detected by Western blotting using phospho-specific antibodies. D: Lysates were generated of LECs stimulated with VEGF-A (100 ng/ml, 10 minutes) in the presence or absence of r84 (500 molar excess) or control IgG (500 molar excess). The activation of VEGFR2, PLC-γ, ERK1/2 and Akt was detected by Western blotting. r84 suppressed phosphorylation of PLC-γ, ERK1/2, and Akt in LECs. doi:10.1371/journal.pone.0028947.g003
PI3-K/Akt signaling is thought to be important in lymphangiogenesis. VEGFR3 promotes the survival of LECs by phosphorylating Akt in a PI3-K dependent fashion [37]. Furthermore, Akt1 mutant mice exhibit a hypoplastic network of lymphatic vessels [24]. We show that the phosphorylation of Akt is required for promoting VEGF-A/VEGFR2-induced viability and migration of LECs. This is most likely due to the activation of pro-survival pathways and endothelial nitric oxide synthase (eNOS) in LECs. In BECs, eNOS promotes VEGF-A-induced migration in a PI3-K/Akt dependent manner [38,39]. PI3-K/Akt signaling also stimulates the phosphorylation of eNOS in LECs [38,40]. Interestingly, guanylyl cyclase (GC), the only known NO receptor, is required for LEC migration [41]. These observations suggest a mechanism by which PI3-K/Akt signaling could regulate VEGF-A-induced migration of LECs.
In conclusion, we show for the first time that VEGF-A activation of VEGFR2, not VEGFR1, directly drives LEC proliferation and migration via the PI3-K and ERK1/2 signaling pathways. These data reveal that overlapping signaling pathways drive VEGF-A-induced cellular processes in BECs and LECs. Therefore, therapeutic agents targeting the VEGF-A/VEGFR2 axis could be useful to prevent the pathological formation of blood and lymphatic vessels.
Animal Experiments
Experiments performed with mice were performed in accordance with a protocol (APN 0974-07-05-1) approved by the IACUC of the University of Texas Southwestern Medical Center.
Cell culture
Primary adult human dermal lymphatic endothelial cells (LECs) were purchased from LONZA (CC-2810). The certificate of analysis supplied by LONZA for each vial of cells indicated that greater than 95% of the cells were LECs (CD31 and podoplanin double-positive), as determined by FACS. Cells were cultured on rat-tail collagen 1 (50 μg/ml) or 1% gelatin coated plasticware in EGM-2MV media (LONZA CC-3125). Cells were not used past passage 6.
Immunofluorescence staining of frozen sections
Frozen sections were fixed in acetone at −20°C and then briefly air-dried. PBS was used to dissolve OCT and then samples were blocked with 20% Aquablock (East Coast Biologics, PP82-P0691) in TBST. Primary antibody diluted in TBST+5% BSA was added and allowed to incubate overnight at 4°C. Slides were then washed with PBS+0.05% Tween20 and incubated for one hour with the appropriate secondary antibody (Jackson ImmunoResearch) diluted in TBST+5% BSA. Following another round of washes with PBS+0.05% Tween20, coverslips were mounted with ProLong Gold with DAPI (Invitrogen, P36931). Slides were analyzed using a Nikon Eclipse E600 microscope and images captured using NIS-Elements imaging software.
Immunocytochemistry
LECs were cultured in 4-well chamber slides. Cells were then fixed with methanol, washed with PBS, permeabilized with PBS+0.1% TX-100, and then blocked with TBST+20% Aquablock. Antibodies diluted in TBST+5% BSA were added and allowed to incubate overnight at 4°C. Cells were then washed with PBS and incubated overnight with the appropriate secondary antibodies. Following another round of washes with PBS, coverslips were mounted with ProLong Gold with DAPI.

Figure 5. ERK1/2 and Akt regulate VEGF-A-induced proliferation and migration of LECs. A,B: Lysates were generated of LECs 1) maintained in starvation media, 2) treated with VEGF-A (100 ng/ml, 10 minutes), 3) pretreated with DMSO (Veh) in starvation media for one hour then stimulated with VEGF-A (100 ng/ml, 10 minutes), or 4) pretreated with the MEK inhibitor PD098059 (PD) or PI3-K inhibitor LY294002 (LY) for one hour then stimulated with VEGF-A (100 ng/ml, 10 minutes). PLC-γ, ERK1/2, and Akt activation was detected by Western blotting. C,D: Cell viability/proliferation was measured with Cell Titer Blue reagent after culturing LECs for 48 hours in EGM-2MV media (positive control), reduced-serum media (negative control), or with VEGF-A (100 ng/ml) in the presence of DMSO (Veh), PD, or LY. PD and LY blocked VEGF-A-induced proliferation/viability of LECs. E,F: LECs were seeded in the upper chamber of a transwell insert and allowed to migrate overnight toward EGM-2MV (positive control), reduced-serum media (negative control), or VEGF-A (100 ng/ml) in the presence or absence of DMSO (Veh), PD, or LY. DMSO, PD, and LY were also included in the upper chamber of the transwell insert. The number of LECs that migrated to the lower chamber were counted and normalized to the positive control. PD and LY inhibited LEC migration toward VEGF-A. For panels B, C, E, and F, significance was tested by ANOVA. Asterisk P < 0.05 compared to VEGF-A. ns = not significant compared to VEGF-A.
doi:10.1371/journal.pone.0028947.g005

Proliferation/Viability assays

LEC proliferation/viability was evaluated by the Cell Titer Blue assay (Promega G8081). This assay is based on the ability of living cells to convert the non-fluorescent compound resazurin to the fluorescent compound resorufin. LECs (3,000 cells per well) in EGM-2MV were seeded into the wells of a Falcon Optilux Black/Clear bottom 96-well plate. The next day, cells were serum-starved for 4 hours with OptiMEM reduced-serum media (Invitrogen 11058-021). During this time, recombinant human VEGF-A 165 (R & D Systems 293-VE) was pre-incubated with r84, control IgG, or anti-VEGFR1 antibody for one hour. For the inhibitor experiments, LECs were treated with PD098059, LY294002, or DMSO while being serum starved. Next, EGM-2MV (positive control), OptiMEM (negative control), or recombinant human VEGF-A 165 in the presence or absence of r84, anti-VEGFR1, control IgG, DMSO, PD098059 or LY294002 was added to the appropriate wells. After culturing cells for 48 hours at 37°C, 20 μl of Cell Titer Blue reagent was added to the wells and one hour later fluorescence was measured with a plate reader. The assay was run with 6 replicates for each experimental condition and performed at least twice.
Migration assays
A modified Boyden chamber assay was performed to assess LEC migration. Cell culture inserts (8.0 μm pore size) were placed over wells of a 24-well tissue culture plate containing 500 μl of either EGM-2MV (positive control), OptiMEM (negative control), or recombinant human VEGF-A 165 in the presence or absence of r84, control IgG, or anti-VEGFR1 antibody. Next, 200 μl of LECs (150,000 cells/ml) in OptiMEM reduced-serum media were seeded in the upper chamber of each insert and allowed to migrate overnight. For the inhibitor experiments, the indicated amounts of PD098059 and LY294002 were added to the upper and lower chambers. Cells that did not migrate were removed from the upper chamber with a cotton swab. The membrane was then fixed and stained with the Diff-Quik stain kit (Dade Behring B4132-1A). The number of migrated cells was counted for 4 areas and values were normalized to the positive control. The assay was performed in triplicate and repeated twice.
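The normalization step described above amounts to averaging the four field counts per condition and dividing by the positive-control mean. A minimal sketch with hypothetical field counts (illustrative numbers only, not data from the paper):

```python
# Hypothetical migrated-cell counts for 4 fields per condition.
counts = {
    "EGM-2MV (pos ctrl)": [52, 48, 55, 50],
    "OptiMEM (neg ctrl)": [10, 8, 12, 9],
    "VEGF-A":             [40, 44, 38, 42],
    "VEGF-A + r84":       [12, 11, 9, 13],
}
mean = {cond: sum(v) / len(v) for cond, v in counts.items()}
norm = {cond: m / mean["EGM-2MV (pos ctrl)"] for cond, m in mean.items()}
for cond, value in norm.items():
    print(f"{cond:20s} {value:.2f}")
```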
Western blot analysis
LECs were cultured on 6-well plates until near confluence, serum-starved overnight with OptiMEM reduced-serum media, and then stimulated with recombinant human VEGF-A 165 in the presence or absence of r84, control IgG, DMSO, PD098059 or LY294002. LECs were pre-treated with DMSO, PD098059, or LY294002 for one hour prior to stimulation. Following stimulation, cells were scraped in lysis buffer [mPER (Thermoscientific #78501) + Protease Inhibitor (Thermoscientific #78425) + Phosphatase Inhibitors I and II (Sigma-Aldrich P2850 and P5726)], spun for 10 minutes at 4°C, and then supernatants were transferred to new tubes. Equal amounts of total protein were separated by SDS-PAGE then transferred to PVDF membranes. Membranes were blocked for 30 minutes at room temperature with either TBST+5% BSA or TBST+5% non-fat milk, incubated overnight at 4°C with phospho-specific primary antibodies, washed with TBST, and then incubated for one hour at room temperature with the appropriate HRP-conjugated secondary antibodies. Bound antibodies were detected with the SuperSignal West Dura Extended Duration Substrate detection system (Thermoscientific #34076). Membranes were stripped then reprobed with antibodies to detect total levels of proteins.
Statistical analysis
Data were analyzed using GraphPad Prism statistical analysis software (Version 5.0). All results are expressed as mean ± SEM. Significance was tested by unpaired Student's t-test or ANOVA, as indicated in the figure legends. Data were considered significant at P < 0.05.
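As a sketch of the unpaired t-test named above (the paper itself used GraphPad Prism), the t statistic can be computed directly in pure Python; the p-value would then come from the t distribution with the returned degrees of freedom:

```python
# Equal-variance (Student's) unpaired t statistic, a minimal sketch of the
# test described in the statistics paragraph above.
from math import sqrt
from statistics import mean, variance  # variance() is the sample variance (n-1)

def unpaired_t(a, b):
    """Return (t statistic, degrees of freedom) for two independent samples."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2

# Toy example with made-up measurements:
t, df = unpaired_t([1, 2, 3], [2, 3, 4])
print(t, df)
```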
Ethics Statement
Experiments performed with mice were performed in accordance with a protocol (APN 0974-07-05-1) approved by the IACUC of the University of Texas Southwestern Medical Center. Mice were housed in isolation cages located in a pathogen-free facility in the NG building on the north campus of UT Southwestern. UT Southwestern has a letter of assurance on file with the Public Health Service, is registered as a research facility with the USDA, and is certified by the AAALAC. Our laboratory participates in voluntary inspections by IACUC and ARC staff at least twice per year. Mice were euthanized at the end of the proposed research or if they were deemed to be suffering. The method of euthanasia consisted of an inhalant overdose of carbon dioxide or isoflurane followed by cervical dislocation. These methods are consistent with the recommendations of the American Veterinary Medical Association (AVMA) Guidelines on Euthanasia.
Fifteen new nucleotide substitutions in variants of human papillomavirus 18 in Korea
High-risk human papillomavirus (HPV) infection is an essential factor for the development of cervical cancer. HPV18 is the second most common carcinogenic HPV type following HPV16, but the lineages of HPV18 have been less well studied than those of HPV16. The purpose of this study was to analyze the nucleotide variants in the E6, E7, and L1 genes of HPV18, to assess the prevalence of HPV18 variants in Korea and to explore the relationship between HPV18 genetic variants and the risk for cervical cancer. A total of 170 DNA samples from HPV18-positive cervical specimens were collected from women admitted to a secondary referral hospital located in Seoul. Among them, the lineages of 97 samples could be successfully determined according to the historical nomenclature. All the studied HPV18 variants were lineage A. Sublineages A1 and A4 comprised 91.7% (89/97) and 1.0% (1/97), respectively. Sublineages other than A1 or A4 comprised 7.2% (7/97). We identified 15 new nucleotide substitutions among 44 nucleotide substitutions: C158T, T317G, T443G, A560G, A5467G, A5560C, A5678C, A6155G, G6462A, T6650G, G6701A, T6809C, A6823G, T6941C and T6953C. Among them, 6 substitutions at positions 317, 443, 5467, 5560, 6462, and 6823 resulted in amino acid changes (E6: F71L and N113K; L1: H13R, H44P, A345T, and N465S, respectively). The pathologic results were classified as normal in 25.8% (25/97) of the women, atypical squamous cells of undetermined significance (ASCUS) in 7.2% (7/97), cervical intraepithelial neoplasia (CIN) 1 in 36.1% (35/97), CIN2/3 in 19.6% (18/97), and carcinoma in 12.4% (12/97). There was no significant association between the HPV18 sublineages and the severity of the pathologic lesion or disease progression. This study is the first to analyze the distribution of HPV18 variants in Korean women and to associate the results with pathologic findings.
Although the HPV18 variants had no significant effect on the degree and progression of the disease, the newly discovered nonsynonymous mutation in L1 might serve as a database to determine vaccine efficacy in Korean women.
Introduction
Cervical cancer is the fourth most common cancer among all malignancies in females worldwide and the seventh most common cancer in Korea. According to the World Health Organization's (WHO's) GLOBOCAN project in 2018, 569,847 new cases occur and 311,365 people die annually due to cervical cancer worldwide; in Korea, 3348 new cases and 1029 deaths are reported annually [1]. Epidemiologic, genetic, immunological and environmental factors are involved in carcinogenesis, and persistent high-risk human papillomavirus (HPV) infection is an essential factor for the development of cervical cancer. The most deleterious type is HPV16, and the second is HPV18; together these two types are associated with approximately 70% of cervical cancers [2,3].
HPV is a small double-stranded DNA virus with an 8kb genome containing early expressed genes (E1, E2, E4, E5, E6, and E7), late genes (L1 and L2) and a long control region (LCR) [4]. The capsid proteins L1 and L2 play critical roles in viral structure formation and the infection process. In particular, purified L1, the major capsid protein, can form empty shells that resemble HPV, which are called virus-like particles (VLPs). These VLPs have hypervariable immunodominant loop structures on the surfaces of the virions that induce humoral immunity without oncogenic activity and are thus extensively used in HPV prophylactic vaccines [5][6][7][8]. E6 and E7 are major oncogenes that are highly expressed in tumors and are related to cellular immortalization, malignant transformation, and carcinogenesis. Based on these roles, proteins E6 and E7 are generally regarded as ideal targets for the development of therapeutic HPV vaccines [6,9].
Over 200 HPV types have been identified based on L1 sequences. HPV18 variants were originally grouped into European (E), Asian-Amerindian (AA) or African (AFR) lineages according to E6-E7, L1, and/or LCR sequences [10][11][12][13][14]. This classification has been superseded by a whole viral genome sequencing approach that has defined three major lineages (A, B, and C) and additional sublineages (A1 to A5 and B1 to B3) [15] that can be translated from the historical nomenclature (A1 and A2 are AA, A3 to A5 are E and B/C are AFR) [16,17]. In addition, a recent study published in China proposed new A6 to A8 sublineages and classified them as the E lineage [18].
Methods
From 2010 to 2017, 7992 women admitted to the Seoul National University Boramae Medical Center were tested for cervical HPV genotype more than once. Among them, 3926 (3926/7992 = 49.1%) were positive for HPV, and 170 (170/3926 = 4.3%) were positive for type 18 and negative for other types. HPV detection and typing were performed using a liquid bead microarray, namely, the GeneFinder HPV PCR Kit (Infopia, Seoul, Korea).
Amplification and sequencing of HPV18 E6, E7, and L1 genes were performed using type-specific primers, which are shown in Table 1 [11,24,25]. The cycling conditions were as follows: 5 min at 95°C for initial denaturation; 45 s at 94°C, 45 s at 55°C, and 60 s at 72°C for 35 cycles; and 10 min at 72°C for final elongation. Amplicons were visualized on 2.0% agarose gels stained with ethidium bromide under UV transillumination. PCR products were automatically sequenced using the BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems, Foster City, CA, USA) and an ABI 3730xl DNA analyzer (Applied Biosystems, Foster City, CA, USA) according to the manufacturer's instructions. E7 sequencing had a high success rate (165/170), so the sequencing was completed with only 1 primer set (=1 trial). However, E6 and L1 sequencing had low success rates, so we attempted 3 trials each (Table 1). All data were confirmed by repeating the PCR amplification and sequence analysis at least twice.
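As a quick sanity check of the cycling profile quoted above, the nominal run time (ignoring temperature ramping between steps) works out to roughly 102 minutes:

```python
# Back-of-the-envelope runtime for the quoted PCR cycling profile,
# ignoring ramp times between temperatures.
initial_denaturation = 5 * 60  # 5 min at 95 °C, in seconds
per_cycle = 45 + 45 + 60       # 94 °C / 55 °C / 72 °C steps, seconds
cycles = 35
final_elongation = 10 * 60     # 10 min at 72 °C

total_s = initial_denaturation + cycles * per_cycle + final_elongation
print(total_s / 60)  # total minutes
```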
Nucleotide sequences were translated with the Translate tool of ExPASy (http://web.expasy.org/translate/) for the determination of amino acid changes. PSIPRED v.4.0 (http://bioinf.cs.ucl.ac.uk/psipred/) was used for secondary structure prediction, as it provides a simple and accurate secondary structure prediction method.
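The translation step can be sketched in a few lines of pure Python using the standard genetic code; the codons below are hypothetical placeholders, not the actual HPV18 sequence, chosen only to reproduce an F→L change of the kind reported for E6 (F71L):

```python
# Standard genetic code (NCBI translation table 1), built from the
# conventional TCAG ordering of codon positions.
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {a + b + c: AA[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def translate(orf):
    """Translate an in-frame DNA string codon by codon ('*' = stop)."""
    return "".join(CODON_TABLE[orf[i:i + 3]] for i in range(0, len(orf) - 2, 3))

# Hypothetical two-codon fragment: a T->C substitution in the first codon
# position changes Phe (F) to Leu (L), i.e. a nonsynonymous F->L change.
wild_type = "TTTAAT"  # F N
mutant = "CTTAAT"     # L N
print(translate(wild_type), "->", translate(mutant))
```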
Based on cytological and histological evaluations of fresh specimens, the cervical lesions were graded according to their severity as follows: normal, atypical squamous cells of undetermined significance (ASCUS), low-grade squamous intraepithelial lesion (LSIL), high-grade squamous intraepithelial lesion (HSIL), cervical intraepithelial neoplasia grade 1, 2 or 3 (CIN1, 2 or 3) and cervical cancer. The histological diagnosis of each case was reviewed by an experienced pathologist who was unaware of the HPV testing results.
Mann-Whitney, Fisher exact and linear by linear association tests were used for comparisons between AA and E lineages. Variables affecting cervical cancer risk were analyzed by a logistic regression model. All statistical analyses were carried out with SPSS, version 22.0 (IBM, Armonk, NY, USA).
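The analyses above were run in SPSS; as a rough sketch of one of the named tests, here is a pure-Python two-sided Fisher exact test applied to a hypothetical 2×2 table (lineage × lesion grade, made-up counts):

```python
# Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]]:
# the p-value is the sum of all hypergeometric outcomes no more probable
# than the observed one (the convention used by most statistics packages).
from math import comb

def fisher_exact_2x2(a, b, c, d):
    r1, n = a + b, a + b + c + d  # first row total, grand total
    c1 = a + c                    # first column total
    denom = comb(n, c1)

    def p(k):  # hypergeometric probability of k in the top-left cell
        return comb(r1, k) * comb(n - r1, c1 - k) / denom

    p_obs = p(a)
    return sum(p(k)
               for k in range(max(0, c1 - (n - r1)), min(r1, c1) + 1)
               if p(k) <= p_obs + 1e-12)

# Hypothetical counts, e.g. lineage A1 vs other by low/high-grade lesion:
print(fisher_exact_2x2(3, 1, 1, 3))
```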
The phylogenetic tree analysis was performed using the E6-E7-L1 genes of the 97 Korean HPV18 isolates and 28 previously reported variants (Fig. 2). According to the previous E6-E7 sequence-based nomenclature [10,11], all the previous AA lineages (BRM01~28, 89 samples in total) matched the updated A1 sublineage sequences. However, except for one A4 sublineage (BRM32, 1 sample), the other seven variants (BRM29T

There was no association between HPV18 lineages and other types of HPV co-infection (Table 2). Additional infections by other types were not related to the development of cancer (data not shown).
Of the 97 women involved in this study, 54 women underwent an additional follow-up pathological evaluation. Twenty-two women had worsening lesions confirmed by serial pathological tests. There was no association between HPV18 lineages and disease progression (P = .773) (Table 2).
Discussion
From 2010 to 2017, almost half (49.1%) of the requested cervical HPV genotyping tests were positive for some type of HPV at a secondary referral hospital in Seoul, Korea. Among the positive results, the frequency of HPV18 single positivity was 4.3% (170/3926). These frequencies are comparable to other reports in Korea (total HPV prevalence, 16.7%~40.7%; HPV18 prevalence among HPV-positive women, 0.5%~3.6%) [31][32][33]. The distributions of HPV variants differ by geographic origin, evolutionary dynamics, and pathogenicity. In our population, 6 substitutions, namely, C287G in E6 and G5503A, C5701G, C6460G, C6625G, and C6842G in L1, were found in all HPV18 variants. These 6 substitutions were also found in all HPV18 isolates in southeastern and northeastern China [11,18], regions adjacent to Korea, but in only 40% of HPV18 isolates in southwest and central China [24]. These findings support the geographical distribution of HPV lineages.
Previous studies reported that the risk of developing high-grade CIN is significantly increased with the non-European variants [13,34]. One study [35] reported that the AA and European variants had significantly stronger associations with pre-invasive lesions than the African variants. In contrast, other studies showed no significant difference in pre-invasive lesion risk between the variant lineages (A, B, and C) [14,17,26]. Our results are in line with the latter conclusion; there was no statistically significant association between HPV18 lineages and cervical pathologic lesions in Korea.

Fig. 2 Phylogenetic tree of the HPV18 variants by the Maximum Likelihood method. The evolutionary history was inferred using the Maximum Likelihood method with 1000 bootstraps under a Tamura-Nei model. All positions with less than 95% site coverage were eliminated; i.e., fewer than 5% alignment gaps, missing data, and ambiguous bases were allowed at any position (partial deletion option). Numbers near the branches indicate bootstrap values. Evolutionary analyses were conducted in MEGA X.

From a public health perspective, many countries have implemented national policies of HPV vaccination. In Korea, HPV vaccination has been free for 12-year-old girls since 2016, and the government intends to expand the vaccination targets. Three prophylactic vaccines have received licensure from the Korea Food and Drug Administration: the AS04-adjuvanted bivalent (HPV16/18) vaccine (Cervarix®, GlaxoSmithKline, Belgium), licensed in 2008; the amorphous aluminum hydroxyphosphate sulfate (AAHS)-adjuvanted quadrivalent (HPV6/11/16/18) vaccine (Gardasil®, Merck, US), licensed in 2007; and the AAHS-adjuvanted 9-valent (HPV6/11/16/18/31/33/45/52/58) vaccine (Gardasil®9, Merck, US), licensed in 2016. These prophylactic HPV vaccines are composed of L1 proteins of multiple HPV types [5,7].
The loop structures of the HPV L1 major capsid protein contribute to the epitopes of vaccine-induced cross-neutralizing antibodies. Therefore, amino acid changes in the L1 loop region could be a critical issue for vaccine development. Our data on the genetic diversity of the HPV18 variants in Korea show two nonsynonymous substitutions in the loop structures: L64M within the BC loop (in BRM34) and T149N within the DE loop (in BRM20 and BRM31~36). These data may help in designing second-generation prophylactic HPV vaccines and in implementing feasible nationwide vaccination programs.
With the development of next-generation sequencing (NGS), the whole-genome sequencing (WGS) of the 8-kb HPV genome became easier, making the analysis of lineages and single-nucleotide polymorphisms (SNPs) relatively faster and more accurate. However, WGS analysis pipelines in microbiological fields have not yet been established systematically, and it is difficult to analyze multiple samples with limited resources. In addition, although WGS identifies more variants and contributes to the construction of more accurate phylogenetic trees than partial sequencing, the E6-E7-L1 sequence data accumulated over the past 20 years are not significantly less reliable than WGS. Thus far, many studies have been selectively conducted on the oncogenic proteins E6 and E7, and the major capsid protein L1 plays an important role in the prophylactic vaccine, as our study suggests.
Uncertainty from Heisenberg to Today
We explore the different meanings of "quantum uncertainty" contained in Heisenberg's seminal paper from 1927, and also some of the precise definitions that were explored later. We recount the controversy about "Anschaulichkeit", visualizability of the theory, which Heisenberg claims to resolve. Moreover, we consider Heisenberg's programme of operational analysis of concepts, in which he sees himself as following Einstein. Heisenberg's work is marked by the tensions between semiclassical arguments and the emerging modern quantum theory, between intuition and rigour, and between shaky arguments and overarching claims. Nevertheless, the main message can be taken into the new quantum theory, and can be brought into the form of general theorems. They come in two kinds, not distinguished by Heisenberg. These are, on one hand, constraints on preparations, like the usual textbook uncertainty relation, and, on the other, constraints on joint measurability, including trade-offs between accuracy and disturbance.
Introduction
Heisenberg's uncertainty relation for the momentum uncertainty ∆P and position uncertainty ∆Q of a particle, ∆P ∆Q ≥ ħ/2, is arguably one of the best known formulas science has ever produced. It is almost as well known as E = mc², but unlike Einstein's formula, there are actually jokes about the uncertainty relation, like "No, officer, I don't know how fast I was going, but I know exactly where I am". What uncertainty exactly means in physics is not so easy to clarify. An indication of this are the attempts to find a more suitable word, like indefiniteness, ignorance, indeterminacy, or imprecision. This ambiguity already features in Heisenberg's original uncertainty paper from 1927. Therefore, we will go back to this paper and explore these concepts. We will not do this as historians, but rather as modern-day theoretical physicists. That means we will consider the unhistorical question of how Heisenberg's statements look when phrased in modern quantum mechanical language. This will be helpful for understanding the further development of ideas of uncertainty, which we will get to in the second part of this article.

1 Heisenberg's 1927 paper

Heisenberg submitted his paper to the journal "Zeitschrift für Physik" in March of 1927. It was published that same year with the title "On the intuitive content of the quantum theory of kinematics and mechanics", or in the original German "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik" 1 .
At the threshold of quantum mechanics
The paper appears in the middle of the explosive birth of quantum mechanics. After 26 years of puzzles, things fell in place, and within a breathtakingly short time, the foundations of the new theory were established. Three groups arrived almost simultaneously at formulations, which a modern physicist would recognize as versions of the current theory. These were Born, Jordan and Heisenberg in Göttingen, Dirac in Cambridge, and Schrödinger in Zürich. The first two groups relied directly on Heisenberg's paper [H0] from 1925. Quantum mechanics then reached its full mathematical form in 1927 with Schrödinger's work on the equivalence of the approaches, and especially von Neumann's work as an assistant for Hilbert's quantum mechanics lecture course [22] in the Winter semester of 1926/1927.
The scientific style of the embryonic phase of quantum mechanics was strongly shaped by Niels Bohr. The atomic model from 1913 comprised classical concepts, which were however constrained by ad hoc additional rules. These were complemented by de Broglie's relations, connecting the particle and wave pictures. How easily one can make a misstep in this setting was exemplified by Bohr's fundamental paper itself. What worked so well for Hydrogen [3,Part I] (except for the angular momentum of the ground state), failed completely [27] for larger atoms [3, Part II] and molecules [3,Part III]. Likewise, the attempts of the Sommerfeld school to refine Bohr's ideas via mathematical physics soon got stuck. However, with the new breakthrough, the semi-classical style of old quantum theory became obsolete, along with particle-wave duality and Bohr's and Sommerfeld's quantization rules. With the new quantum theory, no arbitrary rules to force the particles on a fixed path were necessary any more: everything that was true should now, in principle, be justifiable from a single unified framework.
Heisenberg's first important paper [H0] was written shortly before the breakthrough, while the paper [H] on uncertainty followed shortly after. Important parts are still in semi-classical style. The new theory is incorporated in the somewhat cumbersome form of "Jordan-Dirac transformation theory". However, almost nothing can be seen of the mathematical language of Hilbert spaces and operators we use today. That structure was created by Johann von Neumann, and was submitted to the Göttingen Academy in May 1927 [47]. Although both groups were in Göttingen, there appears not to have been much contact between the physicists and the mathematicians [*1].
"Anschaulichkeit"
Heisenberg's use of the word "anschaulich" provides a key to understanding his motivation. The German term is actually ambiguous, and Heisenberg tries to achieve his goal partly by playing with those different meanings, roughly translatable as "visualizable", or "appealing to graphical imagination" on the one hand, and "intuitive" on the other 2 [*2]. His own matrix mechanics had just been met with Schrödinger's competing approach. Many physicists found Schrödinger's waves, which moved around in space and time, more visualizable than the abstract matrices of Born, Jordan and Heisenberg. The physicists of that time were well-versed in wave equations from electrodynamics and hydrodynamics, but knew very little about matrices [*3]. Wien, the professor of experimental physics in Munich, who nearly flunked Heisenberg in the doctoral exam, had told him that Schrödinger's work would anyhow soon supersede the atomic mysticism of Heisenberg and friends. So providing visualizable content of matrix mechanics was important for Heisenberg to assert his priority and possibly even for the survival of matrix mechanics. He proclaims his success in this regard near the end of the paper [H:196] and relegates the criticism he has thus overcome, along with the critic Schrödinger, to a footnote: "Schrödinger labelled quantum mechanics as a formal theory of daunting, even repulsive, un-visualizability and abstractness" [*4].
How did Heisenberg arrive at visualizable content? It is, after all, rather peculiar that the main new element provided in the paper, which one therefore would expect to contribute to better "visualization", instead imposes a limitation on particle pictures. Heisenberg's most important move is the redefinition of the term: away from the visualization, and towards an abstract intuition. He begins his paper with the sentence [H:172]:

We believe to have understood a physical theory intuitively if we can imagine the experimental consequences of the theory qualitatively in all simple cases, and if, at the same time, we have recognized that the application of the theory will never contain internal contradictions.
This quotation, which Heisenberg also used in his later years, is remarkably modern. The term Anschauung (literally "looking at something") is stripped here of almost all connotations of imagining a scene or a picture. Like the "internal virtual images" of Hertz [*5] and Galilei's geometrical figures as letters in the book of nature [*6], it can just as well refer to an intuition about an algebraic or logical structure. Whether this widening of the concept of Anschaulichkeit convinced Heisenberg's contemporary critics, however, is questionable. Furthermore, Bohr and Heisenberg themselves were no friends of mathematically grounded intuition. They use the terms "abstract", or "formal", or the "symbolic character of the wave function" [4] with rather negative connotations. [*7]

An interesting feature of Heisenberg's opening sentence is the mention of "contradictions". Normally, one arrives at a contradictory theory only through blunders. From what type of contradictions should one then protect oneself?
Heisenberg's answer highlights the problematic state of quantum theory as it tries to break away from the semi-classical "old" quantum theory: "The intuitive meaning of quantum mechanics is up till now full of internal contradictions, which lead to clashes of opinions about discrete and continuum theory or particles and waves." [H:172]. Heisenberg's own work likewise pointed to such a contradiction: on the one hand, he had specifically criticized the concept of trajectories of electrons in an atom in his earlier paper [H0]. On the other hand, everyone could directly see the trajectories of particles in a cloud chamber. So how could one develop an intuition that reliably allows one to separate these two cases? This question is indeed answered in the paper.
The microscope passage
The famous example, via which Heisenberg develops his arguments, is simply a microscope, with which the position of an electron is determined by observation. The discussion is entirely semi-classical, i.e., the new quantum theory does not come into play. The light allowing observation is a gamma-ray photon, which is scattered by the electron. This interaction, known as the Compton effect, deflects the photon into an imaging instrument, through which the position of the scattering event is determined. However, the electron experiences a kick because of this collision, so that its momentum is changed. In Heisenberg's words:

At the instant of the position measurement, the moment at which the photon becomes scattered, the electron's momentum changes discontinuously. This change becomes larger when smaller wavelengths of light are used for the measurement, corresponding to more exact position measurements. At the moment when the position of the electron is known, its momentum can only therefore be known to a magnitude, corresponding to that discrete change: the more exact the position is determined, the less exact the momentum is known and vice versa. [H:175]

It is characteristic here that the photon appears both as a wave, with a wavelength λ, and also as a corpuscle with momentum p1 = h/λ following de Broglie, where h is Planck's constant. If we also equate the resolution of the microscope q1 with the wavelength, and the change of the electron's momentum with the elastic momentum transfer, we get the relation

p1 q1 ∼ h. (1)

This is the uncertainty relation, faithfully reproduced in Heisenberg's own notation. The tilde is (we assume intentionally) not explained by Heisenberg.
From the context, we read it as "of the same order of magnitude". In his own summary at the end of the paper [H:196] Heisenberg calls it a "qualitative statement". In any case, the tilde carries the whole conceptual imprecision of the over simplified microscope theory. With this Heisenberg again emphasizes his intention to improve heuristics, not quantitative theory.
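A small numeric sketch of the semiclassical estimate; the 1 pm gamma-ray wavelength is an assumed value for illustration:

```python
# Semiclassical microscope estimate in SI units: photon momentum via
# de Broglie, resolution crudely set equal to the wavelength, plus the
# Compton shift for 90-degree scattering (the effect Heisenberg invokes).
h = 6.62607015e-34      # Planck's constant, J s
m_e = 9.1093837015e-31  # electron mass, kg
c = 2.99792458e8        # speed of light, m/s

wavelength = 1.0e-12    # assumed gamma-ray wavelength: 1 pm

p1 = h / wavelength     # de Broglie momentum transferred to the electron
q1 = wavelength         # Heisenberg's crude resolution estimate, q1 ~ lambda
print(p1 * q1 / h)      # = 1: the product p1*q1 is of order h

delta_lambda = (h / (m_e * c)) * (1 - 0)  # cos(90 deg) = 0
print(delta_lambda)     # ~2.43e-12 m, comparable to the wavelength itself
```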
For his simple identification of the resolution of the microscope with the wavelength of the light, Heisenberg was criticized, first by Bohr, as recorded in the note added in proof to the paper [H:198]. Indeed, the mentioning of a microscope clearly calls for Abbe's theory of microscopes, where the coordinates along and perpendicular to the optical axis play different roles and the aperture of the lens comes into play. Furthermore, this theory can also be described in terms of semi-classical concepts and would have been able to fulfil the same function for Heisenberg's argument, while physically fleshing out the meaning of "microscope". But we must defend Heisenberg from this criticism. His argument is somewhat more abstract, and so more general, than the formulation suggests. Instead of an electron and photon, he could have also spoken of particle A and particle B. The details of the optical imaging are not important [*8]. Furthermore, Heisenberg could be excused here for disregarding a theory over which he almost failed his doctoral examination [34].
The alleged "proof"
Right after the first appearance of the uncertainty relation [H:173], Heisenberg promises a proof from the commutation relations, to be given later in the paper. The only passage that fits this description is [H:180], which is indeed opened with the remark that the relation (1) can be proved by "a slight generalization of the Dirac-Jordan formulation of quantum mechanics". Apparently some people have taken that at face value [*9]. It is clear immediately that it is not a proof "from the commutation relations", because these do not even appear. The most commonly given modern proof [35] does use them, thus fulfilling Heisenberg's promise to the letter. But one would be happy to have any proof from the new quantum mechanics. The "slight generalization" of Dirac-Jordan theory is not specified, but at least it is clear that Heisenberg is using the relation of momentum and position representation by Fourier transform, for which he cites Jordan. Clearly, this is also a sensible basis. Moreover, due to a result of von Neumann in 1931, one can derive the Fourier connection from the commutation relations.
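The most commonly given modern proof mentioned above yields Robertson's inequality ∆A ∆B ≥ |⟨[A, B]⟩|/2. As a minimal numerical illustration (a two-level toy system, not Heisenberg's setting), one can check it for Pauli matrices in pure Python:

```python
# Robertson's inequality dA * dB >= |<[A,B]>| / 2, checked numerically
# for sigma_x, sigma_y in the spin-up state, where it is saturated.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expect(A, psi):  # <psi| A |psi> for a normalized 2-vector psi
    return sum(psi[i].conjugate() * A[i][j] * psi[j]
               for i in range(2) for j in range(2))

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
psi = [1, 0]  # spin-up along z

def var(A):
    return (expect(matmul(A, A), psi) - expect(A, psi) ** 2).real

lhs = (var(sx) * var(sy)) ** 0.5   # product of standard deviations

sxsy, sysx = matmul(sx, sy), matmul(sy, sx)
comm = [[sxsy[i][j] - sysx[i][j] for j in range(2)] for i in range(2)]
rhs = abs(expect(comm, psi)) / 2   # |<[sx, sy]>| / 2 = |<2i sz>| / 2

print(lhs, ">=", rhs)  # 1.0 >= 1.0, saturated for this state
```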
The next strangeness is that Heisenberg identifies the accuracy of a position measurement with the width of the post-measurement state. This is clearly false in general, and we will discuss in Sect. 1.5.5 what may have led Heisenberg to this error. In any case he comes out of this step with a probability amplitude "which is only appreciably different from zero in a region of approximate size q1 around q′", where these parameters are the accuracy (Genauigkeit) q1 and the measured value q′. The aim of the next step is to show that in that case momentum is concentrated in a region of size p1 satisfying (1). Heisenberg discusses that this works for Gaussians, even with the tilde replaced by equality, provided we put q1 and p1 into the Gaussians at the appropriate places. There is no mention of standard deviations here. But clearly this would just be an illustration, not a proof of a general fact. Heisenberg is apparently aware of this burden of proof, because he simply claims as a mathematical fact ("it will be such that [...]") that small support in position implies concentration in momentum space. Unfortunately, that statement is utter nonsense. It would imply that for amplitudes as described the uncertainty relation holds with near equality. This also shows that Heisenberg is clearly not yet thinking of the uncertainty relation as an inequality.
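The Gaussian case can be checked numerically: with ħ = 1, a Gaussian amplitude of position spread σ gives ∆q ∆p = 1/2 exactly. A small pure-Python sketch:

```python
# Numerical check that a Gaussian amplitude saturates the uncertainty
# relation (hbar = 1): Dq * Dp = 1/2, independent of sigma.
from math import exp, pi

sigma = 1.0
dx = 0.001
xs = [i * dx for i in range(-8000, 8001)]
psi = [(2 * pi * sigma**2) ** -0.25 * exp(-x * x / (4 * sigma**2)) for x in xs]

norm = sum(p * p for p in psi) * dx                    # should be ~1
x2 = sum(x * x * p * p for x, p in zip(xs, psi)) * dx  # <q^2> = sigma^2

# <p^2> = integral |psi'(x)|^2 dx (hbar = 1), via central differences
dpsi = [(psi[i + 1] - psi[i - 1]) / (2 * dx) for i in range(1, len(psi) - 1)]
p2 = sum(d * d for d in dpsi) * dx                     # = 1 / (4 sigma^2)

print((x2 ** 0.5) * (p2 ** 0.5))  # ~0.5 = hbar/2
```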
1.5 What is uncertainty?
Uncertainty 1: Discontinuity
The microscope argument refers to the moment of Compton scattering. Presumably, this is the moment at which the position measurement happens, so that we "know" the electron's position. Precisely at this moment, the momentum of the electron changes, and we don't know exactly whether we should assign it the momentum before the interaction or that after the interaction, which has changed by p1. But the moment of scattering is actually a poorly defined concept. Neither in classical mechanics nor later in quantum mechanics are interactions instantaneous. In scattering theory, the transition from one asymptote to another is idealized sometimes to a point (when a temporal course of events is even observed). But one wouldn't normally misunderstand such discreteness as a statement about reality. Not so for Heisenberg. He needs the discontinuity here because it is the hallmark of "Quantumness" in the Bohr school, just like Bohr's quantum jumps between orbits. More important still, the jumps are what distinguished the matrix theory from Schrödinger's continuum theory.
Uncertainty 2: The degree of applicability of classical concepts
For the modern-day reader, an astonishing aspect of the uncertainty paper is the extent to which quantum particles are treated as classical particles. Actually, this is intentional. Heisenberg tries to present quantum mechanics as a minimal departure from classical mechanics. On one hand, this is to smooth the transition to quantum mechanics in the sense of the correspondence principle. On the other hand, this approach helps to convince readers who are still at odds with quantum theory. For example, the equations of motion of quantum mechanics given by Heisenberg in his equation (9) [H:186] are identical with Hamilton's equations. At first sight this looks like a silly error, but it is actually a quote from Born and Jordan [6], who make a similar effort of mollifying the transition. They invent somewhat contrived definitions of the partial derivatives designed just to make this coincidence true [*10].
According to Heisenberg only two small related changes to classical mechanics need to be made: some non-commutativity and a small precautionary measure in the form of the uncertainty relation. These two aspects are practically identical for Heisenberg. He treats the first as well known and the second as the new insight of his paper. On the one hand, the commutation relations offer Heisenberg a hint [H:173] that an uncritical use of classical concepts is problematic, and that quantum discontinuities will be felt. Conversely, his microscope analysis seems to suggest the non-commutativity:

If, for example, the X-coordinate of the electron is no longer a "number", which can be experimentally inferred via equation (1), then it is the simplest conceivable assumption [...] that this X-coordinate is a diagonal part of a matrix, whose off-diagonal components express themselves in an imprecision, and express themselves in other ways under transformations. [H:196] [Heisenberg's "(1)" is also our (1).]

The difference in formalism between classical and quantum mechanics appears to lie in the commutation relations. The difference in physical interpretation, however, is achieved by the uncertainty relation. For Heisenberg, this parallel seems sufficient justification for viewing these aspects as two sides of the same coin and for claiming a "direct mathematical connection" between them.
Uncertainty relations as the limitation on applicability of classical concepts is also the explanation Heisenberg gives in his Nobel lecture [H3]. But it is worthwhile to pause and think of what that could possibly mean. How do we "apply" a classical concept to a quantum particle? We can choose to think of it as a classical particle (that would be plain wrong) or as a wave (also wrong). But that is an irrelevant exercise of the imagination unless we use these concepts and make them part of the explanation of something. In that case we never use just the concept alone, but the theory context around it, equations of motion, and ideas about how to determine the various quantities. In a limited context we can compare the predictions of such a classical or semiclassical picture with quantum mechanics or with experiment, and the agreement might be good or bad. But the "degree of applicability of the classical concept of momentum", as something that might compete with the corresponding "conceptual position fuzziness", is a very silly notion.
Quantum observables differ from their classical counterparts in many ways. For example, the Hamiltonian of a non-relativistic particle in a confining potential has discrete spectrum, while its classical counterpart takes a continuum of values. So if the mean level spacing ∆E is such a degree of applicability, what is the role of a conjugate ∆t? One could maybe think of something here and in other contexts, but we should renounce the idea that there is any notion of degree of applicability of classical concepts that is meaningful in a general context or without detailed explanation.
In spite of all the efforts to make classical and quantum mechanics look alike, Heisenberg is, of course, well aware of the differences. In particular, and in spite of some statements to the contrary, it is not sufficient to just use classical mechanics with uncertainty-blurred initial conditions: [One could] be tempted to suppose that a "real" world is hidden behind the apparent statistical world, in which the causal law holds true. But we explicitly stress that such speculations seem to us sterile and meaningless. [H:197]
The critical analysis of concepts
The critical operational analysis of concepts is the central approach of the paper, through which Heisenberg sees himself as being in direct succession to Einstein. More specifically, Einstein's analysis and "relativisation" of the concept of simultaneity, with which he founded the theory of relativity in 1905, serves as a model. Characteristic of this approach is a "definition" of concepts through measurement processes. This offers a further possibility to closely connect quantum mechanics to classical mechanics: in both cases, one can determine position using a microscope, which for Heisenberg now equalled a definition. The analysis comes to a new conclusion, however, because there is a new ingredient, the de Broglie relation. This is analogous to Einstein's new conclusion about simultaneity, on account of a new principle, the source-independence of the speed of light.
In the first chapter [H: §1, 174-179] of the paper, Heisenberg sketches this process, not only for position (microscope), but also for the concept of a path (a sequence of position measurements), speed (Doppler measurement), energy (collision experiments à la Franck-Hertz) and magnetic moment (Stern-Gerlach experiment). He does not address, however, how these concepts, newly defined by a measurement process, can work together. In classical mechanics, it is clear that one can find several measurement processes for the same concept/quantity, like a Doppler measurement or a time-of-flight measurement for momentum. In that case, their equivalence is a result of the theory, and this provides the basis for a unified concept of momentum even in a strict operational-positivistic approach. This is ultimately what is meant by Einstein's dictum that "the theory determines what can be measured" [*12]. If, however, different possibilities for the definition of the new quantities are on offer, one must either choose one or clarify how these different definitions lead to the same result in the context of the new theory.
Heisenberg did not consider the difficulty of different possible "definitions", and already his discussion of the microscope shows that he did not mean this operational program completely seriously. Following this program, p_1 would have to be the uncertainty of the electron's momentum, defined through the corresponding momentum measurement. As a process for that measurement, Heisenberg suggests a Doppler measurement. But instead, he unceremoniously sets p_1 to be the momentum shift required by relativistic momentum conservation. That there is a connection between these two uses of "momentum" would remain to be shown. To put it crudely, the connection between the different variants of the "same" concept actually comes from ignoring the operational program. One can hardly call this an operational analysis.
Just after citing Hamilton's classical equations of motion as the quantum ones (see above, Sect. 1.5.2) Heisenberg writes: "The trajectory can, however, as already stated, only be calculated statistically from the initial conditions, which one can consider as a consequence of the fundamental imprecision of the initial conditions" [H:186]. Once again the concept of a trajectory appears, which was deconstructed previously as a sequence of discrete position measurements. But it still seems healthy enough to support a differential equation because the uncertainty of the initial conditions is invoked as the only difference from classical mechanics. He then explicitly discusses a diffraction experiment [H:189], for which classical theory gives something "grossly different" from the familiar diffraction image. But "nevertheless, we can in no way determine via the trajectory of a single electron a contradiction with the classical theory". This is because that "determination" of the trajectory would indeed require an experimental intervention, so it again hits an uncertainty barrier, and the trajectory must get destroyed. Regarding a concept of a trajectory for unobserved particles, Heisenberg appears to have no objection, even if the clear difference between the observed diffraction image and a classical calculation already guarantees that the particles cannot have travelled on classical trajectories.
For Heisenberg there seems to be no tension between the boastful announcement of an Einstein-like conceptual analysis, and its less than half-hearted execution. Our reading is that he did not try harder because the theory he sought already existed for him -just the new quantum mechanics -and because, for this theory, the concept-critical program had not only been done already, but was even his own contribution [H0]: Quantum mechanics had [. . . ] just arisen from the attempt to break with those familiar kinematic concepts and to set in their place relations between concrete values provided by experiment. Because this appears to have worked, the mathematical scheme of quantum mechanics will require no revision. [H:172][ *13] This link between [H] and [H0], especially the application of the uncertainty idea in the new quantum mechanics, remains unfortunately rather vague. Besides the passage quoted earlier, where off-diagonal components of the matrix for the position coordinate X are mentioned as an expression of "the imprecision", there are still a few further excerpts, which we will also look at.
Uncertainty 3: Statistics and the friction around Heisenberg's cut
At the beginning of §2, Heisenberg summarizes and "generalizes" his account of the concept-critical previous section, §1: All concepts, which are used in the classical theory to describe a mechanical system, may be also defined exactly for atomic processes analogously to the classical concepts. The experiments which serve such definitions, however, carry an imprecision in themselves if we demand from them the simultaneous determination of two canonically conjugate quantities. [H:179]

The reference to experiments is unexpected here because the arguments that Heisenberg introduces to support his uncertainty thesis are altogether purely theoretical, and are not based upon any additionally introduced experimental results. Experiments, even thought experiments, naturally play an important role in an operational approach. Heisenberg goes beyond this, it seems, by claiming a connection between imprecision and experiments in general. After the remark that Born and Jordan perceive a "characteristically statistical trait of quantum mechanics in contrast to classical theory" [H:177], and that only a probability distribution can be given for variables like position, he proceeds with: One can however, if one wishes, also say in agreement with Dirac that the statistics arise through our experiments. [H:177]

Statistics is, on the other hand, closely connected to the uncertainty relations. As is already stated in the abstract: This uncertainty is the actual reason for the appearance of statistical relations in quantum mechanics. [H:172]

Hence both statistics and uncertainty happen around the act of observation. To use a more recent terminology, uncertainty and statistics arise at "Heisenberg's cut", which separates the quantum object from the instruments used for observation. The dynamics, be it viewed semi-classically or quantum mechanically, does not include the uncertainties yet. At least that is one possible reading of a statement in Heisenberg's outlook section [H:197]: We have not assumed that quantum theory is, in contrast to the classical, an essentially statistical theory in the sense that from exactly given data only statistical conclusions can be drawn.
Only the observation brings them about. This point of view appears extensively in the examples discussed by Heisenberg and fits with his later views (e.g., [H5, 46]). It also fits with the view of two distinct quantum time evolutions: the deterministic Schrödinger evolution for an isolated system and the collapse process coming with observation.
To see quantum randomness as arising in the act of observation was a very plausible idea in 1927. This is because the observation assumes a macroscopic measuring instrument, of which one cannot possibly control all atoms with microscopic precision. When a highly sensitive microscopic system comes in contact with a coarsely defined macroscopic instrument, random outcomes seem as inevitable as with a roll of dice. Yet this view turned out to be completely wrong, as we will see below.
Uncertainty 4: The degree of possible knowledge
The question of what we can know is present throughout the paper. Already in the quote above from the microscope passage, the concern is how well the momentum is "known". In his remark on Laplace's daemon, knowledge is also the central category: But in the precise formulation of the law of causality: "If we know the present exactly, we can calculate the future", it is not the conclusion but rather the premise that is false. We cannot learn the present in all relevant detail, as a matter of principle. [H:197] The restriction of the simultaneous knowledge of canonically conjugate variables appears then like a "data security law" for electrons, in the form: "no particle can be compelled to share his position as well as his momentum with high accuracy". Again, here one can ask the question of whether the particle itself knows better, or how much independence one can allow the concepts of position and momentum at the microscopic scale.
For Heisenberg, the wave function codifies the knowledge of the physicist about the system. I (R.F.W.) remember well how puzzling I found his formulations as a student. How should one convert something like human knowledge into a complex valued function on the configuration space? In the uncertainty paper, Heisenberg demonstrates how he thinks this could go: starting with knowledge of the position with some known precision, he unceremoniously chooses a Gaussian wavepacket [H:180]. But even if this choice is maybe plausible, it is hardly a guideline for the general case.
The example is from the failed proof (cf. Sect. 1.4), and is related to its main problem: the false identification of the measurement imprecision q_1 of the microscope with the position width of the post-measurement state. This is a stronger form of the projection postulate, where not only the measured outcome but also the "precision" of the measurement is imprinted on the state. One may give examples where this connection holds (e.g., in typical collapse models), but just as easily one may give examples where it does not even hold approximately. But that invalidates a general proof. Heisenberg was seduced into this mistake through his concept of knowledge: if one has investigated the position by measurement, then one "knows" the position, and Heisenberg translates this knowledge in turn into a wave function.
One can avoid Heisenberg's mistake by distinguishing where the "knowledge" arises. There are two fundamental possibilities for this: for one, we can refer to how we have made or "prepared" particles, and for the other, we can check properties through measurements. In the formalism, these aspects are represented differently, namely on one hand the preparation through a density operator (or in the simplest case a wavefunction) and on the other hand the measurement through a positive-operator-valued measure (in the simplest case the spectral projectors of a self-adjoint operator). Heisenberg does not reach this distinction and prematurely identifies knowledge from these sources. By respecting this distinction, one gets two different scenarios for quantitative uncertainty relations, which we will consider later.
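This distinction can be made concrete in a small numerical sketch (our illustration, not part of the original sources; the numbers are arbitrary): a preparation is represented by a density operator ρ, a measurement by a positive-operator-valued measure {E_i} with ΣE_i = 1, and the only contact between the two is the outcome probability p_i = tr(ρE_i).

```python
import numpy as np

# Preparation = density operator rho; measurement = POVM {E_i}.
# Outcome probabilities are given by p_i = tr(rho @ E_i).

# A slightly mixed qubit preparation (arbitrary example)
rho = 0.9 * np.array([[1, 0], [0, 0]], dtype=complex) \
    + 0.1 * np.eye(2) / 2

# An unsharp two-outcome POVM: both effects positive, E0 + E1 = identity
E0 = np.array([[0.8, 0], [0, 0.3]], dtype=complex)
E1 = np.eye(2) - E0

p0 = np.trace(rho @ E0).real
p1 = np.trace(rho @ E1).real
print(p0, p1)   # two outcome probabilities, summing to 1
```

Note that neither the state nor the effects alone carry "knowledge" of an outcome; only their pairing does, which is exactly the separation of the two sources of knowledge described above.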
In the language of quantum information theory, one could say that "information" is encoded in the system via the preparation, which is then read out through the measurement. This view of quantum mechanics turned up many new questions and led to many new developments. It would surely be too much to read Heisenberg's "knowledge" as "information in the sense of Shannon" and to make him into the founding father of quantum information theory, but, all the same, the concepts resonate to some extent.
Uncertainty 5: Disturbance
In Sect. 1.5.1, we had mentioned that Heisenberg does not directly address the theme of "disturbance" with the microscope, namely the change of the momentum by scattering, but rather emphasizes the discontinuity as an ambiguity at one point in time. Most readers have nevertheless understood the microscope example as introducing a disturbance, and indeed the two ideas are close. Actually, the disturbance reading is also endorsed by Heisenberg himself. There is another passage ([H:183/184]) where the term disturbance [Störung] comes up. In this context he criticizes Jordan's notion of the interference of probabilities. He compares two time evolutions, which differ by an intermediate measurement intervention, and he identifies this change explicitly with the change of momentum from the microscope section [H:184 top]. And unlike in the microscope section, bits of the new quantum theory enter the discussion here. In the passage mentioned, the theme of "uncontrollable disturbance" appears as well, something we will later give a precise meaning to.
Summary of reading [H]
Many ideas simmer in this work. They certainly do not all fit together, but in 1927 quantum mechanics was perhaps not mature enough for this to be expected. Upon closer inspection, many of Heisenberg's arguments are not conclusive. This was probably known to him, but it did not keep him from making bold proclamations. It might help to quote his characterisation of Bohr's style from the memorial volume [37]: [Bohr's] insight into the structure of the theory was not the result of a mathematical analysis of its basic assumptions [. . . ] it was possible for him to sense the relationship intuitively rather than derive them formally. Thus I understood: knowledge of nature was primarily obtained in this way, and only as the next step can one succeed in [. . . ] subjecting it to complete rational analysis.
Without a doubt, Heisenberg realised this very style in [H]. The main message, that in quantum mechanics the influence of measurements on the measured system can no longer be idealised away, is clear today and was also so clear back then that it became generally accepted immediately.
2 What became of this
The Copenhagen interpretation
Bohr adopted the uncertainty relations practically immediately in his complementarity philosophy. In his famous lecture in Como [4] in September 1927, they already played a decisive role, and they were the dominant theme in his debates with Einstein at the Solvay congress in October. They were integrated in a new semi-classical picture of quantum mechanics: classical mechanics, supplemented by the uncertainty relations, and an occasional use of wave pictures. Heisenberg also continued to write in this language in his popular accounts [H2, H5, H6]. While the difference between classical mechanics and quantum mechanics is thus understated, the public is allowed to retain a feeling of familiarity with much of the explanation. My (R.F.W.) impression is that it is this half-hearted mix of theories that Einstein fought against in the Solvay Congress debates with Bohr, and which he referred to as "the Heisenberg-Bohr tranquilizing philosophy -or religion?" in a letter to Schrödinger in 1928 [14].
The semi-classical language contributes much to uncertainty about the question of what exactly the "Copenhagen interpretation" is. The name suggests that there should exist a manifesto from which one may learn this interpretation. But neither proponents nor opponents of the Copenhagen interpretation have ever found such a manifesto, which could count as the interpretation of the theory on the level of 1927 (the new quantum mechanics). Moreover, Heisenberg and Bohr were not of one mind on many important questions and also changed their minds over the course of time. The most plausible answer to the question "Who invented the Copenhagen Interpretation?" comes from Don Howard in an article with that title [24]. The answer is "Heisenberg in the 1950s". Before that, there were hints of the "Copenhagen spirit" and of individual statements, but one cannot identify a distinct doctrine. With his invention Heisenberg connected to the scientifically most successful time of his life and secured for himself the role of an unassailable authority on it. This was maybe especially important at that time, because his friendship with Bohr had perhaps suffered due to wartime events. In any case, his invention appealed to proponents (particularly those of the "Shut up and Calculate" faction) as well as opponents, who could now attack Bohr with Heisenberg quotes and vice versa. For serious discussions about foundations of quantum mechanics, it is advisable to avoid the term completely.
One of the most coherent texts regarding the Copenhagen interpretation is the chapter with this title in Heisenberg's book [H5]. This is noteworthy in our context because of a curious appearance of the uncertainty relation: according to Heisenberg, it has to be taken into account when describing an experiment, particularly for the "translation of the initial experimental situation into a probability function". However, uncertainty is the least concern during this translation because even the most ineptly chosen wavefunction automatically fulfils the uncertainty relation. But this remark offers Heisenberg the opportunity to once more bring his big discovery of 1927 into play[*16].
The preference for semi-classical thinking could have also contributed to the fact that the Copenhagen school missed entanglement, which, according to Schrödinger, was the essential new trait of quantum mechanics [*17]. The situation of two distant non-interacting but correlated systems became foundational for quantum information theory and of course goes back to the paper of Einstein, Podolsky and Rosen (EPR) [15]. The idea is to replace a disturbing observation on one system by a "remote sensing" measurement on the other, given a suitably correlated state. Several of the old puzzles of quantum mechanics are affected by this. To begin with, this consideration completely destroys the view that the quantum mechanical randomness could be traced back to the interaction with coarsely specified macroscopic measurement apparatuses (see end of Sect. 1.5.4). This is because, for every measurement on a subsystem, there is a "partner measurement" on a distant system, so that the measurement value pairs perfectly agree. If one only observes one system, one sees the usual quantum mechanical randomness. But the idea that the "insufficient specifications" of the two measurement apparatuses bring about the randomness, while the results are perfectly correlated over a large distance, is absurd.
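The point is easy to make quantitative. The following sketch (ours; the Bell state and sample size are arbitrary choices) samples computational-basis outcomes on the state (|00⟩ + |11⟩)/√2: each site by itself looks like a fair coin flip, yet the two outcomes agree in every single run, which no story about uncontrolled apparatus atoms at the two separate sites can account for.

```python
import numpy as np

rng = np.random.default_rng(1)

# |phi+> = (|00> + |11>)/sqrt(2); amplitudes over outcomes 00,01,10,11
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(phi_plus)**2          # Born rule for the joint outcomes

samples = rng.choice(4, size=100000, p=probs)
a, b = samples // 2, samples % 2      # the outcome bits at sites A and B

print(a.mean())          # ≈ 0.5 : site A alone sees pure randomness
print(b.mean())          # ≈ 0.5 : so does site B
print(np.mean(a == b))   # 1.0 : yet the pair agrees every time
```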
Important consequences of the EPR example arise also for the description of subsystems if one assumes a minimum of locality/signal causality. The nonexistence of joint measurements also follows, which is an important foundational element of complementarity. One might think that such thoughts would have perfectly suited the Copenhageners, or could have been a stimulus for further development. But instead the EPR paper was seen as an attack and was "parried". The background for this was the enormous success of quantum mechanics, with which many young and brilliant researchers were just making their careers. To them the misgivings of the old man Einstein were just annoying, and had to be neutralized quickly in order to return to work. Bohr was the ideal authority to do that, and probably few people bothered to read his reply. It did not matter that he missed all the new and interesting points about separated laboratories and entanglement, by transforming away these aspects in a footnote. He mainly embarked on a sermon criticizing the EPR authors for their lack of conformity with his complementarity ideas, so everything was back to normal.
Nevertheless, Bohr's magic had the desired effect. Peter Mittelstaedt, who did his doctoral thesis and habilitation with Heisenberg, was in the audience of one of the various lectures that I (R.F.W.) gave on the subject. He confirmed my interpretation that [15] was mainly perceived as a nuisance. When he told Heisenberg that he wanted to work on the "EPR-paradox", as it was called then, Heisenberg answered "What do you want with that then; Bohr has dealt with that for us".
One difficulty with the Copenhagen texts is that they do not necessarily strive for clarity. Heisenberg as well as Bohr liked to quote the aphorism that the opposite of a deep truth is also a deep truth. Or even worse, that clarity and truth are complementary. Clarifying something often requires a decision between conceivable conceptual alternatives, but that would have reduced the profoundness for them. When H.P. Stapp made the attempt in 1971 to give a precise formulation of the Copenhagen interpretation, he asked Heisenberg for a confirmation of his version. Heisenberg gave it (for us astonishingly), but added a general criticism of this intention: It may be a point in the Copenhagen interpretation that its language has a certain degree of vagueness, and I doubt whether it can become clearer by avoiding this vagueness. [42,1113] Thus, no one can be certain to have found the Copenhagen Interpretation. What remaining value it has can only be very subjectively judged. Often enough I (R.F.W.) have been called a "Copenhagener", and actually there are some often cited fundamental views, with which I would agree. These include the necessity to capture experiments and their results in classical language. However, it makes a huge difference whether one applies these statements to classical properties of the measurement apparatus, or -much more problematically -to properties of the microsystems themselves. As for complementarity, it is certainly true that one must always make a choice, when running an experiment or theoretically describing it, and different choices preclude each other. But Bohr's love for contrasting pairs of concepts can safely be ignored. Orwellian doublethink, i.e., the feat of holding two contradictory beliefs simultaneously, like a particle and a wave picture, has no place in quantum mechanics. Personally, I sympathize with the pragmatic stance of avoiding ontological debates. But perhaps many would already disagree with me here.
Minimal statistical interpretation
For the further discussion, we would like to rely on an interpretation of quantum mechanics, which radically implements Heisenberg's program of referring only to quantities that can be measured. Namely, the "quantities" are not to be understood as properties of particles, but rather are defined through a measurement process. Here "process" is to be understood entirely in the sense of laboratory language, meaning it refers to apparatuses, which are to be thought of as being described in the language of classical physics. The counterpart of such apparatuses in the theory is the specification of the probabilities with which the different possible measurement values appear, when the apparatus works on a given preparation/state. This assignment is called an observable. Position and momentum are observables in this sense, but it is never assumed that single particles have a known or unknown value of these quantities assigned to them (either at a particular time or always). For example, the position observable is essentially defined by Born's rule making |ψ(x)|² the probability density for finding the particle at x. In the further development of the theory the observables may play different roles, for example regarding symmetries, or for writing down interactions. But none of this makes the values of observables more like properties of individual systems. Similarly, the states are not defined through particle properties or a distribution thereof, but rather through the preparation process. The statements of the theory all concern probabilities that can be measured by macroscopically described experiments. We call this interpretation the minimal statistical interpretation.
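As a minimal concrete instance of this reading, here is a numerical sketch (ours; the Gaussian state and the interval [0, 1] are arbitrary choices): the position observable assigns to each interval the probability of finding the particle there, computed from the density |ψ(x)|², and nothing beyond such probabilities is asserted about the individual particle.

```python
import numpy as np

# Born's rule: |psi(x)|^2 is the probability density for position.
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
sigma = 1.0
psi = (np.pi * sigma**2)**(-0.25) * np.exp(-x**2 / (2 * sigma**2))

density = np.abs(psi)**2
norm = np.sum(density) * dx            # total probability, ≈ 1
in_ab = (x >= 0) & (x <= 1)
prob_ab = np.sum(density[in_ab]) * dx  # P(particle found in [0, 1])
print(norm, prob_ab)
```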
It was shown by the axiomatic program of Günther Ludwig [29] that one can build the usual quantum mechanics upon this. For my current primary research area, quantum information theory, this interpretation likewise offers an adequate basis. Interestingly, this interpretation is by no means the majority view in that field. For example, David Deutsch constructed the first quantum algorithms as evidence for the (not at all minimal) many-worlds interpretation of quantum mechanics. But we can easily agree with Deutsch on whether a proposed algorithm "works" because this can be formulated and decided on the basis of the minimal interpretation. In a community of researchers with wildly different ideas about foundations, the minimal interpretation defines the common ground.
The most interesting question is then if and how one can go beyond the minimal platform and come closer to an "objective" description of quantum systems. Naturally, that was often tried and will certainly be tried again. But the grander the attempt the more it typically fails. What always works is to come to an objective language in a limited context, which might even be justified from an approximation of quantum mechanics. For example, a time-of-flight spectrometer is treated like a device filtering classical particles, we talk of particles "going through" slits, or discuss the formation of atoms by placing electrons into shells (a metaphor based on the Hartree-Fock approximation). Further examples are the classical Maxwell theory for all quantum optics experiments, in which photon-photon correlations are not considered, or even semi-classical mechanics if one restricts oneself to observables that change slowly in phase space on a scale given by ℏ [52,50]. The language one actually uses in laboratories is full of such classical elements, and how could it not be? It is natural language after all, which has developed as a way to communicate about perfectly classical, property-defined things. We have an innate tendency to tell stories, and to base "understanding" on moment-by-moment accounts of what is happening. The quantum mechanical formal language is completely frustrating in this regard. It never tells us what is happening, but only what we will find (and how often) if we look to find out.
What is the relation of this lab talk to the theory then? Of course, we need it whenever we want to get quantitative agreement. But it is also needed around the boundaries of applicability of the classical elements. For example, the Hartree-Fock approximation is by far not the best we have, and unless one wants to hamper progress, one has to get used to the idea that in a multiparticle electron wave function it makes little sense to ask which one-particle states are "occupied". Even more importantly, the partly classical lab talk, like Heisenberg's semiclassical theory, is not free from contradictions and paradox. Then it is important to distinguish whether there is really something wrong, or if once again our classical cognitive reflexes have led us astray. There is actually a whole literature of Gedanken experiments, in which one raises classical expectations, only to find them shattered, and then to marvel at the quantum strangeness.
And so we come back to Heisenberg's programmatic statements about the heuristic meaning of the uncertainty relations. Only now the logic is turned around: we do not need semi-classical arguments to justify the uncertainty relations, but rather the relations, now as general theorems of quantum mechanics, help to identify the regime in which semi-classical arguments are justified.
Uncertainty 6: Preparation uncertainty
Refining the heuristic uncertainty idea and developing it into general statements about the (new) quantum mechanics began already in 1927. Earl H. Kennard came on a sabbatical year to Göttingen and probably followed the young Heisenberg to Copenhagen because his work [25] "On the quantum mechanics of simple types of motion" gave Copenhagen as his address. Also, for correcting the galley proofs, he apparently met Heisenberg in Munich [34, p.588]. In this clearly written paper, we find the uncertainty relation, as it appears in every textbook on quantum mechanics today.
Like Heisenberg, Kennard relates uncertainty to "knowledge", but of the special kind that we get from having prepared the system. His relation says that it is impossible to prepare particles that have both their position probability distribution and their momentum probability distribution sharply concentrated around a single value. As a precise quantification of sharpness of the distribution, he uses the standard deviation ∆A. That is, for a general observable A with real values, he has ∆A² = ⟨A²⟩ − ⟨A⟩², where the brackets represent the expectation of the quantities in the given state, and the relation reads ∆Q ∆P ≥ ℏ/2. His proof is somewhat awkward. The currently used proof follows Robertson [35] in the use of the canonical commutators, fulfilling Heisenberg's unfulfilled promise of such a derivation. Independently of Kennard, in 1927/28 Weyl also arrived at this relation in his book [53], though he did not cite Heisenberg and thanked Pauli for the tip about the uncertainty idea. One can hardly overestimate the advance from Heisenberg to Kennard. In contrast to a relationship demonstrated through example in the semi-classical old quantum mechanics, Kennard provides a general theorem in the new. Instead of Heisenberg's vague tildes he gives a mathematically sharp, quantitatively falsifiable relation. And finally, the vast ambiguity of the interpretation is reduced to zero. Anyone who wants to know what Kennard's uncertainty relation means only needs to check what the proof actually proves. So it is not astonishing that this relation (with simplified proof) became "the" uncertainty relation in the pedagogical literature.

[Fig. 1 caption, Right panel: for all three components, with the additional simplification that we included with every point all those where one or more variances is larger. This is the relevant body if one is interested in lower bounds on uncertainty only. For details and a paper cut-out model, see [13].]
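Kennard's theorem is easy to check numerically. The following sketch (our illustration; the grid, the width σ, and the non-Gaussian test state are arbitrary choices, with ℏ set to 1) computes ∆Q and ∆P for real wavefunctions sampled on a grid: the Gaussian saturates ∆Q∆P = ℏ/2, while the non-Gaussian state stays strictly above the bound.

```python
import numpy as np

hbar = 1.0
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

def widths(psi):
    """Position and momentum standard deviations of a real
    wavefunction psi sampled on the grid x (normalized internally)."""
    psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)
    rho = np.abs(psi)**2
    q_mean = np.sum(x * rho) * dx
    dq = np.sqrt(np.sum((x - q_mean)**2 * rho) * dx)
    dpsi = np.gradient(psi, dx)
    # for real psi, <P> = 0 and <P^2> = hbar^2 * integral of |psi'|^2
    dp = hbar * np.sqrt(np.sum(np.abs(dpsi)**2) * dx)
    return dq, dp

sigma = 1.7
gauss = (np.pi * sigma**2)**(-0.25) * np.exp(-x**2 / (2 * sigma**2))
dq, dp = widths(gauss)
print(dq * dp)        # ≈ hbar/2 = 0.5: the Gaussian saturates the bound

bump = np.exp(-x**4)  # a non-Gaussian state
dq2, dp2 = widths(bump)
print(dq2 * dp2)      # strictly above 0.5
```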
Whenever students today are asked about the precise meaning of the Heisenberg uncertainty relation, they are supposed to reproduce those parts that did not come from Heisenberg. Heisenberg himself saw this differently. In his lectures in Chicago, he gave Kennard's proof and then deprecated Kennard's achievement with the words: "It should still be emphasized that this derivation is, in its mathematical content, in no way different from the derivation of the uncertainty relation from the wave-particle duality; only the proof is [. . . ] conducted precisely here." [H2, Sect. II.1]
What? We read that as "a proof adds nothing to the mathematical content (!), and I share the glory for this discovery with no one" (see also [*13]). It would be interesting to discover Kennard's reaction to this assessment and to see whether it was this appreciation that made him change fields [*18]. The further development of uncertainty relations practically ground to a halt for a long time due to the success of Kennard's formulation. The relation of Robertson [35] is often called a generalisation to arbitrary observables, but actually it is not an uncertainty relation in the sense discussed here, because the lower bound is still dependent on the state and, for eigenstates of one of the observables, it tells us nothing. It does not allow the conclusion that "the distributions for the same state cannot both be sharp", not even in cases where this statement is actually true. That leads to the challenge to set up and prove uncertainty relations for other observables.
In the simplest setting, one chooses the variance ∆A² for the "width of a probability distribution", as Kennard did. A general uncertainty relation for two observables A and B is best given in the form of a diagram (see Fig. 1, Left) that shows the possible values (∆A², ∆B²), taken in the same state and collected for all states. An uncertainty relation is then every inequality that allows the conclusion that the point (0, 0) cannot be reached. There are simple algorithms [13,39] to calculate the boundary curve, which unfortunately cannot be calculated analytically in most cases. Sometimes it is also natural to consider more than two observables, like, e.g., the three components of angular momentum (see Fig. 1, Right). A further interesting development arises if the "sharpness of the distribution" is represented in an information-theoretic sense via entropies [12].
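To make the notion of an uncertainty region concrete, here is a minimal numerical sketch (my own example, not from the text) for the qubit observables A = σ_z and B = σ_x, where the boundary is known: every pair of variances satisfies ∆A² + ∆B² ≥ 1, so the point (0, 0) is excluded.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure qubit states <-> unit Bloch vectors n = (nx, ny, nz);
# for A = sigma_z, B = sigma_x we have <A> = nz, <B> = nx, and A² = B² = 1,
# so ∆A² = 1 - nz² and ∆B² = 1 - nx².
n = rng.normal(size=(50000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
varA = 1.0 - n[:, 2]**2
varB = 1.0 - n[:, 0]**2

# The origin is never reached: since nx² + nz² <= 1 on the Bloch sphere,
# every sampled pair satisfies ∆A² + ∆B² = 2 - nx² - nz² >= 1.
s = varA + varB
print(s.min())  # close to, but never below, 1
```

Plotting (varA, varB) for the sampled states would trace out exactly the kind of uncertainty diagram described above.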
Uncertainty 7: Measurement precision and disturbance
Most people have read Heisenberg's relation as a trade-off between the precision ∆Q of an approximate position measurement and the momentum disturbance ∆P incurred by that measurement. This has nothing to do with the preparation uncertainty because it refers to a completely different experimental situation. To verify a preparation uncertainty relation, one has to perform separate experiments for every state and every relevant observable (e.g., P and Q) to record the distribution. No single particle is thereby subjected to both a position and a momentum measurement. On the other hand, this is essential for any precision-disturbance trade-off.
A common approach to this distinction is to ignore it, namely first to prove the Kennard relation and then to pretend that this says something about measurement precision and disturbance. One finds this in many textbooks and, in a way, this follows Heisenberg, who did not make this distinction either [*19]. It is, however, pedagogically unfortunate to prove something and then lie about the conclusion. A more honest approach is to at least discuss an example of disturbance (e.g., [30, Chap. IV.III]). But can one express this idea as sharply and as generally as for preparation uncertainty?
It was already known to Kennard that his new relations did not cover the categories of precision and disturbance. He remarks explicitly that a definition of "measurement error" as a deviation of the measurement value from the true value does not work in quantum mechanics because the true value does not exist "in a physical sense". He recommends a comparison of the probability distributions. He does not take this further, but we will follow this line of thought here.
In the minimal statistical interpretation, observables are thought of as describing a measurement process. If one changes the measurement setup, there is no point in discussing what would have come out for the original one (no counterfactual definiteness, or "unperformed measurements have no results"). Therefore, comparing the momentum after the microscope measurement with the value one would have obtained without the microscope makes no sense in the theory. Likewise, a single run of the microscope experiment does not produce a value of "momentum disturbance" of which we could then collect the statistics by repetition. Of course, it is also not an option to measure the momentum before the microscope: that would surely disturb the position measurement, and we cannot ignore this effect, since this kind of disturbance is precisely the point under discussion.
On the other hand, two measuring devices are described by the same observable, if they give the same statistics on all states. It is natural to replace "the same" here by "almost the same", i.e., to compare observables not by individual results but by comparing distributions. This way of detecting a disturbance is also familiar from other contexts. Consider, for example, the double slit experiment. It is well-known that if one tries to find out through which slit the particles go by detecting their passage at the slits, the interference pattern will be destroyed. That is, the particles get disturbed. This statement does not require a dubious comparison of an actual trajectory with a hypothetical undisturbed one. The change of the distribution on the screen is enough.
Hence in order to identify a disturbance in the microscope experiment we can compare the momentum distribution of the particles after the measurement with that obtained by a direct momentum measurement. From this comparison, made for all input states, we get ∆P. The same idea applies to accuracy: we compare the output distributions from the given device with those from the observable one intended to measure (the ideal, or "reference" observable). For the microscope this standard of comparison is the position observable, which is implicit in calling the microscope an "approximate position measurement". The accuracy is a property of the device, a benchmark quantity. A stated value ∆Q implies the promise that, no matter which input state is chosen, the deviation between the probability distributions from the device and from the reference is less than ∆Q.
What is conspicuously absent from the above explanations, but is clearly needed to come to quantitative relations, is a way of quantifying the deviation between two probability distributions. There are different ways of defining this, which we will briefly describe in Sect. 2.3.5. But once this is settled, one may embark on finding quantitative trade-off relations between ∆P and ∆Q. Moreover, it is clear that there is nothing special about position and momentum in this conceptual framework, so it applies to arbitrary pairs.
Uncertainty 8: Measurement precision and uncontrollable disturbance
Heisenberg [H:183] also refers to the disturbance from a measurement procedure as "fundamentally uncontrollable" (e.g., also [30]). For a long time, I (R.F.W.) found this expression peculiar, but I believe we can now give a good explanation. This is based on the question of how one could try to control the disturbance. Let us assume from the outset that we know the construction of the microscope exactly. With that we know all systematic errors, which one can possibly correct for. The measurement after the interaction then no longer needs to be the standard momentum measurement because it may contain such corrections. We can allow an essentially arbitrary measurement device, constructed with the sole purpose of producing momentum-like outputs, whose distribution is always as close as possible to that of a direct momentum measurement. We can even grant this reconstruction measurement access to the position value obtained in the prior position measurement, or to internal quantities obtained from some monitoring of the microscope measurement, as long as we do not interfere with the process of determining Q, i.e., as long as the microscope is not "disturbed" in the sense explained above. An uncertainty relation for precision and "uncontrollable" disturbance is then a trade-off between ∆Q and ∆P , obtained with an optimized reconstruction measurement.
Uncertainty 9: Measurement uncertainty
We can generalize this further and at the same time restore the symmetry between position and momentum. Taking the microscope and the reconstruction measurement together as one big device, we will consider a scenario in which there is only one device with two outputs, one position-like and the other momentum-like [*20]. We can then determine ∆Q as before, by comparing the position-like output distribution with that of an ideal position observable. In this process the momentum-like output is ignored. Our previous definition of momentum disturbance still applies, only this time we ignore the position-like output. That gives a mirror image of the accuracy definition, so in this scenario disturbance is actually the same as accuracy for the momentum-like output. For position and momentum, there are measurement uncertainty relations of the form

∆Q · ∆P ≥ a ħ,

where the form of the relation already follows from dimensional analysis, namely from the symmetry Q → λQ and P → P/λ. The constant a depends still on how we compare distributions, as also occurs for the corresponding constant in the preparation uncertainty relation. The first relation of this form is proved in [49]. Further developments are found in [8,9], and some are described in Sect. 2.3.6.
Technical supplement: Quantitative comparison of probability distributions
The crucial technical point for the definitions given above is the evaluation of the distance between two probability distributions, which plays the same foundational role for measurement uncertainty as the variance plays for the evaluation of the width of a distribution for preparation uncertainty. In some applications, the "total variation" is used as the distance between two probability measures. Up to a factor this is the largest difference between the probabilities of any event if first one and then the other distribution is used in the calculation. Such a distance measure has the physical unit of a probability and so is a dimensionless number. However, we want a metric that reflects distances in the underlying outcome space. So the distance between two distributions of position should be in meters, and the distance between two sharply peaked distributions should be roughly the distance of the peaks.
The general approach starts from this consideration and also works for measurements with arbitrary outcomes, not necessarily real numbers. We take the outcomes as elements of a set X, on which we have already defined a metric. That is, the main input to the construction is a way to ascertain the distance d(x, y) between two points x and y. For the real-valued case, one usually sets d(x, y) = |x − y|. For other quantities, for example, for angles [7], there are many natural choices depending on the application at hand. We denote by δ_x the probability distribution that delivers the value x with certainty. Mathematically we call this a point measure, and the δ is reminiscent of Dirac's δ function. Then it should be true that

d(δ_x, δ_y) = d(x, y).

Here we have simply taken the same letter "d" for the distance between measured values and for the (not yet defined) distance between distributions to emphasize the connection between them, which is expressed through this equation. As a next step, we consider the distance between a point measure δ_x and a general distribution µ. The distance of a measured value y from x is then a random variable d(x, y). More generally, we consider the power d(x, y)^α, with a so-called error exponent α ≥ 1. For the usual variance and related quantities, we take α = 2. We then choose the distance to be the µ-expectation value of this distance function (with power). Following the usual rules of probability theory, this is given by an integral over µ:

d(δ_x, µ)^α = ∫ d(x, y)^α µ(dy).

Here the power α on the left side of the equation ensures that d(δ_x, µ) has the same scaling and units as the metric d itself. In this framework, the natural spread of a probability measure µ is

∆(µ) = min_x d(δ_x, µ),

i.e., the distance to the nearest point measure. One easily checks that for α = 2 and (X, d) = (R, |·|) this is just the standard deviation (2).
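As a small illustration (my own, in pure Python), the spread ∆(µ) of a discrete distribution can be found by a grid search over point measures; for α = 2 and the absolute-value metric it reproduces the ordinary standard deviation:

```python
import math

# A discrete probability distribution mu on the line: (outcome, probability)
mu = [(-1.0, 0.25), (0.0, 0.5), (2.0, 0.25)]

def dist_to_point(x, mu, alpha=2):
    """d(delta_x, mu): the mu-average of d(x, y)^alpha, then the alpha-th root."""
    return sum(p * abs(x - y)**alpha for y, p in mu) ** (1 / alpha)

# Spread = distance to the nearest point measure (grid search for this sketch)
spread = min(dist_to_point(i / 1000, mu) for i in range(-3000, 3001))

# Compare with the ordinary standard deviation of mu
mean = sum(p * y for y, p in mu)
std = math.sqrt(sum(p * (y - mean)**2 for y, p in mu))
print(spread, std)  # agree up to the grid resolution
```

With α = 1 the same construction would instead yield the mean absolute deviation from the median.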
For the distance between two arbitrary probability measures µ and ν, one could now think about averaging the metric in two variables. But that is a bad choice, because then d(µ, µ) would rarely be zero. Instead, one averages over a joint distribution for the two measures, i.e., a measure γ on X × X, so that integrating out the second variable gives µ and integrating out the first gives ν. Such a measure is called a "coupling" of µ and ν or a "transport plan". This goes back to a task tackled by the mathematician and fortress builder Gaspard Monge, in which heaps of clay with distribution µ must be converted into a fort with clay distribution ν. The measure γ then describes how much earth should be moved from the vicinity of x to the vicinity of y. We now assume that the transport of a bucket of earth from x to y incurs a cost of d(x, y)^α. For example, with α = 1 this means paying workers by the bucket and the meter. With α = 2 they get a bonus for long hauls. Then the cost of moving earth for building the fort, with a cleverly chosen transport plan, is

d(µ, ν)^α = inf_γ ∫ d(x, y)^α γ(dx, dy),   (3)

where the infimum is over all couplings γ of µ and ν. One therefore calls d(µ, ν) the transport distance from µ to ν. This is indeed a metric on the set of probability measures with finite variance and has many desirable properties. A good book on the subject is [45]. When considering observables we must now take into consideration that every state ρ that is measured gives another probability distribution. We denote by ρ_A the probability distribution that arises via the measurement of an observable A on the state ρ. Here, A can also be a generalized observable (POVM, see [*20]), as is typical of the marginals of joint measurements.
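For equal-weight samples on the real line with d(x, y) = |x − y|, the optimal transport plan simply pairs off the sorted outcome lists, which makes the transport distance easy to evaluate. A minimal sketch (my own, not a general-purpose algorithm):

```python
# Two empirical distributions, each giving weight 1/4 to four outcomes
mu = [0.0, 1.0, 2.0, 3.0]
nu = [0.5, 1.5, 2.5, 3.5]

def transport_distance(xs, ys, alpha=2):
    """Transport distance for equal-weight samples with d(x, y) = |x - y|.

    In one dimension, for convex costs |x - y|^alpha with alpha >= 1, the
    cheapest transport plan couples the sorted lists (quantile coupling)."""
    xs, ys = sorted(xs), sorted(ys)
    cost = sum(abs(a - b)**alpha for a, b in zip(xs, ys)) / len(xs)
    return cost ** (1 / alpha)

print(transport_distance(mu, nu))  # 0.5: nu is mu shifted by half a unit
print(transport_distance(mu, mu))  # 0.0: the distance is a metric
```

For distributions with unequal weights one would instead compare the quantile functions, or solve the transport problem as a linear program.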
In the typical uncertainty application, A might be the observable that we want to measure, the "ideal reference", e.g., the standard position observable. A′ will be an approximate version of it, for example one marginal of a joint measurement. The distance

d(A′, A) = sup_ρ d(ρ_{A′}, ρ_A)   (4)

is then an overall figure of merit for A′, one that might be found in the specs of the device A′. The specification d(A′, A) ≤ ε is equivalent to the following promise: regardless of which state ρ is measured, the distribution ρ_{A′} arising from measuring A′ deviates by at most ε (in the sense of transport distance) from the distribution ρ_A, which the reference observable A would have given.
It is important that this holds for all states, i.e., that the maximum is taken in (4). The state-dependent quantity d(ρ_{A′}, ρ_A) by itself would be a ridiculously weak measure of quality. A testing lab using it could be fooled by a device A′ which simply outputs the distribution ρ_A on every state. Of course, this hardly deserves to be called a measurement. A "good measurement" should be one that delivers reliable results even on unknown states.
Results on measurement and preparation uncertainty
The definitions given above now allow us a quantitative specification of measurement uncertainty relations. As in the preparation uncertainty case, it is a good idea to draw an uncertainty region, i.e., the set of pairs (d(A′, A), d(B′, B)), where A′, B′ are the marginals of some joint measuring device. A measurement uncertainty relation is any statement to the effect that this set of pairs does not reach the origin. Apart from the observables A and B under consideration, this region depends on all the choices made for the quantitative description of uncertainties: the metrics on the outcome spaces and the error exponents, which need not be the same. Note that these choices determine a variance quantity (2) for use in a preparation uncertainty relation as well as a distance (4) for probability distributions and observables. Therefore, it makes sense quite generally to compare the preparation uncertainty diagram with the measurement uncertainty diagram.
Doing this for position and momentum, we find that the diagrams are exactly equal! Taking the Euclidean distance and α = 2, this gives [8,9]

d(Q′, Q) d(P′, P) ≥ ħ/2,   (5)
with the same constant as in the Kennard-Weyl preparation uncertainty. To understand what is going on, let us consider a more general case, namely that of two observables related by Fourier transform. The outcome sets of these observables can be arbitrary locally compact abelian groups, which are duals of each other. The product of these spaces is then called a phase space. The Hilbert space is the space of square integrable functions on one of these groups with Haar measure, and it does not matter which, because in this general setting we have a unitary Fourier transform connecting the P-representation and the Q-representation. In the standard position/momentum case both groups are R, but we could also take R^n if we think of position and momentum as vector valued, or the circle group and the integers (angle and number) [7], or bit strings, or combinations thereof. Of the two metrics we only demand translation invariance d(x + z, y + z) = d(x, y), which makes sense because the domain is a group. We claim [51] that in all these cases the uncertainty regions for measurement and preparation coincide. The crucial step is to consider a special type of joint measurement, called a covariant phase space measurement. These have the property that phase space shifts on the input quantum state are equivalent to the corresponding shifts of the outcome distributions. The structure of such observables is known. Each observable is uniquely characterized by an operator σ, which gives the probability density at the phase space origin. This is sufficient to determine the observable, because by covariance we then get the density at all phase space points. The necessary and sufficient condition for σ to be the density of a normalized covariant observable is that σ ≥ 0 and tr σ = 1, i.e., σ is a density operator as is normally used to describe a preparation.
These joint measurements are well-known in quantum optics, where taking σ as the oscillator ground state gives an observable whose output density is the so-called Husimi function of the input state. One can compute the marginals of such an observable, and it turns out that the Q-marginal is the convolution ρ′_Q = ρ_Q ∗ σ_Q, where these are the output densities of ρ and σ for Q, and similarly for P. In other words, we can simulate the marginal by making a standard position measurement on ρ and then adding noise with distribution σ_Q from an independent source. This immediately provides the idea for a coupling γ of the distributions ρ_Q and ρ′_Q, namely the measure on X × X where the first component x is distributed according to ρ_Q and the second is y = x + z with z distributed according to σ_Q.
Doing the integral in (3) then gives

d(ρ′_Q, ρ_Q)^α ≤ ∫ d(z, 0)^α σ_Q(dz),

since for the coupling just described the transported distance is d(x, x + z) = d(0, z) by translation invariance.
That is, the errors are bounded by the size of the added "noise", which in turn is a variance, once we shift σ in phase space so that the minimum in the variance definition is attained at 0. So the measurement and preparation uncertainty diagrams are equal by virtue of a one-to-one correspondence between (covariant) joint measurements and states. Via this mapping we can also take over the minimizers and conclude that the unique covariant phase space measurement for which (5) holds with equality is the one giving the Husimi distribution. What is left for a general proof is an argument for why non-covariant joint measurements cannot do better. This is done by an averaging argument, but it also involves a careful discussion excluding the possibility that some averaged observable picks up a non-zero probability for infinite values [9]. From this proof sketch it is clear that the quantitative equality of preparation and measurement uncertainty is due to the high symmetry of the observables connected by Fourier transform. If one were to choose two arbitrary observables A and B, such a relation rarely holds. Indeed, there are efficient ways to compute measurement uncertainty bounds by semidefinite programs [40], and even more efficient ways [39] for the preparation uncertainty mentioned above, so it is easy to generate examples. This makes it clear that Heisenberg's identification of the two kinds of knowledge is indeed invalid.
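The convolution structure of the covariant marginal can be mimicked with samples: measure position ideally, then add independent noise drawn from the distribution determined by σ. A sketch (my own, assuming NumPy) confirming that the resulting transport distance between the two output distributions is bounded by the spread of the noise:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200000

# Ideal position outcomes for some state rho, here a unit Gaussian
q_ideal = rng.normal(0.0, 1.0, n)
# Noise distributed according to the sigma-marginal (spread 0.3)
noise = rng.normal(0.0, 0.3, n)
# Outcomes of the noisy marginal: ideal measurement plus independent noise
q_noisy = q_ideal + noise

# alpha = 2 transport distance between the two output distributions,
# evaluated via the optimal (sorted) coupling of equal-weight samples
d = np.sqrt(np.mean((np.sort(q_ideal) - np.sort(q_noisy))**2))
noise_spread = np.sqrt(np.mean(noise**2))
print(d, noise_spread)  # the transport distance stays below the noise spread
```

The bound is far from tight here: independent Gaussian noise broadens a Gaussian only in quadrature, so the output distribution moves much less than the noise spread would suggest.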
For a direct example of this kind, consider for each of the observables A and B a complete von Neumann measurement, i.e., a projective measurement along some orthonormal basis. Then the uncertainty pair (0, 0) is allowed for preparations if the two bases have one vector in common. For measurement uncertainty, the point (0, 0) is attainable if the two observables are exactly jointly measurable, i.e., they commute. That corresponds to the much sharper condition that the two observables are the same up to a permutation of the outcomes.
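The gap between the two conditions can be seen in a small example (my own construction): two non-commuting von Neumann measurements on C³ whose eigenbases share exactly one vector. Preparing that shared eigenvector gives the preparation pair (0, 0), although the observables do not commute and hence admit no exact joint measurement:

```python
import numpy as np

# Observable A is diagonal in the standard basis of C^3
A = np.diag([0.0, 1.0, 2.0])

# Observable B has the same eigenvalues, but its eigenbasis is the standard
# basis with the last two vectors rotated, so the two bases share only e0
c, s = np.cos(0.7), np.sin(0.7)
U = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
B = U @ np.diag([0.0, 1.0, 2.0]) @ U.T

def variance(Op, psi):
    m = psi @ Op @ psi
    return psi @ Op @ Op @ psi - m**2

psi = np.array([1.0, 0.0, 0.0])  # the shared eigenvector e0

print(variance(A, psi), variance(B, psi))  # both 0: preparation pair (0, 0)
print(np.linalg.norm(A @ B - B @ A))       # nonzero: no exact joint measurement
```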
It turns out that (at least in finite dimensional Hilbert spaces) the convex hull of the preparation uncertainty region contains the measurement uncertainty region for projective measurements [41]. There are numerical counterexamples showing that the convex hull is needed here. It is also clear that such a statement cannot hold for general POVM measurements. Indeed if we add noise to some given measurements, preparation uncertainties go up, because all distributions become broader. However, measurement uncertainty goes down, typically to zero, because with sufficient noise it becomes possible to measure the given observables jointly.
An Aside: Ozawa's error and disturbance
Masanao Ozawa recently asserted that Heisenberg made a mistake regarding the relationship between measurement precision and disturbance. The basis for this is a paper [31] from 2003, in which he gives mathematical definitions of a root mean square error ε(Q) and a root mean square disturbance η(P), referring to the microscope situation. Ozawa then alleges that Heisenberg's claim in [H] is that ε(Q) η(P) ≥ ħ/2, and goes on to show that this relation is not generally true. From this Ozawa concludes that Heisenberg was wrong.
It should be amply clear from the above reading of Heisenberg that he works entirely on a heuristic level, and never defined root mean square quantities or claimed any inequality. Turning one of his intuitions into a mathematical statement that can then be proved or disproved is an active process that gets at least as much input from the person doing it as it gets from Heisenberg. When it comes to exact statements, Heisenberg by himself is generally not even wrong. So refuting Heisenberg at that level is simply not a worthy scientific target. The only interesting thing is to find out whether there is something to his intuitions after all. In the case of error and disturbance there clearly is, as we have demonstrated above. There may be other aspects, focusing on other quantities. But the one sure criterion that such an attempt has missed the target is to have no uncertainty relation. This is what happened to Ozawa.
Around the time of his paper, I (R.F.W.) heard Ozawa speak at a conference in Japan and tried immediately to convince him that his formulation of measurement uncertainty and disturbance was no good. I failed, but at least that stimulated me to show how one could do this better [49]. One way or another, the subject hardly interested anyone for quite a while after that. However, when an experimental group succeeded in measuring Ozawa's quantities [36], huge hype in the media appeared stating that Heisenberg had been experimentally disproved.
Naturally, the experiment is quite irrelevant for the question of whether Heisenberg was correct. It is also irrelevant for the question of whether Ozawa's definition of error and disturbance are sensible renditions of what one might understand by these terms. Measuring a silly quantity does not make it less silly. The uncertainty relations that Ozawa foisted on Heisenberg do not even fail in any interesting way, as a quick look will show. In fact, it was remarked in the literature long before Ozawa [1] that one should not proceed in this way. So from this point of view Ozawa's work was simply a step backwards.
One reaction to the hype of the supposed refutation of Heisenberg was that Paul Busch and Pekka Lahti got in touch with me (R.F.W.) and proposed that we should go against the hype on the basis of my old paper [49]. Out of this came the stimulating collaborations [8,9], including a detailed criticism of Ozawa's approach [10].
Uncertainty 10: No information gain without disturbance
Till now, we have only considered measurement uncertainty for pairs of observables. Sometimes it is interesting to allow arbitrarily many observables. A situation where this becomes necessary is quantum cryptography, or, more precisely, quantum key distribution. In this setting there is a channel shared by legitimate users, traditionally referred to as Alice and Bob, over which quantum particles are sent. The eavesdropper is evil Eve. She may carry out arbitrary measurements on the particles flying by. But in so doing, she disturbs them, something Alice and Bob can readily determine by statistical tests. The fundamental principle at work here is that there cannot be any information gain from a quantum system without disturbance. To make this explicit, consider a measurement process M with the property that the state after the measurement is always the same as before. That is, for any further statistical measurement it does not matter whether M was carried out or not. Then it follows that the results obtained by M are independent of the input state, i.e., we learn nothing about the quantum system.
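A minimal qubit sketch of this principle (my own construction, with the measurement strength ε as a hypothetical parameter): the Kraus operators K± = √((I ± ε σ_z)/2) describe a weak σ_z measurement. For ε = 0 the outcome statistics are the same for every state (no information) and the state is left untouched; for ε > 0 the outcomes become state-dependent, and the coherences of a superposition are necessarily damped (disturbance):

```python
import numpy as np

def weak_measurement(rho, eps):
    """Two-outcome weak measurement of sigma_z with strength eps in [0, 1].

    Kraus operators K± = sqrt((I ± eps*sigma_z)/2); returns the outcome
    probabilities and the unconditional post-measurement state."""
    sz = np.diag([1.0, -1.0])
    I = np.eye(2)
    # The operators are diagonal, so the element-wise sqrt is the matrix sqrt
    K = [np.sqrt((I + sign * eps * sz) / 2) for sign in (+1, -1)]
    probs = [float(np.trace(Ki @ rho @ Ki)) for Ki in K]
    post = sum(Ki @ rho @ Ki for Ki in K)
    return probs, post

plus = np.array([[0.5, 0.5], [0.5, 0.5]])  # |+><+|, maximal coherence

# eps = 0: uniform outcomes on every state, and no disturbance at all
p0, rho0 = weak_measurement(plus, 0.0)
# eps = 0.6: the coherence of |+> is damped by sqrt(1 - eps²) = 0.8
p1, rho1 = weak_measurement(plus, 0.6)
# ...while on a sigma_z eigenstate the outcomes now reveal the state
p_up, _ = weak_measurement(np.diag([1.0, 0.0]), 0.6)

print(p0, rho0[0, 1])  # [0.5, 0.5] and coherence 0.5: nothing happened
print(p1, rho1[0, 1])  # coherence shrunk to 0.4: disturbance
print(p_up)            # [0.8, 0.2]: information gain about the state
```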
Actually, we need a stronger statement, because exact equality of states before and after a transmission line never holds. There are always small losses, and the rules of the game are that we must attribute them to the eavesdropper. Thus we need to conclude from a sufficiently small disturbance, that the eavesdropper cannot have learned much, and give quantitative bounds for this. This is harder to get, and there are different ways of phrasing it mathematically. With an error criterion based on maximal errors one result of this kind is [28]. Moreover, entropic uncertainty relations with side information have proven to be useful for cryptography [43].
It is interesting that the converse of the statement "no information gain without disturbance" also holds. It is known as the Knill-Laflamme criterion for the correctability of errors [26]. If during some quantum process no information has been given to the environment, then there is a quantum process that restores the input state, i.e., there was no real disturbance. This can be used to design fault-tolerant quantum computers, i.e., computers that work to any desired accuracy in spite of imperfect components.
Conclusions from later developments
If "knowledge" about a quantum mechanical system is established via preparation or measurement, then we need preparation uncertainty relations and measurement uncertainty relations together to translate Heisenberg's idea into quantitative theorems in quantum mechanics. The interest in doing this has grown since Heisenberg's time. In his time, experiments close to the uncertainty limit were hardly conceivable. Today they are commonplace in laboratories. There it is important to know how close exactly one is to the boundary. In formulating such uncertainty relations there is considerable mathematical leeway, which one can use to construct and to prove inequalities tailored optimally to a given situation. We should get used to speaking, not of "the" uncertainty relation, but rather of "an" uncertainty relation, or a whole collection of them.
Heisenberg demonstrated great foresight when he introduced his relations as a heuristic principle. For many years afterwards, that was, without a doubt, their most important role, and still today there is nothing better to quickly decide where quantum effects must be taken into consideration.
Remarks
[*1] In a letter to Pauli [20, [105]], Heisenberg laments the existence of two separate quantum communities in Göttingen, with very different views of the relationship of Mathematics and Physics. He especially dislikes the group around Hilbert and Weyl embracing matrices as bringing new progress in physics, and he even considers finding a more physicsy term for matrices. The two groups did make rather different contributions. Whereas Heisenberg was satisfied to develop an intuition "for all simple cases" that occurred to him, the mathematicians von Neumann and Weyl searched for and readily found the generalizable structures and interpretations that we still use today. On the other hand, the uncertainty paper is an undeniable success of the heuristic physical, anti-mathematical approach. However, some of the physicists around Heisenberg did feel the need to bring adequate mathematical tools into the new theory. In 1925/26 Born had already brushed up his operator theory on a visit to Norbert Wiener. He was also the Academy member to submit von Neumann's paper [47]. Just a short while later Pauli and Jordan were both collaborating with von Neumann.
Heisenberg in his later years expresses his appreciation of the mathematical side by claiming that it was his own work in the first place. Here is how, in 1956 [H5], he summarizes the contribution of his uncertainty paper: the link from an experimental situation to its mathematical representation was to be achieved by the "hypothesis that only such states may appear in nature, or can be realized experimentally, that can be represented by vectors in Hilbert space." (Italics in the original). Now this and much more could be said about von Neumann's article [47], in which he actually coins the term "Hilbert space". Heisenberg's paper [H] naturally does not contain the word. Neither does it contain the thing itself. In [H5] he also claims to have corresponded extensively with Pauli about "this kind of solution", but again, the surviving letters (e.g. [20, [115]]) have nothing.
[*2] We found two translations of [H] (see references). Both make a mess of this distinction. Wheeler and Zurek choose "physical content". The anonymous translation on the NASA website has "actual content". Another option, found in the translation of a paper by Schrödinger, is "perspicuity". See also [23].
[*3] In contrast to today, linear algebra was not part of the standard curriculum. When he created matrix mechanics in 1925, Heisenberg knew nothing about the mathematics of matrices, and even Born found it worth mentioning from whom he himself had learned this exotic subject.
[*4] The published statement, which Heisenberg presumably meant here, came from Schrödinger's paper [38] in which he shows the equivalence of matrix mechanics and his theory. It is from a footnote in which he recalls why he had initially ignored Heisenberg's work and why, therefore, his own theory owed nothing to Heisenberg. It is a pity that this is the only part of the paper that Heisenberg mentions. A more mature reaction would have been to accept Schrödinger's result that the two approaches are two sides of the same coin, with priority granted to matrix mechanics, and then work on the remaining differences. The fight against the "disgusting continuum theorists" as a central issue of [H] is worked out clearly in [2].
[*5] "We create internal virtual images or symbols of the external objects, and we make them in such a way that the logically necessary consequences of the images would invariably be the images of the natural consequences of the depicted objects." [21]. The role of the Hertzian "images" in Heisenberg's early works is traced in [18].
[*6] "The philosophy is written in that great book, which constantly lies open before our eyes (I speak of the universe) [. . . ]. It is written in the language of mathematics, and the letters are triangles, circles and other geometrical figures." [17]

[*7] In fact, the move hardly convinced Heisenberg himself. The "victory footnote" [H:196] cited above continues by granting Schrödinger an important role in the "mathematical (and in this sense intuitive)" development of the theory [italics by Heisenberg]. This supports my reading that the notion has been extended to include mathematical intuition, here even any mathematical work, but at the same time it seems visualization of a lesser kind. If we apply that to the paper [H] itself it amounts to affirming the criticism it sets out to rebut: 'Sorry, we do not have visualizable content, but lots of math'. Therefore the visualizability grapes have to be sour. Indeed, Heisenberg continues by charging wave mechanics with "leading away from the straight path outlined" by Einstein, de Broglie, Bohr, and 'quantum mechanics', by the poison of "popular visualizability".
[*8] Nevertheless, the microscope yields further interesting aspects upon closer inspection. For example, one can discuss what happens if one places the photographic plate or detector in the focal plane instead of the image plane, so that one detects the direction of the photon, not its point of origin. Moreover, one can make this choice after the Compton scattering has occurred. [19]

[*9] An example is the online exhibition about Heisenberg by the American Institute of Physics [11]. Following the link "Derivation of the uncertainty relation", one finds the cited page.
[*10] In the "two-men-paper" [6], this is the functional derivative of the formal trace of the Hamiltonian, which requires H to be written as a non-commutative polynomial in a special symmetrized form. In the "three-man-paper" with Heisenberg [5], this is replaced by the directional derivative along scalar shifts. That makes the transition to commutators and to the modern representation considerably simpler. Both forms of partial derivative are forgotten today and only served the explicitly stated purpose of allowing the equations of motion to appear in a similar form to Hamilton's equations.
[*11] The "simplest conceivable assumption" is actually a very poor, even wrong description of the transition. The distinction between diagonal and non-diagonal matrix elements depends on the basis (and none is specified), and just one diagonal element [=Diagonalglied] as a replacement is even crazier. Just a little further down the whole matrix is taken as the replacing object, which makes more sense. It is an interesting project to develop a reading of the quote that makes sense without excessive interpretational bias. In the paper the most helpful passage for this might be [H:181/182]. Like the quote, it seems to refer to a notion of assigning values to a matrix which is not wholly captured by an expectation value.
[*12] In [H6] Heisenberg dates his conversation with Einstein, from which the quote presumably comes, to the spring of 1926. He mentions that, for him, recalling that conversation was essentially the stimulus for the uncertainty paper. In the narrative of [H6], Einstein criticizes the methodology in [H0] although Heisenberg feels that he has taken it from Einstein himself. It is hard to say whether Einstein would have found [H] any more in agreement with his philosophy.
[*13] The observable-centered approach of the older work [H0] of 1925 is sufficient for Heisenberg here to proclaim the conclusiveness of the theory! That any method for establishing a new theory guarantees its truth must be doubted. Moreover, this confidence is expressed towards a formalism that is still in its infancy. To what extent the matrices of the formalism [5] are "experimentally determined numbers" remains unclear. In [6] and [5], the concept-critical approach plays a subordinate role. Hence, a purpose of this quoted sentence is to mention this aspect and with it Heisenberg's own contribution once again, and so to defend primacy and priority for the whole of quantum mechanics as well as its ultimate formalism (see also [*1]).
[*14] If one reads them in isolation, one may exclaim at these sentences: "Has this fellow not read Heisenberg?". Wasn't it the whole purpose of §1 to cast doubt upon the word "exact"? But we hope it has become clear that "exact" in this sentence has nothing to do with the opposite of uncertainty or imprecision. Rather "exactly defined" is to be read here as "operationally defined".
[*15] How the uncertainties, which were introduced with the meagre precision of Heisenberg's tildes, could become the "reason" for precise quantitative probability relations is another unfulfilled burden of proof. It only appears in the abstract. The paper itself has no details about it.
[*16] Heisenberg uses a variant of this line also in his Nobel lecture [H3]. The error of taking the uncertainty relation as an additional element, which is independent enough of quantum mechanics so that one could even think of taking or leaving it, has also been committed by some people without such personal motives.
For example, following his semi-classical discussion of the double-slit experiment, Richard Feynman writes: "The uncertainty principle 'protects' quantum mechanics. Heisenberg recognized that if it were possible to measure the momentum and the position simultaneously with a greater accuracy, the quantum mechanics would collapse. So he proposed that it must be impossible. Then people sat down and tried to figure out ways of doing it, and nobody could [. . . ]. Quantum mechanics maintains its perilous but still correct existence." [16, Section 1-8] That completely warps the role of a theorem in the theory and is also historical nonsense, just like Feynman's fictitious reasons for why Einstein did not accept quantum mechanics [16].
[*17] Don Howard has an entirely different interpretation here [24]. He would possibly let this be true for Heisenberg but is of the opinion that Bohr had, from the beginning, a deep understanding of entanglement, particularly the entanglement between a system and measuring apparatus caused by the measurement interaction.
[*18] It is also unclear for me (R.F.W.) how the American Kennard came to write in a German style so reminiscent of Heisenberg's. Maybe Heisenberg's contribution to this paper was bigger than Kennard's acknowledgement reveals. In a letter to Pauli [20, [164]], dated May 31 in Copenhagen, Heisenberg writes: "I am still very unhappy about the work of an American, which he began with me, and which is in that touchy subject, which I would like to stay away from right now". This could be a reference to Kennard, but the normally very thorough editors of the letters give no hint about his identity.
[*19] A relatively early acknowledgement of this problem came from Karl Popper [32,33]. In these papers we see him fighting the ambiguity in [H]. He did not achieve a complete clarification, and later even had to retract a proposed experiment. But the distinction between the "statistical scatter relations" (Popper's own translation of "Streuungsrelationen", i.e., variance relations) of preparation uncertainty from questions of measurement precision is quite clear: the measurement of a distribution presupposes the sharp measurability of the outcomes in each single case. He comes very close to the concepts of preparation and measurement uncertainty with his distinction of non-prognostic measurements, i.e., those in which one does not care about the particle afterwards, and prognostic measurements, which serve to prepare a new initial state. In his immediate response, von Weizsäcker gives an argument why the measurement uncertainty relations (which Popper is in some sense asking for) must be the same as the preparation uncertainty relations: he sees one kind directed towards the future and the other towards the past, and invokes the time inversion symmetry of quantum mechanics. If it were really that simple, we could have saved ourselves a lot of mathematical work.
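Popper's "statistical scatter relations" are, in modern terms, preparation uncertainty relations in the sense of Kennard's theorem: for any state vector ψ, the standard deviations of the position and momentum outcome distributions satisfy

```latex
\sigma_x \, \sigma_p \;\geq\; \frac{\hbar}{2},
\qquad
\sigma_A^{2} \;=\; \langle \psi | A^{2} | \psi \rangle - \langle \psi | A | \psi \rangle^{2}.
```

This is a statement about statistics over many identically prepared systems; by itself it says nothing about the accuracy of a joint measurement on a single system, which is exactly the distinction at issue in this note.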
[*20] This will usually be a generalized observable, for which the probability of each outcome is not given by a projection, but by a general positive operator (POVM, positive operator valued measure). In a sequential measurement, this is anyhow what comes out, so we cannot avoid this kind of observable. However, it is a point that von Neumann missed. And here we have to say that great men cast long shadows. Even though his role in the early days of quantum mechanics is systematically ignored in the textbook literature on quantum theory, and most physicists have not even heard of him, von Neumann was in the long run hugely influential. His 1932 book was widely recognized as the mathematical basis, and his choice of projections as the yes/no observables, the related identification of observables with hermitian operators, and finally the projection postulate are almost universally accepted. However, these assumptions are unnecessarily restrictive. They are often false in experiments, and almost always when one analyzes an indirect measurement. Various authors came to this realization, beginning with Holevo in estimation theory and Ludwig [29] for reasons of axiomatic parsimony. Now the quantum information community operates entirely in the wider framework. It would be interesting to study von Neumann's reasons, but that is beyond the scope of this article.
Identification and Characterization of Two Novel bla KLUC Resistance Genes through Large-Scale Resistance Plasmids Sequencing
Plasmids are important carriers of antibiotic resistance determinants that can disseminate various drug resistance genes among species or genera. Using a high-throughput sequencing approach, two pools of plasmids from Escherichia coli (named E1 and E2, each derived from 160 clinical E. coli strains isolated during different periods of time) were sequenced and analyzed. A total of 20 million reads were obtained and mapped onto the known resistance gene sequences. As a result, a total of 9 classes, comprising 36 types of antibiotic resistance genes, were identified. Among these genes, 25 and 27 single nucleotide polymorphisms (SNPs) appeared, of which 9 and 12 SNPs are nonsynonymous substitutions in the E1 and E2 samples, respectively. Interestingly, a novel genotype of bla KLUC was identified, whose close relatives, bla KLUC-1 and bla KLUC-2, have been previously reported as carried on the Kluyvera cryocrescens chromosome and an Enterobacter cloacae plasmid. It shares 99% and 98% amino acid identities with Kluc-1 and Kluc-2, respectively. Further PCR screening of 608 Enterobacteriaceae family isolates yielded a second variant (named bla KLUC-4). Interestingly, Kluc-3 showed resistance to several cephalosporins including cefotaxime, whereas bla KLUC-4 did not show any resistance to the antibiotics tested. This may be due to a positively charged residue, Arg, being replaced by a neutral residue, Leu, at position 167, which is located within an omega-loop. This work represents a large-scale study of resistance gene distribution, diversification and genetic variation in pooled multidrug resistance plasmids, and provides insight into the use of high-throughput sequencing technology for microbial resistance gene detection.
Introduction
Multidrug-resistant E. coli, a clinically significant pathogen, has become a major threat to human health all over the world [1][2][3]. It has become a major cause of hospital-acquired infections worldwide, mostly due to rapid acquisition of resistance determinants by horizontal gene transfer via mobile genetic elements such as plasmids, integrons and transposons [4]. More and more chromosomally encoded antibiotic resistance genes are emerging on mobile genetic elements, especially those carried on plasmids, which are thus easily disseminated. Examples include the CTX-M family of extended-spectrum b-lactamases (ESBLs), which are mainly produced from Enterobacteriaceae plasmids and have become extensively widespread enzymes in the past two decades [5][6][7]. Quinolone resistance in Enterobacteriaceae is commonly considered to be the result of chromosomal mutations [8]. However, plasmid-mediated quinolone resistance (PMQR) has recently been discovered; it is related to a variety of genes, including qepA, qnr, oqxAB and aac(6')-Ib-cr, and this resistance spreads through plasmid transfer among different Enterobacteriaceae strains [9,10]. Many of these PMQR genes are accompanied by ESBL and/or aminoglycoside resistance genes on the same plasmid [8]. The increase in number as well as the complicated composition and constitution of resistance genes on plasmids poses a potential threat to empirical treatment of infections. Although plasmids are the most prevalent carriers of antimicrobial resistance determinants, the total resistance genotypes and gene abundance on them are difficult to assess unless complete plasmid sequences are available. Compared with large-scale detection of resistance gene profiles, PCR methods appear cumbersome and time consuming. Furthermore, they are unsuitable for detecting the relative abundance of resistance genes in mixed samples, which could provide insight into resistance gene epidemic tendencies.
Second-generation sequencing technologies are generally used to resequence genomes for which reference sequences are available [11,12]. Solexa sequencing, one of the second-generation sequencing techniques, has the property of high throughput and can be used to simultaneously sequence templates at a very large scale [13]. It has also been successfully used in de novo sequencing and assembly of large eukaryotic and bacterial genomes [11,[14][15][16]. In this study, we applied high-throughput parallel sequencing to investigate resistance gene distribution, diversification and genetic variation in plasmids from 320 strains of multidrug-resistant E. coli (named E1 and E2, each consisting of 160 clinical E. coli strains isolated during different time periods). Comparative genomics analyses led us to identify all the genes associated with previously known antibiotic resistance, which in turn permitted investigation of new subtypes of resistance genes.
Bacterial Strains Collection, Plasmid Extraction and High Throughput Sequencing
A total of 928 clinical strains of the Enterobacteriaceae family, isolated from sputum, urine, pus or blood samples of patients, were collected in the First Affiliated Hospital of Wenzhou Medical College, spanning the years 2002 to 2010. Among these isolates, 320 strains are E. coli, isolated during 2002-2003 (160 strains) and 2008-2009 (160 strains). The other 608 strains included K. pneumoniae (113 strains), S. marcescens (132 strains), E. cloacae (190 strains), E. aerogenes (99 strains) and C. freundii (74 strains). The bacterial samples in this study were collected from a large anonymous database according to the protocols of the Wenzhou Medical College Ethics Committee, so detailed participant information could not be obtained. Initially, all the participants orally consented that the isolates could be anonymously used in scientific study. The strains were identified by the Vitek-60 microorganism auto-analysis system (BioMerieux Corporate, France). For the pooled plasmid sequencing, each clinical E. coli strain was incubated independently overnight in 5 ml Luria-Bertani broth at 37°C for about 16 hours to reach an optimum optical density (OD600 = 1.5±0.2). The cultures were pooled and 100 ml of mixed bacteria was used to extract the plasmids. Plasmids were extracted by the alkaline lysis method as previously described [17]. According to the isolation periods, the bacteria were pooled as E1 (160 E. coli strains isolated during 2002-2003) and E2 (160 E. coli strains isolated during 2008-2009), and their plasmids were subsequently used for high-throughput sequencing by Illumina/Solexa technology.
Reference Resistant Gene Sequence Collection, Sequencing Read Mapping and SNP Detection
Antibiotic resistance protein sequences were collected from ARDB (Antibiotic Resistance Genes Database, http://ardb.cbcb.umd.edu) [18]. CD-HIT (http://bioinformatics.ljcrf.edu/cd-hit) was used for clustering protein sequences [19]. TBLASTN was used to compare protein sequences with the nucleotide collection database of NCBI. Comparison of nucleotide sequences was made using BLASTN [20]. The collected gene sequences were assembled using the Phred/Phrap/Consed software package [21]. Mapping of sequencing reads onto references was performed using SOAPaligner 2.20 (2 mismatches per read permitted), and SOAPsnp 1.03 was employed to detect and annotate SNPs [22,23]. A SNP was identified when at least 3 identical high-quality bases different from other bases at the same locus were detected. The relative abundance (sequencing depth) for a certain gene was calculated as the cumulative nucleotide length of the mapped reads on the gene divided by the gene size. Other bioinformatics tools used in this study were written in Perl and BioPerl (http://www.perl.org/).

bla KLUC-3 and bla KLUC-4 Positive Strain Screening, Cloning and Sequence Determination

The primers for positive strain screening and complete ORF cloning were designed according to the consensus sequence of bla KLUC-3. The screening primers were 5′-CGCTAAGCGTAGAGCAGAAACT-3′ and 5′-TCAGGTCACCCTTTTTGATCTC-3′, with a product of 228 bp in length. The primers for complete ORF cloning were 5′-GGGATCCATGGTTAAAAAATCATTACGCC-3′ and 5′-GAGATCTCTATAATCCCTCAGTGACGATT-3′, with a pair of flanking restriction endonuclease adapters (BamHI for the forward primer and BglII for the reverse primer). Whole genomic DNAs of 928 clinical isolates of the Enterobacteriaceae family were used as templates for PCRs. The complete ORF fragment of the PCR product was agarose-gel isolated and cloned into a pMD18 vector (TaKaRa). The recombinant clones were picked and sequenced on an ABI 3730 automated sequencer.
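The depth and SNP criteria above reduce to two small computations. The sketch below only illustrates those rules, not the SOAPaligner/SOAPsnp pipeline itself; the function names, the pileup layout and the toy data are assumptions, while the "at least 3 identical high-quality mismatching bases" threshold and the depth formula come from the text.

```python
from collections import Counter

def sequencing_depth(mapped_read_lengths, gene_size):
    """Relative abundance of a gene: cumulative nucleotide length of
    the mapped reads divided by the gene size (fold coverage)."""
    return sum(mapped_read_lengths) / gene_size

def call_snps(pileup, reference, min_support=3):
    """Naive SNP call following the rule in the text: report a SNP when
    at least `min_support` identical high-quality bases differ from the
    reference at the same locus. `pileup[i]` is the list of high-quality
    bases observed at reference position i."""
    snps = []
    for pos, bases in enumerate(pileup):
        counts = Counter(b for b in bases if b != reference[pos])
        for alt, n in counts.items():
            if n >= min_support:
                snps.append((pos, reference[pos], alt, n))
    return snps

# toy example: four 75 nt reads mapped onto a 100 bp gene -> 3.0-fold depth
depth = sequencing_depth([75, 75, 75, 75], 100)

ref = "ACGT"
pileup = [["A", "A"], ["C", "T", "T", "T"], ["G"], ["T", "C", "C"]]
variants = call_snps(pileup, ref)  # only position 1 (C->T, seen 3x) qualifies
```

The C at position 3 is seen only twice and therefore stays below the support threshold, mirroring how low-frequency alleles in the pooled sample are filtered out.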
The recombinant plasmid (pMD18::bla KLUC ) was digested with BamHI and BglII and the ORF fragment was recovered and further cloned into a pET28a vector (TaKaRa). Finally, the recombinant plasmids (pET28a::bla KLUC ) were transformed into the host strain BL21. The complete ORF sequences of bla KLUC-3 and bla KLUC-4 have been deposited in GenBank under the accession numbers JX185316 and JX185317, respectively.
Conjugation and Susceptibility Test
Conjugation experiments were performed in mixed broth cultures. E. coli D41 and E. cloacae Y214 were used as the donors and the rifampin-resistant EC600 (Rif r) was used as the recipient. Overnight cultures (incubated at 37°C with shaking) of the donor strain (500 μl) and the recipient strain (500 μl) were mixed together in 4 ml fresh Luria-Bertani broth and incubated for 6 hours at 37°C. The mixture was then inoculated onto a Trypticase soy agar (TSA) plate containing rifampin (Sigma; 512 mg/L) plus ceftriaxone (Roche; 2 mg/L) for 18 hours at 37°C. The colonies that grew on the selecting medium were picked and identified using the Vitek-60 system. The transferred plasmid was extracted and identified by PCR with the bla KLUC screening primers.
Minimal inhibitory concentrations (MICs) of 18 antibiotics or antimicrobial drug combinations were determined by the agar dilution method for the two donors, the two transconjugants, the recipient, the strain BL21, BL21[pET28a::bla KLUC-3 ] and BL21[pET28a::bla KLUC-4 ], in accordance with the guidelines of the Clinical and Laboratory Standards Institute (CLSI). Antimicrobial agents were obtained from the National Institute for the Control of Pharmaceutical and Biological Products (NICPBP) and pharmaceutical companies in China. E. coli ATCC 25922 was used as a quality control for the MIC determinations.
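The agar dilution readout reduces to a simple rule: the MIC is the lowest concentration in the twofold dilution series at which no growth is observed. A minimal sketch; the concentrations and growth calls below are invented for illustration, not taken from Table 3.

```python
def mic(dilution_series):
    """Agar dilution readout: the MIC is the lowest antibiotic
    concentration (mg/L) in the twofold dilution series at which no
    growth is observed; None if growth occurs at every concentration.
    `dilution_series` maps concentration -> growth observed (bool)."""
    no_growth = [conc for conc, grew in dilution_series.items() if not grew]
    return min(no_growth) if no_growth else None

# hypothetical readout for one transformant and one drug
series = {0.5: True, 1: True, 2: True, 4: False, 8: False, 16: False}
example_mic = mic(series)  # 4 mg/L
```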
Antibiotic Resistance Gene Distribution
To obtain relatively comprehensive reference sequences for the resistance genes, we collected the protein sequences from ARDB and our laboratory and performed a TBLASTN search against the nucleotide collection database (nt database) at NCBI with a fixed E-value cutoff (1e-80). The internal fragments of nucleotide sequences which matched the proteins were then extracted. They were complete or partial Open Reading Frames (ORFs) of antibiotic resistance genes. To eliminate redundant members of this data set, we assembled these sequences and obtained 492 contigs (Table S1). They were used as ''bait'' references.
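Redundancy removal among near-identical reference sequences (done here by assembly, and at the protein level by CD-HIT in the Methods) can be caricatured as greedy incremental clustering. This is only a toy sketch: real CD-HIT uses short-word filtering for speed, and the crude `difflib` similarity below stands in for a proper alignment-based identity.

```python
from difflib import SequenceMatcher

def identity(a, b):
    # crude similarity stand-in for a real alignment-based identity score
    return SequenceMatcher(None, a, b).ratio()

def greedy_cluster(seqs, threshold=0.9):
    """CD-HIT-style greedy incremental clustering: process sequences
    longest first; each joins the first cluster whose representative it
    matches at >= threshold, otherwise it founds a new cluster."""
    reps, clusters = [], []
    for s in sorted(seqs, key=len, reverse=True):
        for i, rep in enumerate(reps):
            if identity(s, rep) >= threshold:
                clusters[i].append(s)
                break
        else:
            reps.append(s)
            clusters.append([s])
    return clusters

toy = ["MKTAYIAKQR", "MKTAYIAKQK", "GGGGGGGG"]
groups = greedy_cluster(toy, threshold=0.8)  # the two 10-mers merge
```

Only cluster representatives would then be kept as non-redundant ''bait'' references.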
The samples E1 and E2, sequenced on the Illumina Genome Analyzer, generated 819 and 694 million nucleotides, respectively. All the reads ranged from 73 to 75 nucleotides in length. Mapping reads onto the references identified resistance genes, and the quantity of reads mapped onto a specific reference also suggests its relative abundance in the sequenced samples. These two samples contained a total of 36 hits related to resistance genes for b-lactams, aminoglycosides, macrolides, fluoroquinolones, sulfonamides, tetracyclines, chloramphenicol and rifampin. The most abundant gene was blaTEM, with a sequencing depth of 5496-fold (E1+E2); the average sequencing depth was 727-fold (Table 1). According to the ratio of mapped reads to the references, we found that resistance genes related to b-lactams and aminoglycosides were the most prevalent, not only in their total abundance but also in the number of corresponding genotypes. Of the 36 identified resistance genotypes, bla PSE , bla KLUC , qnrA and tetM were found only in sample E1, while the other 32 types of resistance genes appeared consistently in both samples. The genotypes bla TEM , strB, strA, floP, aacC2 and sulI were the most abundant genes in the two samples.
Polymorphism of the Resistance Genes
Polymorphism analyses revealed 25 and 27 SNPs distributed in 9 and 10 resistance genes in the E1 and E2 plasmid samples, respectively (Table 2). Among these 9 and 10 resistance genes, more than half (5 and 7 genes, respectively) are associated with aminoglycoside resistance. Of the remaining 4 and 3 genes, 3 and 1 are involved in b-lactam resistance. Surprisingly, the genotype aacC2 (also known as aac(3)-IIa) contained 10 and 11 SNPs in the E1 and E2 samples, respectively, whereas the sulI gene, with an abundance approximately identical to aacC2 in the corresponding samples, did not show any SNP. Nine SNPs (36.0%) in E1 and twelve (44.4%) in E2 are nonsynonymous substitutions (Table 2). To determine whether the sequencing reads were sufficient to reflect the SNP profiles, we used different fractions of the total reads to identify SNPs. With an increasing number of sequence reads mapped onto the references, the SNP count reached a stable stage (the maximal number of SNPs for each sequencing library) when 60% of the total reads were used. This suggests that the sequencing depth was sufficient to reflect the majority of the genetic variation in the two pooled plasmid samples.
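The saturation check described above (calling SNPs on growing fractions of the reads until the count plateaus) can be sketched as follows. The subsampling logic mirrors the text; the toy caller and data are placeholders for the real SOAPaligner/SOAPsnp steps.

```python
import random
from collections import Counter

def saturation_curve(reads, caller, fractions=(0.2, 0.4, 0.6, 0.8, 1.0), seed=0):
    """Number of SNPs detected when only a fraction of the reads is used.
    A plateau before fraction 1.0 suggests the sequencing depth captures
    most of the variation present in the pooled sample."""
    rng = random.Random(seed)
    shuffled = reads[:]
    rng.shuffle(shuffled)
    curve = []
    for f in fractions:
        subset = shuffled[: int(len(shuffled) * f)]
        curve.append((f, len(caller(subset))))
    return curve

def toy_caller(subset):
    # stand-in caller: a 'SNP' is any base observed at least 3 times
    counts = Counter(subset)
    return [base for base, n in counts.items() if n >= 3]

curve = saturation_curve(list("AAAABBBBBBCC"), toy_caller)  # ends at (1.0, 2)
```

Because each subset is a prefix of the same shuffled list, the detected counts can only grow with the fraction, which is what makes a plateau interpretable.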
Identification of Two Novel bla KLUC Genes
Kluc-1 has been identified as a close relative of the CTX-M type class A ESBLs, sharing ~85% amino acid identity with CTX-M-1 group members. It was first found in Kluyvera cryocrescens chromosomal DNA [24]. Its variant, Kluc-2, differing from Kluc-1 by one amino acid at position 118, was demonstrated to be a plasmid-mediated ESBL hosted in E. cloacae 7506 [25]. Mapping results indicated that some reads in the E1 sample matched the bla KLUC reference with an average depth of 50.7. Further analyses showed that the potential bla KLUC had 2 and 3 amino acid differences from (sharing 99% and 98% amino acid identities with) Kluc-1 (AAK08976) and Kluc-2 (ABM73648), respectively. PCRs were performed on all 160 strains of the E1 sample to determine which strains carried this determinant. Only strain E. coli D41 gave a positive result. The full-length ORF of the new bla KLUC gene (named bla KLUC-3 , GenBank Accession No. JX185316) was cloned into the pMD18 vector and sequenced. The sequencing result is entirely identical to the mapping result of the Solexa reads.
The CTX-Ms, one of the most prevalent b-lactamase families with 90 known members, have been found in various bacteria of the Enterobacteriaceae family including E. coli, K. pneumoniae, E. cloacae, S. enterica, E. aerogenes, C. freundii, S. marcescens, P. mirabilis, etc. [26].
Resistance Activities of bla KLUC-3 and bla KLUC-4
To detect the resistance activities of bla KLUC-3 and bla KLUC-4 , the complete ORFs of these two novel resistance genes were cloned into the pET28a vector and transformed into E. coli BL21. The MICs of the donors, the transconjugants, the transformants and the recipient controls against a group of antimicrobial drugs were determined (Table 3). BL21[pET28a::bla KLUC-3 ] showed resistance to several extended-spectrum b-lactams, including ceftriaxone, cefazolin and cefotaxime, but not ceftazidime. However, it is very interesting that the transformant BL21[pET28a::bla KLUC-4 ] did not show any resistance to the antimicrobial drugs examined. This may be largely attributed to the amino acid change at position 167, a substitution of Arg by Leu (R167L) in Kluc-4, leading to loss of the resistance activity (Figure 1). A b-lactamase inhibitor, for example tazobactam, could strongly reduce the activity of Kluc-3. This is consistent with the previously described resistance activity of Kluc-1 [24]. The original E. coli D41 and E. cloacae Y214 have stronger resistance to, and a wider resistance spectrum against, the antibiotics examined. Besides b-lactams, they are also resistant to tetracyclines, kanamycin and chloramphenicol.
Discussion
In this work, we used a novel approach to detect resistance gene profiles in mixed E. coli plasmids isolated from two different periods of time. In order to effectively illustrate the composition of the resistome of the bacterial populations, 492 contigs were established as universal ''bait'' references. Because of the large number of resistance genotypes, as well as their subtypes or variants, in particular those with high identities, gene chimeras could form during the assembly of the bait references and subsequently interfere with effective estimation of genotypes and their SNP distributions. Therefore, once reads mapped onto a bait reference, they were manually checked and parsed, and the explicit genotype was used for calculating resistance gene abundance and assigning SNPs, as listed in Table 1 column 5. On the other hand, the reads generated by Solexa sequencing are short, and the complete sequence of a given resistance gene could not be entirely covered by a single read. Thus, genotypes which harbored SNPs might be considered as heterozygous genotypes in the two pooled plasmid samples. Genes of interest, such as bla KLUC-3 , and their sequence characteristics were screened and determined by PCR combined with Sanger sequencing. Due to a strong ability to hydrolyze cefotaxime and high amino acid sequence similarity with CTX-M family enzymes, bla KLUC was classified as one group of the CTX-M family. Other groups of this family include CTX-M-1, CTX-M-9, CTX-M-8, CTX-M-25 and CTX-M-2; members of the same group share >94% identity, whereas ≤90% identity is observed between members belonging to distinct groups [26]. Compared with many other class A ESBLs, such as those of the TEM family, the CTX-Ms show lower hydrolytic activities against penicillins and a narrower activity spectrum against cephalosporins.
Their much lower hydrolytic activity against ceftazidime clearly distinguishes them from most enzymes in the TEM and SHV families [26]. In our susceptibility tests, we observed that cloned bla KLUC-3 conferred resistance to several extended-spectrum cephalosporins, such as cefazolin, ceftriaxone, cefuroxime and cefotaxime, but not cefepime, ceftazidime, meropenem or aztreonam. Unlike bla KLUC-3 , bla KLUC-4 under the same conditions did not show any resistance to the corresponding extended-spectrum cephalosporins. The MIC values for the examined b-lactams were the same as those of the recipient BL21 (Table 3). The obvious difference in susceptibility to the b-lactams between the two novel variants is probably related to the positively charged Arg residue being replaced by a neutral Leu residue at position 167, a locus within the omega-loop. The omega-loop is a nonregular secondary structure most frequently occurring at the surface of a globular protein. It is involved in many protein functions, such as ligand, substrate or inhibitor binding, tyrosine sulfation, as well as prohormonal cleavage [27][28][29]. More recently, a similar phenomenon was observed for CTX-M-93, where a single amino acid divergence from CTX-M-27 at position 169 (L169Q), located in an omega-loop, caused significantly decreased hydrolytic ability against its best substrates, such as cefotaxime, but led to enhanced resistance to ceftazidime [30].
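The chemistry behind the R167L explanation is simple to state: arginine is positively charged at physiological pH, leucine is neutral and hydrophobic. A minimal lookup illustrating the substitution; the charge classification is standard biochemistry, not data from the paper.

```python
# standard side-chain charge classes at physiological pH (textbook values)
CHARGE = {"R": "positive", "K": "positive", "H": "positive",
          "D": "negative", "E": "negative"}

def describe_substitution(wild, pos, mutant):
    """Summarize how a point substitution changes side-chain charge."""
    before = CHARGE.get(wild, "neutral")
    after = CHARGE.get(mutant, "neutral")
    return f"{wild}{pos}{mutant}: {before} -> {after}"

note = describe_substitution("R", 167, "L")  # "R167L: positive -> neutral"
```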
It has been proposed that the bla KLUC genes were acquired from a chromosomal progenitor of the CTX-M-1 group [4]. The first discovered bla KLUC-1 was demonstrated to be encoded on the chromosome, but the later identified variants, bla KLUC-2 , bla KLUC-3 and bla KLUC-4 , were all harbored on plasmids. This indicates that movable DNA elements such as integrons, insertion elements, transposons and plasmids might play essential roles in transposition and horizontal transfer of the bla KLUC resistance genes. It has been reported that ISEcp1 elements, the 42,266 bp insertion sequences, could be involved in mobilization of CTX-M enzyme encoding genes [31,32]. To date, at least 90 types of b-lactam resistance genes and 1100 variants have been discovered [4]. New variants of each type sometimes show resistance phenotypes of decreased or increased susceptibility to the substrates of their previously characterized relatives. In this work, we found two novel bla KLUC group members, and provided an example of new resistance gene subtype screening and resistome investigation from pooled clinical isolates. Many mutation-prone sites of the gene loci were also detected; they are mainly associated with b-lactam and aminoglycoside resistance. The cost-effectiveness and high throughput of second-generation sequencing technology may make it a substitute for, or at least a companion to, microarrays in the field of large-scale antimicrobial resistance gene detection.
Supporting Information
Table S1 Available as supplementary data for this article. (RAR)
Development of a complex intervention to test the effectiveness of peer support in type 2 diabetes
Background

Diabetes is a chronic illness which requires the individual to assume responsibility for their own care with the aim of maintaining glucose and blood pressure levels as close to normal as possible. Traditionally, self management training for diabetes has been delivered in a didactic setting. In recent times alternatives to the traditional delivery of diabetes care have been investigated, for example, the concept of peer support, which emphasises patient rather than professional domination. The aim of this paper is to describe the development of a complex intervention of peer support in type 2 diabetes for a randomised controlled trial in a primary care setting.

Methods

The Medical Research Council (MRC) framework for the development and evaluation of complex interventions for randomised controlled trials (RCTs) was used as a theoretical guide to designing the intervention. The first three phases (Preclinical Phase, Phase 1, Phase 2) of this framework were examined in depth. The Preclinical Phase included a review of the literature relating to type 2 diabetes and peer support. In Phase 1 the theoretical background and qualitative data from 4 focus groups were combined to define the main components of the intervention. The preliminary intervention was conducted in Phase 2. This was a pilot study conducted in two general practices and amongst 24 patients and 4 peer supporters. Focus groups and semi-structured interviews were conducted to collect additional qualitative data to inform the development of the intervention.

Results

The four components of the intervention were identified from the Preclinical Phase and Phase 1. They are: 1. Peer supporters; 2. Peer supporter training; 3. Retention and support for peer supporters; 4. Peer support meetings. The preliminary intervention was implemented in Phase 2. Findings from this phase allowed further modeling of the intervention, to produce the definitive intervention.
Conclusion The MRC framework was instrumental in the development of a robust intervention of peer support in type 2 diabetes in primary care. Trial registration Current Controlled Trials ISRCTN42541690
Background
Diabetes is a chronic illness which requires the individual to assume responsibility for their own care with the aim of maintaining glucose and blood pressure levels as close to normal as possible [1]. Maintaining optimal glucose and blood pressure levels reduces the risk of diabetes related complications [2,3]. Treatment of diabetes involves psychological, social and physical adjustments to an individual's lifestyle [1]. This can be confusing and overwhelming for people with diabetes [4]. They have to make a complex range of lifestyle modifications sometimes without necessarily noticing any tangible effects [4]. Emotional and quality of life issues need to be attended to as well as physical issues [4].
Diabetes self management training has traditionally been delivered in a didactic setting with emphasis on imparting knowledge. However this approach has been shown to be ineffective in changing individual behaviour and improving metabolic control [5]. In recent times alternatives to the traditional delivery of diabetes care have been investigated, for example, the concept of peer support which emphasises patient rather than professional domination [1]. Peer support could be implemented to complement existing diabetes care. Structured care for people with type 2 diabetes in general practice is not yet well established in the Republic of Ireland [6]. A study of people over 40 years of age attending 41 general practices in the Republic of Ireland reported a prevalence of type 2 diabetes of 9.2%, similar to prevalence figures in other European countries [6]. The usual care of patients with type 2 diabetes in the Republic of Ireland is outlined in Figure 1.
Testing a complex intervention such as peer support presents a challenge to researchers. Complex health interventions are built up of several components which may include organisational and delivery methods [7,8]. The fact that they involve a number of separate components presents difficulties in isolating the "active ingredient" of the intervention that is effective [8,9]. Therefore it is recommended that a complex intervention for an RCT should be carefully planned and designed [10]. To guide researchers, the UK Medical Research Council (MRC) devised a five phase framework for developing and evaluating RCTs of complex interventions [8]. The framework comprises five phases [8]. The Pre-clinical phase involves establishing a theoretical basis to support the intervention. Phase 1, modelling, involves developing an understanding of the intervention and its possible effects. At this point the components of the intervention are delineated. These first two phases are often interrelated. Phase 2, the exploratory trial, is crucial. This is a test of the feasibility of key components of the intervention. Phase 3 is the definitive RCT. Finally long term implementation of the intervention is examined in Phase 4. A flowchart of the methodology of the application of the framework is presented in Figure 2.
This framework has been utilised in a variety of RCTs that have evaluated complex interventions in primary care [7,[11][12][13]. These RCTs examined professionally led interventions, for example, a behaviour change intervention delivered by primary care practitioners to patients with coronary heart disease [10]. This paper is the first to examine the development of a complex intervention involving peer support.
Figure 1. Usual care in the general practice setting for people with type 2 diabetes [31,32]:
• 24% of patients receive no structured diabetes care in either specialty care or general practices.
• 60% of people may receive most of their diabetes care from their general practitioner, though for many this care is unstructured, without the routine use of practice diabetes registers and recall systems.
• There is limited access to community based dietician services, and chiropody services vary according to an individual's income.
• One third of the total population are medical card holders, which entitles them to free GP, hospital and community care. The allocation of medical cards is means tested. The remaining two thirds pay for services in general practice but are entitled to free hospital treatment.
We describe below the application of the first three phases of the MRC framework which led to the development of the intervention of peer support in type 2 diabetes based in primary care. The definitive intervention is currently being tested in a cluster RCT, the peer support in diabetes study (Table 1).
Preclinical phase
The aim of the Preclinical Phase was to review the theoretical basis of peer support and to identify evidence to support the concept.
Phase 1
The aim of Phase 1 was to combine the theoretical basis from the Preclinical Phase with qualitative work to define the components of the intervention.
Phase 2
The aim of Phase 2 was to conduct a pilot study to test the feasibility of the intervention.
Preclinical phase
The Preclinical Phase involved conducting a literature search using CINAHL, Medline and the Cochrane Library. Key words included RCT, diabetes, type 2 diabetes, primary care, community health workers, lay health workers, chronic illness, voluntary workers and peer support workers. The literature retrieved was examined in depth and the concept of peer support was explored. Themes for components of the intervention evolved from reviewing this literature.
Phase 1
In Phase 1, the modelling phase, the theoretical basis from the Preclinical Phase was combined firstly with information from interviews with experts in the area of health psychology, diabetes and volunteering and secondly with qualitative data from focus groups with patients with type 2 diabetes and practice staff. Two focus groups (6 patients in each group) were conducted with patients in the two participating general practices. The topic guide included the meaning of the term peer support; the nature of support for people with type 2 diabetes; and an exploration of how peer support differs from professional support. Two focus groups (4 in each group) were conducted with practice staff from the two participating general practices. The topic guide included the definition of peer support; advantages and disadvantages of peer support; and training and support for peer supporters. The focus groups were conducted by a moderator and an observer. Each focus group was taped and the discussions then transcribed and analysed. Descriptive phenomenology was the theoretical framework used for the analysis of the qualitative data. This qualitative research tradition seeks to understand the lived experience of individuals [15]. The combination of information from the Preclinical Phase and this Phase 1 led to the unravelling of four critical components of the preliminary intervention.
Figure 2. A flowchart of the methodology of the application of the framework:
• Preclinical Phase: review and search of the literature.
• Phase 1 (modelling): combining theory with qualitative work (2 focus groups with patients; 2 focus groups with practice staff) and interviews with experts in the areas of health psychology and volunteering.
• Phase 2 (exploratory trial): preliminary intervention in 2 practices (24 patients and 4 peer supporters recruited), with qualitative work (1 focus group with peer supporters; 2 focus groups with patients; 3 semi structured interviews with practice staff).
Table 1. The peer support in diabetes study:
• Aims: to determine whether a peer support programme for patients with type 2 diabetes improves biophysical and psychosocial outcomes and whether it is an acceptable, cost effective intervention in a primary care setting.
• Design: cluster randomised controlled trial.
• Participants: 420 patients with type 2 diabetes recruited from 20 general practices; 30 peer supporters, also patients with type 2 diabetes, from 10 intervention general practices.
• Primary outcomes: blood pressure; total cholesterol; HbA1c; well being score [14].
Phase 2
Phase 2, the exploratory trial/pilot study, involved testing the preliminary intervention. Two general practices were selected. Both are training practices attached to a university post graduate training scheme. One was a small single handed practice and the other a large group practice. Both had a practice nurse and used computerised records. Neither practice had structured diabetes care clinics. Practice staff compiled a register of patients with type 2 diabetes. Twenty-two patients and four peer supporters from the two practices were purposively selected to participate. The peer supporters, who were selected by the GPs, attended two evening training sessions conducted by the research team. The preliminary intervention was delivered in both practices: each peer supporter facilitated three peer group meetings with participating patients over a period of four months. Quantitative data were collected from participants prior to and following the meetings and were analysed using the JMP IN statistical package. Qualitative research was also conducted in Phase 2 following the preliminary intervention; two focus groups with five patients each and one focus group with four peer supporters. The topic guide for these focus groups included feedback from the peer group meetings; how peer support differs from support from GPs and practice nurses; and positive and negative aspects of peer support. In addition to these themes the peer supporters were asked about training and ongoing support for peer supporters. The qualitative methodology used was the same as that for Phase 1. In addition, three semi-structured interviews with practice staff were conducted following the preliminary intervention. The discussions were based around the logistics of holding the group meetings in the general practices; and recruitment, retention and support for the peer supporters.
Ethical approval
Ethical approval has been obtained from the Ethics Committee of the Irish College of General Practitioners (Protocol No.: REC0904-11; 01/12/04).
Preclinical phase
Theoretical and empirical evidence for peer support was identified in the literature search.
Peer support within the healthcare context is defined as "the provision of emotional, appraisal, and informational assistance by a created social network member who possesses experiential knowledge of a specific behaviour or stressor and similar characteristics as the target population, to address a health-related issue of a potentially or actually stressed focal person" [16]. This definition of peer support falls within the social support model, that is defined as the process through which social relationships might promote health and well-being [17]. Within the social support model, the direct effect model would postulate that peer support could reduce feelings of isolation and loneliness, provide information about access to health services or the benefits of behaviours that positively improve health and well-being and encourage more positive health practices [16].
The logic behind peer support programmes is that peers have a greater understanding of the target population's situation than other naturally embedded social networks [16]. During times of need or in stressful situations individuals often turn to social contacts and relationships for support to supplement the care given by the health services [16].
Members of their own social network may not be able to offer appropriate support for various reasons. For example they may lack experience and knowledge of the stressful life event; they may feel uncomfortable about the issue or are too upset to provide support [18].
Peer support groups provide individuals with a unique support system where they can gain understanding and feel a sense of belonging. As the group evolves attachments are formed and expressions of caring and genuine concern from the group provides emotional support [18].
Peer support was found to be successful in some health care settings. It has improved outcomes in diverse health settings such as maternal child health development [19], neonatal mortality [20,21] and cardiac surgery [22].
Peer support workers, also known as lay health workers, are defined in a Cochrane review as "any health worker carrying out functions related to health care delivery; trained in some way in the context of the intervention; having no formal professional or paraprofessional certificated or degree tertiary education" (page 1) [23]. Training for peer support workers should incorporate exploration of the skills required to use experiential knowledge and peer's appreciation and understanding of the target group [16]. However, Giblin warns against too much specific training, as this may destroy the concept of "peerness" [24]. In addition to peer support benefiting recipients, peer supporters have reported benefits from their role [25][26][27].
Qualitative research conducted for the Diabetes National Service Framework revealed that people with diabetes felt it would be helpful to meet others in similar circumstances. Peers were viewed as an under-utilised, helpful, source of information and support [28]. However there are no reported randomised controlled trials of peer support in type 2 diabetes. The literature review highlighted the need for a careful consideration of an underlying theoretical framework and the importance of exploratory qualitative work with individuals with type 2 diabetes in the context within which the study was planned.
Phase 1
In Phase 1, issues raised in the interviews with experts included the identification of social support as a theoretical framework for the study. In addition, experts working in the volunteering sector highlighted the importance of continuing support for the peer supporters to sustain the programme over time.
The patients involved in the exploratory qualitative work expressed enthusiasm for the idea of peer support.
FG1.5 "I thought it would be a good idea for me because from the point of view of the diet it could help me keep me on track. Hearing others ideas and sharing them and so on"
They reported a tendency to turn to peers for advice but felt that a structured support network would be more helpful.
FG2.3 "Very helpful because you are going into a hospital, seeing a doctor, but you are not seeing other people who have it like ourselves" They had a preference for group rather than individual meetings. Both patients and practice staff felt that peer supporters required specific training that should include the basics of treatment for diabetes and managing a group. However there was a consensus that medical questions from group members should be referred to the GP or practice nurse.
FG7.2 "It is very important for the peer supporters to know their boundaries. They are not doctors"
The work in the Preclinical Phase and in Phase 1 led to the identification of four preliminary intervention components:
1. Peer supporters
2. Peer supporter training
3. Retention and support for peer supporters
4. Peer support meetings
Phase 2
Phase 2, the exploratory trial/pilot study, involved testing the following preliminary intervention in two general practices:
Peer supporters
The GPs and practice nurses in each practice were asked to select two patients with type 2 diabetes who would be suitable for the role of peer supporter. All four peer supporters recruited by the GPs and practice nurses had had type 2 diabetes for over a year and were compliant with their treatment regime. Further peer supporter characteristics are presented in Table 2. Findings from the semi structured interviews indicated that the GPs and practice nurses felt they should identify the peer supporters within their own practices.
Peer supporter training
Two evening training sessions were organised for the peer supporters. The content of these sessions included the role of the peer supporter, basics of diabetes, lifestyle and medication issues, communication skills, managing groups, confidentiality, role play and support for the peer supporters. The sessions were interactive and informal. They were given a handbook that covered issues raised in the training session. The focus group with the peer supporters revealed that the peer supporters found the training informative and pitched at the correct level. They valued the handbook and referred to it on several occasions during the course of the exploratory trial.
Retention and support for peer supporters
A support system for the peer supporters was implemented. This consisted of the project manager contacting each peer supporter after each group session. This was to allow the peer supporter to debrief and discuss any problems that arose during the course of the meeting. The peer supporters reported that they appreciated this contact.
Peer support meetings
Patients were allocated by GPs and practice nurses to each peer supporter within each practice. Three meetings per group were organised; two groups met in the evening and the other two met during the day. Eighty per cent of patients went to two or three group meetings. Feedback in the focus groups with the peer supporters and patients was positive. Both patients and peer supporters reflected that they enjoyed meeting other people with type 2 diabetes. Exchanging practical information, comparing each other's situations, conversing in lay terms and general support amongst the group were identified as particularly positive elements of the group meetings.
FG5.4 "I think there is a common thing here in that the people are not looking for a theoretical understanding of it, you know they don't want to know the Latin. What everybody I think is striving for is kinda practical things"
FG5.4 "the mood was terrific there were delighted to be together they took a lot out of it, there were happy"
Patients and peer supporters agreed that more structure in the group meetings would enhance the peer support experience, for example having a set theme for each meeting. Peer supporters suggested a system of 'frequently asked questions' in order to answer any queries that the group members had identified during a meeting.
FG7.4 "after the meeting, somebody should put in their questions into the centre and somebody should answer them and bring it back to the group"
Some peer supporters were anxious to have more professional involvement while others pointed out that this would just reproduce some of the services they currently accessed.
The definitive intervention
Following the exploratory phase we finalised the study protocol. The definitive intervention is as follows:
Peer supporters
Potential peer supporters are identified by GPs and practice nurses in the intervention practices. Peer supporters are recruited and trained at a ratio of approximately one peer supporter to seven/eight patients with type 2 diabetes. They are eligible to be trained if they meet the inclusion criteria outlined in Table 3.
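The stated recruitment ratio can be sanity-checked against the trial numbers quoted earlier (420 patients across 20 practices, 30 peer supporters in the 10 intervention practices); the even split of patients between intervention and control arms is an assumption:

```python
# Quick arithmetic check on the peer supporter ratio (assumes the 420
# patients are split evenly between intervention and control practices).
total_patients = 420
total_practices = 20
intervention_practices = 10
peer_supporters = 30

intervention_patients = total_patients * intervention_practices // total_practices
patients_per_supporter = intervention_patients / peer_supporters
print(intervention_patients, patients_per_supporter)  # 210 7.0
```

A ratio of 7.0 is consistent with the stated target of approximately one peer supporter to seven/eight patients.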
Peer supporter training
The peer supporters attend two evening training sessions, which are conducted by a GP and nurse on the research team. Topics covered in Session 1 included: introduction to the project; role of the peer supporter; basics of type 2 diabetes and complications of type 2 diabetes. Session 2 covered the following topics: lifestyle and medication issues; communication skills and working with groups; dealing with difficult group members; role play and confidentiality.
The two sessions focus on the materials to be used during the group meetings (described below) and peer supporters receive a resource pack with a manual and resource material to support these training sessions.
Retention and support of peer supporters
Retention of peer supporters is crucial to the study. Structures are in place to ensure peer support workers are supported in the role (see Table 3).
Peer support meetings
Peer support meetings are held in the general practice premises at a convenient time for practice staff, peer supporters and participants. The intervention consists of nine peer support meetings held over two years; at month 1, month 2 and every 3 months thereafter. There is a defined ten to fifteen minute structured component for each meeting available to the peer supporters (see Table 4 for a summary of the meeting content). At the end of each meeting there is general discussion and the group identifies and records any questions regarding the meeting focus. These are fed back to the research team who compile written answers based on the feedback from all groups, which are presented and discussed at the start of the next meeting.
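The schedule above (month 1, month 2, then every 3 months thereafter) can be enumerated to confirm that nine meetings fit within the two-year intervention; the exact month numbering is an assumption based on the description:

```python
# Enumerate the nine peer support meeting months: month 1, month 2,
# then every 3 months thereafter (month numbering is an assumption).
months = [1, 2]
while len(months) < 9:
    months.append(months[-1] + 3)

print(months)  # [1, 2, 5, 8, 11, 14, 17, 20, 23]
assert len(months) == 9 and months[-1] <= 24  # nine meetings within two years
```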
It became evident to the research team during the Preclinical Phase and Phase 2 that monitoring the delivery of the intervention was crucial. We therefore decided to include a process evaluation and an assessment of treatment fidelity of the definitive intervention. The process evaluation will map the actual implementation of the intervention. Data from peer supporter log diaries of each meeting and the project manager's record of contact with the peer supporters will be recorded and analysed. The assessment of treatment fidelity will monitor the reliability and validity of the intervention. The Bellg framework will be used. It consists of five treatment fidelity strategies: Treatment design, Training procedures, Delivery of treatment, Receipt of treatment and Enactment of treatment skills [29].
Summary
Designing complex interventions that are pragmatic enough to be applied to real life situations is challenging [10]. We found the MRC framework very useful in guiding the design and the preliminary testing of the intervention of peer support in type 2 diabetes. The Preclinical Phase explored the existing evidence on the topic of peer support. In Phase 1 the utility of qualitative methods as specified in the MRC framework, and meetings with experts in the field, was invaluable for the early development of the intervention. The preliminary intervention for the proposed RCT was tested in the pilot study in Phase 2. This allowed us to observe the logistics of introducing the preliminary intervention into the primary care setting.
Methodological issues
After considering several theoretical models and discussing this issue with experts in health psychology and voluntary organisations we selected social support as a theoretical framework for the study. This led to the reassessment of the study outcomes and, in addition to the biophysical outcomes, we added the psychosocial outcomes of wellbeing, self care, self efficacy and social support.
Best practice in randomisation is to randomise following baseline data collection. This avoids introducing bias in terms of patient recruitment and data collection if control practices become demotivated during the baseline data collection phase. Following the exploratory work in Phase 2, consultation with members of the research team highlighted difficulties with this approach. In order to facilitate the purposive recruitment of peer supporters from the patient register in intervention general practices prior to random selection of patients, it was decided that practices would have to be randomised prior to baseline data collection, randomisation of patients and the beginning of the intervention.
Table 4 (fragment). Summary of meeting content:
• What happens to the eyes and kidneys in diabetes; importance of good blood pressure and blood sugar control in order to prevent complications; questions relating to eye and kidney disease.
• Session 9, Living with diabetes: intended to be a relatively open session in which the group can discuss any remaining concerns and consider whether they would like to continue to meet; importance of follow up data collection.
The MRC framework emphasises the importance of monitoring the delivery of the RCT intervention [8]. The review of the literature on conducting randomised controlled trials in the Preclinical Phase and the pilot study in Phase 2 led to our decision to include an assessment of treatment fidelity and a process evaluation in the study protocol. This will allow for the monitoring of the process of implementation of the intervention and also assess the validity and reliability of the intervention. The incorporation of these elements will add depth to our understanding of the final results of the randomised controlled trial. For example, we will be in a position to address any potential questions such as whether the intervention was experienced as intended by the participating intervention patients. In addition, we will be able to consider the relative effectiveness of the intervention in relation to the extent of exposure to peer support. This process will also facilitate reproducibility of the intervention if the trial finds that it is effective, as there will be a clear and detailed description of the intervention as it occurred in practice settings.
Intervention issues
The qualitative work in Phase 1 and Phase 2 allowed us to identify details of the intervention components that needed further development. In particular the structure of the group sessions and support for peer supporters was developed further. The idea of having a focus to each session and a system of frequently asked questions came from the patients and peer supporters and was incorporated into the definitive intervention. A guide for each session was devised. This guide is designed to be flexible and does not have to be strictly adhered to, so as not to destroy the concept of peer led meetings. Unlike the peer led educational interventions such as the Chronic Disease Self-Management Programme (CDSMP) [30] devised by Kate Lorig the intervention in this study focused more on social support than education. There is a clear need to distinguish between interventions that are genuinely peer led compared to professionally led support or educational interventions. As some of the peer supporters emphasised, professionally led interventions would just duplicate some of the services that they currently access.
Consultation with a volunteering expert led to further development of support mechanisms for the peer supporters. The support given in the pilot study, which involved telephone contact after meetings, was identified as crucial by the peer supporters and so was developed further for the definitive intervention. We also plan to hold an annual social meeting to facilitate communication between peer supporters from different practices. The travel allowance for peer supporters has also been modified so that it is given in stages throughout the intervention.
Conclusion
The MRC framework was instrumental in the development of a robust intervention of peer support in type 2 diabetes in primary care. The intervention of peer support was considered in depth incorporating an analysis of current literature, qualitative work with those who would be both experiencing, delivering and administering the peer support system and finally an analysis of how the intervention would run in the pilot study. It enabled a clear and detailed understanding of the components of the intervention and how each should be documented and tested during the definitive study. The effectiveness of this intervention is now being tested in a cluster randomised controlled trial involving twenty general practices and 420 patients with type 2 diabetes.
Methylation-Associated Partial Down-Regulation of Mesothelin Causes Resistance to Anti-Mesothelin Immunotoxins in a Pancreatic Cancer Cell Line
Anti-mesothelin Pseudomonas exotoxin A-based recombinant immunotoxins (RITs) present a potential treatment modality for pancreatic ductal adenocarcinoma (PDAC). To study mechanisms of resistance, the sensitive PDAC cell line KLM-1 was intermittently exposed to the anti-mesothelin SS1-LR-GGS RIT. Surviving cells were resistant to various anti-mesothelin RITs (IC50s >1 μg/ml), including the novel de-immunized RG7787. These resistant KLM-1-R cells were equally sensitive to the anti-CD71 HB21(Fv)-PE40 RIT as KLM-1, indicating resistance was specific to anti-mesothelin RITs. Mesothelin gene expression was partially down-regulated in KLM-1-R, resulting in 5-fold lower surface protein levels and decreased cellular uptake of RG7787 compared to KLM-1. Bisulfite sequencing analysis found that the mesothelin promoter region was significantly more methylated in KLM-1-R (59 ± 3.6%) compared to KLM-1 (41 ± 4.8%), indicating hypermethylation as a mechanism of mesothelin downregulation. The DNA methyltransferase inhibitor 5-azacytidine restored mesothelin surface expression in KLM-1-R to more than half its original level and increased sensitivity to RG7787 (IC50 = 722.4 ± 232.6 ng/ml), although cells remained significantly less sensitive compared to parental KLM-1 cells (IC50 = 4.41 ± 0.38 ng/ml). Mesothelin cDNA introduction in KLM-1-R led to 5-fold higher surface protein levels and significantly higher RG7787 uptake compared to KLM-1. As a result, the original sensitivity to RG7787 was fully restored (IC50 = 4.49 ± 1.11 ng/ml). A significantly higher RG7787 uptake was thus required to reach the original cytotoxicity in resistant cells, hinting that intracellular RIT trafficking is also a limiting factor. RNA deep sequencing analysis of KLM-1 and KLM-1-R cells supported our experimental findings; compared to KLM-1, resistant cells displayed differential expression of genes linked to intracellular transport and an expression pattern that matched a more general hypermethylation status.
In conclusion, resistance to anti-mesothelin RITs in KLM-1 is linked to a methylation-associated down-regulation of mesothelin, while aberrations in RIT trafficking could also play a role.
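The bisulfite sequencing percentages quoted above (59 ± 3.6% vs 41 ± 4.8%) summarise per-CpG methylation calls across sequenced clones; a minimal sketch of that summary statistic, using hypothetical calls rather than the study's data, might look like:

```python
def percent_methylated(cpg_calls):
    """Percentage of CpG sites called methylated across sequenced bisulfite
    clones (cpg_calls: iterable of booleans, True = methylated)."""
    calls = list(cpg_calls)
    return 100.0 * sum(calls) / len(calls)

# Hypothetical per-clone CpG calls, for illustration only.
calls = [True, False, True, True, False, True, False, True, True, False]
print(percent_methylated(calls))  # 60.0
```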
Introduction
Our laboratory develops recombinant immunotoxins (RITs) for cancer treatment.Current RITs in clinical trials are composed of an antigen-binding Fv fused to a 38-kDa portion of Pseudomonas exotoxin A (PE) [1].After receptor-mediated endocytosis, RITs are proteolytically processed, and PE is proposed to traffic to the trans-Golgi network and move by a retrograde pathway to endoplasmic reticulum, where it undergoes translocation to the cytoplasm [2].Upon arrival in the cytosol, PE targets Elongation Factor-2 (EF-2).Mature EF-2 is produced by posttranslational modification of histidine 715 by the Diphthamide Biosynthesis proteins (DPH) 1-5 and 7 [3,4].This modified histidine ('diphthamide') is ADP-ribosylated by PE, which inactivates EF-2 and halts protein synthesis, eventually leading to programmed cell death [2].
We previously isolated and characterized several leukemic cell lines resistant to Moxetumomab pasudotox [5][6][7], an anti-CD22 RIT currently in phase III clinical trial (ClinicalTrials.govIdentifier: NCT01829711).These resistant cell lines show various aberrations in DPH expression, which prevent EF-2 ADP-ribosylation and protect cells from protein synthesis inhibition [5][6][7].SS1(dsFv)-PE38 (SS1P), another RIT in clinical trials, targets mesothelin, a 40-kDa cell surface glycophosphatidylinositol (GPI)-anchored protein [8] that is highly expressed in several malignancies, including mesothelioma and pancreatic ductal adenocarcinoma (PDAC) [9][10][11].SS1P has limited clinical activity as a single agent, primarily because of dose-limiting PE immunogenicity in patients [12,13].In response, SS1P has been combined with immune-depleting chemotherapeutics, resulting in unprecedented responses in patients with refractory advanced mesothelioma [14], and low-immunogenic RITs have been engineered in which many B-or T-cell epitopes and protease-sensitive regions of PE38 are removed.The latter resulted in a truncated and de-immunized 24-kDa toxin moiety (PE24) that has less reactivity with human anti-sera, is resistant to lysosomal degradation, and displays a decreased non-specific toxicity in rodent models in vivo [15][16][17][18].In collaboration with Roche Innovation Center Penzberg, Germany, this PE24 backbone has been integrated into a novel anti-mesothelin RIT, called RG7787, by linking it to a humanized anti-mesothelin Fab, thereby increasing size and circulatory half-life [19].
We recently showed that RG7787 has significant activity in a PDAC xenograft model, which was established by grafting KLM-1 cells into immune-deficient mice. RG7787 was also cytotoxic against several other PDAC cell lines, although in vitro cell killing was not absolute [19]. We previously reported that an imbalance between pro- and anti-apoptotic proteins protects cancer cells, including PDAC, from PE-induced cell death [20][21][22]. To gain insight into other mechanisms of resistance, the aim of this study was to isolate and characterize cells from KLM-1 that were resistant to anti-mesothelin RITs.
Competing Interests: This research was partly supported by Roche Pharmaceuticals. I.P. is an inventor on several patents on immunotoxins that have all been assigned to NIH (including US 8357783 "Human anti-mesothelin monoclonal antibodies", US 20120263674 "Pseudomonas exotoxin a with reduced immunogenicity" and US 20140094417 "Pseudomonas exotoxin a with less immunogenic T cell and/or B cell epitopes"). This does not alter the authors' adherence to PLOS ONE policies on sharing data and materials.
and adding a Gly-Gly-Ser (GGS)-based peptide linker after the furin cleavage site. RG7787 is further optimized for clinical use by replacing the mouse anti-mesothelin Fv (SS1) with a humanized Fab (huSS1), to increase size and therefore circulatory half-life, and by introducing seven mutations (R505A, R427A, R490A, R467A, D463A, R456A, and R538A) in the catalytic domain III of PE to silence B-cell epitopes [19]. 5-Azacytidine (AZA) (Sigma) is a DNA methyltransferase inhibitor and was dissolved in RPMI-1640 medium.
Cell culture
PDAC cell line KLM-1 [24] was maintained in RPMI-1640 and provided in 2011 by Dr. Udo Rudloff (NCI, Bethesda, MD), who originally obtained the cell line from the RIKEN Cell Bank. The epidermoid cancer A431 cell line was maintained in DMEM and donated in 1982 by Dr. George Todaro (NCI, Bethesda, MD), who originally isolated the cell line [25]. Media were supplemented with 10% FBS, 2 mM L-glutamine, 1 mM sodium pyruvate, 100 U/ml penicillin and 100 μg/ml streptomycin (Invitrogen). To isolate resistant cells, 3 x 10^5 KLM-1 cells were seeded in a 6-well plate and incubated with 1 μg/ml SS1-LR-GGS for 72 hrs, which killed over 90% of the cells (S1 Fig.). Residual cells were expanded for 5 weeks in RIT-free cell medium, after which a second round of selection was performed similarly with 1 μg/ml SS1-LR-GGS, resulting in KLM-1-R. These cells were expanded in RIT-free medium and stored viably frozen in N2 for further studies. Cell line identities were verified using short tandem repeat analysis (NCI, Frederick, MD). All cells were maintained at 37°C in a humidified incubator with 5% CO2.
Cell proliferation, cell death and protein synthesis inhibition assays
KLM-1 and KLM-1-R cell growth was compared by counting viable cells with a Cellometer Vision (Nexcelom). Dead cells were excluded using Trypan blue staining, and each time point was counted in triplicate. To evaluate treatment effect, RITs were added approximately 16 hrs after seeding of the cells in a 6- or 96-well plate. Growth inhibition was evaluated by measuring ATP levels with the CellTiter-Glo Luminescent Cell Viability assay (Promega). Values were normalized between controls of 1 μM staurosporine (Sigma-Aldrich) and buffer (Dulbecco's phosphate buffered saline without Ca and Mg (D-PBS), Quality Biological, Inc.) containing 0.2% human serum albumin (Division of Veterinary Resources, NIH, Bethesda, MD) or medium. Bright-field pictures were taken on a Zeiss microscope with a 10X EC Plan-NeoFluar objective using the AxioCam MRc camera and the AxioVision 4.7.2 acquisition software. Cell death was evaluated using the Annexin V-PE Apoptosis Detection Kit I (BD Pharmingen), according to manufacturer's instructions. Apoptotic cells were considered Annexin V-positive, as determined by gating on the untreated cells. Protein synthesis inhibition was quantified by measuring [3H]leucine (Perkin Elmer) incorporation as done previously [22]. Values are presented relative to D-PBS 0.2% human serum albumin and 100 μg/ml cycloheximide (Sigma-Aldrich) controls.
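The daily viable-cell counts described above reduce to simple exponential-growth arithmetic. As an illustration only (this helper is ours, not part of the study's analysis pipeline), a doubling time can be derived from any two counts under the assumption of exponential growth:

```python
import math

def doubling_time(n0, n1, hours):
    """Population doubling time (hrs) from viable-cell counts n0 -> n1.

    Assumes exponential growth between the two counting time points.
    """
    return hours * math.log(2) / math.log(n1 / n0)

# A culture growing from 1 x 10^5 to 4 x 10^5 cells in 48 hrs has
# undergone two doublings, i.e. a 24 hr doubling time.
print(doubling_time(1e5, 4e5, 48))  # -> 24.0
```

Comparing this quantity between KLM-1 and KLM-1-R over the 6-day counting window is one compact way to express the faster growth of the resistant line.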
Real-time RT-qPCR
RNA was isolated and purified from cells using the RNEasy kit (Qiagen), reverse transcription was done with the QuantiTect Reverse Transcription kit (Qiagen), and amplification with the QuantiFast SYBR Green PCR kit (Qiagen). Primer sequences for DPH1-5, DPH7, mesothelin, and β-actin are shown in S1 Table. Real-time RT-qPCR was performed on an ABI HT 7900 RT-PCR machine, analyzed using the comparative CT (ΔΔCT) method with SDS manager (Applied Biosystems), and normalized to endogenous β-actin.
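The comparative CT calculation can be sketched in a few lines. This is an illustrative reconstruction of the standard 2^-ΔΔCT arithmetic, not code from the study, and the β-actin CT value in the example below is a hypothetical placeholder:

```python
def ddct_fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative expression by the comparative CT (2^-ddCT) method.

    ct_target, ct_ref: CT of the gene of interest and of beta-actin in
    the sample; ct_*_cal: the same values in the calibrator sample.
    """
    dd_ct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-dd_ct)

# Mesothelin CTs from the Results (KLM-1-R 24.1 vs KLM-1 21.54), with a
# hypothetical common beta-actin CT of 17.0, give ~0.17, i.e. roughly a
# 6-fold lower expression in the resistant cells.
print(round(ddct_fold_change(24.1, 17.0, 21.54, 17.0), 2))
```

The paper's reported 7.3 ± 4.3-fold down-regulation is of the same order; the exact value depends on the per-replicate β-actin CTs, which are not listed in the text.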
Surface protein expression by flow cytometry
Mouse anti-human mesothelin (MN; Rockland Immunochemicals, Inc.) and R-PE anti-human CD71 (Biolegend) were used to evaluate surface protein expression. A mouse IgG-R-PE isotype control (BD Biosciences) was used as a negative control for mesothelin staining. Anti-mesothelin and the isotype control were stained with a secondary goat anti-mouse IgG-R-PE (1:250 dilution, Jackson ImmunoResearch Laboratories, Inc.). Fluorescence intensity was analyzed by flow cytometry on a FACSCalibur. QuantiBRITE R-PE beads (BD Pharmingen) were used to quantify the number of mesothelin-binding sites per cell.
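QuantiBRITE-style bead calibration converts fluorescence geomeans into binding sites per cell via a log-log linear fit of the bead populations with known PE molecules per bead. The stdlib-only sketch below is a generic illustration of that principle under our own assumptions (function names are ours; it is not the vendor's software):

```python
import math

def quantibrite_fit(bead_geomeans, pe_per_bead):
    """Least-squares slope/intercept of log10(PE per bead) vs
    log10(geomean) across the calibration bead populations."""
    xs = [math.log10(g) for g in bead_geomeans]
    ys = [math.log10(p) for p in pe_per_bead]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def sites_per_cell(geomean, slope, intercept):
    """Map a stained cell population's geomean onto the calibration line."""
    return 10.0 ** (slope * math.log10(geomean) + intercept)
```

With a 1:1 PE-to-antibody conjugate, the fitted value approximates antibody-binding (here, mesothelin-binding) sites per cell.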
Shed mesothelin levels in cell medium
Soluble mesothelin levels in cell line medium were measured in duplicate with a mesothelin Meso Scale Assay (Morphotek, Inc.), using the electrochemiluminescence technology of Meso Scale Discovery. Cells were seeded in regular cell culture medium in a 12-well plate. After 20 hrs, medium was collected, its volume was measured and the number of cells per well was counted. After centrifuging the medium, supernatants were stored at −80°C until analysis. Cell supernatant was diluted 50-fold and added to wells of a 96-well plate previously coated with capture antibody. The procedure was performed as previously described [26], and signals were measured on an MSD Discovery Workbench (Meso Scale Discovery). The amount of mesothelin shed by KLM-1 and KLM-1-R was calculated by multiplying the obtained mesothelin concentration by the well volume, standardized for the number of counted cells per well.
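The shed-mesothelin normalization described above is plain arithmetic. A hedged sketch follows (the helper name and the example numbers are ours, not from the study):

```python
def shed_mesothelin_per_cell(measured_ng_ml, dilution, well_volume_ml, cells_per_well):
    """ng of shed mesothelin per cell over the culture period.

    measured_ng_ml: concentration in the diluted supernatant; dilution is
    the fold dilution applied before the assay (50-fold in this protocol).
    """
    total_ng = measured_ng_ml * dilution * well_volume_ml
    return total_ng / cells_per_well

# Hypothetical example: 2 ng/ml measured after 50-fold dilution in a 1 ml
# well containing 1 x 10^5 cells -> 0.001 ng shed per cell.
print(shed_mesothelin_per_cell(2.0, 50, 1.0, 1e5))  # -> 0.001
```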
RG7787 cellular uptake
RITs were labeled with the Alexa Fluor 647 Labeling Kit (Invitrogen) for 3.5 hrs and purified according to manufacturer's instructions. Harvested cells were incubated for 30, 75 and 150 min at 37°C with a saturating 2 μg/ml of SS1P-Alexa647 or RG7787-Alexa647 and processed as previously described [22]. Fluorescence intensity was analyzed on a FACSCalibur. Uptake was expressed as the number of internalized RG7787 molecules, calculated by setting the RG7787-Alexa647 geomean surface expression of KLM-1 equal to 60 x 10^3 RG7787 molecules (= surface mesothelin binding sites per KLM-1 cell, as evaluated by flow cytometry and QuantiBRITE R-PE beads).
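Anchoring the KLM-1 surface geomean to the independently measured 60 x 10^3 binding sites turns any Alexa647 geomean into a molecule count by simple proportionality. A minimal sketch (names are ours; it assumes fluorescence scales linearly with molecule count):

```python
KLM1_SURFACE_SITES = 60_000  # mesothelin binding sites per KLM-1 cell

def internalized_molecules(geomean, klm1_surface_geomean):
    """Estimate internalized RIT molecules from an Alexa647 geomean,
    assuming fluorescence is proportional to the number of molecules."""
    return KLM1_SURFACE_SITES * geomean / klm1_surface_geomean

# E.g. a sample geomean twice the KLM-1 surface geomean corresponds to
# ~120,000 internalized molecules.
print(internalized_molecules(500.0, 250.0))  # -> 120000.0
```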
Methylation analysis
Evaluation of the methylation status of a region in the mesothelin gene promoter was done by EpigenDx (Hopkinton, MA). Bisulfite modification was executed using the Zymo Research EZ Methylation Gold kit. Genomic DNA (500 ng) was used for bisulfite modification, followed by PCR amplification using HotStar Taq Polymerase (Qiagen). Pyrosequencing was performed using the PSQ HS 96 Pyrosequencing System (Qiagen). The analyzed region was chosen based on available primers (ADS2475-RS1 and ADS2475-RS2) at EpigenDx, which covered a 147-bp region upstream of the mesothelin transcription start site (chr16:808890-808742; ENST00000563941). Methylation quantification was performed with PyroQ-CpG software, which calculates for each single CpG the ratio between its methylated and non-methylated form, resulting in an average percentage of methylation.
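The per-CpG quantification reduces to the methylated signal over the total signal, averaged across CpGs. An illustrative stdlib-only version (our own simplification of what pyrosequencing software reports):

```python
def mean_methylation_pct(cpg_signals):
    """Average percent methylation across CpGs.

    cpg_signals: iterable of (methylated, unmethylated) peak intensities,
    one pair per CpG, as reported by pyrosequencing software.
    """
    pcts = [100.0 * m / (m + u) for m, u in cpg_signals]
    return sum(pcts) / len(pcts)

# Two CpGs at 50% and 75% methylation average to 62.5%.
print(mean_methylation_pct([(1, 1), (3, 1)]))  # -> 62.5
```

The 59% (KLM-1-R) versus 41% (KLM-1) values reported later in the Results are averages of exactly this kind over the seven analyzed CpGs.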
RNA deep sequencing and data analysis
Total RNA was extracted from KLM-1 and KLM-1-R cells using the Qiagen RNEasy kit, and on-column DNA digestion (Qiagen) was performed according to manufacturer's instructions. After RNA quality control with an Agilent Bioanalyzer, total RNA samples were sent for library construction and deep sequencing to the NCI core facility (Bethesda, MD). The raw sequences were quality controlled and aligned to RefSeq [29]. Raw counts were normalized and loaded into Qlucore Omics Explorer v3.0 (35) for differential expression analysis. Applying Qlucore's variance filter, a list of the 989 most significantly changed genes was generated, which we separated into up- and down-regulated genes. Gene Set Enrichment Analysis (GSEA) alignment was performed from within Qlucore. For alignment to Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG) and Reactome pathways, the up- and down-regulated gene lists were loaded into string-db.org [30] and p-values for datasets with significant changes were extracted.
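Count normalization before differential expression typically scales for library size. The counts-per-million sketch below is an assumption about that step in general, not a description of Qlucore's internals:

```python
def cpm(raw_counts):
    """Counts-per-million: scale a sample's raw gene counts by library size."""
    library_size = sum(raw_counts)
    return [1e6 * c / library_size for c in raw_counts]

# A gene holding half of a library's reads maps to 500,000 CPM.
print(cpm([1, 1, 2]))  # -> [250000.0, 250000.0, 500000.0]
```

After such scaling, per-gene fold changes between KLM-1-R and KLM-1 (like the 9.08-fold MSLN down-regulation reported later) become directly comparable across the two libraries.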
Statistical analysis
Experiments were typically performed independently at least twice, and representative or average data are displayed. Data are presented as mean ± standard error of the mean of replicate experiments. Applied statistics include Student's t-tests or one-way ANOVA with Tukey's multiple comparison tests. Statistical analysis and figure drafting were performed using GraphPad PRISM 6 (GraphPad Software, Inc.). A p-value of less than 0.05 was considered statistically significant, unless stated otherwise.
KLM-1 cells isolated after SS1-LR-GGS incubation are resistant to anti-mesothelin RITs
As described in the Methods section, KLM-1 cells were intermittently exposed to SS1-LR-GGS and surviving cells (KLM-1-R) were expanded in RIT-free medium. KLM-1 and KLM-1-R cells were indistinguishable by bright-field microscopy (S2A Fig.) and on flow cytometry forward and side scatter, indicating a similar cell size and granularity (S2B Fig.). Cell proliferation was compared by seeding 1 x 10^5 cells on day 0 and counting viable cells daily for 6 days. KLM-1-R cells grew significantly faster than KLM-1 cells from day 2 on (p < 0.0001) (S2C Fig.). To evaluate the level of resistance, KLM-1 and KLM-1-R cells were incubated for 72 hrs with anti-mesothelin RITs. ATP viability assays showed that KLM-1 cells were sensitive to SS1-LR-GGS (IC50 = 1.73 ± 0.01 ng/ml) and RG7787 (IC50 = 4.41 ± 0.38 ng/ml), in agreement with previous findings [19]. In contrast, KLM-1-R cells were highly resistant to both RITs (IC50s > 1 μg/ml) (Fig. 1A) and to SS1P (data not shown). In resistant cells, there was a small decrease in cell proliferation at RIT concentrations above 100 ng/ml. As a control, we tested the activity of LMB-2, an anti-CD25 RIT [31] that does not bind KLM-1 cells, and observed a non-specific decrease at 1 μg/ml LMB-2. This indicates that the decrease at 1 μg/ml RG7787 in KLM-1-R could be attributed to non-specific uptake (Fig. 1A). The established resistance was stable; no change was observed when KLM-1-R cells were kept in RIT-free culture for several months (data not shown). Because ATP assays cannot differentiate between cell growth arrest and cell death [32], we evaluated the response with the Annexin V-PE Apoptosis Detection kit. In contrast to KLM-1 (p < 0.0001), KLM-1-R cells showed no or limited apoptosis when treated for 72 hrs with 100 ng/ml (p = 0.33) or 1 μg/ml RG7787 (p = 0.02), as compared to untreated cells (Fig. 1B). These data confirm that KLM-1-R cells are highly resistant to the cell killing activity of anti-mesothelin RITs.
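IC50s such as those above are normally obtained by fitting a four-parameter logistic curve to the dose-response data. As a rough, hedged stand-in (not the study's fitting procedure), log-linear interpolation between the two doses bracketing 50% viability gives the same order of magnitude:

```python
import math

def ic50_by_interpolation(concs, viability_pct):
    """Crude IC50: log-linear interpolation between the doses that
    bracket 50% viability. concs must be ascending; returns inf for
    resistant curves that never drop below 50%."""
    points = list(zip(concs, viability_pct))
    for (c1, v1), (c2, v2) in zip(points, points[1:]):
        if v1 >= 50.0 >= v2 and v1 != v2:
            frac = (v1 - 50.0) / (v1 - v2)
            return 10.0 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))
    return math.inf

# A curve falling from 100% to 0% between 1 and 10 ng/ml crosses 50% at
# ~3.16 ng/ml; a flat 'resistant' curve yields inf.
print(round(ic50_by_interpolation([1, 10], [100, 0]), 2))  # -> 3.16
print(ic50_by_interpolation([1, 10, 100], [98, 95, 90]))   # -> inf
```

The `inf` case mirrors the KLM-1-R behavior reported above, where viability never crosses 50% up to 1 μg/ml.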
Resistance in KLM-1-R cells is specific to anti-mesothelin RITs
To evaluate whether the resistance in KLM-1-R also applies to RITs targeted against other receptors on KLM-1-R cells, we tested HB21(Fv)-PE40, which targets CD71 [33]. CD71 is expressed at the surface at a similarly high level in KLM-1 and KLM-1-R (data not shown), and both cell lines had a similar sensitivity to HB21(Fv)-PE40 after a 72 hr incubation (Fig. 1C). To evaluate sensitivity to other therapeutics, cells were also treated for 72 hrs with paclitaxel, a mitotic inhibitor, and gemcitabine, a nucleoside analogue. Viability assays showed no difference in sensitivity to paclitaxel between KLM-1 and KLM-1-R (IC50 KLM-1 = 3.0 ± 1.6 ng/ml vs. IC50 KLM-1-R = 3.7 ± 1.7 ng/ml) (p = 0.80). KLM-1-R (IC50 = 44.2 ± 5.7 ng/ml) was 3-fold less sensitive to gemcitabine than KLM-1 (IC50 = 15.2 ± 1.9 ng/ml) (p = 0.04), which is much less pronounced than the resistance to RG7787. These results show that the resistance in KLM-1-R is not general but specific to anti-mesothelin RITs.
Mesothelin expression and RG7787 uptake are decreased in KLM-1-R
The first step in the RIT mechanism of action is binding to the targeted cell surface antigen, followed by RIT internalization. Mesothelin surface expression, as evaluated by flow cytometry, was 4.9 ± 0.5-fold lower in KLM-1-R (12 x 10^3 sites per cell) compared to KLM-1 (60 x 10^3 sites per cell) (Fig. 2A). Flow cytometry displayed a homogeneous KLM-1-R cell population, suggesting a uniform decrease in mesothelin surface expression. Analysis of the mesothelin-negative A431 cells and use of an isotype antibody control confirmed that the mesothelin signal in KLM-1-R cells was specific. As expected, western blots showed that KLM-1 cells contained a major band of mature mesothelin at 37-kDa and a weaker precursor band at 72-kDa. The resistant cells had small amounts of mature mesothelin, but the precursor was not detected, likely due to its low abundance (Fig. 2B). Excess shedding of mesothelin in KLM-1-R could account for low mesothelin surface levels. Using the Meso Scale Assay, we found that shed mesothelin levels were correspondingly lower in KLM-1-R than in KLM-1. In medium without cells (negative control), no mesothelin was detected. These data are consistent with the difference in surface levels and indicate that the low mesothelin on KLM-1-R is not due to increased shedding. To determine if decreased mesothelin was due to less mRNA, we performed RT-PCR and found that mesothelin RNA was 7.3 ± 4.3-fold lower in KLM-1-R (CT = 24.1 ± 0.09) compared to KLM-1 (CT = 21.54 ± 0.73). To evaluate whether the remaining mesothelin on the KLM-1-R surface could bind and internalize anti-mesothelin RITs, we evaluated the cellular internalization of RG7787-Alexa647. Uptake in KLM-1-R increased over time, but was significantly lower than in KLM-1 at each time point (4- to 5-fold, p < 0.01). After 150 min of incubation, e.g., KLM-1 internalized about 40 x 10^3 RG7787 molecules, compared to only 8 x 10^3 in KLM-1-R (Fig. 2C). These data demonstrate that KLM-1-R cells have a partial down-regulation in mesothelin and therefore internalize significantly less RG7787, providing a potential explanation for the observed resistance.
AZA partially restores mesothelin surface expression in KLM-1-R with limited effect on sensitivity to RG7787
Mesothelin expression can be silenced by DNA methylation of CpG sites in its promoter region [34][35][36][37]. AZA, a DNA methyltransferase inhibitor, can reverse such hypermethylation. We gave cells 500 nM AZA daily for 3 weeks, and found that mesothelin expression was increased in KLM-1-R (KLM-1-R-AZA) by about 3-fold, up to 34 x 10^3 sites per cell, which is still less than the 60 x 10^3 sites per cell in KLM-1 (Fig. 3A). AZA improved the sensitivity to RG7787 in KLM-1 and KLM-1-R, although the latter remained highly resistant after a 72 hr incubation (IC50 = 722.4 ± 232.6 ng/ml) (Fig. 3B). These data link mesothelin down-regulation to hypermethylation.
CpGs in mesothelin promoter region are hypermethylated in KLM-1-R
To confirm that the mesothelin gene is indeed subject to hypermethylation in KLM-1-R, we performed an exploratory analysis of the methylation status of seven CpGs in the mesothelin promoter region (chr16:808890-808742, S3 Fig.) using bisulfite sequencing. This 147-bp region was selected based on the primers available at EpigenDx. Results demonstrated that these CpGs had a significantly higher methylation in KLM-1-R (59 ± 3.6%) compared to KLM-1 (41 ± 4.8%) (p < 0.05) (Fig. 3C). Treatment with AZA brought the methylation levels in KLM-1-R back to those of KLM-1 (p > 0.05). These data further support that mesothelin down-regulation in KLM-1-R is associated with hypermethylation of the mesothelin gene.
Protein synthesis inhibition by RG7787 is limited in KLM-1-R, EF-2 ADP-ribosylation is intact
Protein synthesis inhibition is initiated after the toxin traffics from the cell surface to the cytosol and inactivates EF-2 by ADP-ribosylation. KLM-1 cells were incubated with RG7787 for 16 hrs, and KLM-1-R cells for 16 and 48 hrs, after which protein synthesis was examined by measuring [3H]leucine incorporation (Fig. 4A). After 16 hrs, RG7787 induced a dose-dependent decrease in protein synthesis in KLM-1, but not in KLM-1-R. After 48 hrs, KLM-1-R showed a small decrease in protein synthesis at the higher concentrations of RG7787, reminiscent of the growth inhibition observed with the ATP assay (Fig. 1A). LMB-2 caused protein synthesis inhibition above 100 ng/ml after 48 hrs, confirming that the protein synthesis inhibition in KLM-1-R at higher RG7787 concentrations is in part attributable to non-specific uptake. Next, we evaluated whether the dismal protein synthesis inhibition by RG7787 in KLM-1-R was due to a problem with EF-2 ADP-ribosylation. The toxin inactivates EF-2 by ADP-ribosylation of the diphthamide residue on EF-2, which requires the activity of enzymes DPH1-5 and 7 [3]. We measured expression of these six diphthamide genes by RT-qPCR and found no meaningful decrease in KLM-1-R, compared to KLM-1 cells (Fig. 4B). To investigate the status of EF-2 in KLM-1-R, we examined EF-2 protein levels and the ability of RG7787 to ADP-ribosylate EF-2 in cell-free extracts at different times of RG7787 incubation. On average, EF-2 levels were 2-fold higher in KLM-1-R cells compared to KLM-1 cells (Fig. 4D). At each time point, the amount of EF-2 that was ADP-ribosylated by RG7787 was similar in KLM-1 and KLM-1-R (Fig. 4C). These data demonstrate that the dismal protein synthesis inhibition in KLM-1-R is not linked to downregulation of DPH enzymes or failure of the toxin to ADP-ribosylate EF-2. These data show that the anti-mesothelin RIT resistance is linked to events occurring upstream of protein synthesis inhibition, which is in agreement with the earlier findings that the resistant and KLM-1 cells were equally sensitive to the anti-CD71 HB21(Fv)-PE40.

Fig 4. Protein synthesis inhibition and EF-2 ADP-ribosylation in KLM-1-R. A: Protein synthesis inhibition by RG7787 is limited in resistant KLM-1 (KLM-1-R). KLM-1 was incubated for 16 hrs with RG7787, and KLM-1-R for 16 and 48 hrs with RG7787 and anti-CD25 LMB-2. RG7787 induces a dose-dependent protein synthesis inhibition in KLM-1, which is absent in KLM-1-R. After 48 hrs, RG7787 induces some decrease in protein synthesis in KLM-1-R, which is also the case with LMB-2. Protein synthesis inhibition was evaluated by measuring [3H]leucine incorporation. B: Diphthamide Biosynthesis Protein (DPH) gene expression is not down-regulated in KLM-1-R, compared to KLM-1. Expression levels were evaluated with real-time RT-PCR, standardized for β-actin.
RNA deep sequencing gene expression analysis
We carried out RNA deep sequencing on KLM-1 and KLM-1-R cells and analyzed the data set using Qlucore. Applying the software's algorithm for variance, we identified the top up- and down-regulated genes for KLM-1 versus KLM-1-R. The list of genes was separated into KLM-1-R's up- (488 genes) and down- (501 genes) regulated genes, respectively (S2 Table). Similar to our earlier RT-PCR data, mesothelin (MSLN) was 9.08-fold down-regulated in KLM-1-R. To validate the dataset further, we checked the expression levels and fold-changes of common housekeeping genes and found them to be highly expressed (e.g. actin, ~0.5% of total counts) with no significant changes between sensitive and resistant cells (actin 1.117-fold, ribosomal protein L22 1.085-fold change). The RNA sequencing dataset was considered reliable as it confirmed the experimentally proven down-regulation of MSLN in KLM-1-R and displayed stability in housekeeping genes between the data sets.
Functional analysis of RNA sequencing data reveals methylation-associated changes
The two sets of up- and down-regulated genes of KLM-1-R were separately applied to the GSEA database supplied by the Broad Institute [38]. GSEA sets with high similarity to our dataset were picked and applied to the whole unfiltered KLM-1/KLM-1-R data set of 25671 genes. One of the GSEA sets, "missiaglia_regulated_by_methylation_dn", showed high similarity to our data. This GSEA set was originally generated by treating PDAC cell lines with AZA [39]. Of the 122 down-regulated genes in this GSEA set, 97 (80%) were also down-regulated in the KLM-1-R cell population, whereas 20 genes (16%) were up-regulated and 5 genes (4%) were not overlapping (S4 Fig.). In accordance with the above-described promoter methylation analysis of the MSLN gene, these data indicate a more general hypermethylated state in KLM-1-R.

Expression levels in B are presented relative to KLM-1. C: EF-2 ADP-ribosylation is functional in KLM-1-R. RIT-induced EF-2 ADP-ribosylation was evaluated by incubating cell lysate with ADP-ribosylation buffer, 6-Biotin-17-NAD and 10 ng of RG7787 for 0, 15, 30 and 60 min at 25°C. Samples were subjected to SDS/PAGE followed by Western blotting with a streptavidin-HRP conjugate to detect biotin ADP-ribosylated EF-2. The 0 min time point and the sample without RG7787 are negative controls. D: EF-2 protein levels are on average 2-fold higher in KLM-1-R compared to KLM-1. Western blot was done on cell lysate of KLM-1 and KLM-1-R. β-actin acts as loading control. Protein levels were quantified and adjusted for β-actin levels with ImageJ. K: KLM-1, R: KLM-1-R, −: no RG7787, +: with RG7787. doi:10.1371/journal.pone.0122462.g004
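The 80%/16%/4% breakdown reported in this section is a set-overlap computation. A generic sketch with toy gene lists follows (the real inputs are the GSEA set and the lists in S2 Table; the function name is ours):

```python
def overlap_breakdown(gsea_genes, down_in_r, up_in_r):
    """Split a GSEA gene set into genes that are down-regulated,
    up-regulated, or absent in the resistant line's gene lists."""
    gsea = set(gsea_genes)
    down = gsea & set(down_in_r)
    up = gsea & set(up_in_r)
    missing = gsea - down - up
    return len(down), len(up), len(missing)

# Toy example: of 5 GSEA genes, 3 are down, 1 up and 1 unmatched in KLM-1-R.
print(overlap_breakdown(
    ["MSLN", "A", "B", "C", "D"],
    ["MSLN", "A", "B"],  # down-regulated in KLM-1-R
    ["C"],               # up-regulated in KLM-1-R
))  # -> (3, 1, 1)
```

Run on the real lists, this yields the 97/20/5 split out of the 122 genes in "missiaglia_regulated_by_methylation_dn".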
Protein network analysis highlights several clusters of differentially expressed genes
The gene lists generated by Qlucore were next analyzed with the online tool STRING [30]. This tool clusters proteins based on databases from genomic contexts, high-throughput experiments and co-expression, and implements a PubMed text search. The resulting protein network can then be analyzed by GO annotations or by Reactome and KEGG pathway comparisons to screen for significance in differentially expressed gene sets. Up- and down-regulated genes were again kept separate in this analysis. Looking at the up-regulated genes in KLM-1-R, the largest sets showed genes that are involved in translation, transport and stress responses (Table 1). In accordance with an increased protein production, genes for the unfolded protein response, tRNA synthesis as well as metabolic RNA processes were up-regulated. An increase in these datasets resulted in highly significant p-values (< 0.001) for the GO-BL (gene ontology, biological function), KEGG, and Reactome pathway analyses. The dataset for down-regulated genes of KLM-1-R generally showed less clear patterns and lower significance levels, with several of them not reaching the p < 0.001 threshold for statistical significance (Table 1), including the transport-related datasets. Overall, the deep sequencing analyses of KLM-1 and KLM-1-R show profound changes in the expression profile of many cellular processes that are supportive of our earlier experimental observations.
Discussion
To gain insight into resistance mechanisms to anti-mesothelin RITs, we isolated and characterized resistant cells from the PDAC cell line KLM-1. These KLM-1-R cells were highly resistant to cell death and protein synthesis inhibition by various anti-mesothelin RITs, including RG7787, a novel de-immunized RIT optimized for clinical use [19], but not to HB21(Fv)-PE40, an immunotoxin targeting the transferrin receptor.
Previously reported mechanisms of resistance to the anti-CD22 Moxetumomab pasudotox RIT are directly linked to silencing or deletion of DPH genes, which prevents EF-2 ADP-ribosylation and subsequent protein synthesis inhibition [5][6][7]. This was not the case in KLM-1-R, where EF-2 ADP-ribosylation is normal. The higher EF-2 levels in resistant cells could imply that it takes the toxin longer to inhibit protein synthesis to the same extent as in KLM-1, but this is unlikely to cause the observed profound resistance, since even prolonged incubation with anti-mesothelin RITs gave virtually no cell death in the resistant cells, and KLM-1 and KLM-1-R were equally sensitive to HB21(Fv)-PE40, which also kills through EF-2 inactivation. The high sensitivity of KLM-1-R to HB21(Fv)-PE40 also indicates that the resistance is not associated with an aberration of the apoptotic machinery [20][21][22], and shows that the resistance is specific to anti-mesothelin RITs.
Mesothelin expression was partially down-regulated in KLM-1-R, resulting in a 4- to 5-fold lower cellular uptake of RG7787 compared to KLM-1. This provides one potential explanation for the observed resistance in KLM-1-R. Mesothelin overexpression has been associated with increased cell growth in PDAC cell lines [40]. However, despite the partial loss of mesothelin, KLM-1-R cells still grew significantly faster than KLM-1. We do not consider the lower mesothelin expression to be the cause of this increased proliferation, but rather hypothesize that it is linked to the 2-fold higher EF-2 levels in KLM-1-R. The increase in anabolic cell state was also reflected in the strong up-regulation of transcription and translation observed in the RNA sequencing analysis for KLM-1-R. The enhanced proliferation of KLM-1-R suggests that the cells changed and acquired resistance during RIT treatment. Since the resistant cells were not obtained via clonal selection, we cannot exclude the possibility that the KLM-1-R cell line is a polyclonal mixture of resistant cells, where each group of cells would display a different mechanism of resistance. However, none of the currently generated data support the presence of such a mixture; flow cytometry analyses of resistant cells, e.g., display a single population with a uniform decrease in mesothelin surface expression and a homogeneous forward and side scatter profile. Further research is required to fully elucidate this possibility.
The MSLN gene is frequently hypomethylated in various patient tumors and cell lines, including PDAC, with the extent of methylation correlating with expression and shed mesothelin levels [35][36][37]. Hypomethylation of CpG sites typically increases gene transcription, whereas hypermethylation is associated with transcriptional silencing [34]. Aberrant methylation can contribute to the emergence of drug resistance in cancer cells [41]. Hypermethylation of the DPH1 and DPH4 genes, e.g., is responsible for resistance against Moxetumomab pasudotox in leukemic cell lines [5,7]. The deregulation of gene expression by CpG methylation can be reversed using DNA methyltransferase inhibitors such as AZA. In KLM-1-R, this compound significantly increased the surface expression of mesothelin. We analyzed an exploratory set of CpGs located in the promoter region of mesothelin and found that in KLM-1-R, these were subject to a significantly higher degree of methylation than in KLM-1. The hypermethylation in KLM-1-R was not absolute, which is expected considering that the resistant cells still expressed a moderate amount of mesothelin. Despite the limited number of analyzed CpGs, the overall results clearly link the partial down-regulation of mesothelin in KLM-1-R to hypermethylation. In addition, RNA sequencing expression profiles of KLM-1 and KLM-1-R matched with GSEA data sets indicated a more general state of hypermethylation in the resistant cells, supporting the impact of methylation on RIT sensitivity.
There are several indications that the decrease in mesothelin is not the sole cause of resistance. AZA increased mesothelin expression in KLM-1-R to approximately 60% of the original levels in KLM-1, but cells were still about 180-fold more resistant to RG7787. In addition, several (PDAC) cell lines with a surface mesothelin similar to that of KLM-1-R or KLM-1-R-AZA have subnanomolar RG7787 IC50s [19], indicating that this amount of mesothelin expression is not necessarily a limiting factor. Furthermore, the introduction of MSLN in KLM-1-R restored the original sensitivity to RG7787, but was associated with a 5-fold overexpression of mesothelin compared to KLM-1, suggesting that a significantly higher RG7787 uptake was needed to reach a similar sensitivity in KLM-1-R-Msln. These discrepancies hint that, in addition to the lower mesothelin, RIT trafficking might be a limiting factor in KLM-1-R. Indeed, we recently found that RIT cytotoxicity can depend on the toxin's intracellular itinerary [42]. RNA sequencing data provided additional support for this hypothesis, with pathway analyses demonstrating differential expression in KLM-1-R of genes that are linked to intracellular transport. It is, however, currently unclear which transport-related genes could be considered the main drivers of such a resistance mechanism. RIT trafficking has several uncertainties and is difficult to study, in part because of the few molecules actually trafficking to the cytosol. Given this complexity, this is beyond the scope of the current study and is the subject of further investigation. Additional research is also required to establish whether the RIT resistance mechanisms described herein are common in mesothelin-expressing epithelioid cancer cell lines, and whether these aberrations can be found in vivo.
In conclusion, we isolated PDAC cells resistant to anti-mesothelin RITs. The resistance is linked to a methylation-associated decrease in mesothelin and subsequent low uptake of the RIT. Significant mesothelin overexpression and subsequently higher RG7787 uptake are required to reach the original cytotoxicity in resistant cells, hinting that RIT trafficking is also a limiting factor. Both the aberrations in methylation and in intracellular transport were supported by gene expression analyses of the parental and resistant KLM-1 cells.
Fig 1.
Activity of anti-mesothelin, anti-CD25 and anti-CD71 immunotoxins in KLM-1 and KLM-1-R. A: KLM-1 and resistant KLM-1 (KLM-1-R) cells were incubated for 72 hrs with the anti-mesothelin SS1-LR-GGS, RG7787 or anti-CD25 LMB-2 as a control. Growth inhibition was evaluated with an ATP cell viability assay. With IC50s below 10 ng/ml, KLM-1 is sensitive to the anti-mesothelin RITs, which is not the case for KLM-1-R (IC50s > 1 μg/ml). 1 μg/ml LMB-2 decreased cell viability, indicating that this RIT concentration induces non-specific uptake. B: KLM-1 and KLM-1-R cells were incubated for 72 hrs with 100 or 1000 ng/ml RG7787. Apoptosis was evaluated with the Annexin V-PE Apoptosis Detection Kit I. RG7787 induces a significant increase in apoptotic KLM-1 cells, whereas KLM-1-R cells show no meaningful increase in apoptosis. C: KLM-1 and KLM-1-R cells were incubated for 72 hrs with HB21(Fv)-PE40. Growth inhibition was evaluated with an ATP cell viability assay. Both cell lines are highly sensitive to this RIT.
Table 1.
Significance level for differentially expressed gene sets from KLM-1 to KLM-1-R. The table was generated by STRING using the GO, KEGG, and Reactome databases, as indicated under dataset reference. § Number of genes in our data set that are also present in the respective dataset or pathway list. * The p-value represents the probability that 478 genes would show the distribution to match the same number of gene hits in the respective list. Datasets are ranked according to p-value. P < 0.001 is considered statistically significant.
Relationship between oral health and depression: data from the National Health Survey 2016–2017
Objective To evaluate the relationship between oral health status, self-perception of oral health, and depression. Methods This cross-sectional study included 2953 individuals who were ≥ 18 years of age and participated in the Chilean National Health Survey (NHS), 2016–2017. Information on oral, dental, and mental health, and the presence or absence of depressive symptoms, was collected. Secondary data analysis was carried out using STATA and included logistic regression models adjusted for sex, age, and educational level. The analyses factored in the expansion weights to estimate representative prevalences for the entire population. Results Participants experiencing frequent dental or prosthesis-related discomfort while speaking were more likely to exhibit suspected depression (OR: 1.57; 95% CI: 1.01–2.43). Removable upper denture users were at a higher risk of exhibiting suspected depression (OR: 2.04; 95% CI: 1.11–3.74) than those not using them. Participants diagnosed with depression in the past 12 months had a similar number of teeth (median = 24) compared to those without depression (median = 25) (OR: 0.99; 95% CI: 0.96–1.02). Conclusion Experiencing dental or prosthesis-related difficulties in speaking is related to suspected depression or a diagnosis of depression. These findings highlight the importance of developing comprehensive healthcare approaches that consider mental health in the context of oral health. Supplementary Information The online version contains supplementary material available at 10.1186/s12903-024-03950-2.
untreated oral pathologies including dental caries, severe periodontal diseases, tooth loss, and edentulism.Oral pathologies also rank first and third in terms of prevalence and incidence, respectively, and are the tenth most common cause of moderate disability [5].Similar trends have also been observed in Chile, with the prevalence of oral diseases such as dental caries and periodontal diseases being relatively high in the population [6].
Evidence suggests that individuals diagnosed with mental health disorders are at a higher risk of developing comorbidities due to difficulties associated with seeking and adhering to appropriate treatment plans [7].Depression is an important risk factor for many systemic conditions including obesity and sleeping disorders.It also plays a significant role in oral health through various biological and behavioral mechanisms, with adoption of risky behaviors such as frequent alcohol consumption, smoking, high fat and sugar intake, and sedentary lifestyles having a negative effect on the patient's oral health status.Furthermore, the patient's self-perception of oral health and their frequency of attendance at a dental clinic may also be affected.Previous studies have also reported potential biological mechanisms including an association between depression and reduced salivary flow, xerostomia, and dysregulation of the immune system and salivary immunity.These, in turn, increase the risk of developing oral pathologies such as dental caries and periodontal diseases.As a result, individuals diagnosed with depression typically tend to exhibit a higher prevalence of caries, loss of teeth, and edentulousness [8].
No studies to date have evaluated the relationship between oral health status, depression, and self-perception of oral health among adults in Chile, and the current study aims to address this gap in knowledge using data from the Chilean National Health Survey (NHS 2016–2017).
Methods
This cross-sectional study used data from the Chilean NHS 2016–2017, version 3 (Department of Epidemiology, Ministry of Health, Chile), which collected information on the social determinants, related factors, and protective influences of various diseases [6]. The study sample was representative of the Chilean population and included men and women from both rural and urban parts of the country. Pregnant women and individuals who refused to participate in the survey during the home visit were excluded from the study. The survey was carried out using a complex multi-stage, clustered, stratified, randomized oversampling technique and had a household participation rate of 67% and an individual participation rate of 90%. Data collection included home interviews carried out between August 2016 and March 2017 by interviewers and previously calibrated nurses. The survey had 6233 respondents, of whom 5520 underwent blood and laboratory testing and oral examination. The first, second, and third visits included, respectively, interviews; anthropometric measurement and testing (including oral examination) carried out by a nurse; and application of an expanded mental health section to a sub-sample of participants by a trained interviewer. The oral examination included evaluation of the following items: total number of remaining teeth (both jaws); absence of anterior teeth (yes/no); total number of teeth with cavitated carious lesions (both jaws); and effective resolution of anterior edentulousness using removable dentures (yes/no; both jaws).
Selected sub-sections (screening, depression, social phobia, agoraphobia, alcohol abuse and dependence, suicidality, mania, psychosis, and use of mental health services) of the Composite International Diagnostic Interview (CIDI), a mental health diagnostic tool developed by the World Health Organization, were applied by a trained interviewer to a random sub-sample of participants (n = 3403) who were ≥ 18 years of age [9]. Older adults who exhibited cognitive impairment during the first visit were excluded. For the application of the extended mental health module, 27 cases that did not meet the inclusion criteria were excluded from the random subsample.
The final study sample included 2953 (89% of the subsample) survey participants who were ≥ 18 years of age. The losses were due to: missing data on the oral health item of interest; failure to undergo oral examination; missing data in the extended mental health section (CIDI); and missing data in the depressive symptoms section.
Depressive symptoms were recorded using an abbreviated version of the CIDI instrument (CIDI Short form; CIDI-SF) containing 30 questions focusing on the presence of dysphoria (sadness symptoms) and anhedonia (lack of interest or ability to enjoy), and a depression risk score was calculated if the patient met at least five out of seven complementary criteria (Diagnostic and Statistical Manual of Mental Disorders or DSM-IV minor criteria for depression).
The participants were diagnosed with depression (as per the CIDI-DSM IV criteria) if they exhibited (1) depressed mood and (2) reduction or loss of interest or pleasure for at least 2 weeks and met ≥ 3 of the following criteria: (1) significant increase or decrease in appetite resulting in substantial weight changes; (2) suicidal ideation; (3) considerable sleep disturbances; (4) psychomotor agitation or motor slow-down; (5) fatigue or loss of energy; (6) feelings of worthlessness or guilt; and (7) decreased concentration.
The last five symptoms must have been experienced all day or almost every day for at least two weeks to be considered in the score.Furthermore, these symptoms must have caused clinically significant discomfort and impairment of social, occupational, and other important aspects of the individual's life.Therefore, a diagnosis of depression was made if the participant met at least five criteria.Participants with symptoms caused by substance abuse, drugs, medications, and grief or loss of a loved one were excluded.
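The diagnostic rule described above can be restated compactly. The sketch below is only a restatement of the criteria as this section describes them (both core symptoms plus at least three of the seven additional symptoms, i.e. at least five criteria in total); the actual CIDI scoring algorithm additionally handles symptom duration, clinical impairment, and the exclusions mentioned, which are omitted here.

```python
def meets_depression_criteria(depressed_mood, loss_of_interest, extra_symptoms):
    """CIDI-DSM-IV rule as described in the text: both core symptoms
    present for at least two weeks, plus >= 3 of the 7 additional
    symptoms, giving >= 5 criteria in total."""
    if not (depressed_mood and loss_of_interest):
        return False
    return sum(bool(s) for s in extra_symptoms) >= 3

# Both core symptoms plus three of the seven additional symptoms -> diagnosis:
print(meets_depression_criteria(True, True, [1, 1, 1, 0, 0, 0, 0]))  # True
```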
The variables were defined based on the questions in specific sections of the forms used for corresponding NHS interviews, such as depressive symptoms, oral health, oral examination, and the 'Depression Section' (CIDI).
The oral clinical exam, which included third molars, was carried out by trained nurses who participated in a theoretical and practical course with a final test of clinical cases. The inter-examiner reliability measured with kappa was 0.85 for tooth loss and presence of cavities. Cavities were defined as any surface exhibiting discontinuity, encompassing not only filled teeth but also decayed teeth, temporarily filled teeth, and remaining root structures. Thus, the independent variables included the use of dental prostheses; number of remaining teeth (both jaws); anterior tooth loss; number of decayed teeth; and self-perception of oral health, while the dependent variables were suspected depression and a diagnosis of depression in the last 12 months.
The self-perception of oral health was assessed using a five-point ordinal scale.Participants were asked to rate their oral health on a scale ranging from 'very poor' to 'excellent.' Additionally, specific survey questions focused on oral discomfort and its impact on daily life and social relationships.These questions inquired about discomfort when speaking, pain and suffering, discomfort while eating, interference with daily activities (such as work or study), and interference with social relationships.The responses to these questions provided valuable insights into participants' overall perception of their oral health and how dental discomfort affected their quality of life.
Descriptive statistics, including percentages for categorical variables and median and dispersion measures for numerical variables, were generated. Logistic regression models were used to estimate ORs and 95% CIs. Directed acyclic graphs (not shown) and relationship matrixes (heat plots) were used to examine the association between the variables and outcome measures. The models examining the association between suspected depression and self-perception of oral health were adjusted for sex, level of education, and age, while those exploring the relationship between prosthesis use and the number of remaining teeth were adjusted for the same factors as well as tobacco use. Potential confounding factors considered when examining a diagnosis of depression in the past 12 months as an outcome measure included sex, tobacco use, and education, which generate open backdoor paths if they are not conditioned on. The analysis respected the complex sampling and the expansion factors used, which is represented in the results through frequencies and expanded sample sizes. A sensitivity analysis checking the robustness of the findings using prevalence ratios was performed through generalized linear models with a binomial family and log link function. Coefficients from the logistic regression models and the GLMs were compared and tested through an adjusted Wald test. All analyses were performed using the statistical software STATA version 16.1 (StataCorp, College Station, TX).
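The reported ORs and 95% CIs come from weighted, adjusted logistic regression models. As a simplified, unadjusted illustration of the same quantities (the counts below are invented and do not come from the survey), an odds ratio and its Wald-type confidence interval can be computed directly from a 2x2 table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald-type 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts: 10/100 denture users vs. 5/100 non-users with depression.
print(odds_ratio_ci(10, 90, 5, 95))  # OR ~2.11, CI ~(0.69, 6.42)
```

An adjusted, survey-weighted model (as used in the paper) would shift these estimates; the sketch only shows where the OR and its CI come from.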
Results
The study sample included 2953 individuals who participated in the Chilean NHS 2016-2017.Table 1 summarizes patient characteristics by the presence of suspected or diagnosed depression.Approximately 25% of women and 10.53% of men exhibited suspected depression, while 9.84% of women and 2.39% of men had been diagnosed with depression in the past 12 months.Furthermore, the prevalence of a diagnosis of depression in the past 12 months was higher among individuals with higher levels of education (i.e., ≥ 13 years of schooling; 7.26%).Individuals exhibiting suspected depression had a similar median number of teeth (n = 25) while those diagnosed with depression in the past 12 months exhibited a slightly lower median number of teeth (n = 24) compared to those without depression.
Figure 1 shows the relationship between oral health, self-perception of oral health, and suspected depression or a diagnosis of depression in the last 12 months. Adjustments were made based on the DAG evaluation, and the relationship matrix is shown in Fig. 2. The findings showed that patients experiencing difficulties while eating due to dental or prosthesis-related issues were at higher odds of exhibiting suspected depression (OR: 1.57; 95% CI: 1.01–2.43) compared to those who did not experience these difficulties. Removable upper denture users were also at higher odds of exhibiting suspected depression (OR: 2.04; 95% CI: 1.11–3.74) or a diagnosis of depression in the past 12 months when compared to those who did not use prostheses. The results of the prevalence ratio analysis, reported as an alternative analysis in the Supplementary material, did not change significantly from the main analysis.
Discussion
The current study observed a relationship between experiencing dental or prosthesis-related difficulties in speaking, and suspected depression or a diagnosis of depression.Furthermore, participants using removable upper dentures also exhibited higher odds of suspected depression or a diagnosis of depression in the past 12 months.
No significant associations were observed between the number of remaining teeth and depression. This contrasted with several previous cross-sectional or longitudinal studies that reported an association between tooth loss and depression, with individuals with fewer remaining teeth being more likely to experience depression. For example, a longitudinal study in the Japanese population found that older adults with fewer teeth were at an increased risk of being diagnosed with depression, potentially due to changes in self-esteem and social support [10]. Another study found that older adults with a higher number of missing teeth were at a greater risk of exhibiting depressive symptoms [11], while Matsuyama et al. [12] showed that losing even one tooth increased the risk of exhibiting depressive symptoms or being diagnosed with major depression. It has been suggested that social factors and oral health mediate this association, with declines in oral function and appearance playing a significant role [13]. The current study observed no significant association between self-perception of oral health and depression or depressive symptoms, in contrast to Kim et al. [14], who concluded that the incidence of depression was higher among individuals who evaluated their oral health using terms such as "poor" or "bad". Barbosa et al. [15] observed a significantly higher (p-value: 0.026) risk of developing depression among individuals with negative self-perceptions of their oral health when compared to those with more positive perceptions (OR: 1.55; 95% CI: 1.05–2.28).
The current study also found that frequent dental or prosthesis-related discomfort while eating was associated with a higher frequency of suspected depression or a diagnosis of depression in the past 12 months. It is important to mention that the presence of oral prosthetics has been related to chewing problems [16], speaking difficulties, and oral health-related quality of life [17]. Park et al. [18] evaluated data from the Korean National Health and Nutrition Examination Survey and found that participants experiencing greater discomfort while eating exhibited a higher risk of depressive symptoms (OR: 1.25; 95% CI: 1.05–1.50) compared to those who did not experience such discomfort. Mariño et al. [19] used data from the Melbourne Longitudinal Study on Healthy Aging and found that older Australian adults experiencing oral or dental-related difficulties in eating exhibited a significantly higher risk of depressive symptoms (p-value < 0.001) compared to those who did not experience these difficulties, while Kim et al. [14] showed that greater discomfort while chewing or eating was significantly associated with stress, depression, and suicidal ideation. However, discomfort while speaking was only associated with stress, not depression.
Previous studies have also examined the association between denture use and depression, with Seenivasan et al. [20] demonstrating that older adults that used dentures were more likely to experience depression compared to those that did not.Jang [21] compared patients who did and did not use removable dentures and found that the prevalence of depression was 1.07 times higher (p-value < 0.001) in the former group.This could potentially be attributed to emotional and psychological alterations as a consequence of loss of teeth or an inability to adapt to the changes associated with the use of removable prostheses [22].Tooth loss can trigger depression in vulnerable individuals in particular, and the level of satisfaction with removable prostheses is often determined by certain personality traits [23,24].
Poor oral health has been shown to be associated with systemic diseases such as depression, with previous studies proposing various underlying biological mechanisms.Oral health problems, particularly those that cause pain, can lead to poor quality of life, stress, anxiety, and depression [25].Chronic inflammation caused by oral infections, such as periodontitis can also cause alterations in hormonal and neurotransmitter levels in the brain, leading to depression [26].Finally, poor oral health and tooth loss are often associated with unhealthy dietary habits, reduced nutritional intake, and difficulties while eating, which increases the risk of various mental disorders [27].
This study has several limitations. First, the cross-sectional study design prevented elucidation of causality, with reverse causation remaining a possibility. Second, the study primarily included secondary data analysis, which may have affected the results as the data were not collected specifically for this purpose. Third, the oral health examinations were carried out by nurses instead of dentists; however, provision of appropriate training and subsequent calibration ensured high levels of agreement between the examiners, as evidenced in the pilot studies. Fourth, the CIDI-SF instrument does not rule out the possibility of false positives due to chronic diseases, other psychiatric diagnoses (e.g., dysthymia, bipolar disorder, substance abuse), and mourning. Finally, the majority of oral health variables included in this study were self-reported. Future studies may consider examining the relationship between oral health and depression using variables with higher levels of objectivity (e.g., salivary biomarkers).
The key strength of this study was the use of a large study sample representative of the Chilean population, ensuring external validity, generalizability, higher statistical power, and reliability of the findings. Finally, the replicability demonstrated reinforces the robustness of the findings.
Conclusion
The findings of this study suggest that poor oral health and a negative self-perception of oral health may be related to depression.However, further research is necessary to elucidate the direction of this association, understand the underlying mechanisms involved, and develop effective interventions that adopt a comorbid approach toward improving oral and mental health outcomes.
Fig. 2 Relationship between variables for identifying factors that contribute to confounding. The NHS 2016–2017 survey was approved by the Scientific Ethics Committee, Faculty of Medicine, Pontificia Universidad Católica de Chile, and informed consent was obtained from all participants. An anonymized version of the database of volunteers has been made available for research purposes on the Chilean Ministry of Health website. The current study was approved by the Scientific Ethics Committee of Universidad de los Andes (ID: CEC2021059).
Table 1. Patient demographics by presence of suspected depression or a diagnosis of depression in the past 12 months (n = 2953)
Identification of Daily Activities and Environments Based on the AdaBoost Method Using Mobile Device Data: A Systematic Review
Using the AdaBoost method may increase the accuracy and reliability of a framework for daily activity and environment recognition. Mobile devices have several types of sensors, including motion, magnetic, and location sensors, that allow accurate identification of daily activities and environments. This paper focuses on reviewing studies that use the AdaBoost method with the sensors available in mobile devices. This research identified works written in English, published between 2012 and 2018, on the recognition of daily activities and environments using the AdaBoost method with data obtained from the sensors available in mobile devices. Thus, 13 studies were selected and analysed from 151 records identified in the searched databases. The results proved the reliability of the method for daily activity and environment recognition, highlighting the use of several features, including the mean, standard deviation, pitch, roll, azimuth, and median absolute deviation of the signal of motion sensors, and the mean of the signal of magnetic sensors. When reported, the analysed studies presented an accuracy higher than 80% in the recognition of daily activities and environments with the AdaBoost method.
Introduction
AdaBoost, one of the first boosting algorithms, was developed by Yoav Freund and Robert Schapire and has been adapted for practical application in many tasks. AdaBoost is a method that uses ensemble learning techniques to combine multiple weak classifiers into a single strong classifier. It is combined with other artificial intelligence methods to increase the accuracy of the recognition [1]. Thus, weak learners such as shallow decision trees and decision stumps are commonly used with the AdaBoost method. In comparison with other machine learning methods, the AdaBoost method is less susceptible to overfitting.
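To make the description concrete, the following minimal sketch implements AdaBoost with one-level decision stumps as the weak learners (a common choice, not taken from any specific reviewed paper): each round selects the stump with the lowest weighted error, weighs it by alpha = 0.5*ln((1-err)/err), and re-weights the samples so that misclassified ones count more in the next round.

```python
import math

def train_adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost with one-level decision stumps.

    X: list of feature vectors; y: labels in {-1, +1}.
    Returns a list of weighted stumps (alpha, feature, threshold, polarity),
    where a stump predicts `polarity` if x[feature] >= threshold, else -polarity.
    """
    n = len(X)
    w = [1.0 / n] * n  # start with uniform sample weights
    ensemble = []
    for _ in range(n_rounds):
        best = None  # (weighted error, feature, threshold, polarity)
        for f in range(len(X[0])):
            for t in sorted(set(x[f] for x in X)):
                for p in (1, -1):
                    err = sum(w[i] for i in range(n)
                              if (p if X[i][f] >= t else -p) != y[i])
                    if best is None or err < best[0]:
                        best = (err, f, t, p)
        err, f, t, p = best
        err = max(err, 1e-10)  # guard against log(0) for a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)  # weight of this stump
        ensemble.append((alpha, f, t, p))
        # Re-weight: misclassified samples gain weight for the next round.
        for i in range(n):
            pred = p if X[i][f] >= t else -p
            w[i] *= math.exp(-alpha * y[i] * pred)
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(alpha * (p if x[f] >= t else -p)
                for alpha, f, t, p in ensemble)
    return 1 if score >= 0 else -1

# Toy 1-D problem that no single stump can solve: the positive
# class occupies an interior interval.
X = [[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]]
y = [-1, -1, 1, 1, -1, -1]
ensemble = train_adaboost(X, y, n_rounds=3)
print([predict(ensemble, x) for x in X])  # -> [-1, -1, 1, 1, -1, -1]
```

After three rounds the weighted vote of three stumps classifies the interval correctly, even though each individual stump can only split the line once; this is the "weak learners combined into a strong classifier" behaviour the text describes.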
One of the strategies adopted by different implementations of AdaBoost consists of combining it with other methods to reduce the errors obtained [2,3]. The primary purpose of ensemble learning techniques is to improve results by combining the outputs of different methods [2,3]. These techniques combine several machine learning techniques under a single purpose and model to improve prediction results [4][5][6]. They can be divided into two groups, sequential ensemble methods and parallel ensemble methods; our focus is on sequential ensemble methods, because AdaBoost applies base learners that are generated sequentially [7].
In recent years, several studies have focused on the recognition of daily activities using the sensors available in commonly used mobile devices. These studies conclude that it is possible to accurately detect daily activities and environments with the motion, magnetic, location, and acoustic sensors embedded in mobile devices, reporting reliable results in the literature with different machine learning methods [8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23].
Generally, the raw readings of one-dimensional (e.g., blood pressure sensor, thermometer) or multi-dimensional signals (e.g., accelerometer or gyroscope) can be directly processed by AdaBoost, as by classification and regression algorithms in general. To do that, all sensor readings in a specific time window represent different inputs. For example, if a thermometer reads data at 1 Hz and the window is 60 s, there will be 60 inputs to AdaBoost. Similarly, a three-dimensional gyroscope would present 180 inputs. Many deep learning methods accept input data in this format. Even so, many algorithms benefit from a feature engineering step [43], which significantly improves the accuracy or reduces the complexity of the models [23,44].
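The windowing described above (a 1 Hz thermometer with a 60 s window giving 60 inputs, a three-axis gyroscope giving 180) can be sketched as follows; the function names are ours, not from any reviewed paper:

```python
def windows(samples, size, step=None):
    """Split a sample stream into fixed-length windows; a step smaller
    than `size` produces overlapping windows."""
    step = step or size
    return [samples[i:i + size] for i in range(0, len(samples) - size + 1, step)]

def multi_axis_inputs(axes, size):
    """Concatenate per-axis windows so each classifier input covers all
    axes: 3 axes x 60 samples -> 180 inputs per window."""
    per_axis = [windows(a, size) for a in axes]
    return [sum(win_group, []) for win_group in zip(*per_axis)]

# Two minutes of 1 Hz thermometer readings -> two 60-input windows:
temps = list(range(120))
print(len(windows(temps, 60)), len(windows(temps, 60)[0]))  # 2 60
```

Each resulting window is one row fed to AdaBoost (or to the feature-engineering step discussed next).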
Previous studies [33][34][35][36][37][38][39][40][41][42] showed that the proposed framework includes the correct modules for the reliable recognition of daily activities and environments. However, the results can be improved with other methods, including ensemble learning methods. This paper reviews the different studies available in the literature related to the implementation of the AdaBoost method for daily activity recognition. This review is part of the research and development of a framework for the identification of daily activities and environments using the sensors available in mobile devices, where the AdaBoost method may increase the accuracy compared to other implementations. The motivation of this paper is to improve the accuracy reported in previous studies for the recognition. This review intends to explore the use of the AdaBoost method to verify whether it reports better results than MLP, FNN, and DNN methods for the identification of daily activities.
The main contribution of this review is to provide a base of study for readers who deal with the recognition of daily activities and environments using sensors available in mobile devices, offering an in-depth survey of several research projects that implement the AdaBoost method.
This review shows that the features that reported the best results are the mean, standard deviation, pitch, roll, azimuth, and median absolute deviation of the signal of motion sensors, and the mean of the signal of magnetic sensors. According to the results, the AdaBoost method provides high accuracy for the recognition of daily activities and environments.
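As an illustration of those features, the sketch below computes them for one accelerometer window; the pitch and roll formulas follow a common accelerometer-tilt convention derived from the mean gravity direction, which individual papers may define differently:

```python
import math
import statistics

def motion_features(ax, ay, az):
    """Per-window features from a tri-axial accelerometer signal:
    mean, standard deviation, and median absolute deviation per axis,
    plus pitch and roll derived from the mean gravity direction."""
    feats = {}
    for name, axis in (("x", ax), ("y", ay), ("z", az)):
        feats["mean_" + name] = statistics.mean(axis)
        feats["std_" + name] = statistics.pstdev(axis)
        med = statistics.median(axis)
        feats["mad_" + name] = statistics.median([abs(v - med) for v in axis])
    mx, my, mz = feats["mean_x"], feats["mean_y"], feats["mean_z"]
    feats["pitch"] = math.atan2(-mx, math.hypot(my, mz))  # tilt about the y axis
    feats["roll"] = math.atan2(my, mz)                    # tilt about the x axis
    return feats

# A device lying flat and still: gravity only on the z axis.
print(motion_features([0.0] * 60, [0.0] * 60, [9.81] * 60)["roll"])  # 0.0
```

The azimuth feature mentioned in the text additionally requires the magnetometer (heading relative to magnetic north) and is omitted from this sketch.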
The following sections are organized as follows: Section 2 presents the methodology of the review.The results obtained are presented in Section 3. Section 4 presents the discussion on the results.Finally, the conclusions are presented in Section 5.
Research Questions
In this way, the leading questions of this review are: (RQ1) What is AdaBoost?(RQ2) How to detect daily activities with AdaBoost?(RQ3) How to identify daily activities with AdaBoost using mobile devices?
Inclusion Criteria
Studies assessing the recognition of activities of daily living using the AdaBoost method were included in this review according to the following criteria: (1) detect daily activities using sensors; (2) implement the AdaBoost method for the automatic recognition of daily activities, presenting information about the activities and environments recognized; (3) make use of mobile devices; (4) present the accuracies obtained with the AdaBoost method; (5) published between 2010 and 2019; (6) available in open-access libraries; and (7) written in English.
Search Strategy
The authors of this review searched for studies according to the inclusion criteria in the following electronic databases: IEEE Xplore and Science Direct.Every study was independently evaluated by eight reviewers (JF, IMP, GM, NMG, EZ, PL, FFR, and SS), and all parties evaluated its suitability.The studies were examined to identify the characteristics of AdaBoost and its relevance for the implementation in recognition of daily activities and environments using mobile devices.
Extraction of Study Characteristics
The following data were extracted from the studies and tabulated (see Tables 1 and 2): Year of publishing, the population was taken into account, purpose, equipment used, and outcomes of each publication.All cited studies in Tables 1 and 2 informed that the experiments were performed in laboratory settings.The verification of the availability of the raw data was performed.
Authors Year Outcomes
Kelarev et al. [46] A cardiovascular autonomic neuropathy identification algorithm that uses mobile devices is proposed.The dataset has been created using health records collected in a university research project named Diabetes Complications Screening Research Initiative.
The main contribution of the paper is the recommendation of the AdaBoost and Bagging based on the J48 decision.
Xu et al. [47] The paper presents an accurate method for context detection, which uses multiple sensors and machine learning. The context information is restrictively used to select activities that require classification, increasing the accuracy and decreasing the complexity of the process. Fourteen subjects each carried a tablet, and four 9-DOF sensors were located on the wrists, ankle, knee, and mid-waist. Each volunteer spent thirty minutes in every context and performed each required activity for two to five minutes. The dataset was then divided into two parts, 30% of the data for training and the remaining 70% for testing. The combined results of the three classifiers achieved higher accuracy for all contexts.
Wisniweski et al. [48] The paper presents an automatic recognition method for asthmatic wheezing through the analysis of a breathing sound dataset. One hundred thirty (130) records of natural and wheezy breathing, using 1024 samples each, were used for the study. The overall recognition rate was 93%.
Zhou et al. [49] The authors propose HATS, which provides both entry-point and post-log-in mobile user authentication. The proposed method integrates several authentication methods, such as password, keystroke, gesture, and touch dynamics features, to address the vulnerabilities of specific approaches to specific security attacks. The participants were required to go through several training sessions to be introduced to the usage of two different keyboards. Twelve volunteers (four men, eight women) carried out the study.
Masri et al. [50]
The study proposes active authentication applying scrolling behaviors as biometrics and evaluates diverse classification and clustering approaches that support those characteristics. The experiment included 84 participants and 54 documents. Among the two techniques applied to validate users, k-means clustering was the most accurate, with a success rate of 83.5%.
Xu et al. [51]
The authors propose an online learning approach for activity recognition based on data collected using inertial sensors. The data were gathered from fourteen volunteers. Every volunteer spent thirty minutes in the respective context and carried out each required activity for two to five minutes. This algorithm outperformed the benchmark algorithms by 30-40%.
Tang et al. [52] The paper shows an assessment of ten representative classifiers applied in two datasets.The dataset contains accelerometer time-series data from 22 volunteers.This study concluded that K-Nearest Neighbors is the most suitable classifier.
Yanyun et al. [53]
The paper presents a method based on a Convolutional Neural Network approach to provide automatic extraction of features for transportation mode classification. A total of 169 features were used, and the dataset has more than 200 h of transportation data collected from thirty volunteers on diverse transportation modes (bus, car, metro, train). The recognition accuracy was 96.6% for the bus, 99.6% for the car, 99.0% for the metro, and 98.9% for the train, giving an average accuracy of 98.6%.
Li et al. [54] 2017 The authors propose an indoor and outdoor recognition method, which is divided into two parts: The machine learning-based Indoor, Outdoor, and Semi-open areas recognition algorithm and the lightweight WiFi sub-detector.The absolute values and the relative measurements of WiFi received signal strength are calculated to identify if the user environment is a semi-open area, indoor or outdoor.The proposed method presents 85% of accuracy for the lightweight WiFi-based technique and 96% of accuracy using the aggregated IOS-detector.
Yanjun et al. [55] 2017
The article proposes a Bayesian algorithm for traffic pattern recognition. The dataset consists of 400 h of data from eight individuals. Five sensors were used: an accelerometer, a barometer, a geomagnetic sensor, a gyroscope, and the base station. The AdaBoost classification method was also implemented to improve the results. The proposed method achieves an accuracy between 83.3% and 91.5%.
Vafeiadis et al. [56] 2017
The paper presents a machine learning approach for occupancy detection. Water and energy consumption data collected using smart meters are used as features for occupancy detection in a domestic environment. In their boosting versions, the Random Forest and Decision Tree classifiers are more accurate than the other classifiers, with overall accuracies of 83.37% and 82.79%, respectively.
Subasi et al. [57] 2018
This study proposes an AdaBoost-based classifier for human activity recognition using data collected from sensors located on the body. The study is based on nine inertial sensors worn by seventeen volunteers who performed 33 fitness exercises. The results show a success rate of 99.98%.
Yuan et al. [58] 2018
The authors present an indoor localization algorithm based on 'Twi-AdaBoost'. The proposed method uses several sensors, such as a gyroscope, a magnetometer, and an accelerometer. The tests used 6304 samples collected from both smartphone and smartwatch devices. The AdaBoost method outperforms the other tested approaches on every metric.
Results
As pictured in Figure 1, we identified 151 papers, three of which were duplicates and were removed. The remaining 148 articles were evaluated according to the title, keywords, and abstract, excluding 133 citations. After full-text evaluation, two papers were removed from the remaining 15. The qualitative and quantitative synthesis includes information related to the remaining 13 articles.
In conclusion, we examined 13 documents. For detailed information about the implementations presented in the different studies analysed in this review, the reader should consult the original cited works. Table 1 shows the year of publication, a summary of each paper, and the final results. Table 2 shows the population, the purpose of the study, the devices, the settings, and the pros and cons of each paper. When the dataset used in a study is publicly available, or the population information is provided, this is considered a positive aspect. In many cases, the evaluation uses a cross-validation scheme (regular or stratified per class). However, the studies do not consider different subsets of the population for training and testing (i.e., a train/test split based on subjects or patients). This is generally a more rigorous evaluation scheme and is expected to lower the reported accuracy. Other, more specific pros and cons are provided for each study.
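The subject-wise evaluation scheme mentioned above can be sketched in plain Python; the records, subject IDs, and labels below are hypothetical, not taken from any reviewed study.

```python
# Sketch of a subject-wise train/test split, so that no volunteer
# appears in both sets. Records are (subject_id, features, label).
records = [
    ("s1", [0.1, 0.2], "walk"), ("s1", [0.2, 0.1], "run"),
    ("s2", [0.3, 0.4], "walk"), ("s2", [0.4, 0.3], "run"),
    ("s3", [0.5, 0.6], "walk"), ("s3", [0.6, 0.5], "run"),
]

def split_by_subject(records, held_out_subjects):
    """Hold out every record of the given subjects for testing."""
    train = [r for r in records if r[0] not in held_out_subjects]
    test = [r for r in records if r[0] in held_out_subjects]
    return train, test

train_set, test_set = split_by_subject(records, {"s3"})
# No subject leaks across the split:
assert {r[0] for r in train_set}.isdisjoint({r[0] for r in test_set})
```

Repeating this split for each subject in turn yields a leave-one-subject-out evaluation, which is stricter than the per-sample cross-validation most of the reviewed studies use.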
The papers were published between 2012 and 2018: two studies were published in 2018 (15%), four in 2017 (31%), two in 2016 (15%), two in 2015 (15%), two in 2014 (15%), and one in 2012 (8%). Regarding the devices used, 43% of the studies used smartphones and the remaining 57% used other mobile devices. Moreover, 69% of the studies have the raw data available. Finally, we verified that no study shared its source code.
Methods for Identification of Activities in Daily Living
In the study [57], the authors tried different classifiers for the recognition of activities with sensors to find the best method. Ten classifiers were utilized with the AdaBoost method. The dataset used was publicly available. The settings were investigated using nine inertial sensors from seventeen individuals, taking into account 33 fitness activities. The sampling rate used was 50 Hz. After checking the accuracies of the AdaBoost method, the authors concluded that its implementation with random forest gives the best accuracy, with a value of 99.98%.
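For background on the boosting scheme these studies rely on, the following is a minimal AdaBoost sketch using one-feature decision stumps as the weak learner; the data are hypothetical, and the study above actually pairs AdaBoost with a stronger weak learner (random forest).

```python
import math

# Minimal AdaBoost sketch with decision stumps on 1D data (hypothetical).
def stump_predict(x, threshold, polarity):
    return polarity if x >= threshold else -polarity

def train_adaboost(xs, ys, rounds=5):
    n = len(xs)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # Pick the stump (threshold, polarity) with lowest weighted error.
        best = None
        for threshold in xs:
            for polarity in (1, -1):
                err = sum(w for x, y, w in zip(xs, ys, weights)
                          if stump_predict(x, threshold, polarity) != y)
                if best is None or err < best[0]:
                    best = (err, threshold, polarity)
        err, threshold, polarity = best
        err = max(err, 1e-10)  # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, threshold, polarity))
        # Re-weight: boost the misclassified samples.
        weights = [w * math.exp(-alpha * y * stump_predict(x, threshold, polarity))
                   for x, y, w in zip(xs, ys, weights)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, t, p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

xs = [1.0, 2.0, 3.0, 6.0, 7.0, 8.0]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys)
assert all(predict(model, x) == y for x, y in zip(xs, ys))
```

Any classifier can stand in for the stump, which is why the reviewed studies are able to plug in random forests, decision trees, or kNN as the weak learner.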
Authors of [49] proposed harmonized authentication based on ThumbStroke dynamics (HATS) for mobile devices. The performance of HATS was tested taking into account the different screen sizes of several mobile devices. Laboratory experiments were conducted to collect data for testing. Participants were required to have prior experience with touch-screen devices and a QWERTY keyboard. The study selected some features for learning ThumbStroke models, namely timing features, spatial features, movement direction features, and operation features. The phrases entered by the participants were adopted from MacKenzie and Soukoreff and varied from 16 to 43 characters. The final results showed that HATS outperformed the keystroke dynamics-based method across all settings and classification models. Among all the classification methods used, AdaBoost reported a maximum accuracy of 41.8%.
Li et al. [54] describe an indoor/outdoor detection system (IOS). The method is split into the machine learning-based IOS-detector and the lightweight WiFi sub-detector. The first part infers indoor, outdoor, or semi-open environments based on the classification results, while the second part focuses on a lightweight implementation for mobile devices. In conclusion, the proposed system achieves around 96% accuracy for the aggregated IOS-detector and over 85% accuracy for the lightweight WiFi-based sub-detector.
In the study [50], the authors introduce a method for re-authenticating users based on a behavioral biometric derived from users' document scrolling traits, focused more specifically on identifying abnormal scrolling behavior while users interact with protected or read-only documents. The dataset was obtained from a previous project aimed at detecting document access activities that indicate cyber attacks. The features were split into vectors: vector one is derived from scrolling traits, vector two is a representation of the polarity of scrolling, and vector three treats the dataset as a bipartite graph with two node sets. k-means clustering achieved the best performance, with an 83.5% success rate in predicting the authenticated user.
The paper [48] presents a highly efficient method for the automatic detection of asthmatic wheezing in breathing sounds. The process is suitable for personal asthma monitoring via mobile devices since it is not computationally complex. Most of the data used came from online databases of human lung sounds; however, the authors also used several of their own recordings of regular and wheezy breaths. The authors also confirmed the optimality of the audio spectral envelope (ASE) plus the value of the tonality index (TI) as feature detectors, using the mRMR (minimal redundancy-maximal relevance) method. Thousands of experiments were performed, and the best results were obtained from the fluctuation of the Audio Spectral Envelope descriptor adopted from the MPEG-7 standard, reporting an accuracy of around 100%.
Authors of [53] developed a method to collect sensor data; acceleration, gyroscope, geomagnetic, and atmospheric pressure were the four kinds of sensors used. Shallow feature extraction of the raw data happens before the CNN learns deep features, which reduces the complexity of the network and the training time of the model. This process is critical for smartphones because of their limited resources. Three classes of features are extracted from each frame, covering the statistical, time, and frequency domains. Namely, the features used are: mean, standard deviation, variance, median, minimum, maximum, range, interquartile range, kurtosis, skewness, root mean square, integral, double integral, autocorrelation, mean-crossing rate, fast Fourier transform, spectral energy, spectral entropy, spectrum peak position, wavelet entropy, and wavelet magnitude. The final results show that the proposed method can achieve 98% accuracy, meaning it outperforms SVM (support vector machine) and AdaBoost classification in efficiency and computational cost, AdaBoost reporting an accuracy of 93.6%.

Yuan et al. [58] propose an indoor localization system using sensors from smartphones and smartwatches. Over 36,000 samples of data were collected in a 185.12 m² real indoor environment by a user carrying two different devices. From the experimental results, the authors concluded that Twi-AdaBoost outperforms the state-of-the-art indoor localization algorithms. The localization errors in positions x and y were 0.387 m and 0.398 m, respectively. The datasets used include the features: Place ID, Timestamp, Accelerometer_X, Accelerometer_Y, Accelerometer_Z, MagneticField_X, MagneticField_Y, MagneticField_Z, X_Axis Angle (Pitch), Y_Axis Angle (Roll), Z_Axis Angle (Azimuth), Gyroscope_X, Gyroscope_Y, and Gyroscope_Z, reporting an accuracy of around 99%.
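The shallow statistical feature extraction that these studies describe can be illustrated for a single frame of samples; the frame values are hypothetical, and only a subset of the listed features is computed.

```python
import statistics

# Sketch of per-frame statistical feature extraction from one sensor
# channel (hypothetical accelerometer samples).
def extract_features(frame):
    std = statistics.pstdev(frame)
    return {
        "mean": statistics.fmean(frame),
        "std": std,
        "variance": std ** 2,
        "median": statistics.median(frame),
        "min": min(frame),
        "max": max(frame),
        "range": max(frame) - min(frame),
        "rms": (sum(x * x for x in frame) / len(frame)) ** 0.5,
    }

frame = [0.0, 1.0, 2.0, 3.0, 4.0]
feats = extract_features(frame)
assert feats["mean"] == 2.0
assert feats["range"] == 4.0
```

In the multi-sensor setups above, this extraction runs once per channel per frame, and the per-channel feature dictionaries are concatenated into the vector fed to the classifier.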
In the paper [55], a novel technique based on a Bayesian voting algorithm that can be used with low-power sensors for transportation mode detection is presented. The authors used a dataset that consists of 400 h of data from eight individuals. Five sensors were used: acceleration, gyroscope, geomagnetic, barometer, and base station, with AdaBoost classification applied to improve the results. Besides, the Bias algorithm was used in feature extraction to reduce the adaptive boosting feature dimensions and determine the critical factors for identifying different transportation modes. The features used are: mean, standard deviation, variance, median, minimum, maximum, range, interquartile range, kurtosis, skewness, root mean square, time integral, double integral, auto-correlation, mean-crossing rate, fast Fourier transform, spectral energy, spectral entropy, spectrum peak position, wavelet entropy, wavelet magnitude, peak volume, intensity, length, variance of peak features, peak frequency, stationary duration, and stationary frequency. Taking into account the final results, the authors concluded that their algorithm could supplement or replace some traffic pattern recognition algorithms and address the problem that different mobile phones have various sensors, reporting accuracy between 64.54% and 96.83%.
In [51], the authors presented a contextual multi-armed bandits (MAB) approach that enables activity classification. The method supports context adaptation, continuous online learning, and active learning. Since the cost of extracting specific features is very high, the authors decided to use side information as the context. Since features can also be used as contexts, this is not a limitation of the project. The proposed algorithm with active learning outperformed the benchmark algorithms by an average of 35%, reporting an accuracy between 70% and 85%.
Xu et al. [47] focus on three challenges: the ability to accurately detect context using sensors and machine learning; the selection of activities for classification using context, which reduces complexity and improves accuracy, speed, and energy usage; and the ability for experts to prescribe sets of physical activities under different environments. The approaches used in the project were: kNN (k-Nearest Neighbor) with time, kNN with wireless media access control (MAC) address and signal strength, and AdaBoost with audio peak frequency, peak energy, average power, and total energy. The features were extracted from raw sensor data using a Java program implementing the IContextFeatureExtractor interface. The data were acquired from 14 participants who carried an Android mobile phone, with four 9-DOF devices placed on the dominant wrist, knee, ankle, and mid-waist. Each subject performed every required activity under every context for 2-5 min. The data were split into training (30%) and testing (70%) sets. The authors concluded that although the methodology demonstrates effectiveness, efficiency, and potential, a more extensive study needs to be performed to improve privacy, security, and user-friendliness, reporting accuracy between 59% and 100%.
In [56], the problem of occupancy detection in a domestic environment was studied using machine learning techniques and their boosting versions on a dataset collected from electricity and water consumption smart meters. The features were selected using the Mutual Information technique. The dataset contains energy and water consumption time data (during summer) at 1-minute resolution for 16 consecutive days. The features included in the dataset were: central power, refrigerator, television, washing machine, dryer, cold water-kitchen, hot water-kitchen, dishwasher-water, and washing machine-water, reporting an accuracy higher than 70%.
Authors of [52] evaluated ten representative classifiers on two available datasets. The first dataset consists of accelerometer readings of walking patterns from 22 participants. The second one contains activity and postural transition data acquired from the accelerometer and magnetometer readings of 30 participants. For the walking dataset, the authors split the data into fixed-width sliding windows with a 50% overlap, extract nine features from every window, and scale the features to [−1, 1]; the features are the mean, standard deviation, and median absolute deviation of the different sensor axes. For the second dataset, the original authors had already pre-processed the sensor signals with a noise filter, partitioned the data into fixed-width sliding windows with a 50% overlap, and constructed a 561-feature vector for every window. From those, the authors extracted 24 features, including the mean and standard deviation of the different axes of body acceleration, gravity acceleration, jerk signals of body acceleration, angular velocity, and jerk signals of angular velocity. In conclusion, the authors reported an accuracy between 95.6% and 97.8%.
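The fixed-width sliding-window segmentation with 50% overlap used for both datasets can be sketched as follows (the window width and signal are hypothetical):

```python
# Sketch of fixed-width sliding windows with 50% overlap: each window
# starts half a window after the previous one.
def sliding_windows(signal, width):
    step = width // 2  # 50% overlap
    return [signal[i:i + width]
            for i in range(0, len(signal) - width + 1, step)]

signal = list(range(10))
windows = sliding_windows(signal, 4)
# Consecutive windows share half of their samples:
assert windows[0] == [0, 1, 2, 3]
assert windows[1] == [2, 3, 4, 5]
```

Features are then computed per window, so the overlap doubles the number of training examples relative to disjoint windows while smoothing the transitions between activities.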
The study [46] focuses on using mobile devices for the detection of cardiovascular autonomic neuropathy, concentrating on the task of its detection and monitoring. The authors concluded that the best outcomes were obtained by a novel combined ensemble of AdaBoost and Bagging based on the J48 decision tree, reporting a highest accuracy of 94.53%.
Discussion
This review confirms that AdaBoost, and boosting ensemble methods in general, are reliable for the identification of daily activities. Several studies are not well described, and the source code of the algorithms is not publicly available. Verification and reproducibility of the obtained results are not easily possible for the following reasons: only some authors shared their datasets; in many cases the methods, in particular the preprocessing of the datasets, are not well explained; and the hyper-parameter tuning is poorly described, or the exact algorithm parameters are not given.
The number of studies using the AdaBoost method for the recognition of daily activities is small, and the activities mainly recognized are simple ones, including walking, running, walking upstairs and downstairs, and other quotidian activities.
Following our literature review, most of the analysed studies (85%) report their best results using AdaBoost methods. Only two studies (15%), presented in [49,58], reported that AdaBoost-based methods do not show the best results when compared with other approaches for daily activity and environment recognition. Nevertheless, the authors of these studies still recognised the reliable applicability of the AdaBoost method for activity and environment recognition.
In summary, all reviewed works first perform a feature extraction step, which varies somewhat depending on the sensor types used. In cases of multiple sensors, or multi-channel sensors, the feature extraction is performed independently for each time series (i.e., channel or sensor). Generally, various statistical metrics, as listed in Table 3, are computed on the raw signal in the time domain, and more rarely features are derived from the frequency domain. Then, after the features are extracted from each sensor as a separate time series, they are fed into the classifiers. Very often, a systematic approach to feature extraction improves the accuracy [23].
The authors used different features, and the average accuracies obtained with them are comparable. Table 3 presents the average accuracy for the various features extracted, verifying that the features that allow the recognition of daily activities with an accuracy higher than 90% are the mean, standard deviation, pitch, roll, azimuth, and median absolute deviation of the signals of motion sensors, and the mean of the signal of magnetic sensors. Moreover, Table 4 presents the advantages and disadvantages of the AdaBoost method, showing that it can be used for the recognition of daily activities and environments given the recent advancements in the hardware and software of commonly used devices. In comparison with other algorithms, the AdaBoost method uses different algorithms as the weak learner, and these weak learners take into account the features extracted from the signals, such as the mean, standard deviation, and variance. In general, AdaBoost can handle complex data, but unlike some other algorithms it can also be used with 1D data. The authors of the analysed studies used AdaBoost with uni-dimensional data, i.e., they used the features extracted from the data to produce the results, and the results obtained proved its reliability for physical and physiological data.
In conclusion, the use of mobile devices for daily activity recognition using AdaBoost is limited by the low processing power and battery capabilities of these devices [59,60]. According to the studies reported in this review, it is possible to conclude that the AdaBoost method is reliable on mobile devices, as verified by the accuracies reported in the different studies, where only two studies reported accuracies lower than 50%.
Conclusions
This review presents studies available in the literature that use the AdaBoost method for the recognition of daily activities and environments. Thirteen studies were analysed, and the main findings are summarised as follows:
• (RQ1) The AdaBoost method is an ensemble learning method that is used in conjunction with other algorithms. The different algorithms are commonly named weak classifiers, avoiding the overfitting problem;
• (RQ2) The AdaBoost method is implemented in conjunction with other algorithms to increase the accuracy of the recognition of daily activities and environments;
• (RQ3) For the recognition of daily activities and environments, the AdaBoost method is combined with a weak classifier. The features that reported better accuracy are the mean, standard deviation, pitch, roll, azimuth, and median absolute deviation of the signals of motion sensors, and the mean of the signal of magnetic sensors.
This review also highlights that the use of smartphones and other mobile devices should serve a particular purpose because of their limited battery life and processing capabilities. First, the authors excluded studies that are not focused on the recognition of daily activities and environments with the AdaBoost method. Secondly, studies that do not use sensors available on mobile devices were excluded. We excluded several studies after analysis of the abstracts and full texts of the papers. Another reason for exclusion was the language of the study, excluding studies not written in English. With the features collected, the AdaBoost method allows recognition with an accuracy higher than 80%.
As future work, the AdaBoost method will be implemented in a framework for the recognition of daily activities and environments; it will be used to recognize seven daily activities and nine environments.
Table 2 .
Critical analysis of reviewed studies.
Table 3 .
Average of the accuracy reported in the studies analysed, grouped by features.
Table 4 .
Advantages and disadvantages of the use of Adaboost method in the different studies analyzed.
"year": 2020,
"sha1": "992e974d3e7aa505182a6576fba431beaa98c731",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/9/1/192/pdf?version=1579500699",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "992e974d3e7aa505182a6576fba431beaa98c731",
"s2fieldsofstudy": [
"Computer Science",
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Developing a Cybersecurity Risk Management Framework for Non-Technical Losses in National Power Distribution Companies
Abstract. Traditionally, power companies are the driving force behind a country's economy, and disturbances in their services have severe effects. Advanced metering infrastructure (AMI) grids are vulnerable to network and web security attacks. The objective of this study is to pinpoint the risk mitigation measures that should be integrated into the electric power advanced metering grids of Jordan. The study investigates and proposes a Risk Management Framework (RMF) to minimize the risks of power fraud. AMI is vulnerable to electricity losses, hence the need to develop a system that would help mitigate this risk. To develop the RMF, we integrate security and privacy into the management activities to assist in the organizational preparation of the processes and technologies needed for the ongoing energy-system IT and OT convergence and digital transformation, which pose more cybersecurity concerns and essential requirements. We used the Quantitative Risk Management process, utilizing the NIST RMF standards, for mitigating the financial risk impacts of energy losses in the AMI grid. The dependencies and influences between the dimensions considered are investigated, and information gathering and the collection of work data were carried out and used for quantitative analysis. This paper presents a pilot project study in collaboration with EDCO, describes the developed and proposed RMF requirements and risk assessment, and finally recommends the implementation of the selected security controls for AMI profile protection to mitigate the identified cyber risk.
Introduction.
In recent years, the great development in renewable energy resources and increasing of electric power demand poses new challenges on the distribution networks [1].The present passive distribution networks (PDN) are of dual structure as they consist of substations and loads [2].Nowadays, there is a need to convert the current PDN into an Active Distribution Network (ADN) of a ternary structure; distributed generations (DGs), substations and loads [3].
The future of electricity lies in the Internet of Things (IoT) and, more recently, the Internet of Everything (IoE). Traditional power grids are being abandoned for smart, more efficient power grids [1]. According to the U.S. Department of Energy, energy reliability is one of the primary reasons behind the move toward smart grids. However, smart grids come with their own challenges, such as the need to ensure cybersecurity [2]. As such, cybersecurity experts need to be involved in the development, maintenance, and monitoring of smart grids to ensure maximum customer satisfaction and to ensure that one of the primary sources of any country's security is maintained [3]. Energy is an essential resource for any country that wishes to ensure maximum security for its citizens, especially against external attacks [4,5,6,7].
The need to ensure cybersecurity in these smart grids is not a matter of convenience or mere speculation, given the recent attacks on various power grids by hackers. On December 23rd, 2015, for example, the information systems of the three major energy distribution companies in Ukraine were hacked [8]. Hackers, allegedly sponsored by forces and states opposed to the Ukrainian government, successfully breached the power grid and gained control. In the days after this, the country had no electricity, and cybersecurity professionals were the ones who helped bring the electricity back online. The hack on Ukraine's power grid was the first-ever successful hack on a power grid, and it marked a turning point in how countries viewed the need for secure power grids [9,10]. The latter becomes more important with the move to smart grids in most countries.
The example above of a hack on Ukraine's power grid for political reasons is an extreme one, showing why cybersecurity is essential for any power grid. However, there are other, lesser reasons why it is vital to protect power grids from intrusion. One such reason is to prevent energy theft. Smart grids work using advanced metering, where consumers are charged depending on their usage and the electricity cuts itself off if the consumer's subscription is depleted. However, malicious consumers and intruders might override such instructions and steal energy from the grid. In addition, using a smart grid requires that consumers share their personal information, and this information might get stolen [7], which poses a risk.
Jordan has appreciated the need to use smart grids in its energy distribution. To this end, the Jordan Electrical Power Company is responsible for distributing energy to about 66% of the country's consumers, and it intends to use advanced metering systems. Given that energy theft is one of the significant cybersecurity issues that would face such a smart grid, it is crucial to assess the possible vulnerabilities and possible solutions to mitigate this risk [11,12,13,14].
In summary, the critical issue in this study is to investigate the advanced metering infrastructure from an energy theft perspective. Energy theft is one of the most important reasons to implement cybersecurity risk management for electric power distribution in Jordan. The study will also explore the control measures that power companies can take to manage cyber risk and reduce the risk of energy theft. It will also assess the physical and digital attack surface and the vulnerabilities associated with each AMI, then make recommendations for appropriate security requirements. The rest of this paper is organized as follows: Section 2 presents the background, Section 3 the related work, Section 4 the RMF AMI pilot project, and finally Section 5 the results and conclusion.
Motivation
The Energy and Minerals Regulatory Commission (EMRC) is responsible for regulating and monitoring the energy sector: generation, transmission, distribution, and electricity supply. EMRC recorded 19,962 cases of electricity theft in 2018. Law enforcement personnel at the EMRC recorded 10,443 cases of theft, while employees at the three electricity distribution companies discovered 6,768 cases [12,13,15]. This month's report on the largest electricity and water theft in the Kingdom, which will now be submitted to the Judicial Authority, stipulated that the thief must be fined 2.7 million JOD. Finally, last month a joint force of the Public Service Directorate and the Gendarmerie seized equipment worth 300 thousand JOD used to embezzle electrical power. The formally reported issues above constitute the motivation for conducting this empirical research, which will help provide a solution to mitigate the energy theft problem in the Kingdom of Jordan [12,13,15].
Project Description
The project objective is to conduct a pilot research project investigating electricity losses, which have been the leading concern for power distribution companies for decades. Power distribution companies throughout the world are trying various new methods for detecting non-technical electricity losses. In combination with innovation in information and communication technologies and the associated cybersecurity threats, more unique and effective non-technical loss detection methods are recommended by NIST; this work aims to implement the RMF in the Jordanian power distribution companies to mitigate the risks affecting the smart grid infrastructure. The proposed development and implementation process of the risk analysis solution allows for the practical consideration of potential risk determinations. For this pilot project, the Quantitative Risk Management process methodology is employed, utilizing the NIST RMF for mitigation of the risk of energy loss in the AMI.
Related work
Various studies have previously been conducted on protecting smart electricity grids against theft and other malicious activities associated with cybersecurity. These projects and studies have identified various vulnerabilities in smart grids and solutions to them. Langer, Skopik, Smith, and Kammerstetter assessed cybersecurity issues that face smart grids as they evolve from traditional electricity grids to the new types of smart grids [3]. The researchers understood that smart grids are made of various ICT components that are all vulnerable to theft and other malicious activities. In their article, the researchers sought to provide a solution for assessing any possible cybersecurity issues with these smart grids. They recommended a two-stream risk assessment method to determine the various risks in a given smart grid. The suggested approach covered both the existing components and the near-future developments of any current system. Fig. 1 represents the model that the researchers recommended. FIGURE 1. Conceptual and implementation-based risk assessment in several interrelated steps [3].
The above method was implemented in Australia, and it was also evaluated in the course of the Austrian Research Project. The level of threats in smart buildings, e-mobility, customer premises, low-voltage generation, medium-voltage generation, grid test points, primary substations, secondary substations, grid services, and metering was identified using this assessment process. Authentication, authorization, security mechanisms, integrity, availability, internal and external interfaces, confidentiality, data protection, system maintenance, and system monitoring were among the risks found. This method can be a great place to start while assessing the risks of the Jordan electricity grid.
In their research, Mathas et al. explored the problems of Advanced Metering Infrastructures (AMI) [5]. The scientific and industrial move toward installing smart meters has increased the demand for technical and security risk management. Smart meters are an essential element of the AMI, as they enable two-way data communication between service users and utility providers. The smart meter's real-time measurements generate a large volume of data that can be quickly transmitted to customers. Consumers may benefit from the smart meter's seamless functionality, but security issues and threats are significant problems and risks that should be tackled. Consumers will be unable to use the excellent features provided by smart meters if these are not properly prepared and risk-managed. The feasibility, the investment, and the need to preserve an acceptable degree of privacy through cybersecurity risk management must all be considered during implementation.
Khattak, Khanji, and Khan also understood the possibility of vulnerabilities in smart meters, given that the Internet of Electricity was gaining appreciation among energy companies and governments [2]. Given the advanced metering infrastructure implementations, the researchers decided to investigate the cybersecurity concerns in the increasingly complex smart energy grids. They identified the AMI security issues as smart meter security, data collector security, and communication and network security. The paper suggested the following security controls and countermeasures: a) encrypting the smart meters, protecting the communication between devices and networks and reducing the chances of data and information security getting compromised; b) an authentication mechanism, which serves the same purpose as smart grid encryption and ensures that only authorized people have access to critical controls and information in the energy networks; c) an availability mechanism, which ensures that the availability of the AMI infrastructure does not get compromised through vulnerabilities such as network jamming and packet flooding; and d) a jamming prevention mechanism, to help prevent the jamming techniques that malicious actors might use on the AMI devices and networks. This study is essential for this research since it gives direction on some of the vulnerabilities to look for when assessing the Jordan electric grid and possible solutions for these cybersecurity issues.
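As an illustration of the authentication countermeasure (b), not taken from the cited paper, a smart meter could attach an HMAC tag to each reading so that the head-end can reject tampered or spoofed messages; the pre-shared key and message format here are hypothetical and deliberately simplified.

```python
import hashlib
import hmac
import json

# Hypothetical per-meter pre-shared key; real deployments would use
# proper key provisioning and rotation.
SHARED_KEY = b"per-meter-provisioned-secret"

def sign_reading(reading: dict, key: bytes = SHARED_KEY) -> dict:
    """Serialize a reading and attach an HMAC-SHA256 authentication tag."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_reading(message: dict, key: bytes = SHARED_KEY) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading({"meter": "m-17", "kwh": 3.2})
assert verify_reading(msg)
msg["payload"] = msg["payload"].replace("3.2", "0.2")  # tamper attempt
assert not verify_reading(msg)
```

A tampered payload fails verification because the tag no longer matches, which is the property the authentication mechanism above relies on.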
Yadav, Kumar, Sharma, and Singh conducted a study to determine possible cybersecurity issues in smart grids and their solutions [11]. Because smart grids rely on IoT, various cybersecurity issues must be addressed. The researchers identified the protection of consumer information, system availability, integrity and reliability, and confidentiality as key cybersecurity issues facing smart grids, and determined that the key goals of any smart grid cybersecurity program are availability of service, confidentiality of data, and integrity of the information shared. They found that the security of smart grids can be compromised through five main methods: malware, unauthorized access by internal users, replayed or repeated false messages, traffic analysis, and DoS attacks. They suggested that standard network security measures can ensure these methods do not work on a given smart grid. However, they noted that this relates only to providing users with efficient electricity availability and protecting their data, with less focus on other security concerns. Nonetheless, this is not the first study to identify energy theft as an issue for the cybersecurity of smart grids. Lopez, Sargolzaei, Santana, and Huerta also studied the threats and countermeasures present in smart grids and identified energy theft as a threat [4]. According to the researchers, someone intending to steal energy from the grid will interrupt measurements before they take place; tamper with the stored data before, while, or after measurements are taken and stored in the meter; or modify the network before or during data logging by the meter. It is thus imperative to ensure that smart grids are protected against energy theft. The researchers' approach to energy theft was the use of theft detectors, which compare the average electricity use per day against a predetermined threshold to assess whether electricity is being stolen. If the average daily use falls below the minimum threshold, the assumption is that energy is being stolen.
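The threshold rule just described can be sketched in code; the threshold value, readings, and function name below are illustrative assumptions, not data or an implementation from the cited studies.

```python
def flag_possible_theft(daily_kwh_readings, min_expected_kwh):
    """Flag a meter when its average daily usage falls below an
    expected minimum, following the threshold rule described by
    Lopez et al.: a drop below the historical baseline suggests the
    meter may be under-reporting, e.g., due to tampering."""
    avg = sum(daily_kwh_readings) / len(daily_kwh_readings)
    return avg < min_expected_kwh

# Illustrative example: a household that normally draws ~9 kWh/day
# suddenly reports far less than a 6 kWh/day minimum threshold.
print(flag_possible_theft([2.1, 1.8, 2.4, 2.0], min_expected_kwh=6.0))  # True
print(flag_possible_theft([9.2, 8.7, 10.1, 9.5], min_expected_kwh=6.0))  # False
```

In practice the threshold would be tuned per customer class and season; a single static value would generate many false positives.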
McLaughlin, Podkuiko, and McDaniel [6] and Byres, Franz, and Miller [1] went further than Lopez, Sargolzaei, Santana, and Huerta [4] in demonstrating where energy theft might take place in a given smart grid. McLaughlin, Podkuiko, and McDaniel studied energy theft in advanced metering infrastructures and found vulnerabilities that help malicious people steal energy [6]. Byres, Franz, and Miller, on the other hand, investigated vulnerabilities in SCADA systems using attack trees [1]. Combining the two makes it easy to see the various stages at which malicious persons might steal energy from the systems. Using the concepts developed in [6] and [1] to investigate energy theft is helpful for this research, since they create a benchmark and a body of knowledge from which the research can progress. Both studies were limited in that they did not focus on the specific circumstances surrounding Jordan's energy networks or on possible solutions for these energy theft vulnerabilities.
Perhaps one of the best solutions offered to counter energy theft in smart power grids is provided by Sun, Hahn, and Liu [9]. According to the authors, various cyberattacks focused on AMI have occurred, including energy theft. The researchers concentrated on energy theft caused by network intruders entering through external interfaces, including smart meters, and by information hackers. To address these cyberattacks, they recommended using anomaly and intrusion detection systems (ADSs) [9]. ADSs detect anomalies or possible intrusions in the system and communicate them, alerting the people tasked with ensuring the security of the smart grid. Byres, Franz, and Miller [1] presented a complementary technique that focuses on the known sources of AMI threats and offers a holistic view of how various security issues contribute to electricity theft. Future research should look at each of the known sources of threats in greater detail, then apply suitable intelligent algorithms to evaluate data and create a model for timely decision support [22].
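As a minimal illustration of the statistical flavor of such anomaly detection, a z-score screen over meter readings might look like the sketch below; the data and threshold are hypothetical, and a real ADS combines far richer signals (traffic patterns, login attempts, consumption profiles) than a single series.

```python
import statistics

def detect_anomalies(readings, z_threshold=3.0):
    """Return indices of readings whose z-score exceeds the threshold.
    This is one simple signal of the kind an ADS could raise as an
    alert to the grid security team."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [i for i, r in enumerate(readings)
            if stdev > 0 and abs(r - mean) / stdev > z_threshold]

# A week of hourly-average kW readings with one implausible spike:
readings = [10.2, 9.8, 10.5, 10.1, 9.9, 55.0, 10.3]
print(detect_anomalies(readings, z_threshold=2.0))  # [5]
```

Flagged indices would then be routed to an alerting pipeline rather than acted on automatically, since single-point anomalies can also reflect legitimate load changes.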
Summary
The above studies describe research conducted to ensure the cybersecurity of smart grids. They show the different methods and techniques used to detect security threats or possible intrusions, and the solutions for ensuring that such threats and intrusions do not recur. Given that this study focuses on Jordan, these papers form a basis for the following research, including offering solutions to the vulnerabilities identified in Jordan's AMI that may enable intruders to steal energy.
RMF AMI pilot project
This pilot project is ongoing research with the Jordanian Power Distribution Companies (PDCs) to investigate the mitigation of electrical loss risks due to cyberattacks, in the context of the evolving implementation of national smart grid components and grid digitalization. As a follow-up to our research field of interest, we are harnessing our studies to solve national problems in the field of cybersecurity for energy distribution in Jordan. This pilot research proposes and implements an RMF with selected security controls to mitigate risks at the AMI. The RMF provides a risk-based process that incorporates cybersecurity and privacy into the company's management activities, aiding organizational preparation of the processes and technology required to meet the requirements of the energy system's digital transformation. The goal is to install AMI security controls on the electric grid to secure a designated region.
The Study Preparation
This research is carried out with the agreement and authorization of the Electricity Distribution Company (EDCO), which provided the data, unclassified information, and resources for a limited pilot project aimed at RMF implementation. The preliminary kick-off meeting was headed by the company's former General Director and attended by the Director-General's deputies for administrative, technical, and planning affairs and the concerned CEOs.
Like many PDCs worldwide, EDCO faces electricity theft and bribery in electricity usage as its two most serious issues. The breadth, security, and privacy implications of these issues limited the scope of the study to a partial selection and implementation of RMF controls; other issues and controls are left for future collaborative work.
Accordingly, data and information were collected through several meetings with CEOs, department teams, and staff, and from reports published by national energy stakeholders, to lay the groundwork for this pilot project using the company's unclassified IT resources. EDCO has a mandate to distribute electrical energy purchased from the National Electric Power Company (NEPCO) to the southern part of Jordan, the Jordan Valley, and many rural areas in the country, operating different transmission and distribution voltages down to end-user facilities. The scope of this pilot project is restricted to investigating energy losses caused by AMI devices and network equipment in a selected area of the company's service coverage, as a model for developing and recommending an RMF to be approved and implemented in phases to mitigate loss risk.
Typically, energy distribution companies such as EDCO have massive amounts of data gathered in their electricity distribution network databases with which to monitor, control, and manage, among other issues, their smart grid electrical losses.
Electrical loss is caused by resistance to the flow of electric current in electrical networks and transformers. The loss is proportional to the square of the load current flowing through medium- and low-voltage networks, and is also affected by the voltage at which the networks operate: as the current increases, so does the electrical loss. Table 1 presents the real electricity loss rates on the medium- and low-voltage networks of the EDCO electricity distribution company (2018-2020) [12,13,15].
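The quadratic dependence of technical loss on load current can be illustrated numerically; the resistance and current values below are hypothetical, not EDCO network data.

```python
def line_loss_watts(current_amps, resistance_ohms):
    """Technical loss dissipated in a conductor: P = I^2 * R.
    Doubling the load current quadruples the loss, which is why
    heavily loaded MV/LV feeders lose a disproportionate share
    of the energy they carry."""
    return current_amps ** 2 * resistance_ohms

# Hypothetical feeder with 0.5 ohm total conductor resistance:
print(line_loss_watts(100, 0.5))  # 5000.0 W at 100 A
print(line_loss_watts(200, 0.5))  # 20000.0 W at 200 A (4x the loss for 2x the current)
```

This is also why operating networks at higher voltage reduces technical loss: delivering the same power at higher voltage requires less current.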
Table 1 and Fig. 2 show that the company's electric loss rate for 2018 and 2019 was 11.88%, higher than the 11.32% allowed without penalty by the Electricity Sector Regulatory Authority for the tariff period (2018-2019). They also show that in 2020 the electricity loss increased to 12.88%, again exceeding the allowed limits and resulting in financial losses. The loss rate on low-voltage networks increased from 8.2% in 2018 to 8.6% in 2019, while the loss on MV networks increased from 4.0% to 4.5% over the same period and reached 5.58% in 2020. The increased MV and LV losses reflect the impact of COVID-19.
FIGURE 2. Electricity loss rates on medium- and low-voltage networks
The company reported a loss rate of 12.88% in 2020, up from the previous year's 11.88%. The reasons for this increase are the circumstances of COVID-19 and the full and partial closures in the Kingdom of Jordan, which directly affected electrical loss rates and curtailed the implementation of plans to reduce non-technical losses, adding 0.5% to the loss. Network tampering and attacks rose from 569 cases in 2019 to 710 cases in 2020, a result of customers' behavior and difficult financial circumstances, as well as reduced monitoring, inspection, and detection due to closures and illness among company staff. Particularly during the first phase of the pandemic, this reduced the number of meters inspected and increased cases of tampering, especially given the company's inability to take any action during the stage of complete closure.
During 2019, the company made every possible effort to reduce electrical losses in all their forms through the following: • Identifying the electrical feeders and areas in which the electrical loss exceeded the performance indicators and determining the measures necessary to reduce losses on those feeders and regions. The company implemented several major projects to improve the performance of electrical networks and contribute to reducing electrical loss according to its loss reduction plan.
• To prevent tampering and misuse, the company carried out detection and inspection of subscriptions, as well as prosecutions of tampering cases, in cooperation with the Commission's judicial police and security authorities (Public Security and the Gendarmerie), and filed cases with the courts: 892 cases in 2019 and 882 cases in 2020.
Security Control
The recommendations presented here are a cornerstone for studying losses by specifying criteria for the cyber-physical and information networks in smart grids. Cybersecurity controls are protections, safeguards, or defensive measures (processes, protocols, applications, procedures, or other interventions) intended to protect a system or its resources from cyberattacks [17,18,24]. A further aim is to investigate and characterize the principles that define the cybersecurity controls selected for protecting the AMI of the smart grid, which are essential for RMF development and implementation in the allocated region.
Security controls can be classified as management, technical, and operational controls. Management controls address problems that management must deal with, focusing on security policy, preparation, guidelines, and requirements that shape the set of organizational and technological controls needed to reduce risk and protect the company's mission. Technical controls are exemplified by the proper use of device hardware and software protection capabilities: measures that work in parallel to protect critical and sensitive data, information, and IT system functions [16]; they establish automated protection against unauthorized access or misuse, help identify security breaches, and support application and data security standards. Operational controls are mainly concerned with processes implemented and executed by those responsible for a system's use; they aim to enhance the security of a specific system or group of systems and are often based on management and technical controls [25].
The smart grid is a complicated system of systems, and securing its mission of efficient power delivery requires more than one layer of security controls. Physical fences and surveillance cameras are only a few of the security controls available, as are encryption algorithms and digital certificates. Applying these security controls to the smart grid prevents the unauthorized disclosure of information, ensures that information has not been modified by an unauthorized source, and ensures that services and information are available when a user or system requires them. Confidentiality, Integrity, and Availability (CIA) are the priorities of every physical or information security program.
The fact that the smart grid includes both legacy and advanced technologies is an added challenge, making it difficult to define the specific security controls required and where to place them. The evolving nature of information flow in a smart grid network, as well as new applications for that information, further complicates the security landscape [26].
Security Controls for AMI
The installation of Smart Electricity Meters (SEMs) at both the client's end and substations is part of the smart grid's modern digitalization of energy usage and billing, controlled by the AMI. The AMI supports bidirectional data connectivity between SEMs and utilities, allowing the PDCs to develop a smart grid. The AMI is controlled and managed via instructions that transmit consumption data and pricing information in real time to both the utility company and the consumer. The SEM, customer gateway, communication network, and headend are all components of AMI networks. Energy-related data is recorded and communicated by SEMs, which are typically configured to record and supply customers' power usage and billing data at regular intervals, often every minute.
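Interval recording of this kind maps directly onto billing: summing fixed-interval power samples gives the energy consumed. A minimal sketch, where the helper name and sample values are illustrative assumptions:

```python
def daily_consumption_kwh(interval_watts, interval_minutes=1):
    """Convert a day's worth of fixed-interval power samples (W) into
    energy (kWh): each sample contributes P * interval/60 watt-hours."""
    watt_hours = sum(p * interval_minutes / 60 for p in interval_watts)
    return watt_hours / 1000

# Hypothetical day of one-minute samples at a constant 1 kW load:
print(daily_consumption_kwh([1000] * 1440))  # ~24.0 kWh
```

The same aggregation, run at the headend over readings from every SEM, is what makes billing-period totals and loss calculations possible.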
The client gateway connects the AMI network to customer systems and appliances. The AMI communications network acts as the link between the SEM and the AMI headend, permitting data to flow in both directions; such connections are normally implemented using Virtual Private Networks (VPNs), fiber, or wireless communication technologies owned, controlled, and managed by a third party. AMI threat sources and SEM attacks generally aim to manipulate data and are major cybersecurity concerns, since they jeopardize revenue and customer privacy and can also harm the overall operation of the power grid. Unfortunately, these attacks coexist with other criminal actions such as unauthorized procurement, sale, and manipulation of equipment by company employees; collusion with consumers, typically via third parties, to commit energy theft; and the illegitimate purchase and sale of reserved vouchers. The presence of compromised data in the AMI is an indicator of such activity.
As a result, the methods for protecting AMI data, similar to those used to secure data in general, are based on access control, analysis and feedback, authentication, authorization, availability, confidentiality, integrity, non-repudiation, privacy, and accountability.
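Several of these protections (authentication, integrity, and, combined with logging, non-repudiation of readings) are commonly realized with message authentication codes. The sketch below uses an HMAC over a meter reading; the key handling and message format are illustrative assumptions, not any specific AMI standard.

```python
import hashlib
import hmac

def sign_reading(secret_key: bytes, meter_id: str, kwh: float, ts: int) -> str:
    """Attach an HMAC-SHA256 tag to a reading so the headend can
    verify its integrity and origin, and detect tampering in transit."""
    msg = f"{meter_id}|{kwh}|{ts}".encode()
    return hmac.new(secret_key, msg, hashlib.sha256).hexdigest()

def verify_reading(secret_key, meter_id, kwh, ts, tag) -> bool:
    expected = sign_reading(secret_key, meter_id, kwh, ts)
    return hmac.compare_digest(expected, tag)

key = b"per-meter-secret"  # illustrative: provisioned per meter at installation
tag = sign_reading(key, "SEM-042", 12.5, 1700000000)
print(verify_reading(key, "SEM-042", 12.5, 1700000000, tag))  # True
print(verify_reading(key, "SEM-042", 2.5, 1700000000, tag))   # False: value tampered
```

Including a timestamp in the signed message also gives the headend a hook for rejecting replayed messages, one of the attack methods noted earlier.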
Confidentiality, integrity, availability, and non-repudiation (CIAN) are the four basic AMI requirements. However, the continued satisfaction of CIAN is jeopardized by cyberattacks, which usually seek to disrupt the AMI for energy theft. Risks to the AMI's CIAN can be mitigated in two ways: by ensuring the AMI's security requirements are maintained and automatically restored after any security breach, and by diligent identification of threat sources [18]. Finally, relevant models and algorithms should be used to manage the parameters of the required systems. It is worth noting that an indication of a violation or threat means that not all of the CIAN requirements have been met; additionally, analyzing the threats and attackers offers further useful information for proper device management and surveillance [24].
A control is defined as an operation, process, method, or other measure that eliminates damage by avoiding or stopping a security breach, mitigating the risk it can inflict, or finding and revealing it so that corrective action can be taken.
When analyzing the smart grid and its security to identify smart grid security controls, it is crucial to determine what needs to be protected and why that protection is so critical. The global power grid comprises various technologies and components, and electric utilities have evolved several business practices to ensure the reliable delivery of electricity [19].
The AMI Security Profile provides a collection of baseline controls for safeguarding AMI components. The controls are the outcome of a four-phase procedure: 1) assessment of smart grid use cases, 2) risk assessment, 3) domain analysis, and 4) analysis and adaptation of controls specified by national authorities. The collection of security measures is comprehensive: aside from its definition, each control includes a rationale for adoption and, where applicable, future improvements or supplementary guidelines [24].
Risk Management Framework Methodology
The research methodologies used for cybersecurity risk were the study of documents of statutory norms, international standards, procedures, and international legislation, content analysis, comparative methods, and statistical and graphical presentation methods. Information gathering and the collection of working data were carried out using statistics from operational plans, risk analyses, and operational procedures. This contribution is the result of applying the above methods in the form of a pilot project research methodology. This interactive research methodology has two parts, investigation and achievement, to establish the practice's progress based on the learning of individuals and workgroups; the pilot study gives the research team and others better insight into the problem and potential solutions. NIST Special Publications 800-53 Rev. 4 and Rev. 5, which govern the security and privacy of information systems, are a continuously updated standards and compliance framework that flexibly defines standards, controls, and assessments based on risk, capabilities, and cost-efficiency [31]. NIST SP 800-53 Rev. 4, NIST SP 800-82, and the AMI Security Profile, together with the associated practice standards and controls for risk reduction, provided the theoretical basis for the proposed risk management approach. The rules selected to perform a quantitative risk analysis, plan risk responses, and apply controls to mitigate risks for this pilot project are presented in Table 2.
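The quantitative risk analysis step can be sketched as likelihood x impact scoring used to rank threats before selecting controls; the scales and the example threat register below are illustrative assumptions, not the pilot project's actual data.

```python
def risk_score(likelihood, impact):
    """Classic quantitative risk scoring: risk = likelihood x impact,
    both on a 1-5 ordinal scale, used to prioritize which controls
    to plan responses for first."""
    return likelihood * impact

# Hypothetical threat register: (likelihood, impact) on 1-5 scales.
threats = {
    "meter tampering / energy theft": (4, 4),
    "network jamming":                (2, 3),
    "unauthorized headend access":    (2, 5),
}
ranked = sorted(threats, key=lambda t: risk_score(*threats[t]), reverse=True)
for name in ranked:
    print(name, risk_score(*threats[name]))
```

The ranking then feeds control selection: the highest-scoring threats are matched against candidate control families first, within the available budget.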
This section presents the pilot project's chosen security control guidelines from the Industrial Automation and Control Systems (IACS) security measures and NIST. IACS is an important part of the smart grid because it tracks and controls industrial processes across the entire power supply chain, from generation to distribution. As shown in Table 2, their safety is critical to the proper functioning of the power grid. Although this pilot project does not include all of the smart grid's command and control areas, the security principle remains the same: when implementing a control command, the control system must know that the command is transmitted from an authorized and authenticated source [24].
System and information integrity
Security measures and practices specific to IACS are listed in Table 3, which shows general implementation standards for security controls and procedures along with their IACS, smart grid, and AMI adoption.
NIST SP 800-53 rev4
NIST SP 800-53, Security and Privacy Controls for information systems and organizations, lays out a foundation of controls for securing government information systems, based on a variety of statutory and regulatory documents, guidelines, and business criteria. Policy formulation and management, awareness and training, contingency planning, incident response, staff protection, systems procurement, and other security aspects are addressed by the controls, which are organized into 18 families representing distinct security topics. Furthermore, a substantial portion of the document illustrates the control selection process, which can be used as part of a risk management strategy.
NIST SP 800-82
Key protection priorities identified in NIST SP 800-82 include limiting physical access to IACS networks; restricting logical access to IACS networks (e.g., through network isolation, DMZs, multilayer architectures, and access control); protecting against vulnerabilities; detecting security incidents; maintaining a multidisciplinary security unit; effective networking and information sharing; fault tolerance; graceful degradation; device restoration; and defense-in-depth. Accordingly, an IACS defense strategy can include IACS-centric policies and procedures, knowledge and training, security across the life cycle of IACS components (from design to disposal), a multi-layered network with critical operations performed in the most protected subnetwork, and other elements derived directly from these security objectives. The document discusses each of these points in detail.
Electricity Regulatory Framework in Jordan
Jordan has laws and regulations governing the responsible use of nuclear energy and electricity in general. The law on the safe use of nuclear energy, issued in April, has provisions for authorization, disposal of radioactive material, emergency preparation requirements, administrative sanctions, inspection, safeguards, safety responsibilities, liabilities and punishment, enforcement, and physical protection. It is important to follow the stipulated regulations when dealing with nuclear energy sources, given their toxicity and lethality if not handled well. These regulations are enforced by the Jordan Nuclear Regulatory Commission, established in 2007. The rules used to regulate the country's nuclear energy sector follow the IAEA safety standards and those of the EU, USDOE, CNSC, IRSN, and KINS to ensure the responsible use of nuclear energy [14].
Under the General Electricity Law, illegal use of the electrical system, unlawful connection, stealing electrical power, or assisting a person in such activities results in imprisonment of six months to two years, a fine of no less than 2,000 dinars and no more than 10,000 dinars, or both imprisonment and a fine. Sabotage results in imprisonment of one month to one year, a fine of no less than 500 dinars and no more than 2,000 dinars, or both. Negligence results in imprisonment of one week to three months, a fine of no more than 500 dinars, or both.
These general electricity laws are enforced with the help of the EMRC. Electricity tariffs, payment fees, service fees, disbursements, royalties, and connection charges to the transmission and distribution system are all determined by the Electricity Regulatory Commission, instituted in 2001.
Results and conclusion
The RMF framework proposed in this pilot project results from an actual initiation process in which the company team and the researcher (the research team) worked together deliberately, using three years of real loss data. The RMF, the assessment processes for cybersecurity-related threats, the mitigation management methodologies, and active project team participation all helped the study succeed in adjusting the risk management methodology. Because of the pilot project's limitations, development and implementation are restricted to general application requirements, measures, and procedures that specify cybersecurity areas and controls, adopting the smart grid and AMI Security Profile summarized in Appendix A [29]. Adapted from NIST SP 800-53, Appendix A presents a set of guidelines for conducting security and privacy control assessments for information systems. We recommend systematic assessments, performed in phases, for the system implementation, using the access control family catalog procedure to assess the security controls and control enhancements in the NIST Special Publication 800-53 Rev. 4 and Rev. 5 security and privacy controls. The implemented procedures are adaptable and customizable, allowing the company to conduct security and privacy control assessments that support internal risk management processes, remain coherent with the company's acceptable risk tolerance, and aim to produce effective protection and privacy evaluation plans.
Future Work
The recent 2021 annual report of the National Electric Power Company (NEPCO) stated that the company's accumulated losses amounted to JD 5,135,023,755 as of 31 December 2021, exceeding 75% of the paid-in capital. This report is another strong motivation for the researcher to pursue this vital research challenge, which requires continuous risk assessment, identification, and mitigation. Jordan's PDCs, undergoing digital transformation activities, also ought to improve their cybersecurity strategies. A Risk Management Plan is needed to guide the project team and managers during the implementation and development of the RMF's cyclical process, incorporating principles of security and risk management into the organization's system policies and procedures. A defined RMF Plan document should collect all the information necessary for the researcher to manage the relevant risks, including the RMF objectives and tolerances; the identified methods, strategies, and procedures to detect and assess risks, plan responses, and monitor and control risks; and the models to be used. To expand the RMF's development and implementation with continuous improvement, additional information from the company team may be required to identify security requirements and support the systematic asset assessment that RMF practice requires; engaging team members in the phases of the risk management process lets them see their contribution to the improvements. A risk monitoring and control process is needed for implementing risk mitigation plans, controlling identified risks, monitoring risks, identifying potential risks, and evaluating the efficacy of the risk management systems put in place. A further direction is to study the SCADA security risks associated with data communication networks that use Virtual Private Network (VPN) connectivity to connect the AMI and SEM grids. VPN data communication security on public networks is based on the CIA triad concept in network security; it remains to investigate when VPNs may be used, and when to recommend them for protected, anonymous communication. SCADA, OT (Operational Technology), and IT (Information Technology) systems have also become increasingly interconnected, creating new cybersecurity threats and vulnerabilities.
Research limitations
As predicted, some challenges arose during the pilot project's implementation and the creation of the RMF methodology, owing to the adoption of a new practice. These challenges stemmed from the following factors: novelty; the timing of the study during COVID-19, with staff working remotely from home; restricted time; and limited public awareness of the importance of cybersecurity and risk management. As an innovative approach, the suggested techniques, methods, and procedures placed additional demands on the project team. We thank the manager, deputies, and engineers for their support and assistance in providing the data and information needed to complete this project.
AT-1(a)(2): [1] develops and documents procedures to facilitate the implementation of the security awareness and training policy and associated awareness and training controls; [2] defines personnel or roles to whom the procedures are to be disseminated; [3] disseminates the procedures to organization-defined personnel or roles.
AT-1(b)(1): [1] defines the frequency to review and update the current security awareness and training policy; [2] reviews and updates the current security awareness and training policy with the organization-defined frequency.
AT-1(b)(2): [1] defines the frequency to review and update the current security awareness and training procedures; [2] reviews and updates the current security awareness and training procedures with the organization-defined frequency.
AT-2(b): provides basic security awareness training to information system users (including managers, senior executives, and contractors) when required by information system changes.
AT-2(c): [1] defines the frequency to provide refresher security awareness training to information system users (including managers, senior executives, and contractors); [2] provides refresher security awareness training to information system users (including managers, senior executives, and contractors) with the organization-defined frequency.
AU-6(a): [1] defines the types of inappropriate or unusual activity to look for when information system audit records are reviewed and analyzed; [2] defines the frequency to review and analyze information system audit records for indications of organization-defined inappropriate or unusual activity; [3] reviews and analyzes information system audit records for indications of organization-defined inappropriate or unusual activity with the organization-defined frequency.
AU-6(b): [1] defines personnel or roles to whom findings resulting from reviews and analysis of information system audit records are to be reported; [2] reports findings to organization-defined personnel or roles.
POTENTIAL ASSESSMENT METHODS AND OBJECTS:
Examine: [SELECT FROM: Audit and accountability policy; procedures addressing audit review, analysis, and reporting; reports of audit findings; records of actions taken in response to reviews/analyses of audit records; other relevant documents or records].
Interview: [SELECT FROM: Organizational personnel with audit review, analysis, and reporting responsibilities; organizational personnel with information security responsibilities].
FAMILY: IDENTIFICATION AND AUTHENTICATION
IA-2 IDENTIFICATION AND AUTHENTICATION (ORGANIZATIONAL USERS)
Assessment Objective: Determine if the information system uniquely identifies and authenticates organizational users (or processes acting on behalf of organizational users).
POTENTIAL ASSESSMENT METHODS AND OBJECTS:
Examine: [SELECT FROM: Identification and authentication policy; procedures addressing user identification and authentication; information system design documentation; information system configuration settings and associated documentation; information system audit records; list of information system accounts; other relevant documents or records].
Interview: [SELECT FROM: Organizational personnel with information system operations responsibilities; organizational personnel with information security responsibilities; system/network administrators; organizational personnel with account management responsibilities; system developers].
Test: [SELECT FROM: Organizational processes for uniquely identifying and authenticating users; automated mechanisms supporting and/or implementing identification and authentication capability]
IA-3 DEVICE IDENTIFICATION AND AUTHENTICATION
Assessment Objective: Determine if IA-3 [1] the organization defines specific and/or types of devices that the information system uniquely identifies and authenticates before establishing one or more of the following: IA-3 [2][a] a local connection; IA-3 [2][b] a remote connection; and/or IA-3 [2][c] a network connection.
POTENTIAL ASSESSMENT METHODS AND OBJECTS:
Examine: [SELECT FROM: Identification and authentication policy; procedures addressing device identification and authentication; information system design documentation; list of devices requiring unique identification and authentication; device connection reports; information system configuration settings and associated documentation; other relevant documents or records].
Interview: [SELECT FROM: Organizational personnel with operational responsibilities for device identification and authentication; organizational personnel with information security responsibilities; system/network administrators; system developers].
Test: [SELECT FROM: Automated mechanisms supporting and/or implementing device identification and authentication capability].
Table 1. Electricity loss rates on medium and low voltage networks
Table 2. Cybersecurity NIST controls specified in power systems' standards.
Table 3. General application standards within an AMI smart grid security control
Spatial Variation of Extreme Rainfall Observed From Two Century‐Long Datasets
This paper presents the spatial variation of area‐orientated annual maximum daily rainfall (AMDR), represented by well‐fitted generalized extreme value (GEV) distributions, over the last century in Great Britain (GB) and Australia (AU) with respect to three spatial properties: geographic locations, sizes, and shapes of the region‐of‐interest (ROI). The results show that the spatial variation of GEV location‐scale parameters is dominated by geographic locations and area sizes. In GB, there is an eastward‐decreasing banded pattern compared with a concentrically increasing pattern from the middle to coasts in AU. The parameters tend to decrease with increased area sizes in both studied regions. Although the impact of the ROI shapes is insignificant, the round‐shaped regions usually have higher‐valued parameters than the elongated ones. These findings provide a new perspective to understand the heterogeneity of extreme rainfall distribution over space driven by the complex interactions between climate, geographical features, and the practical sampling approaches.
1. How do areal rainfall extremes change over space?
2. How may other factors, such as the size and shape of the area in question, affect such spatial dependencies?
3. How are the spatial patterns and variations linked to the large-scale climatology of rainfall?
4. What is the implication of the spatial variation of the parameters for applications (e.g., FRM)?
Additionally, a toolbox known as the spatial random sampling for grid-based data analysis (SRS-GDA, Wang & Xuan, 2020), is employed to assist the required spatial sampling with predefined or randomized features, for example, size, location, and dominant orientation of ROIs, from the grid-based datasets. The sampled annual maximum daily rainfall (AMDR) of each ROI is fitted with the widely used and tested Generalized Extreme Value (GEV) distributions whose spatial variation is then analyzed. The associated intensive computation demand is met by the high-performance computing (HPC) resources provided by Supercomputing Wales (https://www.supercomputing.wales).
The remainder of this paper is organized as follows: Section 2 describes the data and methods, followed by the sampled ROIs and the goodness-of-fit (GOF) results. Both the qualitative and quantitative results of the spatial variation of the distribution parameters, together with a further application, are discussed in Sections 3 and 4, and their linkage to the climatology of rainfall is discussed in Section 5. The conclusions and recommendations for further study are given in Section 6.
Datasets
This study makes use of two century-long datasets: the "gridded estimates of daily areal rainfall" (GEAR) and the "Australian Data Archive for Meteorology" (ADAM). The GEAR dataset is a grid-based (1 × 1 km²) rainfall estimation that covers the mainland of GB from 01/01/1898 to 31/12/2010. It is derived from the UK Met Office national database of observed precipitation from the UK rain gauge network, using the natural neighbor interpolation method. The coordinates are in the National Grid Reference (Ordnance Survey, 1946), which is a projected map coordinate system with the easting (x-) and northing (y-) expressed in linear kilometers (Tanguy et al., 2016). The ADAM dataset is generated using a sophisticated analysis technique described in Jones et al. (2009); it is also grid-based (0.05 × 0.05°, ~5 × 5 km²) rainfall from January 1, 1900 to December 31, 2018 over AU, based on the Geocentric Datum of AU 1994 (GDA94; Collier, 2002) with the origin (44°S, 112°E), that is, (0,0), and easting (x-) and northing (y-) transformed to kilometers. The recorded rainfall values are provided as daily rainfall, that is, the total rainfall amount over a predefined 24-h (9:00-9:00 a.m.) period, which refers to the 24 h prior to the reporting time for the ADAM dataset and the 24 h after for the GEAR dataset.
Methodology
The geographical areas of the two data domains, i.e., GB and AU, are sampled into a series of regions-of-interest (ROIs) using the SRS-GDA toolbox. Three predefined spatial features (i.e., geographical locations, sizes, and shapes) are applied during the sampling process to reduce the overall computing time while maintaining the representativeness of the samples. The block maxima series, that is, the AMDR of each sampled ROI, is then extracted and further fitted by a proper probability distribution. In this study, the three-parameter GEV distribution is chosen as the candidate distribution, whose GOF is evaluated. The parameters (μ, σ, and ξ) of the fitted distributions are then analyzed with regard to their spatial distribution, referring to the large-scale climatology of rainfall variations.
ROI Generation and AMDR Extraction
The sampling starts with an initial set of uniformly distributed ROIs whose locations are represented by the coordinates of their geometric centroids. At each location, seven ROI shapes are produced, indicated by their distinctive spatial indexes (Wang & Xuan, 2020) and reciprocally grouped as 0.2/5.0, 0.5/2.0, 0.8/1.25, and 1.0, respectively. The size of these ROIs is then gradually increased in 10 steps with a 20% increment each, while maintaining the same shape and location. In the end, the largest ROI sizes are 1,050 km² for GB and 9,900 km² for AU, respectively.
The SRS-GDA toolbox used to generate the ROIs is set up in a way that only one spatial feature is allowed to vary at a time. For instance, to obtain the ROI samples of G2 and A2 in Table 1, the centroid locations and shape are kept unchanged while generating 10 ROIs of different sizes. For each ROI, its daily areal rainfall is calculated by taking the arithmetic average over the covered grids (i.e., a spatial average); then the block annual maxima are picked to generate a time series of AMDR. This work was carried out on the HPC, processing around 642.3 GB of data with 11,011 areal AMDR series produced.
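The ROI sampling and AMDR extraction steps above can be sketched numerically. Everything in this snippet (the toy gamma-distributed rainfall grid, the rectangular ROI footprints, the function name) is an illustrative assumption and not the SRS-GDA toolbox API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy daily rainfall grid: 20 years x 365 days on a 25 x 25 km domain
# (1 km cells), standing in for a grid-based dataset such as GEAR.
years, days, ny, nx = 20, 365, 25, 25
rain = rng.gamma(shape=0.4, scale=6.0, size=(years, days, ny, nx))

def areal_amdr(rain, y0, x0, height, width):
    """Areal AMDR for a rectangular ROI: spatially average the covered grid
    cells each day, then take the block (annual) maximum of the daily series."""
    roi = rain[:, :, y0:y0 + height, x0:x0 + width]
    daily_areal = roi.mean(axis=(2, 3))      # spatial average -> (years, days)
    return daily_areal.max(axis=1)           # annual maxima   -> (years,)

# Two ROIs of equal area (64 km^2): elongated (4 x 16) vs rounded (8 x 8),
# loosely corresponding to spatial indexes of ~0.25 and 1.0.
amdr_elongated = areal_amdr(rain, y0=10, x0=4, height=4, width=16)
amdr_rounded = areal_amdr(rain, y0=8, x0=8, height=8, width=8)
```

Averaging over more (and more heterogeneous) cells damps the daily areal series, which is the mechanism behind the paper's finding that larger and more elongated ROIs yield smaller location-scale parameters.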
Fitting the Extracted AMDRs Using GEV Distribution
The GEV distribution is one of the most well-founded distributions for describing normalized maxima from a sequence of independent, identically distributed random variables, for example, the block maxima of annual rainfall series. It has been applied to characterize both gauged (Feng et al., 2007; Westra et al., 2013) and grid rainfall datasets (Overeem et al., 2010). A GEV distribution is controlled by three parameters, namely, the location μ, the scale σ, and the shape ξ, which defines the three limiting types: the Gumbel (ξ = 0), the Fréchet (ξ > 0), and the reversed Weibull (ξ < 0). The AMDR (denoted as x) of each ROI is fitted by a GEV distribution whose cumulative probability distribution function is as follows:

F(x; μ, σ, ξ) = exp{−[1 + ξ(x − μ)/σ]^(−1/ξ)}, for 1 + ξ(x − μ)/σ > 0.

The L-moment method (Hosking, 1985) is employed to estimate the parameters of the GEV.
However, the GEV distribution has chiefly been used to fit point rainfall extremes, as reported in many studies (Schaefer, 1990; Yoon et al., 2013), and very few studies have examined the suitability of fitting areal grid-based rainfall extremes with the GEV. Thus, in this study, the GOF is tested using a bootstrapped version of the Kolmogorov-Smirnov (KS) and Anderson-Darling (AD) tests, and the L-moment ratio diagrams (Text S1). Out of the AMDRs from all ROIs tested (1,416 ROIs of GB and 9,595 of AU), the results show that the GEV distribution fits the AMDR series well, with a 100% pass rate for the KS test and more than 97% for the AD test. This is also supported by the L-moment ratio diagrams (Figure S1c), which compare the fitted GEV distributions with the statistical characteristics of the AMDR itself.
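A minimal sketch of the fit-and-test step, assuming SciPy's `genextreme` parameterization (its shape parameter is c = −ξ, so c < 0 corresponds to the Fréchet type favored by most ROIs) and a synthetic stand-in for an AMDR series. The bootstrapped KS test refits each synthetic sample so that the null distribution of the statistic accounts for parameter estimation; the replicate count here is kept small for brevity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Stand-in AMDR series (113 "years"); c = -0.15 means xi = 0.15 (Frechet type).
amdr = stats.genextreme.rvs(c=-0.15, loc=30.0, scale=8.0, size=113,
                            random_state=rng)

# Fit by maximum likelihood and compute the observed KS statistic.
c_hat, loc_hat, scale_hat = stats.genextreme.fit(amdr)
ks_obs = stats.kstest(amdr, "genextreme",
                      args=(c_hat, loc_hat, scale_hat)).statistic

# Parametric bootstrap of the KS statistic under the fitted GEV.
ks_boot = []
for _ in range(100):
    sample = stats.genextreme.rvs(c_hat, loc=loc_hat, scale=scale_hat,
                                  size=amdr.size, random_state=rng)
    c_b, loc_b, scale_b = stats.genextreme.fit(sample)
    ks_boot.append(stats.kstest(sample, "genextreme",
                                args=(c_b, loc_b, scale_b)).statistic)

# Bootstrap p-value: fraction of replicates at least as extreme as observed.
p_value = float(np.mean([b >= ks_obs for b in ks_boot]))
```

A large p-value means the KS statistic for the data is unremarkable under the fitted GEV, i.e., the fit "passes" in the sense used in the text.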
Analyzing the Spatial Distribution of the Location-Scale Parameters
The spatial variation of the location and scale parameters of the fitted GEV distributions is analyzed both qualitatively and quantitatively. Instead of using full spatial coordinates to represent the geographical locations, a univariate spatial-location representation is adopted in this study. This procedure is briefly described below:
1. The chosen GEV parameter is aggregated meridionally, for example, over all ROIs that have the same x-direction (easting or longitude) coordinate.
2. The aggregated GEV parameter values are indexed by their x-direction-only coordinates, which are then used as an input variable to represent the geographical locations.
3. The same procedure is also applied zonally, that is, over the same y-direction coordinate.
With this arrangement, the meridional or zonal average of the GEV parameter in question is taken as the response variable. In AU, a concentric pattern is found, where both the meridional and zonal averages show similar results (Figure S6); for the GB case, only a strong west-east pattern exists. Therefore, for the convenience of comparison, the meridional average is chosen for both cases. The spatial variation of ξ is less significant over space and thus is not considered (Figures 1 and S7).
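The univariate spatial-location representation amounts to a group-by over the easting coordinate. A minimal pandas sketch, with invented parameter values standing in for the fitted GEV parameters:

```python
import pandas as pd

# Toy table of fitted GEV parameters per ROI (hypothetical values).
rois = pd.DataFrame({
    "x_km": [100, 100, 200, 200, 300, 300],   # easting of ROI centroid
    "y_km": [500, 650, 500, 650, 500, 650],   # northing of ROI centroid
    "mu":    [42.0, 45.0, 33.0, 35.0, 28.0, 29.0],
    "sigma": [11.0, 12.0,  8.0,  9.0,  7.0,  7.5],
})

# Meridional average: collapse over all ROIs sharing the same easting,
# so the x-coordinate alone indexes geographic location.
meridional = rois.groupby("x_km")[["mu", "sigma"]].mean()
```

The zonal counterpart is the same operation grouped on `y_km` instead.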
Finally, a generalized linear model (GLM) is fitted to quantify the relationship between the GEV parameters and the associated spatial features of the ROIs, that is, to explicitly model the spatial variation of the GEV parameters with respect to the locations, sizes, and shapes of the underlying ROIs. A k-fold cross-validation (Efron & Tibshirani, 1997) and variograms (Cressie & Hawkins, 1980) are employed to evaluate the performance of the GLMs. Figures 1a and 1b present the histograms and spatial variations of the three GEV parameters of all ROIs in both GB and AU, where the following patterns can be clearly identified:
GEV Parameter Variation Over Geographical Locations
• Most ROIs favor the Fréchet type of distribution (ξ > 0).
• Both μ and σ present a similar spatial pattern, where a higher μ is usually accompanied by a higher σ.
In GB, the values of μ and σ in the western region, especially those in the coastal area, are much larger than those in the east. Such a west-east gradient is also strong in the west, indicated by much denser contours. However, there is no remarkable variation from south to north, even though both μ and σ in Scotland are higher. As such, the meridional average is thought to better reveal this eastward pattern, which can be described as "west high, east low" with an apparently nonlinear variation.
Both μ and σ in AU have a clear increasing trend from the south-middle zone to the coastal regions. This spatial pattern can be seen as a series of concentric circles. It is also noted that the rapid variations are close to the north-eastern coastal regions. The decreases in both μ and σ with increased ROI sizes have an important implication: the most frequent AMDR (relating to μ) becomes smaller for larger ROIs, alongside an overall decreased extremity (relating to both parameters). Another interesting measure is the rate of such reduction (RR) as the size of the ROI increases, which also shows a clear spatial dependency. In AU, the RR remains low in the central desert zone (e.g., from Easting 300 to 360 km) and increases near the coastal areas where large parameter values are also found. This feature can be explained by the fact that regions receiving more extreme rainfall (e.g., the coastal regions in AU) are not only manifested by the higher μ and σ, but also have more heterogeneous rainfall than regions with less extreme rainfall (lower μ and σ). Therefore, the changes of μ and σ are more sensitive to geographic locations, as revealed by the RR. GB also shows a similar pattern, albeit not as remarkable. Figures 1e and 1f present the changes of μ and σ of all meridional groups in GB and AU, parameterized by the ROI shape (sp). The variation of the shape starts from west-east orientated (sp = 0.2), gradually growing into more rounded (sp = 1.0), then to north-south orientated (sp = 5.0) ones. Two shapes with reciprocal sp values will have their major dimension swapped, that is, east-west versus south-north and vice versa. The results are inspected and summarized as follows:
Variation of GEV Parameters Due to the Area Shape
• For the majority of the meridional groups, there is little difference between the location-scale parameters of ROIs with reciprocal shapes, for example, two shapes with sp values of 0.2 and 5.0. This is regarded as a symmetric pattern around sp = 1.0.
• Generally, μ and σ of ROIs with an elongated shape are smaller than those of ROIs with more rounded shapes. This indicates that rounded-shape ROIs have a better chance of capturing more rainfall extremes than elongated ones. It also implies that, for the same area size, regions with more regular shapes tend to have more extreme areal rainfall.
• Overall, the effects of ROI shape are insignificant.
Quantification of Spatial Variation
Generalized Linear Models (GLMs) are based on an extension to the classical linear regression model, having been widely applied in hydrometeorology (Coe & Stern, 1982;Stern & Coe, 1984) and shown to be effective in terms of incorporating complex structures (Segond et al., 2006). Since Chandler and Wheater (2002) proposed a GLM-based framework for interpreting historical daily rainfall records and revealing the changes on rainfall occurrence and amount in western Ireland, many more applications have followed, for example, Yan et al. (2002), Yang et al. (2005), and Rashid et al. (2013) with good performance reported.
As the two meridionally averaged parameters μ and σ, which reflect the properties of rainfall extremes, are shown to follow similar right-skewed gamma distributions (Figure S2a), we broadly followed Chandler and Wheater (2002) and proposed a GLM with a log-link to quantify the spatial variation. The fitting starts from the simplest prediction form, and then adds other predictors or their combinations successively, for example, x × s, s × sp, or x × sp (James, 2002). To determine whether to keep a newly added predictor/combination, the significance of the new term is evaluated by calculating the log-likelihood at the significance level of 0.05 (detailed in Text S2). Finally, an optimal form of the GLMs is obtained by considering both the log-likelihood and the discrepancy (e.g., root-mean-squared-error), where the subscripts of β refer to the study case in question. A maximum likelihood estimator (McCullagh, 2018) was employed to obtain β (see the estimations in Table S1). These fitted GLMs are visualized in Figures 2a and 2b, which show intriguing spatial features: the north-south variation in AU is in general smaller than that of the east-west direction. Besides, the combined term (x × s) is significant, which means that the RR of the parameters with respect to ROI size varies at different geographic locations. It is manifested by the uneven vertical gaps between contours in the right panel of Figure 2b. The performance of the GLMs is evaluated by comparing the parameter values modeled by the GLMs and those from the originally fitted GEVs (Figure 2c), where in the GB case there are slight underestimates for some large values that appear in the western coastal region, and in the AU case, some overestimates occur for the small values located in the middle-south dry zone.
The GLM probability structure is checked by a residual analysis (McCullagh, 2018; Pierce & Schafer, 1986; Wang, 1987), whose results are shown in Figure 2d with a theoretical normal distribution on the x-axis and the residual quantiles on the y-axis. If the probability assumption (i.e., the gamma assumption) is correct, all residuals should have the same distribution, which is approximately normal. The distribution of the residuals of the four GLMs is symmetric with two flat sides. Generally, the approximation fits well except for the upper side, which represents only 0.9% of the total data points.
In view of the research aims, this is considered acceptable. The locative continuity in adjacent ROIs was compared using variograms (see Text S4), which show highly similar spatial correlations between the parameters modeled by the GLMs and those from the originally fitted GEVs for both the GB and AU cases. The prediction skill of the fitted GLMs is tested using a k-fold cross-validation (k = 10) method, which produces average Nash-Sutcliffe efficiency (NSE) coefficients across all random partitions of the four GLMs of 0.97, 0.89, 0.86, and 0.62 (see Text S3).
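The NSE-scored k-fold validation can be sketched as follows. A least-squares line stands in for the GLM, and the fold logic (random partition, refit on the training folds, score on the held-out fold) follows the procedure described above; all numbers are synthetic.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of residual variance
    to the variance of the observations around their mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kfold_nse(x, y, fit, predict, k=10, seed=0):
    """Average NSE over k random partitions, as used to score the GLMs."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    scores = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)          # complement of the held-out fold
        params = fit(x[train], y[train])
        scores.append(nse(y[fold], predict(x[fold], params)))
    return float(np.mean(scores))

# Example: a simple least-squares line standing in for the GLM.
rng = np.random.default_rng(3)
x = np.linspace(0, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(0, 0.2, x.size)
fit = lambda xs, ys: np.polyfit(xs, ys, 1)
predict = lambda xs, p: np.polyval(p, xs)
score = kfold_nse(x, y, fit, predict)
```

NSE equals 1 for a perfect model, 0 for a model no better than the mean of the observations, and can be negative for a worse one.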
These quantitative findings regarding the spatial variation of the GEV parameters have two important implications for many downstream applications of the areal rainfall maxima, for example, FRM. On one hand, the traditional approach in FRM makes use of point rainfall maxima to represent the areal one (a catchment or a predefined area), where a scaling factor is involved. This simplistic treatment ignores the complexity of the spatial distribution, nor can it account for the interplay of the size, location, and shape of the area in question, as revealed above. On the other hand, the overall quantification of the spatial variation of the GEV parameters (hence, the return values) makes it possible to study FRM at the country level as a single entity instead of looking at individual regions in isolation. It also helps to gain insights into how large-scale hydroclimatic (rainfall) variation can affect local FRM, which is very important for studying FRM under climate change impact.
Link to the Large-Scale Climatology of Rainfall Variation
The GEV distribution parameters, which can reveal the characteristics of extreme rainfall in terms of both its amount and occurrence probability, are shown to have a strong spatial dependency, as discussed in the previous sections. To understand how such spatial variation of the extreme rainfall is related to the climatology of rainfall variation, areal annual rainfall (AAR) time series from each ROI were obtained. The mean and standard deviation (SD) of the AAR series were then compared with the GEV parameters μ and σ of the AMDR series extracted from the same ROIs. To visualize the link, the spatial continuity of the corresponding parameters from both AAR and AMDR was represented by their variograms (see Figure S5), which show very little difference in locative continuity. Figure 3 demonstrates a great deal of similarity existing in space between the daily maxima time series (i.e., the AMDR) and the cumulative annual rainfall (i.e., the AAR). For example, regions with a higher mean of AAR are not only represented by a higher SD (e.g., the circles located in west Scotland and west Wales of GB and in the north-eastern coastal regions of AU, appearing more reddish and larger), they are also associated with higher GEV parameters of AMDR, and appear to be more heterogeneous. This feature also exists in regions with low and more even annual rainfall distribution, but works in the opposite way (e.g., circles located in middle and eastern England of GB and the middle-north zone of AU are all more bluish and smaller). These findings are consistent with those published in the series of climate reports of GB (M. Kendon
Conclusions
This paper presents a study on the spatial variation of extreme rainfall using two century-long datasets covering GB and AU. The AMDR series extracted from regions-of-interest (ROIs, 11,011 in total) with various spatial properties (location, size, and shape) are individually fitted with GEV distributions whose parameters are then analyzed over space. Four GLMs are developed to quantify these variations by involving the effects of geographical location, area size, and shape. From the results discussed previously, the following conclusions can be drawn:
1. The GEV distributions are shown to model well the grid-based areal AMDR for both GB and AU; more than 90% of the regions are better fitted with the Fréchet distribution among the three GEV types.
2. The GEV location (μ) and scale (σ) parameters present similar spatial patterns, where a higher μ is usually accompanied by a higher σ, indicating that regions with a higher amount of most frequent rainfall often observe a higher occurrence probability of extremes.
3. Geographic location is the most significant factor affecting the two GEV parameters. The spatial pattern in GB is an eastward-decreasing banded pattern with no significant difference along the north-south direction. In AU, a concentrically increasing pattern from the middle-south zone to the north-east coasts is found.
4. Increasing the region size will decrease both parameters, which means a decrease of the most frequent AMDR amount and of the occurrence probability of extremes. However, in AU, the rate of such decrease varies with regions, as the combined impact of ROI location and size is also detected to be significant.
5. Compared with other spatial properties, the shape of the ROI is detected as insignificant, even though a symmetric pattern is found for regions with reciprocal spatial indexes. Also, regions of more elongated shapes tend to have smaller parameter values in contrast with those having regular/rounded shapes.
These findings offer a new quantitative insight into understanding the spatial variation of the large-scale climatology of rainfall. The quantification of extreme rainfall and its spatial dependencies is of great practical value in engineering design, for example, design rainfall/floods for constructions. The methods employed in this study are specifically designed for large grid-based datasets, and thus can be readily applied to climate projections for evaluating the spatial heterogeneity of climate change impact, such as flooding and droughts. It should be noted that the quality of the underlying datasets, which have undergone a series of quality control measures, may still introduce a large amount of uncertainty, which should be addressed in further work. Additionally, the impact of the density of the underlying data observations, that is, rain gauges, and its variation over the long term also needs to be further studied.
Data Availability Statement
The open-source toolbox of spatial random sampling for grid-based data analysis (SRS-GDA toolbox, https://doi.org/10.5281/zenodo.4044626), developed by the authors, was used in this study.
Quantum measurement schemes related to flavor-weighted energies
Reporting on the density matrix theory for a composite quantum system of flavor eigenstates, we introduce the idea of flavor-weighted energies. It provides us with the right correlation between the energies of flavor eigenstates and their measurement probabilities. In addition, the apparent ambiguities which follow from computing flavor-averaged energies are suppressed. The framework of the generalized theory of quantum measurement also provides some theoretical tools for computing the von-Neumann entropy correlated to flavor-associated energies. It allows for relating flavor-weighted (averaged) energies to non-selective (selective) quantum measurement schemes. As a final issue, the connection of such flavor-associated energies with the expressions for neutrino effective mass values is investigated. It is straightforwardly verified that cosmological background neutrino energy densities can be obtained from the coherent superposition of mass eigenstates. Our results show that the non-selective measurement scheme for obtaining flavor-weighted energies is consistent with the predictions from single-particle quantum mechanics.
I. INTRODUCTION
Although quantum mechanics is an intrinsically probabilistic theory, its inherent application of probabilistic concepts is quite different from that of a classical theory. In particular, in the framework of the quantum mechanics for composite systems, the density matrix is the analogue to the probability distribution of position and momentum in classical statistical mechanics. Such a statistical description through density matrices is required when one considers either an ensemble of systems, or a composite quantum system defined when its preparation history is uncertain and one does not know if it is a pure quantum state or a statistical mixture. That should be the case of an ensemble of neutrino flavor eigenstates, for instance, in the cosmological scenario.
The description of measurements performed on a composite quantum system provides an important tool to debug the procedures for computing the averaged energy densities of cosmological neutrino flavor eigenstates. We shall demonstrate that it should result in the appropriate relation between the cosmological neutrino background energy densities and neutrino mass values. In fact, since one asserts that the generalized theory of quantum measurement is based on the notions of operations and effects, the correlation between measurable flavor neutrino energies and flavor eigenstates, at least from the formal point of view, deserves some specific attention [1][2][3][4].
Departing from the standard formulation of composite quantum systems, we present the fundamentals of the physical significance of the density matrix theory in computing flavor probabilities and flavor-averaged energies. To overcome the ambiguities and misunderstandings that arise when flavor-averaged energies are defined, we discuss the idea of flavor-weighted energies mathematically correlated to flavor probabilities. Reporting about the generalized theory of quantum measurements, it is demonstrated that one can depict the averaged and weighted energy definitions from the idea of selective and non-selective quantum measurements. It is shown that such weighted energies based on some relations with statistical weights are more convenient in describing certain properties of composite quantum systems strictly related to flavor quantum numbers.
Flavor energy "measurements" or "projections" are therefore potentially subject to imprecise definitions. Obviously this reflects the dynamics of quantum systems being driven by a diagonal Hamiltonian in the mass eigenstate basis. In this case, the interpretation of the von-Neumann entropy for selective and non-selective quantum measurements is helpful in verifying correlations with flavor oscillation probabilities. It is performed by assuming that the von-Neumann-Lüders projection postulate introduces the concepts of selective and non-selective measurements [5]. It complies with the fundamentals of the generalized theory of quantum measurements [5][6][7][8] developed from the extended idea of a positive operator-valued measure that associates with each measurement outcome α a positive operator M_α. These concepts can play a relevant role in the fine-tuning of neutrino mass value predictions. The experimental procedure for determining the mass of the neutrino using the CMB results relies on inferring the transfer function in the matter power spectrum at small scales [10]. The contribution due to massive neutrinos to the closure fraction of cold dark matter at present substantially modifies the matter power spectrum, even for neutrinos behaving like hot dark matter at higher redshifts [11][12][13]. Determining the fraction of the neutrino energy density at late times is therefore a relevant aspect that has to be included in the procedure for deriving neutrino masses. In this manuscript, the influence of different flavor energy definitions on the predictions for cosmological neutrino mass values is discriminated. In particular, we show how the mass predictions are modified by some explicit dependence on the statistical weights of an ensemble of neutrino flavors.
Our manuscript is organized as follows. In Section II we report on the usual mechanism of flavor oscillations, through which one deduces the flavor conversion formulas and the expressions for flavor-averaged energies. The concept of flavor-weighted energy is discussed in Section III, where it is compared with the previously defined flavor-averaged energy and with the total averaged energy inherent to composite quantum systems. The idea of selective and non-selective quantum measurements is introduced in a manner that embeds the definitions of averaged and weighted energies. In Section IV, an extension of such concepts is proposed in order to include a connection to flavor-associated von-Neumann entropies. Our results show that there exists a kind of correlation rate between the flavor-weighted energy and the von-Neumann entropy changes due to a non-selective measurement scheme. Finally, potential implications for the properties of the cosmological neutrino background are discussed in Section V, where the connection between the weighted energies and the cosmological neutrino energy density is established. We draw our conclusions in Section VI.
II. FLAVOR OSCILLATIONS
The aspects of neutrino flavor oscillations that are relevant to our analysis can be understood from a simplified treatment involving just two degrees of freedom. The time evolution of a quantum system with well-defined flavor quantum numbers, described by the state vectors ν_e and ν_µ respectively related to electron and muon neutrinos, is given in terms of the mass eigenstates ν_1 and ν_2 with well-defined energies E_s = √(p² + m_s²), s = 1, 2, where the matrix U parameterizes the mixing relation and θ is the mixing angle. Since the Hamiltonian of the system in the mass eigenstate basis can be extracted from Eq. (2) as H = Diag{E_1, E_2}, the flavor projection operators M_e(t) and M_µ(t) can be easily defined, where ∆E = E_1 − E_2, and it can be verified that M_e(t) + M_µ(t) = 1. Thus the temporal evolution of a flavor eigenstate can be described explicitly, and the supposedly relevant measurable quantities, or observables, of the closed quantum system can be summarized by the flavor-averaged energies, which result in time-independent quantities with Ē = (1/2)(E_1 + E_2), and by the time-oscillating flavor probabilities, which are interpreted as the probabilities of e(µ)-flavor states produced at time t_0 being measured as e(µ)-flavor states or being converted into µ(e)-flavor states after a time interval t − t_0 ∼ t − 0 ∼ t.
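The relations described above can be summarized compactly in the standard two-flavor conventions. The following is a reconstruction under those conventions, not the paper's original numbered equations:

```latex
\begin{aligned}
|\nu_e\rangle &= \cos\theta\,|\nu_1\rangle + \sin\theta\,|\nu_2\rangle, &
|\nu_\mu\rangle &= -\sin\theta\,|\nu_1\rangle + \cos\theta\,|\nu_2\rangle,\\
|\nu_\alpha(t)\rangle &= e^{-iHt}\,|\nu_\alpha\rangle, \qquad H = \mathrm{Diag}\{E_1, E_2\},\\
\langle E_{e,\mu}(t)\rangle &= \bar{E} \pm \frac{\Delta E}{2}\cos 2\theta,
\qquad \bar{E} = \tfrac{1}{2}(E_1 + E_2),\\
P_{e\to\mu}(t) &= \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta E\, t}{2}\right)
= 1 - P_{e\to e}(t).
\end{aligned}
```

The flavor-averaged energies in the third line are manifestly time-independent, while the probabilities in the last line oscillate with frequency ∆E.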
Due to the relation between flavor eigenstates described by Eq. (5), the above definition of flavor-averaged energies is ambiguous in the sense that e(µ)-flavor states can eventually be partially, or even completely, converted into µ(e)-flavor states. To be clearer, once the projection of the muon state vector at time t, ν_µ(t), onto the initial (t_0 = 0) electron state vector, ν_e(0), is not zero, i.e., |⟨ν_µ(0)|ν_e(t)⟩| ≠ 0, the averaged value computed from Eq. (5) represents an ambiguous and inappropriate definition of flavor-associated energies, given that it is not uniquely correlated with the respective flavor eigenstate. Obviously, one ends up with such a crude definition of flavor energy "measurements" or "projections" because the time evolution of the system is driven by a diagonal Hamiltonian in the mass eigenstate basis.
Such an ambiguity has stimulated some non-standard analyses of the cosmological background neutrinos as a coherent superposition of mass eigenstates, where the (in)appropriate quantum mechanical treatment affects the neutrino mass values derived from cosmological data [14]. One commonly assumes that the flavor-associated energies are defined through the averaged value from Eq. (5). We shall demonstrate in the following that the (re)interpretation of the probabilistic concepts for a composite quantum system through the principles of the generalized measurement theory agrees with the assertion that the above definition is inadequate.
III. FLAVOR WEIGHTED ENERGIES
Supposing that the density matrix of a composite quantum system of two neutrino flavor states is given as a statistical mixture with weights satisfying w_e + w_µ = 1, one easily finds the re-defined probabilities of measuring the electron and muon flavor eigenstates at time t. Using the results from Eqs. (9-10), one easily notices that

P_e(t) + P_µ(t) = w_e (P_e→e(t) + P_e→µ(t)) + w_µ (P_µ→e(t) + P_µ→µ(t)) = w_e + w_µ = 1, (14)

so that the properties of a statistical mixture are immediate. This leads to a reinterpretation of the energy related to each flavor quantum number.
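As a numerical illustration of Eq. (14), the sketch below (parameter values are arbitrary and purely illustrative; the paper itself contains no code) builds the mixed-ensemble flavor probabilities from two-flavor amplitudes and checks that they sum to unity:

```python
import numpy as np

# Illustrative sketch: mixed-ensemble flavor probabilities built from
# two-flavor amplitudes; theta, E1, E2, t, w_e are arbitrary values.
theta, E1, E2, t = 0.6, 1.0, 1.4, 2.3
w_e = 0.7                          # statistical weight of the e-flavor ensemble
w_mu = 1.0 - w_e

nu_e = np.array([np.cos(theta), np.sin(theta)], dtype=complex)    # mass basis
nu_mu = np.array([-np.sin(theta), np.cos(theta)], dtype=complex)

def prob(initial, final, t):
    # |<final(0)| e^{-iHt} |initial(0)>|^2 with H = Diag{E1, E2}
    evolved = np.exp(-1j * np.array([E1, E2]) * t) * initial
    return abs(np.vdot(final, evolved)) ** 2

P_e = w_e * prob(nu_e, nu_e, t) + w_mu * prob(nu_mu, nu_e, t)
P_mu = w_e * prob(nu_e, nu_mu, t) + w_mu * prob(nu_mu, nu_mu, t)

assert abs(P_e + P_mu - 1.0) < 1e-12          # Eq. (14): weights are conserved
# the standard two-flavor conversion probability is recovered:
assert abs(prob(nu_e, nu_mu, t)
           - np.sin(2 * theta) ** 2 * np.sin((E1 - E2) * t / 2) ** 2) < 1e-12
```

The same construction extends directly to three flavors by enlarging the state vectors and the weight set.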
The standard total averaged energy for a composite quantum system is defined through the density matrix, from which one can notice the explicit dependence on the flavor-averaged energies, E_e,µ(t), recovered from Eq. (6). In this context E_e(t) and E_µ(t) are respectively decoupled from the statistical weights w_µ and w_e. This just ratifies our previous arguments that such flavor energies are in no way correlated with the flavor probabilities from Eq. (13), P_e(t) and P_µ(t), since both of the latter depend simultaneously on both statistical weights, w_µ and w_e. Thus the arguments that assert the ambiguity and the insufficiency of defining the flavor eigenstate averaged energies through E_e,µ(t) are maintained. To overcome these incongruities, we suggest that some kind of flavor-weighted energy should be considered in order to establish a one-to-one correspondence between flavor eigenstate energies and the statistical definitions of the probabilities, P_e,µ(t). After simple mathematical manipulations involving the definitions from Eq. (4) and the probabilities from Eq. (13), and observing the cyclic properties of the trace, the flavor-weighted energies can be defined and promptly compared with the previous definition. In allusion to the interference phenomenon in quantum mechanics, the residual term appearing in this comparison has an interference character, since it intrinsically carries simultaneous information about the e and µ flavors. One should notice that its time-averaged value is not zero, which means that the above analysis leads to different interpretations for the mean values of flavor-averaged and flavor-weighted energies.
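The decoupling stated above can be checked numerically. The sketch below (illustrative values, not from the paper) verifies that the total averaged energy Tr{ρH} splits into w_e E_e + w_µ E_µ at any time, with the flavor-averaged energies themselves time-independent:

```python
import numpy as np

# Illustrative check: Tr{rho H} = w_e <E_e> + w_mu <E_mu>, and the
# flavor-averaged energies do not depend on time because H is diagonal
# in the mass eigenstate basis. All parameter values are arbitrary.
theta, w_e, E1, E2 = 0.6, 0.7, 1.0, 1.4
H = np.diag([E1, E2]).astype(complex)
nu_e = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
nu_mu = np.array([-np.sin(theta), np.cos(theta)], dtype=complex)

def averaged(t):
    U = np.diag(np.exp(-1j * np.array([E1, E2]) * t))
    e_t, mu_t = U @ nu_e, U @ nu_mu
    E_e = np.vdot(e_t, H @ e_t).real       # flavor-averaged energies
    E_mu = np.vdot(mu_t, H @ mu_t).real
    rho = w_e * np.outer(e_t, e_t.conj()) + (1 - w_e) * np.outer(mu_t, mu_t.conj())
    return E_e, E_mu, np.trace(rho @ H).real

E_e0, E_mu0, tot0 = averaged(0.0)
for t in (1.7, 5.2):
    E_e, E_mu, tot = averaged(t)
    assert abs(E_e - E_e0) < 1e-12 and abs(E_mu - E_mu0) < 1e-12  # time-independent
    assert abs(tot - (w_e * E_e + (1 - w_e) * E_mu)) < 1e-12      # total-energy split
```

Note that E_e here couples only to w_e in the total, which is exactly the decoupling the text identifies as the source of the ambiguity.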
One can also easily identify that the total averaged energy differs from the sum of flavor-weighted energies by a residual energy term, where we have used the unitarity relation M_e(0) + M_µ(0) = 1. To summarize our results up to this point, we introduce the simplifying variables w = w_e and δw = w_µ − w_e, so that one obtains simplified expressions for the total energies, from which the flavor-residual energy follows. The oscillating probability and the corresponding flavor-weighted energy, for particular values of the mixing angle, θ, and of the statistical weight, w, and for different regimes of propagation parameterized by m/p, are described in Fig. 1. Turning back to the quantum fundamentals of the above analysis, it is important to emphasize that the definition of flavor-weighted energies reflects some concepts of the generalized theory of quantum measurements [5][6][7][8]. There are important variants of quantum measurement schemes that are encountered in practice. The above results show that the generalized measurement theory, based on the notions of operations and effects, is especially relevant here.
The generalized measurement theory leads, in a natural way, to the extended idea of a positive operator-valued measure, which associates with each measurement outcome α a positive operator M_α(0). It may be viewed as an immediate generalization of the von Neumann-Lüders projection postulate, which introduces the notion of selective and non-selective measurements [5]. In our analysis, α corresponds to the quantum numbers related to the electronic and muonic flavors, e and µ.
The measurement outcome α represents a classical random number with probability distribution given by Eq. (13), where M_α(0) is a positive operator called the effect. If the measurement is a selective one, the sub-ensemble of those systems for which the outcome α has been found is described by the density matrix built from M_α(0) ρ M_α(0), which is called an operation and maps positive operators into positive operators; one consistently finds a unit-trace density matrix. For the corresponding non-selective measurement, the density matrix ρ′ is given by the sum of the operations over all outcomes, from which it is also easily verified that Tr{ρ′} = 1.
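The two update maps can be sketched numerically as follows (illustrative parameter values; the effects M_α(0) are taken as flavor projectors and the state is the mixture evolved to time t):

```python
import numpy as np

# Sketch of the selective vs. non-selective measurement update maps.
# theta, w_e, E1, E2, t are arbitrary illustrative values.
theta, w_e, E1, E2, t = 0.6, 0.7, 1.0, 1.4, 2.3
nu_e = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
nu_mu = np.array([-np.sin(theta), np.cos(theta)], dtype=complex)
M = {a: np.outer(v, v.conj()) for a, v in (("e", nu_e), ("mu", nu_mu))}

U = np.diag(np.exp(-1j * np.array([E1, E2]) * t))      # H = Diag{E1, E2}
rho = U @ (w_e * M["e"] + (1 - w_e) * M["mu"]) @ U.conj().T

p = {a: np.trace(M[a] @ rho @ M[a]).real for a in M}   # outcome probabilities
rho_sel = {a: M[a] @ rho @ M[a] / p[a] for a in M}     # selective update
rho_ns = sum(M[a] @ rho @ M[a] for a in M)             # non-selective update

assert abs(sum(p.values()) - 1.0) < 1e-12              # probabilities sum to 1
assert abs(np.trace(rho_ns).real - 1.0) < 1e-12        # Tr{rho'} = 1
```

Each selective sub-ensemble ρ_α is renormalized by its outcome probability, while the non-selective state simply sums the unnormalized operations.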
In our approach, flavor-averaged energies E_α(0) result from the density matrix for selective measurements, ρ_α, from Eq. (23), while flavor-weighted energies ǫ_α(t) result from the density matrix for non-selective measurements, ρ′, from Eq. (25). Each energy component E_α(t), with α = e, µ, τ, is decoupled from its corresponding statistical weight, w_α (c.f. Eq. (26)). Therefore, flavor-averaged energies E_α(t) are in no way correlated with the flavor probabilities from Eq. (13): the conversion probabilities P_α(t) have multiple dependencies on all the statistical weights, w_e, w_µ and w_τ. The inaccuracy in correlating flavor-averaged energies E_α(t) with flavor eigenstates is consequently obvious. By contrast, the total averaged energy seems to be well defined, in the sense that it is independent of the measurement scheme.
With this interpretation, flavor-weighted energies are naturally embedded into the quantum measurement scheme, and the results can be easily extended to n-flavor oscillating quantum systems.
IV. VON-NEUMANN ENTROPY AND QUANTUM MEASUREMENTS
Now let us recall some basics of quantum statistics and thermodynamics, in which the von Neumann entropy provides an important entropy functional defined in terms of the density matrix as S(ρ) = −Tr{ρ ln ρ}, where we have set the multiplicative Boltzmann constant, k_B, equal to unity. The entropy S(ρ) quantifies the departure of a composite quantum system from a pure state, i.e., it measures the degree of mixture of a state describing a given finite system. As one can expect, quantum measurements induce modifications of the von Neumann entropy of the system. The entropy change ∆S due to a non-selective measurement scheme described by operations parameterized by the projection operators M_α(0) satisfies ∆S ≥ 0, so the non-selective ideal quantum measurement never decreases the von Neumann entropy. An additional property concerns the variation of the entropy involved in the transition from the selective to the non-selective level of a measurement. The corresponding quantity can be interpreted as a mixing entropy: it is the difference between the entropy of the system projected by a non-selective quantum measurement, S(Σ_α P_α(t) ρ_α), and the average of the entropies of the sub-ensembles ρ_α described by the effects M_α(0). All the above defined entropies satisfy a set of inequalities [5] which have been extensively used in different forms in the framework of quantum information theory and quantum entanglement.
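The inequality ∆S ≥ 0 for a non-selective projective measurement can be illustrated numerically (arbitrary illustrative values; entropies are computed from the eigenvalues of the density matrix):

```python
import numpy as np

def S(rho):
    # von Neumann entropy S(rho) = -Tr{rho ln rho}, with k_B = 1
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]          # drop numerically-zero eigenvalues
    return float(-np.sum(lam * np.log(lam)))

# Two-flavor mixture evolved to time t, then dephased by a non-selective
# flavor measurement; all parameter values are arbitrary.
theta, w_e, E1, E2, t = 0.6, 0.7, 1.0, 1.4, 2.3
nu_e = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
nu_mu = np.array([-np.sin(theta), np.cos(theta)], dtype=complex)
M = {a: np.outer(v, v.conj()) for a, v in (("e", nu_e), ("mu", nu_mu))}
U = np.diag(np.exp(-1j * np.array([E1, E2]) * t))
rho = U @ (w_e * M["e"] + (1 - w_e) * M["mu"]) @ U.conj().T

rho_ns = sum(M[a] @ rho @ M[a] for a in M)   # non-selective measurement
dS = S(rho_ns) - S(rho)
assert dS >= 0.0   # the non-selective measurement never lowers the entropy
```

Dephasing the off-diagonal elements in the flavor basis can only spread the eigenvalue distribution, which is why the entropy never decreases.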
In particular, one should notice that in the case of the previously discussed condition ρ_α = M_α(0) for the selective measurement scheme, with M_α(0) denoting the creation of a single-flavor state, the mixing entropy is reduced accordingly. By observing the common points between energies and entropies through the above discussed measurement schemes, i.e., by noticing the correspondence between the two schemes, it is possible to establish a correlation between entropy changes and the absolute value of time-averaged energy differences, as one can easily see from Figs. 3 and 4, where we have computed time-averaged quantities.
In the case of flavor-oscillating systems, this corresponds to assuming an intrinsic decoherence mechanism that suppresses the periodic functions of the non-diagonal elements of the density matrix. Such a decoherence mechanism is equivalent to a delocalization effect that can be achieved by assuming that momentum (p) states, weighted by a momentum distribution f(p), compose an ensemble B in the same way that flavor states compose an ensemble A. In this case, one should trace over B, where Tr_B{ } denotes an integration over the continuous space of momentum.
Therefore, the entropies computed in terms of ρ_t, S(ρ_t), can be interpreted as late-time entropies, since one is considering their asymptotic behavior in t. In effect, this destroys any coherent behavior of ρ_AB, which results in a statistical mixture. For the two-level system discussed above, from Fig. 3, one can identify a similar analytical pattern between the energy and entropy time-averaged values in terms of their dependence on the mixing angle, θ. Fig. 4 shows a kind of correlation rate between the above quantities when they are normalized by their respective maximum values. In spite of discussing a two-level system, the qualitative analysis of the results depicted in Fig. 4 allows us to identify, by varying the mixing angle, a higher level of correlation between flavor-weighted energies and the respective entropies when non-selective measurements are taken into account. This simply corresponds to an indication that, when one assumes flavor-weighted energies as the quantifiers of the energy associated with flavor eigenstates, the loss of information due to the measurement procedure is better quantified by the non-selective related entropy. With this in mind, we compute some effective quantities related to flavor-weighted energies in the following section.
V. CORRECTIONS TO THE SINGLE-PARTICLE QUANTUM MECHANICS OF COSMOLOGICAL NEUTRINOS
Recent works [14][15][16][17] on the quantum mechanics of cosmological neutrinos have focused on finding an appropriate procedure for computing the neutrino mass values derived from cosmological data. To illustrate an application of our analysis, we reproduce some results from the single-particle quantum mechanics of flavor oscillations reported by Fuller and Kishimoto [14], and we show how the density matrix theory supports such results.
From the Standard Model and cosmological points of view, the main assumption for the cosmological neutrino background is that neutrinos and antineutrinos should be in thermochemical equilibrium with the photon- and e±-plasma at early times, namely when background temperatures are T > T_Dec ∼ 1 MeV. For T ≪ T_Dec, the neutrinos and antineutrinos would be completely decoupled, comprising seas of free-streaming particles with energy-momentum and flavor distributions reflecting the equilibrium prior to decoupling, subsequently modified only by the expansion of the universe.
Assuming that neutrinos are forced by weak interaction-mediated scattering into flavor eigenstates in the pre-decoupling era, the neutrino momentum distribution for each flavor can be approximated by Fermi-Dirac distribution functions, with the number density of ν_α's in a momentum interval dp given by [14]

dn_να = (1/2π²) · p² dp / (exp[E_να(a)/T(a) − η_να] + 1),

where we have assumed natural units by setting ħ = c = k_B = 1, and η_να is the usual ratio of chemical potential to temperature for the neutrino species ν_α.
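As a quick numerical check of the distribution quoted above, in the massless limit (E = p) with η = 0 the momentum integral reduces to the standard relativistic result n = 3ζ(3)T³/(4π²); the sketch below (natural units, T = 1) verifies this by direct midpoint integration:

```python
import numpy as np

# Midpoint integration of dn/dp = p^2 / (2 pi^2 (e^{p/T} + 1)) for a
# massless species with zero degeneracy parameter; compare with the
# closed form n = 3 zeta(3) T^3 / (4 pi^2).
T = 1.0
dp = 1e-4
p = (np.arange(400_000) + 0.5) * dp          # grid up to p = 40 T
n = np.sum(p**2 / (2 * np.pi**2 * (np.exp(p / T) + 1))) * dp

zeta3 = 1.2020569031595943                   # Riemann zeta(3)
n_exact = 3 * zeta3 * T**3 / (4 * np.pi**2)  # ~0.0913
assert abs(n - n_exact) < 1e-5
```

The tail beyond p = 40T is exponentially suppressed and contributes negligibly to the integral.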
One can also notice that T_ν(a) = T_Dec a_Dec/a is an effective neutrino temperature. In a previous work [14], the energy-momentum dispersion relation was introduced so as to lead to the standard choice for the effective mass of a neutrino in the flavor eigenstate ν_α, interpreted as the dynamical mass for ultrarelativistic neutrinos of flavor ν_α. In particular, it has been shown that when the neutrino momentum redshifts toward non-relativistic regimes [18], this effective mass is no longer relevant in characterizing the energy-momentum dispersion relation. As explicitly pointed out by Fuller and Kishimoto [14], one can also notice that energy distribution functions for neutrinos in mass eigenstates can be approximated by weighted sums of the flavor eigenstate distributions, which have been used to compute the neutrino energy density ρ_E, from which one can derive the neutrino mass values after some phenomenology. The usual textbook [9] method for computing the energy density of neutrinos in the universe can be rewritten [14] using the effective mass from Eq. (39) and the number density distributions of neutrinos in flavor eigenstates from Eq. (40). However, Fuller and Kishimoto [14] assume that, to calculate the energy density of these particles, the mass eigenstate energy should be introduced instead. At small redshifts, corresponding to large scale factors, the neutrino freeze-out regime is intensified and neutrinos become non-relativistic, i.e., the magnitude of the neutrino masses becomes more relevant than the momentum magnitude at late times. At early times, the ultra-relativistic regime naturally suppresses any eventual divergence of the naive effective mass approach.
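A sketch of the standard "dynamical mass" choice referred to above, assuming the usual ultra-relativistic expansion (this reconstructs the idea, not the paper's exact Eq. (39)):

```latex
E_{\nu_\alpha} \;=\; \sum_{s} |U_{\alpha s}|^2 \sqrt{p^2 + m_s^2}
\;\simeq\; p + \frac{1}{2p}\sum_{s} |U_{\alpha s}|^2 m_s^2 ,
\qquad
m_{\mathrm{eff},\alpha}^2 \;\equiv\; \sum_{s} |U_{\alpha s}|^2 m_s^2 ,
```

so that, for p ≫ m_s, the flavor-averaged energy behaves like that of a single particle of momentum p and mass m_eff,α, which is exactly the regime in which the effective mass ceases to be useful once the momentum redshifts toward non-relativistic values.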
Within the single-particle quantum mechanics framework quantified and discussed by Fuller and Kishimoto [14], the distribution function used in the calculation is identical for all three active flavors [14].
From this point on, we report a phenomenologically consistent analysis involving three neutrino flavor eigenstates, from which a general interpretation can be drawn from our results. We assume that each flavor ensemble is described by a normalized state vector ν_α, with α = e, µ, τ, in the underlying Hilbert space. It is then natural to study the statistics of the total ensemble by mixing the flavor ensembles with respective weights w_α. The mixing is achieved by taking a large number N_α of systems from each flavor ensemble, so that w_α = N_α / Σ_β N_β.
Thus the maximal statistical mixture with w_e = w_µ = w_τ results from the assumption that dn_e = dn_µ = dn_τ. In this case, the total averaged energy computed in the previous section leads to a series of convergent results, where we have considered that H = ȧ/a ∼ a^−n, n > 0, is the Hubble rate during the period spanning the radiation to matter domination eras, and q = k_B T_ν0 is the comoving momentum. Maintaining the assumption of natural units and observing that H_0^−1 ∼ 0.7 × 10^33 eV^−1, ∆m² ∼ 2.4 × 10^−3 eV² and q ∼ 0.167 × 10^−4 eV, one finds a huge oscillation number, ∆E τ ∼ 10^34, justifying the time-averaging (⟨ ⟩_time) procedure.
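The quoted oscillation number follows from simple arithmetic; the sketch below uses the numbers given in the text together with the standard ultra-relativistic splitting ∆E ≈ ∆m²/(2q):

```python
# Order-of-magnitude check of the oscillation number (natural units).
dm2 = 2.4e-3        # eV^2, mass-squared splitting quoted in the text
q = 0.167e-4        # eV, comoving momentum quoted in the text
H0_inv = 0.7e33     # eV^-1, inverse Hubble rate quoted in the text

dE = dm2 / (2 * q)          # ultra-relativistic splitting, ~72 eV
n_osc = dE * H0_inv         # Delta E * tau over a Hubble time, ~5e34

assert 1e34 <= n_osc < 1e35   # consistent with the quoted ~10^34
```

With so many oscillation cycles per Hubble time, replacing the oscillating factors by their time averages is an excellent approximation.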
Since ⟨sin²(∆E t/2)⟩_time = 1/2, the time-averaged quantities that are relevant to us are the sums of flavor-weighted energies, ǫ_e + ǫ_µ = Ē + (δw/2) ∆E cos(2θ) cos²(2θ), which can be conveniently manipulated to obtain expressions that differ from one another through their dependence on the mixing angle, θ, as qualitatively illustrated in Fig. 5. One can easily notice that Ē = (1/D) Σ_{s=1}^{D} E_s is the input into Eq. (42) used to compute the neutrino energy density, ρ_E^ν, in the case of D mass eigenstates. Since it is defined in terms of the mass eigenstate eigenvalues for a well-defined Hamiltonian, we consider ρ_E^ν as our reference for the standard quantum mechanical procedure that results in measurable energy densities. Eq. (46) suggests at least three methods for discussing the fractional difference between energy densities, ρ_E. All of them depend on the difference between the statistical weights, δw, and on the modulation given by the contour function illustrated in Fig. 5. The common variable among them is the mass-energy difference, ∆E, from which one can identify an auxiliary variable that allows one to quantify the difference among the three predictions derived from E_M − Ē. The results of the above analysis can be immediately extended to a composite system of three flavor eigenstates, i.e., electronic (e), muonic (µ), and tauonic (τ) neutrinos, for which the corresponding 3 × 3 density matrix can be written. We have considered the mixing angles θ = π/3, θ = π/4, and θ = π/5, in correspondence with Fig. 1. For maximal mixing conditions, i.e., with θ = π/4, the relative residual energy is null. The same effect is observed for pure states (w = 1, or even w = 0) and for the maximal statistical mixture (w = 1/2). Fig. 6 compares the results obtained when one considers total averaged quantities, non-selective measurements, and the corresponding entropy changes.
It is possible to infer a higher degree of correlation between energy and entropy for the case of non-selective measurements, since the correlation would be maximal for a straight line with angular coefficient equal to unity.
"year": 2011,
"sha1": "aee564bc8cd1e766e1bafc313f47bb3b8dd0d4f7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1104.3120",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "aee564bc8cd1e766e1bafc313f47bb3b8dd0d4f7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Natural Products Produced in Culture by Biosynthetically Talented Salinispora arenicola Strains Isolated from Northeastern and South Pacific Marine Sediments
Laboratory cultures of two ‘biosynthetically talented’ bacterial strains harvested from tropical and temperate Pacific Ocean sediment habitats were examined for the production of new natural products. Cultures of the tropical Salinispora arenicola strain RJA3005, harvested from a PNG marine sediment, produced salinorcinol (3) and salinacetamide (4), which had previously been reported as products of engineered and mutated strains of Amycolatopsis mediterranei, but had not been found before as natural products. An S. arenicola strain RJA4486, harvested from marine sediment collected in the temperate ocean waters off British Columbia, produced the new aminoquinone polyketide salinisporamine (5). Natural products 3, 4, and 5 are putative shunt products of the widely distributed rifamycin biosynthetic pathway.
Introduction
Marine isolates of bacteria in the genus Salinispora have proven to be a rich source of novel natural products that often exhibit biological activities of interest for drug development [1,2]. They have also been found to be an excellent resource for exploring the 'one strain many compounds' (OSMAC) strategy for bioactive natural product discovery [3], whereby culture conditions are varied to elicit the production of new compounds. As part of our ongoing interest in biologically active natural products produced by Salinispora arenicola obtained from both tropical and temperate Pacific Ocean marine habitats, we have discovered two biosynthetically talented isolates, RJA3005 and RJA4486. Isolate RJA3005 is a strain of Salinispora arenicola obtained from marine sediment collected in Papua New Guinea, while isolate RJA4486 is a strain of S. arenicola harvested from marine sediment collected from the temperate ocean waters off the coast of British Columbia. The tropical S. arenicola strain RJA3005 attracted our attention because its crude extract was active in a bioassay screen for phosphatase inhibitors, while the temperate-water S. arenicola strain RJA4486 represented a range extension for S. arenicola [4,5] and produced the two known bioactive natural products rifamycin W (1) [6] and staurosporine (2) [7] (Figure 1).
Prompted by their demonstrated capabilities to produce bioactive natural products and bioactive crude extracts, we have further interrogated isolates RJA3005 and RJA4486 by varying the culture conditions or investigating very minor metabolites in an attempt to uncover new natural products from their cultures. Bioassay-guided efforts to isolate a compound responsible for the phosphatase inhibitory activity exhibited by extracts of an S. arenicola strain RJA3005 culture failed to identify an active natural product. However, as part of this exercise, we discovered the very minor metabolites salinorcinol (3) and salinacetamide (4). Compound 3 has been reported as the product of feeding an engineered Amycolatopsis mediterranei S699 strain (-AHBA synthase) with synthetic aromatic precursors [8], and compound 4 has been reported as a shunt product of an engineered A. mediterranei S699 strain [9], but neither 3 nor 4 has been reported as a natural product produced by a wild-type bacterium in culture. Investigation of very minor metabolites with unusual UV spectra in the organic extract of cultures of S. arenicola strain RJA4486 resulted in the isolation of the aminoquinone salinisporamine (5). Herein, we describe the isolation and structure elucidation of the new microbial natural products 3, 4, and 5 (Figure 1).
Results
S. arenicola strain RJA3005 (16S rRNA gene sequence, GenBank accession no. OM728180) was grown as lawns on MM1 solid agar prepared with seawater, and the mature bacterial lawns and agar were cut into small squares and jointly extracted by soaking in multiple batches of EtOAc. The combined EtOAc extracts were concentrated in vacuo and then purified via sequential application of Sephadex LH20 chromatography, step-gradient C18 reversed-phase flash chromatography, and C18 reversed-phase HPLC to give pure samples of salinorcinol (3) and salinacetamide (4).
The three identified substructures accounted for all of the atoms and sites of unsaturation required by the molecular formula of 3. HMBC correlations (Figure 2) between the methyl doublet at δH 0.85 (Me-15) and the carbon resonance at δC 164.2 (C-5), and between the allylic methine at δH 2.66 (H-6) and the carbon resonances at δC 164.2 (C-5) and 100.0 (C-4), connected the allylic carbon (C-6) to the pyrone, and HMBC correlations between the oxymethine resonance at δH 4.58 (H-7) and the dihydroxybenzene ring carbon resonances at δC 145.5 (C-8) and 104.8 (C-9/C-13) linked the oxymethine carbon to the benzene ring at C-8, completing the constitution of 3.
The natural product 3 has the same constitution as a compound produced by a mutated rifamycin producer, A. mediterranei S699 (-AHBA synthase), that had been fed 3,5-dihydroxybenzoic acid [8]. Our discovery of 3 in extracts of S. arenicola strain RJA3005 cultures is the first report of 3 as a natural product from a wild-type bacterial culture, and indeed as an entirely biosynthesized molecule. We have named the natural product salinorcinol (3). The absolute configurations at C-6 and C-7 in the semisynthetic bioengineered sample of 3 were assigned as 6R,7S. The chemical shifts at C-6/H-6 and C-7/H-7, as well as the 1H/1H coupling constants between H-6 and H-7, in our natural product and the engineered compound are virtually identical. Therefore, we assume the natural product 3 is also 6R,7S-salinorcinol (3).

Salinacetamide (4) gave a [M + H]+ ion in the HRESIMS at m/z 334.1297, appropriate for a molecular formula of C17H19O6N that requires 9 sites of unsaturation, differing from that of salinorcinol (3) by the addition of C2H3N. Comparison of the NMR data of 4 and 3 (see Table 1 and Supplementary Materials) revealed a loss of symmetry in the 1,3,5-trisubstitution about the benzene ring of 4. In all other respects, 4 and 3 were structurally identical. An NH resonance at δH 9.80 in the HMBC spectrum of 4 showed correlations to a carbonyl resonating at δC 168.3 (C-16) and an aromatic carbon at δC 140.3 that was assigned to C-12. A methyl singlet at δH 2.04 (C-17, δC 24.2) was also correlated in the HMBC spectrum to the carbonyl at δC 168.3. The placement of an acetamide functionality at C-12 was consistent with both the NMR and MS data obtained for salinacetamide (4). A compound isolated from cultures of a rifamycin producer, A. mediterranei S699, that was subjected to mutations was assigned the constitution of 4 solely based on HPLC MS data. Two of the A. mediterranei S699 mutations involved the loss of a 21 kb DNA fragment [8,9] of the rifamycin gene cluster's post-PKS modification genes [10] and a mutation involving a rifF deletion [9]. The samples of 4 identified from cultures of the mutant strains were never fully characterized by NMR analysis and, to the best of our knowledge, 4 has not been reported as a natural product from cultures of a wild-type bacterium.
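The formula arithmetic behind these assignments can be checked with a short sketch (this is a verification aid, not part of the paper's workflow; monoisotopic masses are standard values):

```python
# Degrees of unsaturation (rings + double-bond equivalents) for C/H/N/O formulas.
def dbe(C=0, H=0, N=0, O=0):
    return C - H / 2 + N / 2 + 1

# Salinacetamide (4): C17H19O6N requires 9 sites of unsaturation.
assert dbe(C=17, H=19, N=1, O=6) == 9
# Removing C2H3N (the stated difference from salinorcinol) leaves C15H16O6, 8 DBE.
assert dbe(C=15, H=16, O=6) == 8

# Monoisotopic [M + H]+ for C17H19O6N (atomic masses in u).
mass = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}
mh = 17 * mass["C"] + 20 * mass["H"] + mass["N"] + 6 * mass["O"]
# mh ≈ 334.129, within ordinary HRESIMS tolerance of the observed m/z 334.1297
assert abs(mh - 334.1297) < 0.005
```

The one-unit increase in unsaturation on going from the C15H16O6 core to C17H19O6N is consistent with appending the acetamide carbonyl.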
Cultures of the Northeastern Pacific S. arenicola strain RJA4486 (16S rRNA gene sequence, GenBank accession no. OM721757) were grown as lawns on solid agar containing marine medium and the mature cultures were extracted with EtOAc as described above. The EtOAc extracts of the combined cells and solid agar media were fractionated using sequential application of Sephadex LH-20 chromatography, step-gradient Si gel flash chromatography, and C18 reversed-phase HPLC to give pure salinisporamine (5) as optically inactive blade-shaped orange crystals with a complex UV spectrum (λmax at 195, 217, 282 and 322 nm). When 5 was initially purified by HPLC with TFA in the mobile phase, resonances in the 1H NMR spectrum of 5 recorded in DMSO-d6 were doubled (Supporting Information). When repurified by HPLC with no TFA present, the resulting 1H NMR spectrum of 5 recorded in DMSO-d6 showed only a single set of well-resolved resonances (Table 2, Supplementary Materials). Detailed analysis of the 1D and 2D NMR data (Table 2, Figure 3) revealed that the structure of 5 consisted of a highly functionalized aromatic ring system with extended conjugation into perhaps a quinone moiety, as characteristic carbonyl resonances were observed at δC 180.8 (C-11) and 181.8 (C-14). In addition, 1H NMR resonances at δH 2.30 (Me-18), 2.02 (Me-16), and 1.59 (Me-17), that each integrated for three protons and correlated to carbons at δC 16.7 (C-18), 16.2 (C-16) and 14.7 (C-17), respectively, in the HSQC experiment, suggested that the structure of 5 possessed three aromatic and/or olefinic methyl residues. Three aromatic or olefinic methine carbons (δC/H: 144.2/7.30 (C-3); 130.5/7.89 (C-9); 102.6/5.58 (C-13)) were also observed in the NMR data, along with a phenolic proton resonance at δH 10.10 (OH-7). Figure 3 illustrates the three structural fragments of 5 that could be assigned from the NMR data. However, the complete constitution of 5 could not be elucidated from the NMR data alone. Therefore, crystals of 5 were subjected to single-crystal X-ray diffraction analysis and the resulting ORTEP-style diagram in Figure 4 shows the complete structure of salinisporamine (5). With the X-ray structure of 5 in hand, it was possible to make a complete assignment of the NMR data listed in Table 2.
Salinorcinol (3) and salinisporamine (5) were tested for antimicrobial activity against Bacillus subtilis (UBC 344), Staphylococcus aureus (ATCC 43300), methicillin-resistant S. aureus (ATCC 33591), Escherichia coli (UBC 8161), Pseudomonas aeruginosa (ATCC 27853), and Candida albicans (ATCC 90028) using a standard disc diffusion assay. None of the compounds showed antimicrobial activity at a concentration of 40 µg/disc.
Given the range extension of S. arenicola RJA4486, the strain's 16S rRNA gene sequence was utilized as a query in a BLASTN [11] search against the Salinispora genus. The results from that BLASTN search were utilized to construct a phylogenetic tree (Supporting Information). Additionally, the genomes of S. arenicola RJA4486 (5.6 Mbp, 140 contigs, GenBank accession no. JAMQNB000000000) and S. arenicola RJA3005 (6.9 Mbp, 109 contigs, GenBank accession no. JALPRT000000000) were sequenced utilizing a shotgun sequencing approach. The sequencing data from these two strains were analyzed for natural product biosynthetic gene clusters utilizing antiSMASH [12]. The results from antiSMASH were analyzed alongside data from the S. arenicola CNS-991 genome (GenBank accession no. KB913036.1), collected from Fiji and available as a high-quality draft genome. Each putative gene cluster was analyzed for completeness by comparing the gene clusters to the MiBIG [13] analysis presented under the antiSMASH shell, along with performing BLASTP analysis for core biosynthetic genes. The results of this analysis demonstrate that all the analyzed S. arenicola strains contain complete gene clusters with 80% or higher sequence homology to characterized gene clusters for staurosporine, rifamycin, sporolide A, paramagnetoquinone, alkyl-O-dihydrogeranyl-methoxyhydroquinones, desferrioxamine, ketomemicin, and lymphostin. Additionally, gene clusters with shared sequence identity to calicheamicin (~40%) and stenothricin (~30%) were found common to all analyzed strains. Nevertheless, despite belonging to the same species, each strain possesses a few biosynthetic gene clusters predicted to be unique (see Table 3 and Supplementary Materials), with RJA4486 containing the highest number of gene clusters predicted to belong to the terpene class. Because the rifamycin gene cluster is common to the natural products reported here, the partial rifamycin gene cluster fragments found in S. arenicola RJA4486 and S. arenicola RJA3005 were aligned to the complete rifamycin gene cluster available in the sequenced genome of S. arenicola CNS-205 utilizing a MAUVE [14] alignment (Supplementary Materials). Results from the MAUVE alignment confirmed that the rifamycin gene cluster is found across multiple contigs in the sequenced genomes of S. arenicola RJA4486 and S. arenicola RJA3005, likely due to the draft genome status of each strain. At this stage, the gaps prevent detailed analysis for potential mutations.
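The completeness assessment described above (clusters with ≥80% similarity to a characterized MiBIG entry treated as complete homologs, with lower-similarity partial matches such as the ~40% calicheamicin and ~30% stenothricin hits) amounts to a simple threshold-based classification. A minimal sketch is shown below; the cutoffs mirror the text, but the similarity values fed in are illustrative placeholders, not actual antiSMASH output.

```python
# Sketch: bin putative biosynthetic gene clusters (BGCs) by the percent
# similarity of their best known-cluster match, mirroring the thresholds
# used in the text. The input values are illustrative, not real data.

def classify_bgc(similarity_pct, complete_cutoff=80.0, partial_cutoff=30.0):
    """Return a coarse label for a BGC based on % similarity to a known cluster."""
    if similarity_pct >= complete_cutoff:
        return "complete homolog"
    if similarity_pct >= partial_cutoff:
        return "partial match"
    return "putatively novel"

# Hypothetical best-match similarities for clusters named in the text.
hits = {"rifamycin": 100.0, "staurosporine": 95.0,
        "calicheamicin": 40.0, "stenothricin": 30.0, "orphan_cluster": 12.0}

labels = {name: classify_bgc(pct) for name, pct in hits.items()}
```

In practice each label would then be confirmed by visual inspection and BLASTP of the core biosynthetic genes, as described in the Bioinformatic Analysis section.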
Discussion
In this study, we have examined the unexplored metabolic potential of biosynthetically talented marine bacterial isolates through variation of culturing conditions and isolation of minor metabolites guided by chemical signatures, such as UV chromatograms and unique NMR chemical shifts, as an avenue to the discovery of new natural products. Our efforts have revealed the new polyketide natural products salinorcinol (3), salinacetamide (4), and salinisporamine (5). Our studies have revealed that wild-type S. arenicola strains from widely separated geographical regions and climatic zones produce unique natural products from the well-dispersed rifamycin gene cluster. Salinorcinol (3) and salinacetamide (4) were discovered as new natural products produced in culture by the S. arenicola strain RJA3005 isolated from sediments collected in the tropical southwestern Pacific waters off the coast of Papua New Guinea. Both 3 and 4 had been previously reported as metabolic products of laboratory-mutated and/or engineered rifamycin-producing bacteria, but not as natural products. In particular, 3 was the product of a mutasynthetic study in which an AHBA synthase-deficient mutant was fed various synthetically derived starting units [8]. The structural similarity of both compounds to a mutasynthesized compound (for 3) [8] and an engineered compound (for 4) [9] from a rifamycin producer strongly supports that 3 and 4 are products of the rifamycin gene cluster. Intriguingly, the 10,12-dihydroxy aromatic moiety of 3 would require the production of a unique starting unit as compared to the AHBA starting unit involved in rifamycin biogenesis [10]. The biogenesis of the necessary starting unit for 3 within the context of rifamycin biosynthesis is still unclear. In a related fashion, cultures of the temperate-ocean strain RJA4486 produce the new putative rifamycin shunt product salinisporamine (5). When 5 was isolated with TFA in the HPLC mobile phase, two sets of resonances were observed.
One set of resonances is identical to that seen in the sample with no TFA present, and the largest difference is seen in the chemical shift of H-13, which is adjacent to the 12-NH2 group. When TFA is present at HPLC concentrations, approximately half of the compound is likely protonated (and thus charged) at 12-NH2, while the other half is not. Importantly, the isolation of S. arenicola strain RJA4486 from sediments collected in the temperate northeastern Pacific waters off the coast of B.C. appears to be a range extension for this species, which was thought to be confined to tropical habitats [4].
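The roughly half-protonated state invoked above for the 12-NH2 group can be rationalized with the Henderson–Hasselbalch relation: the protonated fraction is 1/(1 + 10^(pH − pKa)), which equals 0.5 exactly when the mobile-phase pH matches the group's pKa. The pKa and pH values below are assumed for illustration only; the actual pKa of the 12-NH2 group and the pH of the TFA-containing eluent were not determined here.

```python
# Fraction of an amine protonated at a given pH (Henderson-Hasselbalch):
#   protonated / (protonated + neutral) = 1 / (1 + 10**(pH - pKa))
def fraction_protonated(ph, pka):
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# Assumed values: if the TFA-containing mobile phase happens to sit near
# the (unknown) pKa of the 12-NH2 group, about half the molecules carry a
# positive charge, consistent with the doubled NMR resonances.
assumed_pka = 2.0       # hypothetical pKa of the aryl amine
mobile_phase_ph = 2.0   # hypothetical pH of the TFA-containing eluent
frac = fraction_protonated(mobile_phase_ph, assumed_pka)  # 0.5 when pH == pKa
```

Shifting the pH one unit below the pKa would push the protonated fraction above 90%, so a near-50:50 mixture implies the eluent pH sits close to the pKa under the assumptions above.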
Despite the similar 16S rRNA gene sequences of S. arenicola strains RJA3005 and RJA4486, our work reveals chemodiverse compounds from these two strains of an identical species. This work underscores the importance of the sampling and screening process used for bioprospecting; i.e., taxonomically identical strains should not be discarded, as these strains can produce strain-specific compounds [15,16]. This phenomenon can be explained by mobile biosynthetic gene clusters that are likely acquired by horizontal exchange among bacteria in an ecological niche to confer ecological fitness [17-20]. These acquired biosynthetic gene clusters can produce new chemical scaffolds or compounds, which can be exploited for medical or biochemical applications. The discovery of 3 and 4 as natural products provides an example of nature and human metabolic engineers using similar modifications of major pathways to sample biosynthetic chemical space. The work described herein reinforces the premise that exhaustive exploration of 'biosynthetically talented' bacterial strains is a productive way to find new natural products.
General Experimental Methods
Optical rotations were measured using a Jasco P-1010 polarimeter with sodium light (589 nm). UV spectra were recorded with a Waters 2998 photodiode array detector. 1H and 13C NMR spectra were recorded on a Bruker AV-600 spectrometer with a 5 mm CPTCI cryoprobe. 1H chemical shifts are referenced to the residual DMSO-d6 signal (δ 2.49 ppm) and 13C chemical shifts are referenced to the DMSO-d6 solvent peak (δ 39.5 ppm). Low- and high-resolution ESI-QIT-MS were recorded on a Bruker-Hewlett Packard 1100 Esquire-LC system mass spectrometer. Merck Type 5554 silica gel plates and Whatman MKC18F plates were used for analytical thin-layer chromatography. Reversed-phase HPLC purifications were performed on a Waters 1525 binary HPLC pump attached to a Waters 2998 photodiode array detector. All solvents used for HPLC were Fisher HPLC grade. S. arenicola strain RJA3005 was isolated from marine sediment collected in Papua New Guinea. Laboratory cultivation was carried out using nutrient-rich Marine Medium 1 (MM1: soluble starch, 10.0 g; yeast extract, 4.0 g; peptone, 18.0 g; sea water, 1.0 L; KBr, 0.001 g; FeSO4·7H2O, 0.0004 g). Solid media had agar added (2.0 g/L). S. arenicola strain RJA3005 was grown for 14 days on solid agar until a thick mat of orange leathery-textured bacterial colonies had formed. The agar and mature mycelia were cut into small squares and immersed in EtOAc for extraction. The EtOAc-soaked agar was filtered through paper to separate the agar from the supernatant (3×, ~2 L solvent per 8 L growth medium). The EtOAc portions were combined and the solvent removed in vacuo to give a thick red/brown oily solid, with large portions only soluble in MeOH until further separation. The crude oily solid was dissolved in a 9:1 mixture of H2O/MeOH prior to partitioning first between hexane/H2O and then CH2Cl2/H2O. MeOH was removed from the H2O layer in vacuo, giving a deep golden aqueous solution containing reddish-brown oily solids.
EtOAc was added to completely dissolve the oily solids and the EtOAc-soluble layer was removed and dried in vacuo, giving a viscous oil that was the active fraction in the phosphatase inhibition assay. This material was chromatographed on Sephadex LH-20 (eluent: 4:1 MeOH/CH2Cl2) and fractions were pooled by TLC similarities and bio-assayed. Active fractions were separated using step-gradient C18 reversed-phase flash chromatography (H2O to MeOH).
For genomic DNA extraction, the sample was mixed slowly by inversion and incubated for 10 min at 55 °C. NaCl was added to 1.25 M and mixed thoroughly by inversion before one equivalent of chloroform was added to precipitate proteins. The two-phase solution was mixed by inversion for 30 min at room temperature and centrifuged for 20 min at 6000 rpm. The upper aqueous phase was transferred to a new tube and 0.6 (v/v) equivalent of isopropanol was added to precipitate the DNA. The liquid was removed and 70% (v/v) ethanol was added to wash the DNA before dissolving the resulting pellet in TE buffer (pH 8.0). Sequencing was performed at the Chinese National Human Genome Center (Shanghai, China). A total of 1 µg of genomic DNA was fragmented by sonication and ~500 bp and ~300 bp fragments were recovered by agarose gel electrophoresis to construct libraries using the TruSeq DNA Sample Prep Kit-Set A (Illumina, San Diego, CA, USA). The libraries were then amplified by TruSeq PE Cluster Kit (Illumina, San Diego, CA, USA) and two libraries were sequenced on the Illumina HiSeq2000. The 300 bp and 500 bp libraries yielded 0.99 Gbp and 0.7 Gbp, respectively, for S. arenicola RJA3005 and 1.19 Gbp and 0.73 Gbp, respectively, for S. arenicola RJA4486. After sequence assembly using Velvet 1.2.03 [22], the final assembly consisted of 6.9 Mbp of non-redundant sequence across 109 contigs (coverage ~383×) for S. arenicola RJA3005 and 5.6 Mbp of non-redundant sequence across 140 contigs for S. arenicola RJA4486.
Gene analysis and functional annotation were performed using Glimmer 3.02 [23], combined with 2ndFind (http://biosyn.nih.go.jp/2ndfind/ accessed on 10 February 2019) and BlastP [24].
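A nominal coverage estimate like the one quoted above follows from dividing total sequenced bases by the assembly size. The sketch below applies this simple ratio to the RJA4486 library yields reported in the text (no coverage figure is stated for that strain); actual reported coverage also depends on read trimming and assembly details, so this is only a back-of-the-envelope estimate.

```python
def nominal_coverage(total_bases, genome_size):
    """Nominal (pre-assembly) coverage = total sequenced bases / genome size."""
    return total_bases / genome_size

# Library yields for S. arenicola RJA4486 from the text: 1.19 Gbp (300 bp
# library) + 0.73 Gbp (500 bp library) over a 5.6 Mbp draft assembly.
total_bases = (1.19 + 0.73) * 1e9   # 1.92 Gbp
coverage = nominal_coverage(total_bases, 5.6e6)  # roughly 343x nominal
```

The same arithmetic applied after trimming or to the subset of reads actually used in the assembly would give a different, typically higher or lower, figure.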
Bioinformatic Analysis
Bioinformatic programs were used at default settings. Fasta files produced during sequencing were uploaded to antiSMASH. All gene clusters predicted by antiSMASH were then visually inspected for complete or truncated gene clusters. Gene clusters were cross-referenced with MiBIG available in the antiSMASH shell. Core genes and key accessory genes were utilized for further BLASTP searches before assigning a natural product class and closest homologous gene cluster. Mauve alignments were completed utilizing the S. arenicola CNS-205 rifamycin gene cluster extracted from antiSMASH. The contigs containing partial rifamycin gene clusters of S. arenicola RJA4486 and RJA3005 were reorganized in alignment with the complete gene cluster using the Mauve 'Move Contigs' tool. The lists of genes available from the antiSMASH analysis and the genome sequencing were then aligned to confirm the incomplete sequencing status of the rifamycin gene cluster in S. arenicola RJA4486 and RJA3005.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules27113569/s1, 1D and 2D NMR spectra for compounds 3, 4, and 5; experimental details for X-ray diffraction analysis of 5 and additional secondary metabolite gene cluster details.
Conflicts of Interest:
The authors declare no conflict of interest.
Sample Availability: Samples of the compounds are not available from the authors.
Omicron subvariant BA.5 efficiently infects lung cells
The SARS-CoV-2 Omicron subvariants BA.1 and BA.2 exhibit reduced lung cell infection relative to previously circulating SARS-CoV-2 variants, which may account for their reduced pathogenicity. However, it is unclear whether lung cell infection by BA.5, which displaced these variants, remains attenuated. Here, we show that the spike (S) protein of BA.5 exhibits increased cleavage at the S1/S2 site and drives cell-cell fusion and lung cell entry with higher efficiency than its counterparts from BA.1 and BA.2. Increased lung cell entry depends on mutation H69Δ/V70Δ and is associated with efficient replication of BA.5 in cultured lung cells. Further, BA.5 replicates in the lungs of female Balb/c mice and the nasal cavity of female ferrets with much higher efficiency than BA.1. These results suggest that BA.5 has acquired the ability to efficiently infect lung cells, a prerequisite for causing severe disease, and indicate that the evolution of Omicron subvariants can result in partial loss of attenuation.
Later Omicron subvariants include BA.2.12.1, which was responsible for many cases in North America, South America and Europe from 03/2022 to 07/2022, and BA.4 and BA.5 13. While BA.4 was responsible for a subset of the cases in Europe from May to September, the subvariant BA.5 and its descendants dominated the pandemic in autumn of 2022 14. The S proteins of BA.4/BA.5 are identical at the amino acid level and, compared to BA.2.12.1, harbor shared and unique mutations in functionally relevant domains (Fig. 1A), including the receptor binding domain (RBD), which facilitates engagement of the cellular receptor ACE2. However, it is currently unknown whether these variants, like BA.1 and BA.2, exhibit inefficient lung cell entry.
Here, we show that in cell culture BA.5 infects lung cells with similar efficiency as B.1, a virus which circulated early in the COVID-19 pandemic. Furthermore, we demonstrate that BA.5 unlike BA.1 efficiently infects the nasal cavity in ferrets and lung tissue in mice, suggesting that BA.5 has an elevated capacity to spread in the respiratory tract and potentially to cause severe disease.
BA.4/BA.5 spike protein efficiently fuses cells
We first asked whether the BA.2.12.1 and BA.4/BA.5 S proteins exhibit altered cleavage at the S1/S2 site, which occurs in transfected and infected cells and is mediated by the host-cell protease furin 15,16. The S protein of SARS-CoV-2 variant B.1 exhibits an amino acid sequence identical to that of the Wuhan-Hu-1 S protein except for mutation D614G and was used as a control. We found that all S proteins studied were readily detectable in particle preparations, although levels of the BA.2 and BA.2.12.1 S proteins were reduced relative to the B.1 S protein.
Cleavage of BA.1 and particularly BA.2 and BA.2.12.1 S proteins was less efficient as compared to B.1 S protein (Fig. 1B, C), while BA.4/BA.5 S protein was cleaved with similar efficiency as B.1 S protein (Fig. 1B, C). Although an impact of S protein expression levels on our analysis of S protein cleavage cannot be excluded, these results suggest that the BA.4/BA.5 S protein exhibits increased cleavability relative to its counterparts in previously circulating Omicron subvariants. Augmented S protein cleavage was found to be associated with increased cell-cell fusion in the context of the Delta variant (B.1.617.2) 17, while cell-cell fusion of the Omicron subvariants BA.1 and BA.2 was reported to be reduced 9,11,12. Therefore, we next analyzed the capacity of BA.2.12.1 and BA.4/BA.5 S proteins to drive cell-cell fusion. Employing 293T effector cells transfected to express S protein and either 293T or A549-ACE2 target cells, we confirmed that cell-cell fusion driven by the S protein of variant B.1.617.2 was increased as compared to B.1 spike, while cell-cell fusion driven by the S proteins of BA.1 and BA.2 was reduced (Fig. 1D). Notably, the S protein of BA.2.12.1 showed an intermediate phenotype with 293T-ACE2 but not A549-ACE2 cells, potentially due to differences in ACE2 expression levels, while the S protein of BA.4/BA.5 drove cell-cell fusion with similar efficiency as B.1 S protein (Fig. 1D).
Fig. 1: Red areas indicate amino acids conserved in all strains analyzed. B Efficiency of S protein cleavage. Immunoblot analysis of pseudotyped particles containing the indicated S proteins was used to examine S protein particle incorporation and cleavage. S proteins and VSV-M (loading control) were detected by anti-S2 and anti-VSV-M antibodies, respectively. The results were confirmed in two separate experiments. C Quantification of S protein cleavage efficiency. Total S protein signals (bands indicating unprocessed [S0] and processed [S2] S protein) for each S protein were set to 100% and the relative proportions of S0 and S2 were determined. The average (mean) data from three biological replicates are shown. Error bars indicate SEM. D Spike protein-driven cell-cell fusion. 293T effector cells transiently expressing the indicated S proteins (or no S protein) along with the beta-galactosidase alpha fragment were mixed with either 293T target cells transiently expressing ACE2 and the beta-galactosidase omega fragment, or A549-ACE2 target cells transiently expressing the beta-galactosidase omega fragment. Subsequently, beta-galactosidase substrate was added and luminescence measured. Presented are the average (mean) data ± SEM of three biological replicates, each performed with four technical replicates. For all panels statistical significance was analyzed by two-tailed Student's t-tests with Welch correction (p > 0.05, not significant [ns]; p ≤ 0.05, *; p ≤ 0.01, **; p ≤ 0.001, ***); see also Extended Data.
These results indicate that the SARS-CoV-2 Omicron subvariants BA.4 and BA.5 might exhibit increased S protein cleavage at the S1/S2 site and ability to fuse lung cells relative to previously circulating Omicron subvariants BA.1 and BA.2.
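The band-based cleavage quantification described in the Fig. 1 legend (total S0 + S2 signal per lane set to 100%) reduces to a simple ratio of densitometry readings. The values below are hypothetical, not measurements from the blot.

```python
def cleavage_fractions(s0_signal, s2_signal):
    """Express unprocessed (S0) and processed (S2) band intensities as % of total."""
    total = s0_signal + s2_signal
    return 100.0 * s0_signal / total, 100.0 * s2_signal / total

# Hypothetical densitometry readings (arbitrary units) for one lane.
pct_s0, pct_s2 = cleavage_fractions(s0_signal=300.0, s2_signal=700.0)
# pct_s2 == 70.0 -> an efficiently cleaved S protein in this invented example
```

Because each lane is normalized to its own total, the resulting percentages are insensitive to differences in overall S protein expression between lanes, although, as noted in the text, expression differences can still affect the interpretation.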
Robust ACE2 binding of BA.4/BA.5 spike protein
In order to exclude the possibility that phenotypes observed in the cell-cell and virus-cell (below) fusion assays reflect alterations in binding to the cellular receptor ACE2, we next investigated whether the S proteins of BA.2.12.1 and BA.4/BA.5 bound to ACE2 with different efficiency as compared to the BA.1 and BA.2 S proteins. For this, we examined binding of ACE2 fused to the Fc portion of human immunoglobulin G to cells transfected to express S proteins, as previously reported 3. We found that all S proteins analyzed bound to ACE2 with comparable efficiency (Fig. 2A, Supplementary Fig. 1), indicating that differences in cell-cell fusion and virus-cell fusion (see below) were not due to altered ACE2 binding efficiency. Similarly, an anti-ACE2 antibody efficiently blocked Vero cell entry of rhabdoviral reporter particles pseudotyped with all S proteins studied, although inhibition of entry driven by S proteins of Omicron subvariants was slightly less efficient as compared to entry driven by B.1 S protein, and blockade of BA.4/BA.5 S protein-mediated entry was least efficient (Fig. 2B, Supplementary Fig. 1).
Augmented lung cell entry driven by the BA.4/BA.5 spike protein
We next analyzed whether the increased cell-cell fusion driven by BA.2.12.1 and particularly BA.4/BA.5 S proteins was associated with increased cell entry. For this, we employed pseudotyped particles (pp), which mirror key aspects of SARS-CoV-2 cell entry 18, and cell lines commonly used for SARS-CoV-2 research: 293T (human, kidney), Vero (African green monkey, kidney), A549-ACE2 (human, lung, stably expressing ACE2), Caco-2 (human, colon) and Calu-3 (human, lung). In line with previous reports, particles pseudotyped with BA.1 (BA.1pp) or BA.2 (BA.2pp) S protein were more efficient at entering 293T, Vero and A549-ACE2 cells, while entry into Caco-2 and Calu-3 cells was reduced as compared to B.1pp 3,10 (Fig. 2C, Supplementary Fig. 2). Further, we found that 293T, Vero, A549-ACE2 and Caco-2 cell entry of BA.2.12.1pp and BA.4/BA.5pp was comparable to that of BA.2pp and, for 293T, Vero and A549-ACE2 cells, was slightly more efficient than that measured for BA.1pp (Fig. 2C). In contrast, Calu-3 cell entry of BA.2.12.1pp and BA.4/BA.5pp was significantly more efficient (on average a 1.7-fold increase) than that measured for BA.1pp and BA.2pp (Fig. 2C).
Fig. 2: Cells transfected to express the indicated S proteins (or no S protein) were incubated with the indicated concentrations of soluble ACE2 harboring a C-terminal Fc-tag (derived from human immunoglobulin G; solACE2-Fc) and then incubated with an AlexaFluor-488-coupled secondary antibody. Subsequently, ACE2 binding was analyzed by flow cytometry and normalized against the assay background (signals for samples without soluble ACE2, set as 1). Right: Area under the curve (AUC) data for ACE2 binding. Both panels show average (mean) data ± SEM from three biological replicates (each with single samples). Please also see Supplementary Fig. 1. B Impact of ACE2 blockade on S protein-driven cell entry. Left: Vero cells were preincubated with different concentrations of anti-ACE2 antibody and subsequently inoculated with pseudoviruses bearing the indicated S proteins or VSV-G (or no S protein). Cell entry was assessed by measuring the activity of pseudovirus-encoded firefly luciferase in cell lysates at 16-18 h after inoculation and normalized against samples that were not exposed to anti-ACE2 antibody (set as 0% inhibition). Right: AUC data for ACE2 blockade. Both panels show average (mean) data ± SEM from three biological replicates (each with four technical replicates). C Cell entry mediated by S proteins. Cell entry was assessed by measuring the activity of pseudovirus-encoded firefly luciferase in cell lysates at 16-18 h after inoculation of cells with particles containing the indicated S proteins (or no S protein). The average (mean) data ± SEM from 6 to 12 biological replicates (each with four technical replicates) are presented, with entry standardized against B.1 (set as 1). Please also see Supplementary Fig. 2. For all panels statistical significance was analyzed by two-tailed Student's t-tests with Welch correction (p > 0.05, not significant [ns]; p ≤ 0.05, *; p ≤ 0.01, **; p ≤ 0.001, ***); see also Extended Data Table 2. AUC, area under the curve.
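AUC summaries of dose-response data like those in the Fig. 2 legend are commonly computed with the trapezoidal rule. The sketch below uses invented binding values, and integrating over log10(concentration) rather than raw concentration is an assumption here (a common choice for serially diluted reagents), not a detail stated in the text.

```python
import math

def trapezoid_auc(xs, ys):
    """Composite trapezoidal rule; xs must be sorted in ascending order."""
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2.0
               for i in range(len(xs) - 1))

# Hypothetical dose-response: fold-over-background binding signal at each
# solACE2-Fc concentration (ug/ml), integrated over log10(concentration).
concs = [0.01, 0.1, 1.0, 10.0]
binding = [1.0, 2.0, 8.0, 15.0]
log_concs = [math.log10(c) for c in concs]
auc = trapezoid_auc(log_concs, binding)  # single-number summary per S protein
```

Collapsing each curve to one AUC value per S protein is what allows the simple pairwise t-tests between variants described in the legend.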
No apparent differences in protease choice between the spike proteins of BA.1, BA.2 and BA.4/BA.5
The augmented entry of BA.2.12.1pp and BA.4/BA.5pp into Calu-3 cells (relative to BA.1pp and BA.2pp) might have been associated with a change in the relative dependence on the host-cell proteases cathepsin L and TMPRSS2 for S protein activation. However, inhibition experiments with protease inhibitors showed that this was not the case: BA.1pp, BA.2pp, BA.2.12.1pp and BA.4/BA.5pp exhibited comparable sensitivity to the cathepsin L inhibitor MDL28170, and similar results were obtained with the TMPRSS2 inhibitor camostat (Fig. 3A, B). Further, when both TMPRSS2 and cathepsin L were available for entry (Calu-3 and Caco-2 cells), pseudoparticles bearing Omicron S proteins were more sensitive to MDL28170 and less sensitive to camostat as compared to B.1pp (Fig. 3A, B), reflecting the previously noted preference of Omicron subvariants for cathepsin L. Collectively, these results indicate that the increased lung cell entry of BA.2.12.1pp and BA.4/BA.5pp (relative to BA.1pp and BA.2pp) was not due to changes in protease preference.
Deletion of H69 and V70 is required for enhanced lung cell entry driven by BA.4/BA.5 spike protein
BA.5 infects Calu-3 lung cells with high efficiency
None of the mice had to be euthanized due to severe clinical signs after infection, and no decrease in body weight was observed upon infection with any of the variants tested (Fig. 6B). However, analysis of infectious units and viral genome copies showed that BA.5 replicated in lungs about 1000-fold more efficiently than BA.1 (Fig. 6C, D). Somewhat surprisingly, BA.4 also replicated robustly in lung tissue, although roughly tenfold less efficiently than BA.5 (Fig. 6C, D). Finally, BA.5 induced expression of certain cytokines, including IL-6, with higher efficiency than BA.1, while BA.4 showed an intermediate phenotype (Fig. 6E). Collectively, these results indicate that BA.5, unlike the previously circulating Omicron subvariants BA.1 and BA.2, can efficiently replicate in lung tissue.
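The "about 1000-fold" and "roughly tenfold" comparisons above correspond to differences of 3 and 1 log10 units in mean lung titers, respectively. The titer values below are illustrative stand-ins, not data from Fig. 6.

```python
import math

def fold_difference(titer_a, titer_b):
    """Fold difference between two titers (e.g. genome copies per g of lung)."""
    return titer_a / titer_b

# Hypothetical mean lung titers (genome copies per gram of lung tissue).
ba5_titer = 1e8
ba1_titer = 1e5
fold = fold_difference(ba5_titer, ba1_titer)   # 1000-fold difference
log10_diff = math.log10(fold)                  # 3 log10 units
```

Working in log10 units is the usual convention for virus titers, since replicate-to-replicate variation is roughly multiplicative rather than additive.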
BA.5 replicates in the nasal epithelium of ferrets
Ferrets are naturally susceptible to SARS-CoV-2 infection, allow for virus amplification in the nasal epithelium and allow for contact transmission of the virus 19,20. We had previously found that ferret infection by the Omicron subvariant BA.1 was abortive, and none of the inoculated animals seroconverted 21. Therefore, we investigated whether inoculation of ferrets with BA.5 resulted in virus replication and disease (Fig. 6F). Virus replication was detected in the nasal cavity of all inoculated animals (Fig. 6G) and one animal seroconverted (Fig. 6H), demonstrating that BA.5 has acquired an increased replicative capacity in the upper respiratory tract of ferrets as compared to BA.1.
Discussion
Our results show that, unlike the previously circulating Omicron subvariants BA.1 and BA.2, the subvariant BA.5 efficiently enters human lung cells and replicates in the upper (ferrets) and lower (mice) respiratory tract. These results suggest that BA.5 might have an increased capacity to cause severe disease as compared to previously circulating Omicron subvariants. The fusion of SARS-CoV-2-infected cells with neighboring uninfected cells is driven by the S protein and results in the formation of syncytia, which might contribute to COVID-19 pathogenesis 22,23. The Omicron subvariants BA.1, BA.2 and BA.3 are less able to fuse cells as compared to previously circulating variants, including the Delta variant, and reduced cell-cell fusion might partially account for the reduced pathogenic potential of these Omicron subvariants as compared to previously circulating variants of concern 9,11,12,24. The S proteins of BA.2.12.1 and particularly BA.4/BA.5 showed increased cell-cell fusion as compared to the BA.1 and BA.2 S proteins, and it will be interesting to determine whether syncytium formation is increased in BA.2.12.1 and BA.4/BA.5 patients. Further, the efficiency of syncytium formation has been associated with the efficiency of S protein cleavage 17,25. It is therefore noteworthy that the presence of an additional arginine residue in the S1/S2 cleavage site that is found in certain BA.5 subvariants (exchange H681R) tended to have a more prominent effect on B.1 (exchange P681R) than on BA.4/BA.5 S protein cleavage and virus-cell fusion (Supplementary Fig. 4).
The Omicron subvariants BA.1 and BA.2 fail to efficiently infect lung cells, potentially due to inefficient usage of the protease TMPRSS2, and this phenotype might partially account for the reduced capacity of these variants to cause severe disease 3,9-12. The present study shows that BA.2.12.1pp and BA.4/BA.5pp entered lung cells more robustly and that this phenotype was dependent on H69Δ/V70Δ, mutations that were previously linked to increased infectivity 26,28, potentially due to differences in receptor and protease expression levels compared to the Calu-3 and Caco-2 cells examined in the present study. We speculate that augmented Calu-3 lung cell entry might be associated with use of certain attachment-promoting factors or evasion of restriction factors of the innate immune system. Indeed, initial experiments with amphotericin B, which rescues SARS-CoV-2 infection from blockade by the endo-/lysosomal restriction factors IFITM2/IFITM3 29-31, support this speculation. Efficient lung cell entry of BA.5 correlated with robust replication in the nasal cavity of ferrets and the lung of Balb/c mice. This correlation was not observed for BA.4. Despite robust Calu-3 cell entry of BA.4/BA.5pp, BA.4 infection of Calu-3 cells was low (i.e. similar to that measured for BA.1). Nevertheless, the virus replicated robustly in mouse lungs. The reason for this discrepancy is at present unknown, but one could speculate that a genetic determinant other than the S gene might limit viral spread in human but not mouse lung cells.
Our study reveals that BA.5 has acquired increased capacity to infect lung cells. However, it remains to be determined whether this translates into increased virulence. Recent reports provide initial insights. Two studies examining BA.5 infection in hamster and mouse models detected no apparent differences in lung infection and pathogenicity between BA.2 and BA.5, although competition experiments indicated greater replicative fitness of BA.5 relative to BA.2 32,33 . In contrast, two separate studies demonstrated augmented lung infection and higher pathogenicity of BA.5 as compared to BA.2 in hamster models, with BA.5 but not BA.2 infected animals losing weight and showing extensive lung damage 34,35 . Regarding BA.5 infection of humans, two studies examining patients in South Africa suggested that BA.5 infection was not associated with more severe disease as compared to infection with previously circulating Omicron subvariants 36,37 . It should be noted, however, that the South African population is relatively young and contains a high percentage of previously infected or vaccinated individuals. As a consequence, the impact of BA.5 on the health of older populations with lower levels of preexisting immunity might be more severe. Indeed, studies examining patients in Denmark 38 and Canada 39 reported that risk of hospitalization was increased for BA.5 as compared to BA.1 (Canada) and BA.2 (Denmark) infected patients, respectively.
In sum, the present study and recent reports show augmented lung infection and possibly pathogenicity of BA.5 relative to previously circulating Omicron subvariants, indicating that SARS-CoV-2 evolution might, at least in the short term, not result in attenuation.
ACE2 binding
293T cells were seeded in 6-well plates and transfected with expression plasmids for the corresponding SARS-CoV-2 S protein by calcium-phosphate precipitation. As negative control, cells were transfected with an empty plasmid. The medium was changed at 24 h after transfection. Medium was removed at 48 h after transfection, and the cells were resuspended in PBS and transferred to 1.5 ml reaction tubes before being pelleted by centrifugation. All centrifugation procedures were carried out at room temperature for 5 min at 600 × g. The supernatant was then aspirated, and the cells were rinsed in PBS containing 1% bovine serum albumin (BSA, PBS-B) and pelleted. The cell pellets were then resuspended in 250 µl PBS-B containing different concentrations of soluble ACE2-Fc (solACE2-Fc; Bio-Techne) and rotated for 60 min at 4°C using a Rotospin test tube rotator disk (IKA). Cells were pelleted, resuspended in 250 µl PBS-B containing goat anti-Human IgG (H + L) cross-adsorbed secondary antibody, Alexa Fluor™ 488 (1:200, Thermo Fisher Scientific, Catalog # A-11013), and rotated for 60 min at 4°C. Finally, the cells were washed in PBS-B, fixed for 30 min at room temperature in a 1% paraformaldehyde solution, washed again, and resuspended in 100 µl PBS-B before being analyzed with an ID7000 Spectral Cell Analyzer (Sony Biotechnology, San Jose, CA, USA). Mean channel fluorescence data were further analyzed using the ID7000 software.
Immunoblot
To investigate S protein cleavage and particle incorporation, vesicular stomatitis virus (VSV) pseudotypes bearing S proteins (codon-optimized, with a C-terminal truncation of 18 amino acid residues) were concentrated by high-speed centrifugation (13,300 rpm, 90 min, 4°C) through a sucrose cushion (20% w/v sucrose in PBS) and lysed in 2× sample buffer (0.03 M Tris-HCl, 10% glycerol, 2% SDS, 5% beta-mercaptoethanol, 0.2% bromophenol blue, 1 mM EDTA). Proteins were blotted onto nitrocellulose membranes (Hartenstein) after SDS-PAGE and blocked for 30 min in 5% BSA. After blocking, the membranes were incubated overnight at 4°C with primary antibodies reactive against SARS-CoV-2 S2 and against VSV-M (1:2000). The S2 antibody was diluted in 5% BSA and the VSV-M antibody in PBS-T containing 5% skim milk, and blots were washed three times with PBS-T for 10 min after each antibody incubation. Immunoblots were incubated with a homemade chemiluminescence solution (0.1 M Tris-HCl [pH 8.6], 250 µg/ml luminol, 0.1 mg/ml para-hydroxycoumaric acid, 0.3% hydrogen peroxide) and analyzed with the ChemoCam imaging system and ChemoStar Professional software (Intas Science Imaging Instruments). The ImageJ software (version 1.53C, https://imagej.nih.gov/ij/) was used to quantify protein bands. For the examination of S protein incorporation into VSV particles, total S protein signals (uncleaved S0 plus cleaved S2) were normalized against their respective VSV-M signals, and the resulting values were further normalized against the B.1 S protein (set as 1). For quantification of S protein cleavage, the total S protein signal (S0 plus S2) was set to 100% for each S protein, and the contribution of S0 and S2 to the overall signal was determined.
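The two normalization steps described above (incorporation relative to VSV-M and to the B.1 reference, and percent cleavage) can be sketched in a few lines; the band intensities below are hypothetical, not values from the paper:

```python
# Hypothetical densitometry values (arbitrary units) for two S proteins.
b1 = {"s0": 30.0, "s2": 70.0, "vsv_m": 50.0}    # reference S protein (B.1)
ba5 = {"s0": 20.0, "s2": 60.0, "vsv_m": 40.0}   # S protein under study

def total_over_m(bands):
    # Total S signal (uncleaved S0 + cleaved S2) normalized to VSV-M.
    return (bands["s0"] + bands["s2"]) / bands["vsv_m"]

ref = total_over_m(b1)                          # B.1 is set to 1
rel_incorporation = total_over_m(ba5) / ref     # relative particle incorporation
pct_cleaved = 100.0 * ba5["s2"] / (ba5["s0"] + ba5["s2"])  # S2 share of total S
```

With these invented values, `rel_incorporation` is 1.0 and `pct_cleaved` is 75.0.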
Production of VSV pseudotypes
Vesicular stomatitis virus particles pseudotyped with the SARS-CoV-2 S proteins were produced as described previously 41 . Using the calcium-phosphate method, 293T cells were transfected with plasmids encoding S protein or VSV-G, or an empty plasmid (control). VSV-G-transcomplemented VSV*ΔG(FLuc), a replication-deficient vesicular stomatitis virus (VSV) that lacks the genetic information for its own glycoprotein (VSV-G) and instead codes for two reporter proteins, enhanced green fluorescent protein (eGFP) and firefly luciferase (kindly provided by Gert Zimmer), was inoculated onto cells 30 h after transfection 43 . The inoculum was removed after 1 h of incubation and the cells were rinsed in phosphate-buffered saline (PBS). After that, all cells received DMEM medium with anti-VSV-G antibody (1:1000, culture supernatant from I1-hybridoma cells; ATCC no. CRL-2700) to neutralize residual VSV-G, with the exception of cells expressing VSV-G, which received medium without antibody. The culture supernatant was taken after 16-18 h of incubation, cleared from cellular debris by centrifugation at 4000 × g for 10 min, aliquoted, and stored at −80°C until further use.
Transduction of target cells
Target cells seeded in 96-well plates were inoculated with equal volumes of pseudotypes, and transduction efficiency was assessed by detecting luciferase activity in cell lysates at 16-18 h after transduction. For this, cells were lysed for 30 min at room temperature in PBS containing 0.5% Triton X-100 (Carl Roth). Subsequently, luciferase substrate (Beetle-Juice, PJK) was added to cell lysates in white 96-well plates and luminescence was measured using a Hidex Sense plate luminometer (Hidex). For experiments investigating the impact of ACE2 blockade on S protein-driven cell entry, Vero cells were incubated for 30 min at 37°C with twofold serial dilutions of anti-ACE2 antibody (recombinant anti-ACE2 neutralizing antibody; Sino Biological, Cat: 10108-MM36), starting at 10 µg/ml, prior to inoculation with pseudotypes.
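As a rough sketch of the readout, the twofold dilution series starting at 10 µg/ml and the per-dilution entry efficiency (luminescence relative to a no-antibody control) could be computed as follows; all RLU values here are invented for illustration:

```python
def twofold_dilutions(start, n):
    # n twofold serial dilutions starting at `start` (e.g. µg/ml).
    return [start / 2 ** i for i in range(n)]

def percent_entry(rlu_with_antibody, rlu_control):
    # Entry efficiency as percent of the antibody-free control well.
    return 100.0 * rlu_with_antibody / rlu_control

concs = twofold_dilutions(10.0, 5)   # [10.0, 5.0, 2.5, 1.25, 0.625] µg/ml
control_rlu = 80000.0                # hypothetical no-antibody luminescence
entry = [percent_entry(r, control_rlu)
         for r in [4000.0, 16000.0, 40000.0, 64000.0, 76000.0]]
# entry -> [5.0, 20.0, 50.0, 80.0, 95.0] percent of control
```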
For experiments addressing the effects of the antifungal amphotericin B (AmphoB) or the protease inhibitors MDL28170 (inhibitor of cathepsin L) and camostat mesylate (camostat, TMPRSS2 inhibitor), target cells were incubated for 1 h in medium containing the respective compound or solvent (AmphoB, water; MDL28170 and camostat, DMSO) prior to inoculation with pseudotypes.
Quantitative fusion assay
293T effector cells grown to 75% confluency in 12-well plates were cotransfected with expression plasmids for the respective S protein or empty vector (1.5 µg/well) and the beta-galactosidase alpha fragment (0.5 µg/well) using Lipofectamine 2000 (Thermo Fisher Scientific) according to the manufacturer's instructions. Subsequently, effector cells were washed, resuspended in 500 µl and added to 293T target cells (96-well format, 100 µl/well, four technical replicates) that were transfected with plasmids encoding ACE2 (0.1 µg/well) and the beta-galactosidase omega fragment (0.1 µg/well), or A549-ACE2 target cells (96-well format, 100 µl/well, four technical replicates) that were transfected with plasmid encoding the beta-galactosidase omega fragment (0.1 µg/well). Beta-galactosidase substrate (Gal-Screen, Thermo Fisher Scientific) was added (100 µl/well) after an additional 24 h of incubation, and samples were incubated for 90 min in the dark at room temperature before they were transferred into white 96-well plates and luminescence was measured using a Hidex Sense plate luminometer (Hidex).
SARS-CoV-2 infection of cell lines
Vero E6 cells were seeded in 6-well plates at 1.5 × 10^5 cells/well and Calu-3 cells at 3 × 10^5 cells/well. Cultured cells were infected with early passage virus stocks at an MOI of 0.01 for 1 h at 37°C. Supernatants were harvested at the indicated time points and virus was quantified by plaque titration on Vero E6 cells using a previously published protocol 44 . Isolates B.1, BA.1 and BA.2 were from the in-house strain collection of Charité. Isolates BA.4 and BA.5 were obtained from the WHO BioHub resource.
Infection of mice
Female BALB/c mice of 6-8 weeks of age were obtained from Charles River Laboratories. Mice were maintained in the Animal Care Unit at the University of Iowa under standard conditions (lighting: 12 h light/12 h dark cycle; humidity: 30-70%; temperature: 20-26°C), in accordance with the Guide for the Care and Use of Laboratory Animals (https://grants.nih.gov/grants/olaw/Guide-for-the-care-and-use-of-Laboratory-animals.pdf, page 44). Mice were randomly assigned to different groups, with numbers per group sufficient to obtain statistical significance.
Mice were anaesthetized with ketamine-xylazine and infected intranasally with 10^5 PFU of Omicron variants (BA.1: EPI_ISL_7171744; BA.4: NR-56806, BEI; BA.5: NR-58620, BEI) in a total volume of 50 μl DMEM. Animal weight and health were monitored daily. All mouse experiments with SARS-CoV-2 were performed in a biosafety level 3 (BSL3) laboratory at the University of Iowa.
Quantification of viral titers in infected mice
At the indicated times, mice were euthanized and transcardially perfused with PBS. Lungs were collected and homogenized before clarification by centrifugation and titering. Tissue homogenates were serially diluted in DMEM. Twelve-well plates of Vero-hACE2-TMPRSS2 cells were inoculated at 37°C in 5% CO2 for 1 h and gently rocked every 15 min. After removing the inocula, plates were overlaid with 0.6% agarose containing 2% FBS. After 2 days, overlays were removed and plaques were visualized by staining with 0.1% crystal violet. Viral titers were quantified as PFU per ml of tissue homogenate.
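A minimal sketch of the underlying titer arithmetic from a countable well (the plaque count, dilution and inoculum volume below are hypothetical; the paper does not state the inoculum volume):

```python
def pfu_per_ml(plaques, dilution, inoculum_ml):
    """Titer from one countable well: plaques / (dilution * inoculated volume)."""
    return plaques / dilution / inoculum_ml

# e.g. 35 plaques in the well inoculated with 0.1 ml of a 10**-4 dilution:
titer = pfu_per_ml(35, 1e-4, 0.1)   # 3.5e6 PFU/ml
```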
Infection of ferrets
Five female ferrets were kindly provided by the Paul-Ehrlich-Institut (PEI, Langen, Germany) and housed in multiple connected cage units. The animals were intranasally inoculated with 200 µl of SARS-CoV-2 Omicron BA.5 (EPI_ISL_12268493.2) at a concentration of 10^5.0625 TCID50/ml (calculated by back-titration of the inoculum). Ferrets were sampled via nasal washings 2 days before inoculation, on 4 consecutive days after inoculation (starting at 1 dpi), and every 2 days from 5 to 8 dpi. In addition, body weight was determined. Nasal washings were performed under short-term isoflurane inhalation anesthesia via administration of 750 µl PBS directly into each nostril and collection of the reflux. The physiological condition of the ferrets was monitored daily by trained animal caretakers or a veterinarian.
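Fractional exponents such as 10^5.0625 TCID50/ml are the kind of value produced by endpoint-dilution estimators. The paper does not specify which estimator was used for the back-titration, but a Spearman-Kärber sketch (with invented well counts) illustrates how such a titer arises:

```python
def log10_tcid50(positive_fractions, first_log10_dilution=-1.0, step=1.0):
    """Spearman-Karber endpoint estimate.

    positive_fractions: fraction of positive wells per dilution, ordered from
    the lowest dilution, which is assumed to be fully positive.
    Returns log10 TCID50 contained in one inoculum volume.
    """
    endpoint = first_log10_dilution + step / 2.0 - step * sum(positive_fractions)
    return -endpoint

# Hypothetical back-titration: all wells positive through 10**-5,
# half positive at 10**-6, none beyond.
log_titer = log10_tcid50([1, 1, 1, 1, 1, 0.5, 0, 0])   # 6.0 per inoculum volume
```

To express the result per ml, one would further add log10(1/volume) for the volume plated per well.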
Quantification of viral RNA in infected mice and ferrets
Mice. Total RNA was extracted from tissues using TRIzol (Invitrogen) according to the manufacturer's protocol. Following DNase treatment, 1 μg of total RNA was used as a template for first-strand cDNA synthesis using the SuperScript IV RT system (Invitrogen). The resulting cDNA was subjected to amplification of selected genes by real-time quantitative PCR using Power SYBR Green PCR Master Mix (Applied Biosystems). Average values from duplicates of each gene were used to calculate the relative abundance of transcripts normalized to HPRT and presented as 2^−ΔCT. The primers used for cytokines and chemokines were reported previously 45 . For detection of viral genomes, the following primers were used to amplify transcripts for the N protein: 2019-nCoV_N1-F: 5′-GACCCCAAAATCAGCGAAAT-3′; 2019-nCoV_N1-R: 5′-TCTGGTTACTGCCAGTTGAATCTG-3′.
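The 2^−ΔCT computation described above (duplicate CT values averaged, normalized to HPRT) can be sketched as follows; the CT values are hypothetical:

```python
def two_power_minus_dct(ct_gene_duplicates, ct_hprt_duplicates):
    """Relative transcript abundance normalized to HPRT, presented as 2**-dCT.

    Each CT is taken as the mean of technical duplicates.
    """
    mean = lambda xs: sum(xs) / len(xs)
    dct = mean(ct_gene_duplicates) - mean(ct_hprt_duplicates)
    return 2.0 ** -dct

# Hypothetical CTs: gene of interest amplifies 3 cycles after HPRT.
rel = two_power_minus_dct([25.0, 25.2], [22.0, 22.2])   # dCT = 3.0 -> 0.125
```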
Ferrets. One hundred microliters of the collected nasal washes were used for nucleic acid extraction with the NucleoMag Vet kit (Macherey-Nagel). Viral genomes were detected and quantified by quantitative real-time polymerase chain reaction (real-time RT-qPCR). The target sequence for specific amplification was the viral RNA-dependent RNA polymerase gene (WHO. Coronavirus disease (COVID-19) technical guidance: Laboratory testing for 2019-nCoV in humans. Available online: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/technical-guidance/laboratory-guidance). In order to calculate viral genome copy numbers per ml, a standard dilution series with a known copy number concentration (determined by droplet digital PCR) was included in each PCR run.
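A sketch of how such a standard dilution series converts Cq values into genome copies: fit a line to Cq versus log10 copies for the standards, then invert it for the samples. The standard values below are hypothetical, and further scaling by extraction input and elution volumes would be needed to obtain copies per ml:

```python
def fit_standard_curve(log10_copies, cq):
    # Ordinary least squares for cq = slope * log10(copies) + intercept.
    n = len(cq)
    mx = sum(log10_copies) / n
    my = sum(cq) / n
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, cq))
    slope = sxy / sxx
    return slope, my - slope * mx

def copies_from_cq(cq, slope, intercept):
    # Invert the standard curve for an unknown sample.
    return 10.0 ** ((cq - intercept) / slope)

# Hypothetical standard series with known copy numbers (10^6 ... 10^3):
slope, intercept = fit_standard_curve([6, 5, 4, 3], [20.1, 23.4, 26.7, 30.0])
sample_copies = copies_from_cq(26.7, slope, intercept)   # ~1e4 genome copies
```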
Statistics
Microsoft Excel (as part of the Microsoft Office software package, version 2019, Microsoft Corporation) and GraphPad Prism 8 version 8.4.3 (GraphPad Software) were used to analyze the data. The tests used to determine statistical significance are indicated in the figure legends.
Ethics committee approval
All mouse studies were approved by the University of Iowa Animal Care and Use Committee and meet stipulations of the Guide for the Care and Use of Laboratory Animals. The ferret infection study was evaluated by the responsible ethics committee of the State Office of Agriculture, Food Safety, and Fishery in Mecklenburg-Western Pomerania (LALLF M-V) and gained governmental approval under the registration number LVL MV TSD/7221.3-2-005/21.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
The sequences of SARS-CoV-2 spike proteins were obtained from GISAID database (https://gisaid.org/). All unprocessed data generated in this study are provided in the Supplementary Information. Any additional information required to reanalyze the data reported in this paper is available on request. Source data are provided with this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2023,
"sha1": "036ad74356e2393422f5eea30f7a3f91429a42fe",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/s41467-023-39147-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "51fc0c175a1703540b1dd02ae41927cad53e83e9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
In this article, we study the obstructions to the local-global principle for homogeneous spaces with connected or abelian stabilizers over finite extensions of the field $\mathbb{C}((x,y))$ of Laurent series in two variables over the complex numbers and over function fields of curves over $\mathbb{C}((t))$. We give examples that prove that the usual Brauer-Manin obstruction is not enough to explain the failure of the local-global principle, and we then construct a variant of this obstruction using torsors under quasi-trivial tori which turns out to work. At the end of the article, we compare this new obstruction to the descent obstruction with respect to torsors under tori. For that purpose, we use a result on towers of torsors, that is of independent interest and therefore is proved in a separate appendix.
Introduction
Consider a number field K. Recall that a class of K-varieties F is said to satisfy the local-global principle if any variety Z ∈ F that has points in every completion of K has in fact a rational point over K. The most classical example of such a class is given by the class of quadrics over K (Hasse-Minkowski Theorem).
Some varieties fail to satisfy the local-global principle. In order to explain those failures, given any K-variety Z, Manin introduced in 1970 a subset Z(A K ) Br of the set of adelic points Z(A K ) that always contains the set of rational points Z(K). This allowed him to define a weakening of the local-global principle, the so-called Brauer-Manin obstruction: the Brauer-Manin obstruction is said to be the only obstruction to the local-global principle for a given class of K-varieties F if any variety Z ∈ F for which Z(A K ) Br = ∅ has in fact a K-rational point.
Since Manin's contribution, various classes have been proved to satisfy or to fail the previous condition. One of the most important examples was given in 1981 by Sansuc (cf. [San81]), who proved that the Brauer-Manin obstruction is the only obstruction to the Hasse principle for principal homogeneous spaces under connected linear algebraic groups. This result was then extended in 1996 to all homogeneous spaces under connected linear algebraic groups with connected stabilizers by Borovoi (cf. [Bor96]).
Much more recently, there has been a considerable interest in the study of similar questions for various two-dimensional fields naturally arising in geometry. Consider a field K of one of the following two types: (a) the function field of a smooth projective curve X over C((t)); (b) the fraction field of a local, normal, henselian, two-dimensional, excellent domain A with algebraically closed residue field of characteristic 0 (for instance, K can be any finite extension of the Laurent series field C((x, y)) in two variables over the complex numbers). In this case, X will stand for Spec(A).
The field K then tends to have an arithmetic behaviour similar to that of the usual global fields. In case (a), one can define a local-global principle for the field K by considering its completions with respect to the valuations coming from the set X (1) of closed points of the curve X. Colliot-Thélène/Harari (cf. [CTH15, § §8.2, 10.2]) then defined a Brauer-Manin obstruction to this local-global principle and proved that it is the only obstruction for principal homogeneous spaces under connected linear algebraic groups.
In case (b), one can define a local-global principle for the field K by considering its completions with respect to the valuations coming from the set X (1) of codimension 1 points of X. The first-named author (cf. [Izq19, §4.1]) then introduced a Brauer-Manin obstruction to this local-global principle and proved that it is the only obstruction for principal homogeneous spaces under connected linear algebraic groups.
Moreover, in [CTPS16,§2.3], Colliot-Thélène/Parimala/Suresh defined other reciprocity obstructions for fields of types (a) and (b), which turn out to be stronger in case (b) by [Izq19,Cor. 4.4]. In other words, every counter-example to the local-global principle explained by the Brauer-Manin obstruction introduced in [Izq19] can also be explained by the obstructions of [CTPS16].
The previous results show that principal homogeneous spaces over fields of types (a) and (b) satisfy properties similar to those proved by Sansuc for principal homogeneous spaces over number fields. The present article aims at investigating if these properties extend to the case of homogeneous spaces with connected stabilizers, as it was proved by Borovoi in the case of number fields. In particular, since we intend to use the Brauer-Manin obstructions defined in [CTH15] and [Izq19], we will always consider adelic points with respect to the set of places X (1) .
As we will explain at the beginning of Section 4, one can easily find examples of homogeneous spaces Z under SL n with toric stabilizers that have no rational points but for which the Brauer-Manin set Z(A K ) Br is non-empty (see Section 2 for precise definitions). This follows from the failure of the local-global principle with respect to X (1) for central simple algebras over fields of types (a) and (b) (cf. [CTH15,§2.3] and [Izq19, §2]). So, in order to understand the obstructions to the local-global principle for such homogeneous spaces, one needs to impose extra local conditions on X.
This is easy to do. Indeed, the Brauer-Manin obstruction we are using here only takes into account some of the completions of the field K: in case (a), we are only considering completions with respect to the valuations coming from the closed points of the curve X, while in case (b), we are only considering completions with respect to the valuations coming from prime ideals of height one in the domain A. However, the field K has more valuations, and hence it is natural to ask whether a homogeneous space Z/K with connected stabilizers for which the Brauer-Manin set Z(A K ) Br is non-empty and that has points in all completions of K always has a rational point. The first main theorem of the present article gives a negative answer to this question: Theorem 1.1 (Consequence of Theorem 4.1 and Remark 4.2). In each of the cases (a) and (b), there exists a field K and a homogeneous space Z/K under SL n,K for some n ≥ 1 with toric stabilizers for which the Brauer-Manin set Z(A K ) Br is non-empty, that has points in all completions of K, but that has no K-rational points.
For that reason, one should go beyond the Brauer-Manin obstruction in order to understand the failure of the local-global principle for homogeneous spaces over fields of types (a) and (b). A natural way to do so consists in combining the Brauer-Manin obstruction with another very usual one, the descent obstruction. While it is known that, for smooth and geometrically integral varieties over number fields, the descent obstruction with respect to torsors under connected linear groups does not carry more information than the Brauer-Manin obstruction with respect to the whole Brauer group (cf. [Har02]), the situation turns out to be completely different over fields of types (a) and (b):

Theorem 1.2 (Consequence of Theorem 5.3 and Proposition 2.2). Let K be a field of type (a) or (b) as above. Let Z be a homogeneous space under a connected linear group G with connected geometric stabilizers. Assume that Z has points in every completion of K with respect to a discrete valuation. Then there exists a torsor W → Z under a quasi-trivial torus T such that the following statements are equivalent: (i) Z has a K-rational point; (ii) W (A K ) Br(W ) ≠ ∅.

The torsor W is obtained by using some results in [DLA19], but it can be explicitly given using Galois descent. Moreover, while proving Theorem 1.2, we will see that the assumption that Z has points in every completion of K with respect to a discrete valuation can be replaced by the assumption that Z has points in an explicit finite number of completions of K. We will also see that one needs to take into account not the whole Brauer group of W in the condition W (A K ) Br(W ) ≠ ∅, but only a subquotient that turns out to be finite. Hence, even if we do not know whether all the involved constructions are algorithmically computable, Theorem 1.2 provides finitely many explicit conditions to find out whether the homogeneous space Z has rational points.
At the end of the article, we compare the obstruction of Theorem 1.2, which combines the Brauer-Manin obstruction and the descent obstruction with respect to torsors under quasi-trivial tori, to the descent obstruction with respect to torsors under general tori (see Theorem 6.4). We deduce:

Theorem 1.3 (Consequence of Corollary 6.5 and Proposition 2.2). Let K be a field of type (a) or (b) as above. Let Z be a homogeneous space under a connected linear group G with connected geometric stabilizers. Assume that Z has points in every completion of K with respect to a discrete valuation and that the subset of Z(A K ) given by the descent obstruction with respect to torsors under general tori (as defined in Section 6) is non-empty. Then Z has a K-rational point.
The proof of this result relies on a theorem of independent interest about towers of torsors stating the following. If K is a field of characteristic 0, G is a connected linear K-group with trivial geometric Picard group, T is a K-torus and X is a geometrically integral K-variety, then every T -torsor over a G-torsor over X admits a structure of a torsor over X under a certain extension of G by T . This result will be proved in Appendix A.
Notations and preliminaries
In this section we fix the notations that will be used throughout this article.
Fields

Let F be an algebraically closed field of characteristic 0, and let A be a local, integral, normal, henselian, excellent domain with residue field F and fraction field E. Set S := Spec(A) and let s be the closed point of S. Consider an integral, regular, 2-dimensional scheme X endowed with a projective surjective morphism p : X → S. Let K be the function field of X , let X 0 be the special fiber of p and set X := X \ X 0 = p^−1(S \ {s}).
Assumptions. In the sequel, we will assume that X 0 is a strict normal crossings divisor in X and that we are in one of the two following cases: (a) (Semi-global case). The ring A is a discrete valuation ring, all fibers of p are 1-dimensional, and the generic fiber is smooth and geometrically integral.
(b) (Local case). The ring A is 2-dimensional and p is birational. In particular, X = S \ {s}.
Throughout the article, we will say that the field K is a field of type (a) or (b) respectively. Observe that these types of fields are stable under finite extensions. Indeed, if L/K is a finite extension, then L is a semi-global field in the sense of (a) (resp. a local field in the sense of (b)) if, and only if, K is a semi-global field in the sense of (a) (resp. a local field in the sense of (b)).
Example 2.1. (i) In the semi-global case, one can take A = C[[t]], let X be a smooth projective geometrically integral curve over E = C((t)) and X be a regular model of X whose special fiber has strict normal crossings. The field K is then a finite extension of E(x).
(ii) In the local case, one can let K be a finite extension of C((x, y)), let S be the normalization of Spec(C[[x, y]]) in K and let X be a desingularization of X whose special fiber has strict normal crossings.
Local-global principle and Brauer-Manin obstruction

A place of K is a discrete valuation of rank 1 of K. Given a subset Ω of the set Ω K of all places of K, we say that a K-variety Z satisfies the local-global principle with respect to Ω if Z(K v ) ≠ ∅ for every v ∈ Ω implies Z(K) ≠ ∅, where K v denotes the completion of K at the place v.
There are mainly three different natural choices for the set Ω in this context: (a) the set Ω K of all places of K; (b) the set X (1) of codimension 1 points of X ; (c) the set X (1) of codimension 1 points of X.
Whatever the choice, one can find varieties that fail the local-global principle. That is why one usually tries to introduce obstructions that explain such failures. Fix a K-variety Z. Given a regular model Z of Z over an open subset U of X, define the set of its adelic points Z(A K ) as the restricted product of the sets Z(K v ) for v ∈ Ω with respect to the sets of local integral points of the model. Given a subgroup B of Br(Z), the Brauer-Manin pairing cuts out a subset Z(A K ) B of Z(A K ) that contains Z(K). In the sequel, the group B will often be chosen to be B(Z), the subgroup of locally constant elements of Br(Z)/Br(K), that is, those elements whose localization at each place v ∈ Ω comes from Br(K v ). This group has already been used in the articles [CTH15], [Izq17] and [Izq19], where it was proved that the Brauer-Manin obstruction with respect to B(Z) is the only obstruction to the local-global principle for principal homogeneous spaces under connected linear groups over K.
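In display form, the standard constructions alluded to here are the Brauer-Manin pairing and the associated obstruction set (a sketch of the usual definitions, with inv_v denoting the local invariant maps of the fields K_v):

```latex
% Brauer-Manin pairing and the associated obstruction set (standard sketch).
\[
  Z(\mathbf{A}_K) \times B \longrightarrow \mathbf{Q}/\mathbf{Z}, \qquad
  \bigl((P_v)_v,\ \alpha\bigr) \longmapsto
    \sum_{v \in \Omega} \operatorname{inv}_v \alpha(P_v),
\]
\[
  Z(\mathbf{A}_K)^{B} \;=\; \Bigl\{ (P_v)_v \in Z(\mathbf{A}_K) \;:\;
    \sum_{v \in \Omega} \operatorname{inv}_v \alpha(P_v) = 0
    \ \text{for all } \alpha \in B \Bigr\},
\]
so that $Z(K) \subseteq Z(\mathbf{A}_K)^{B} \subseteq Z(\mathbf{A}_K)$.
```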
Tate-Shafarevich groups

As explained in the previous paragraph, the local-global principle over the field K can be defined with respect to different sets of places. For that reason, in the sequel, when we are given a Galois module M over K and an integer i ≥ 0, we will make use of the following three Tate-Shafarevich groups, one for each of the three sets of places Ω listed above:

Ш i Ω (M) := ker( H i (K, M) → ∏ v∈Ω H i (K v , M) ).

Algebraic groups and homogeneous spaces

For a linear algebraic K-group G, we use the following notations:
• D(G) is the derived subgroup of G;
• G • is the neutral connected component of G;
• G u is the unipotent radical of G • (it is a unipotent characteristic subgroup);
• Ĝ is the Galois module of the geometric characters of G.
A torus T is said to be quasi-trivial if T̂ is an induced Gal(K̄/K)-module.
In the sequel, we will often be considering homogeneous spaces of a connected linear group G with geometric stabilizer H̄ satisfying the following hypothesis:

G ss is simply connected and H̄ torf is abelian. (2.1)

This is the same hypothesis used by Borovoi in [Bor96]. In particular, since for connected H̄ we have that H̄ torf = H̄ tor is abelian, the following result follows immediately from [Bor96, Lem. 5.2].
Proposition 2.2. Let Z be a homogeneous space of a connected linear group with connected geometric stabilizers. Then Z is a homogeneous space of a connected linear K-group G with geometric stabilizer H̄ satisfying (2.1).
Let Z be a homogeneous space under a linear connected K-group G with geometric stabilizer H̄. Following [DLA19, §2], we associate to Z a gerbe M Z and an injective morphism of gerbes M Z → TORS(G), where TORS(G) denotes the trivial gerbe of torsors under G. The gerbe M Z is known as the Springer class of Z and can be regarded as a class in a certain non-abelian 2-cohomology set associated to the geometric stabilizer H̄. It corresponds to the obstruction for Z to be dominated by a principal homogeneous space of G (i.e. the class is neutral if and only if such a dominating torsor exists). See [DLA19] for more details.
Assume now that G and H̄ satisfy (2.1). Since the subgroup H̄ ssu is characteristic in H̄, we may also consider the induced gerbe M torf Z which, since H̄ torf is abelian, defines a natural K-form H torf of H̄ torf (cf. for instance [Bor93, §1.7]), and M torf Z can then be regarded as a class η torf ∈ H 2 (K, H torf ).
We will use in the sequel the following results on these particular base fields and their corresponding nonabelian cohomology sets for semisimple groups.
Proposition 2.3. Let K be a field of type (a) or (b). Then K has cohomological dimension 2, and for each central simple algebra A over K, its index and its period are the same. Moreover, every principal homogeneous space under a semisimple simply connected K-group has a rational point.

Lemma 2.4. Let K be a field of type (a) or (b). Let L be a K-lien (or K-kernel) whose underlying K̄-group Ḡ satisfies Ḡ = Ḡ ssu. Then every class in H 2 (K, L) is neutral.
Proof. By [Bor93, Prop. 4.1], we may assume that Ḡ is reductive, hence semisimple. Then [Bor93, Prop. 3.1] tells us that there exists a neutral class in H 2 (K, L), so that we can regard H 2 (K, L) as H 2 (K, G) for some K-form G of Ḡ. The lemma is then a direct consequence of results by González-Avilés. Indeed, by Proposition 2.3 and [GA12, Ex. 5.4(vi)], the field K is "of Douai type". This allows us to apply [GA12, Thm. 5.8(ii)], which tells us that a class in H 2 (K, L) is neutral if and only if its canonical image in H 2 (K, G tor ) is trivial. But G tor = 1 and the result follows. □

In this section we study a class of homogeneous spaces for which we can prove that the Brauer-Manin obstruction is the only obstruction to the local-global principle. The strategy is to use the validity of this assertion for principal homogeneous spaces and slice our homogeneous space in order to reduce ourselves to this case.
We start with a case where there is no need for any obstruction (and not even a local-global principle!). Recall that fields of type (a) and (b) were defined at the beginning of Section 2.
Proposition 3.1. Let K be a field of type (a) or (b). Let Z be a homogeneous space under a linear group G with geometric stabilizer H̄ satisfying (2.1). Assume moreover that G = G^{ssu} and H̄ = H̄^{ssu}. Then Z has a rational point.
Proof. We first prove that we may assume G = G^{ss}. Indeed, using [Bor96, Lem. 3.1], we may consider the map Z → Z/G^u, where Z/G^u is a homogeneous space of G^{ss} and the fibers are homogeneous spaces of G^u. Now, the Springer class of such a fiber lies in H²(K, U) for a certain unipotent K-lien (or K-kernel) U, which only contains neutral classes by [Bor93, Cor. 4.2]. This implies that the fiber is dominated by a torsor under G^u, which clearly has rational points since H¹ is trivial for unipotent groups. Thus, if Z/G^u has rational points, Z has rational points as well.
Assume then that G = G^{ss} and let us start with the case of trivial stabilizers. By Proposition 2.3, we know that principal homogeneous spaces of G = G^{ss} always have rational points when G^{ss} is simply connected. The result for non-trivial stabilizers follows from Lemma 2.4, since it implies that every such homogeneous space is dominated by a G-torsor. □

We now allow some tori to appear, with a technical hypothesis that ensures that the Brauer-Manin obstruction controls the local-global principle.
Theorem 3.2. Let K be a field of type (a) or (b). Let Z be a homogeneous space under a connected linear group G with geometric stabilizer H̄ satisfying (2.1). Assume that the natural arrow H̄^{torf} → Ḡ^{tor} is injective. Then Z(A_K)^{B(Z)} ≠ ∅ implies Z(K) ≠ ∅.

Proof. Since G acts on Z, we may consider the quotient variety Z′ = Z/G^{ssu}. Since H̄^{torf} → Ḡ^{tor} is injective, we know by [Bor96, Lem. 3.1] that Z′ is a homogeneous space of G^{tor} with geometric stabilizer H̄^{torf}, and the geometric fibers of the quotient morphism Z → Z′ are homogeneous spaces of Ḡ^{ssu} with geometric stabilizers (isomorphic to) H̄^{ssu}. Consider then an adelic point (P_v) ∈ Z(A_K)^{B(Z)} and push it to Z′. By the functoriality of the Brauer pairing, its image lies in Z′(A_K)^{B(Z′)}. Since Z′ is a homogeneous space of a torus, it is also a principal homogeneous space of another torus (namely, of G^{tor}/H^{torf}, where H^{torf} is the K-form of H̄^{torf} associated to Z) and hence, by [CTH15, Cor. 8.3] (semi-global case) and [Izq19, Thm. 4.2] (local case), we know that there is a K-point P_0 ∈ Z′(K). The fiber above P_0 is a homogeneous space satisfying the hypotheses of Proposition 3.1, hence it has a rational point, which implies that Z(K) ≠ ∅. □

As we will see in the subsequent sections, this is as far as we can go with the Brauer-Manin obstruction, in the sense that under more general hypotheses the Brauer-Manin obstruction will not be enough to explain counterexamples to the local-global principle for homogeneous spaces.
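The slicing strategy in the proof above can be summarized schematically (a sketch in our notation; the vertical map is the quotient by G^{ssu}):

```latex
% Devissage used in the proof of Theorem 3.2 (sketch).
\[
\begin{array}{rcl}
\text{fibers of } Z \to Z' & : & \text{homogeneous spaces of } \bar G^{\mathrm{ssu}}
      \text{ with stabilizers } \bar H^{\mathrm{ssu}} \quad (\text{Prop. 3.1 applies}),\\[4pt]
Z' := Z/G^{\mathrm{ssu}} & : & \text{a homogeneous space of } G^{\mathrm{tor}}
      \text{ with stabilizer } \bar H^{\mathrm{torf}}.
\end{array}
\]
% An adelic point (P_v) \in Z(\mathbf{A}_K)^{B(Z)} pushes to
% Z'(\mathbf{A}_K)^{B(Z')}, giving P_0 \in Z'(K); the fiber over P_0
% then has a K-point by Proposition 3.1.
```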
We conclude this section with a result that tells us that the Brauer-Manin obstruction with respect to B(Z) factors through a finite quotient.
Proposition 3.3. Let K be a field of type (a) or (b). Let Z be a homogeneous space under a connected linear group G with geometric stabilizer H̄ satisfying (2.1). Assume that the natural arrow H̄^{torf} → Ḡ^{tor} is injective. Then the Brauer pairing Z(A_K) × B(Z) → Q/Z factors through a finite quotient of B(Z).
Counter-examples
As stated in the Introduction, one can easily produce examples of homogeneous spaces Z over K that fail the local-global principle with respect to the small set of places X^{(1)} and such that Z(A_K)^{Br(Z)} is nonempty. Indeed, consider the quasi-trivial K-torus T = R_{L/K}(G_m) for some finite extension L/K such that Ш²(L, G_m) ≠ 0 (such an extension exists by [CTH15, §2.3] and [Izq19, §1]), and take a non-zero class α ∈ Ш²(K, T) = Ш²(L, G_m). These examples suggest that, in order to get an obstruction that controls the local-global principle for homogeneous spaces with connected stabilizers, one needs to take into account local information coming from places outside X^{(1)}. However, as we prove below, after replacing K by a finite extension, one can construct homogeneous spaces Z that fail the local-global principle with respect to the set Ω_K of all places of K and such that Z(A_K)^{Br(Z)} is nonempty. In further sections, we will develop more involved obstructions, using torsors under quasi-trivial tori, that explain all these failures of the local-global principle.
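To see why the Brauer-Manin obstruction is blind to these examples, recall that by [DLA19] the class α can be realized as the Springer class of a homogeneous space Z under some SL_n with geometric stabilizer T̄, so that Z(K) = ∅ while Z(K_v) ≠ ∅ for all v ∈ X^{(1)}. The algebraic Brauer group then vanishes because the character module of a quasi-trivial torus is induced (a sketch; the identification Br_al(Z) ≅ H¹(K, T̂) is as in Step 7 of the proof of Theorem 4.1, via [BvH12, Thm. 7.2]):

```latex
% Sketch: T = R_{L/K}(G_m) is quasi-trivial, so its character module is induced,
\[
\widehat T \;\cong\; \operatorname{Ind}_{L/K}\mathbb{Z},
\qquad\text{hence}\qquad
\operatorname{Br}_{\mathrm{al}}(Z)\;\cong\;H^1(K,\widehat T)
\;\cong\;H^1(L,\mathbb{Z})
\;=\;\operatorname{Hom}_{\mathrm{cont}}(\Gamma_L,\mathbb{Z})\;=\;0,
\]
% so Z(\mathbf{A}_K)^{\operatorname{Br}(Z)} = Z(\mathbf{A}_K) \neq \emptyset,
% while Z(K) = \emptyset.
```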
The main result of this section is the following:

Theorem 4.1. Let K be a field of type (a) or (b). Assume that the special fiber X_0 contains three smooth and integral divisors L_1, L_2, L_3 such that, for each i ≠ j, the divisors L_i and L_j intersect at a single point and the intersection is transversal. Then there exists a homogeneous space Z/K under SL_{n,K} and a geometric point z ∈ Z(K̄) satisfying the following conditions: (i) the (geometric) stabilizer of z is a torus; (ii) for any discrete valuation v on K, the set Z(K_v) is non-empty; (iii) the set Z(A_K)^{Br(Z)} is non-empty; (iv) the set Z(K) is empty.

The proof, which has 8 steps, goes as follows. In Step 1 we fix some notations associated to the field K and the divisors L_i. In Steps 2-4, we define a torus S and a homogeneous space Z whose Springer class lies in Ш²_tot(K, S), which ensures that properties (i), (ii) and (iv) hold. In Step 5, we construct auxiliary varieties related by morphisms all of which are torsors under K-rational groups. This gives us isomorphisms of the corresponding unramified Brauer groups. Moreover, both V and Ṽ are torsors under suitable K-tori that come from constructions by Colliot-Thélène in [CT14], where their unramified Brauer groups were explicitly given. This allows us to compute the unramified Brauer group of Z in Step 6.
Step 7 uses this information in order to compute explicitly the (whole) Brauer group of Z. Finally, we use this in order to check (iii) in Step 8.
Proof.
Step 1: Recalling the constructions from [CTPS16, §5.2]. Since the Picard group of a semi-local ring is always trivial, one can construct three elements π_1, π_2, π_3 ∈ K^× in the following way: (a) First, one chooses π_1 ∈ K^× so that the support of D_1 := div_X(π_1) − L_1 does not contain any of the m_i's.
(b) Secondly, one chooses a place v_0 ∈ X^{(1)} ∖ X_0^{(0)} that is not contained in the support of D_1. One can then choose π_2 ∈ K^× so that: (ii) the support of D_2 := div_X(π_2) − L_2 does not contain any of the m_i's, any of the components of the support of D_1, and any of the intersection points of one of the L_i's with the support of D_1.
(c) Finally, one chooses π_3 ∈ K^× so that: (i) the classes of π_1 and π_3 in the quotient k

Step 2: Defining the torus S as well as some other useful auxiliary tori. Set a := π_2 π_3 and b := π_3 π_1, introduce the field M := K(√a, √b), and consider the normic torus Q and the multinormic torus Q̃ defined by the following exact sequences: Note that, by [CT14, Prop. 3.1(b)], we have an exact sequence: We can now introduce a torus S by writing down a resolution of Q, where R is a quasi-trivial K-torus split by M. In particular, S is also split by M.
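The displayed sequences defining Q, Q̃ and S did not survive in this copy; following [CT14, Prop. 3.1], they plausibly read as follows (this reconstruction is our assumption and should be checked against the original):

```latex
% Plausible reconstruction of the lost displays. With M = K(\sqrt a, \sqrt b)
% and the three quadratic subextensions K(\sqrt a), K(\sqrt b), K(\sqrt{ab}):
\[
1 \to Q \to R_{M/K}\,\mathbb{G}_m \xrightarrow{\,N_{M/K}\,} \mathbb{G}_{m,K} \to 1,
\qquad
1 \to \tilde Q \to \prod_{d\,\in\,\{a,\,b,\,ab\}} R_{K(\sqrt d)/K}\,\mathbb{G}_m
      \xrightarrow{\ \prod N\ } \mathbb{G}_{m,K} \to 1 .
\]
% The sequence (4.3) relating them (compatible with r : \tilde V \to V being
% a G_m^2-torsor in Step 5) would then be
\[
1 \to \mathbb{G}_m^2 \to \tilde Q \to Q \to 1,
\]
% and S is defined by a resolution of Q with R quasi-trivial and split by M:
\[
1 \to S \to R \to Q \to 1 .
\]
```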
Step 4: Defining the homogeneous space Z and proving statements (i), (ii) and (iv). Take a non-zero class α ∈ Ш²_tot(K, S). As before, by [DLA19, Prop. 2.1 & Cor. 3.3], there exists a positive integer n, a homogeneous space Z/K under SL_{n,K} and a geometric point z ∈ Z(K̄) such that the torus S_{K̄} is the stabilizer of z and α is the Springer class of Z. Since α ≠ 0, the set Z(K) is empty. Since α ∈ Ш²_tot(K, S), the set Z(K_v) is non-empty for every discrete valuation v on K. We have therefore settled (i), (ii) and (iv). The remaining steps will be devoted to the proof of statement (iii).
Step 5: Constructing an auxiliary torsor Ṽ under Q̃ that is stably birational to Z. By Proposition 5.1, which will be proved in the next section, one can construct a K-homogeneous space W under SL_{n,K} × R together with a morphism of K-homogeneous spaces p : W → Z which makes W into an R-torsor. Since R is quasi-trivial and Z(K_v) ≠ ∅ for every v ∈ Ω_K, Hilbert's Theorem 90 shows that W(K_v) ≠ ∅ for every v ∈ Ω_K. Now consider the quotient V := W/SL_{n,K} and the natural projection q : W → V. The K-variety V is a Q-torsor such that V(K_v) ≠ ∅ for every v ∈ Ω_K. Its class in H¹(K, Q) therefore belongs to Ш¹_tot(K, Q). Moreover, the short exact sequence (4.3) and the triviality of the group H¹(K, G_m) induce an exact sequence of Tate-Shafarevich groups, and hence an isomorphism Ш¹_tot(K, Q̃) ≅ Ш¹_tot(K, Q). The Q-torsor V therefore comes from a Q̃-torsor Ṽ that has points in every completion of K. The natural morphism r : Ṽ → V is a G_m²-torsor.
Step 6: Using the torsor Ṽ to compute the unramified Brauer group of K(Z)/K. Since the groups R, SL_{n,K} and G_m² are all K-rational, we have the following isomorphisms of unramified Brauer groups: But the Q̃-torsor Ṽ is given by an equation of the form: for some c ∈ K^×, and then, by [CT14, Thm. 4.1], the quotient Br_nr(K(Ṽ)/K)/Im(Br(K)) is a cyclic group of order 2 generated by the class of the quaternion algebra A := (x_1² − a x_2², b). Hence Br_nr(K(Z)/K)/Im(Br(K)) is also a cyclic group of order 2, generated by γ := (p^*)^{-1} q^* (r^*)^{-1}([A]).
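The lost displays in Step 6 can plausibly be reconstructed as follows (the chain of isomorphisms follows from stable birational invariance of the unramified Brauer group under torsors of K-rational groups; the equation is the multinorm equation of [CT14], with variable names of our choosing):

```latex
% Plausible reconstruction of the lost displays in Step 6:
\[
\operatorname{Br}_{\mathrm{nr}}(K(Z)/K)
 \;\cong\; \operatorname{Br}_{\mathrm{nr}}(K(W)/K)
 \;\cong\; \operatorname{Br}_{\mathrm{nr}}(K(V)/K)
 \;\cong\; \operatorname{Br}_{\mathrm{nr}}(K(\tilde V)/K),
\]
% and the \tilde Q-torsor \tilde V is given by a multinorm equation of the shape
\[
(x_1^2 - a\,x_2^2)\,(y_1^2 - b\,y_2^2)\,(z_1^2 - ab\,z_2^2) \;=\; c,
\qquad c \in K^{\times},
\]
% whose unramified Brauer group is generated mod Br(K) by
% A = (x_1^2 - a x_2^2,\, b) according to [CT14, Thm. 4.1].
```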
Step 7: Computing the Brauer group of Z. By Step 6, the algebraic Brauer group Br_al(Z) contains the non-zero class γ (note that A is algebraic). Let us prove that Br_al(Z) = {0, γ}. Since Br_al(Z) ≅ H¹(K, Ŝ) by [BvH12, Thm. 7.2], it suffices to check that H¹(K, Ŝ) has order at most 2. By inflation-restriction, we have an isomorphism: Moreover, by dualizing the exact sequence (4.1), we get: and hence H¹(K, Ŝ) has order at most 2, as desired.
Finally, using the same argument given at the beginning of this section, we see that the geometric Brauer group of Z is trivial. We conclude then that Br(Z)/Br 0 (Z) = Br al (Z) = {0, γ}.
Step 8: Checking that Z(A_K)^{Br(Z)} ≠ ∅. Consider the map: Now recall that the projection q : W → V is an SL_{n,K}-torsor. Since SL_{n,K}-torsors over a field are always trivial, we can find an adelic point (P_v) ∈ W(A_K) that lifts (r(P_v)) ∈ V(A_K). We then have BM((P_v), q^*(r^*)^{-1}([A])) = 0, and hence, since Br(Z)/Br_0(Z) = {0, γ} according to Step 7, we deduce that Z(A_K)^{Br(Z)} ≠ ∅. This finishes the proof of statement (iii) and hence the proof of the theorem. □

Remark 4.3. It is highly likely that there are such counterexamples over every field K of type (a) and (b). Indeed, consider a finite extension L/K such that L satisfies the assumptions of Theorem 4.1. Consider then the Weil restriction R_{L/K}(Z), where Z is defined as in Theorem 4.1. This variety is a homogeneous space under R_{L/K}(SL_{n,L}). It is evident that this variety satisfies conditions (ii) and (iv). Finally, for (iii), note that over K̄ the variety R_{L/K}(Z) is simply a self-product of Z̄, hence its geometric Brauer group vanishes, so that Br(R_{L/K}(Z))/Br_0(R_{L/K}(Z)) = Br_al(R_{L/K}(Z)) = H¹(K, I_{L/K}(Ŝ)) ≅ H¹(L, Ŝ) ≅ Br(Z)/Br_0(Z).
One should then check that the Brauer pairing is compatible with these isomorphisms and with the bijection between the M-points of R_{L/K}(Z) and the (M ⊗_K L)-points of Z for every extension M/K. This seems to be a tedious but straightforward computation.
An obstruction using torsors under quasi-trivial tori
Let Z be a homogeneous space under a linear connected K-group G with geometric stabilizer H̄ satisfying (2.1). In the previous section, we have seen that the usual Brauer-Manin obstruction is not enough to explain the failure of the local-global principle when Z does not satisfy the technical hypothesis from Theorem 3.2, that is, the injectivity of the natural arrow H̄^{torf} → Ḡ^{tor}. This is why, in this section, we aim at constructing a stronger obstruction by taking into account more places of K than those in X^{(1)} and applying the Brauer-Manin obstruction to torsors under quasi-trivial tori over Z.
For that purpose, we start by defining a new set Ω_Z of places of K as follows. Consider the canonical K-form H^{torf} of H̄^{torf} associated to Z. We introduce the following notations:
• L/K is the minimal extension splitting the group of multiplicative type H^{torf};
• B is the normalization of A in L;
• 𝒴 is an integral regular 2-dimensional scheme with function field K endowed with a projective surjective morphism q : 𝒴 → Spec(B) such that its special fiber 𝒴_0 is a strict normal crossings divisor;
• Y is the generic fiber of q;
• Ω_Z is the set of places v of K that are induced by a place w ∈ Y^{(1)};
• Ω_{0,Z} is the set of places v of K that are induced by a place w ∈ Y^{(1)} and that are not in X^{(1)}.
Note that Ω_{0,Z} may contain places that are not in X_0^{(0)}, and that both Ω_Z and Ω_{0,Z} depend on the choice of 𝒴.
Finally, we fix an inclusion H^{torf} ↪ T into a torus T isomorphic to (R_{L/K} G_m)^n for some n ≥ 0. Note that such an inclusion always exists.
We start with an application of [DLA19] that allows us to construct a torsor over Z under T .
Proposition 5.1. With notation as above, assume that Z(K_v) ≠ ∅ for every v ∈ Ω_Z. Then there exists a K-homogeneous space W_{Z,G,H} under G × T with geometric stabilizer H̄^{torf} with the following extra properties: • the natural homomorphism H̄^{torf} → (Ḡ × T̄)^{tor} = Ḡ^{tor} × T̄ is injective; • there is a morphism of K-homogeneous spaces p : W_{Z,G,H} → Z which makes W_{Z,G,H} into a T-torsor.
Proof. We use the notation of Section 2. Consider the class η^{torf} ∈ H²(K, H^{torf}) naturally associated to Z (see the text after Proposition 2.2) and denote by ξ its image in H²(K, T) ≅ H²(L, G_m)^n. For v ∈ Ω_Z, the hypothesis Z(K_v) ≠ ∅ implies that η^{torf} is trivial in H²(K_v, H^{torf}). Hence ξ lies in Ш²_Y(L, G_m)^n, which is trivial by [CTPS16, Rem. 2.3]. In particular, ξ represents the trivial gerbe TORS(T). Thus, the fact that ξ is the image of η^{torf} can be interpreted as a morphism of gerbes M^{torf}_Z → TORS(T). By [DLA19, Thm. 3.4], we get all the data in the statement of the theorem (using the notations in [DLA19], take N := G^{ssu}, G′ := T and M′ := M^{torf}_Z). □

Remark 5.2. This kind of construction was already used by Borovoi in [Bor96] in order to reduce the study of the Brauer-Manin obstruction to the Hasse principle and weak approximation for homogeneous spaces over number fields to simpler cases in which this study had already been done.
We now prove that the Brauer-Manin obstruction for this T -torsor W Z,G,H → Z is enough to explain the eventual failure of the local-global principle for Z.
Theorem 5.3. Let K be a field of type (a) or (b). Let Z be a homogeneous space under a connected linear group G with geometric stabilizer H̄ satisfying (2.1). We keep the notation given at the beginning of this section; in particular, T is a quasi-trivial torus split by L. Assume that Z(K_v) ≠ ∅ for every v ∈ Ω_Z, and consider the T-torsor W_{Z,G,H} → Z constructed in Proposition 5.1. Assume moreover that W_{Z,G,H}(A_K)^{B(W_{Z,G,H})} ≠ ∅. Then Z(K) ≠ ∅.
In particular, by Proposition 2.2, we obtain the result for homogeneous spaces with connected stabilizers stated in Theorem 1.2.
Proof. Proposition 5.1 gives us the T-torsor W_{Z,G,H} → Z, to which we apply Theorem 3.2. We conclude that W_{Z,G,H}(A_K)^{B(W_{Z,G,H})} ≠ ∅ implies W_{Z,G,H}(K) ≠ ∅, which in turn implies Z(K) ≠ ∅, where the second implication is obvious. □

Remark 5.4. Another way to define this obstruction is as follows. Assuming Z(K_v) ≠ ∅ for every v ∈ Ω_Z, we obtain a T-torsor W_{Z,G,H} → Z. Since T is quasi-trivial, every adelic point (P_v) ∈ Z(A_K) lifts to an adelic point (Q_v) ∈ W_{Z,G,H}(A_K). Evaluation at (Q_v) then defines a homomorphism φ_Z : B(W_{Z,G,H}) → Q/Z which is independent of the choice of (P_v) and (Q_v). By definition, this means that, for any (Q_v) ∈ W_{Z,G,H}(A_K) and any α ∈ B(W_{Z,G,H}), the Brauer-Manin pairing can be computed via this homomorphism.

In order to get a more conceptual result, we introduce the following obstruction to the local-global principle.
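The evaluation formula whose display was lost in Remark 5.4 plausibly reads as follows (a reconstruction; the sum runs over the places of K):

```latex
% Plausible form of the lost display in Remark 5.4: for any
% (Q_v) \in W_{Z,G,H}(\mathbf{A}_K) and any \alpha \in B(W_{Z,G,H}),
\[
\langle (Q_v),\, \alpha \rangle_{BM}
\;=\;\varphi_Z(\alpha)
\;=\;\sum_{v}\operatorname{inv}_v\!\bigl(\alpha(Q_v)\bigr)\;\in\;\mathbb{Q}/\mathbb{Z}.
\]
```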
Definition 5.5. For an arbitrary K-variety Z, we define Z(A_K)^{qt,B} as the intersection of the sets f(W(A_K)^{B(W)}) over all quasi-trivial K-tori T and all T-torsors f : W → Z. It is easy to see that Z(K) ⊆ Z(A_K)^{qt,B} ⊆ Z(A_K).
Corollary 5.6 (of Theorem 5.3). Let K be a field of type (a) or (b). Let Z be a homogeneous space under a connected linear group G with geometric stabilizer H̄ satisfying (2.1). Then Z(A_K)^{qt,B} ≠ ∅ implies Z(K) ≠ ∅.

Comparing our new obstruction with descent obstructions

We have seen that the Brauer-Manin obstruction does not always control the local-global principle, but that the extra input of torsors under quasi-trivial tori seems to be enough to explain everything. It is natural then to compare this new obstruction with the descent obstruction with respect to tori. In this section we prove that, up to considering the new set of places Ω_Z, which seems to be unavoidable, the descent obstruction with respect to tori is not weaker than the obstruction introduced in Definition 5.5 and hence it also explains the failures of the local-global principle for homogeneous spaces satisfying (2.1).
Let us recall the classical definitions associated to descent obstructions (cf. [Sko01, §5.3]).
Definition 6.1. Given a torsor f : W → Z under a K-group G, we define Z(A_K)^f as the union of the sets f^σ(W^σ(A_K)) over all classes [σ] ∈ H¹(K, G), where f^σ : W^σ → Z denotes the twisted torsor. For an arbitrary K-variety Z, we define Z(A_K)^{tor} as the intersection of the sets Z(A_K)^f over all torsors f : W → Z under K-tori.

We now give a generalization of Definition 5.5, which we will compare with descent obstructions below.
Definition 6.2. For B ∈ {B, Br_1, Br}, define Z(A_K)^{qt,B} as the intersection of the sets f(W(A_K)^α) over all quasi-trivial K-tori T, all T-torsors f : W → Z and all classes α ∈ B(W).

Remark 6.3. Recall that quasi-trivial tori have trivial H¹ by Hilbert's Theorem 90 and hence there is no need to consider Galois twists in the last definition. In particular, for B = B we recover Definition 5.5, since W(A_K)^α is either empty or the whole set W(A_K). Note, however, that for B = Br_1 or B = Br this definition does not coincide a priori with the "more natural" set obtained by intersecting the sets f(W(A_K)^{B(W)}).

We can now state the main result of this section.
Theorem 6.4. Let K be a field of type (a) or (b). Let Z be a smooth geometrically integral K-variety. We have Z(A_K)^{tor} ⊆ Z(A_K)^{qt,Br_1}.
We then deduce that the descent obstruction with respect to tori, together with the existence of local points at the places in Ω_{0,Z}, is enough to explain the lack of Hasse principle for the homogeneous spaces considered in this article:

Corollary 6.5. Let K be a field of type (a) or (b). Let Z be a homogeneous space under a connected linear group G with geometric stabilizer H̄ satisfying (2.1). Then

Proof. If Z(A_K)^{tor} ≠ ∅, then by Theorem 6.4 the sets Z(A_K)^{qt,Br_1} ⊂ Z(A_K)^{qt,B} are non-empty, and we conclude by Corollary 5.6. □

Proof of Theorem 6.4. Consider an adelic point (P_v) ∈ Z(A_K)^{tor}. We must prove that, for every quasi-trivial torus T, every T-torsor W → Z and every class α ∈ Br_1(W), we can find a lift of (P_v) to W(A_K) that is orthogonal to α. Since the Brauer group coincides with the Azumaya Brauer group, we may follow the proof of [Sko01, Prop. 5.3.4], adapting it to our context. Note that this proof uses the bijectivity of a certain map for all the involved local and global fields, and this holds for a given field L if and only if it has the "period = index" property. Since this is well-known for the fields L considered here (cf. Proposition 2.3), we conclude that, whenever we are given a class α ∈ Br_1(W) Since H²(K, SL_n) is only composed of neutral classes by Lemma 2.4, the class [a] ∈ H¹(K, R) comes from a class [b] ∈ H¹(K, G), whose image in H¹(K, PGL_n) we denote by [c]. We may then twist the whole diagram (6.1) by the cocycle b in order to get the following diagram of torsors: where A denotes some central simple algebra over K (here SL(A) and PGL(A) are inner twists of SL_n and PGL_n). We know then that (P_v) lifts to an adelic point, since H¹(K_v, SL(A)) = 1 for these fields (cf. [Sus85]). It is easy to see that these lifts define an adelic point in the twist ᵇU, which we may push down to an adelic point P″ in ᶜV lifting (P_v). Since ᶜV is a Galois twist of V, the image of P″ in W belongs to W(A_K)^f and lifts (P_v).
This concludes the proof. □
A. A result on towers of torsors
The goal of this appendix is to prove the following result, which is needed in the proof of Theorem 6.4 above. We are greatly indebted to Mathieu Florence for his help with the proof.
Theorem A.1. Let K be a field of characteristic 0. Let G be a connected linear K-group and T an algebraic K-torus. Let Y → X be a G-torsor and let Z → Y be a T -torsor. Assume that Pic (Ḡ) = 0 (which holds, for instance, if G is a torus) and that X is geometrically integral. Then there exists a canonical extension 1 → T → E → G → 1, such that the composite Z → X is an E-torsor. Moreover, if G is a torus, then E is a torus as well.
Remark A.2. One can find similar results in [BDLM20, App. A] and [BD13, Lem. 2.13], but none of these seems to be general enough for our purposes. We hope to generalize this result even further in the future.
Proof. For a K-scheme W/K, we denote by X W , Y W , Z W , T W , G W the W -(group-)schemes obtained by base change from X, Y, Z, T, G respectively. Consider the group Aut T W X W (Z W ) of X W -automorphisms ϕ of Z W that are compatible with the action of T W in the sense that the following diagram commutes: where a denotes the morphism defining the action of T on Z and a W the corresponding morphism after base change. The functor W/K → Aut T W X W (Z W ) defines a group presheaf over the big étale site over K. Denote by Aut T X (Z) the corresponding sheaf and consider the subsheaf Aut T Y (Z) defined by taking the subgroup Aut T W Y W (Z W ) of Aut T W X W (Z W ) for each W/K. We have Indeed, it is well-known that the functor W/Y → Aut T W W (Z × Y W ) over the big étale site of Y is represented, as a Y -scheme, by T Y (cf. for instance [Gir71, III. §1.5]). Moreover, a direct application of Rosenlicht's Lemma gives us, for geometrically irreducible W , where M is a free and finitely generated abelian constant sheaf. We deduce then that the quotient sheaf M associated to the presheaf is a locally free, locally constant sheaf that is finitely generated and abelian. In other words, we have an exact sequence of abelian sheaves In particular, since M is locally constant, it is representable. And since T is affine, we get by [DG70, III.4, Prop. 1.9] that Aut T Y (Z) is represented by an abelian K-group scheme A. We abusively denote by A the sheaf Aut T Y (Z) as well.
Since every element in Aut T W X W (Z W ) induces an X W -automorphism of Y W , we have an exact sequence of sheaves where Aut X (Y ) denotes the sheaf of X-automorphisms of Y . We claim that π is surjective. In order to prove this, we can replace K by a finite extension. We may assume then that both T and G are split over K. Since G is split and connected and Pic (Ḡ) = 0, we have that Pic (G) = 0 (cf. [San81, Lem. 6.9]). By [San81,Prop. 6.10], we get then that the map Pic (X) → Pic (Y ) is surjective. Since T is split, we deduce then that the torsor Z → Y comes by pullback from a T -torsor Z ′ → X. In other words, Z = Y × X Z ′ . The surjectivity is then evident.
Note now that G is clearly a subgroup of Aut X (Y ). Define then E ′ ⊂ Aut T X (Z) to be the group sheaf corresponding to the preimage of G via π. We get an exact sequence Since A is abelian, the extension E ′ induces an action of the sheaf G on the sheaf A. By Yoneda's Lemma, this action is actually an action of the K-group G on the K-group A. Note that T corresponds to the neutral connected component of A and thus it is preserved by the G-action since G is connected. In particular, we may quotient by T in order to get an exact sequence 1 → M → F → G → 1.
Then, if one forgets its group structure, F corresponds to an M-torsor over the scheme G. By [SGA7, Exp. 8, Prop. 5.1], we know that H¹(G, Z) = 0 and hence, since M is
"year": 2021,
"sha1": "1747306be27a514a7f1059f8b65606fe98b17560",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2101.08245",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1747306be27a514a7f1059f8b65606fe98b17560",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Some achieved wedge products of positive currents
In this paper, we study the existence of the current gT for positive plurisubharmonic currents T and unbounded plurisubharmonic functions g.
Introduction
The wedge product is a central topic in the theory of currents. In general, the wedge product of currents cannot be defined unless further conditions are imposed. Throughout this paper we consider Ω to be an open subset of C^n and T to be a current of bi-dimension (p, p), p ≥ 1. We denote by Psh^-(Ω) the set of all negative plurisubharmonic functions on Ω. For a function g ∈ Psh^-(Ω), let L_g be the set of all locus points of g, which consists of the points z ∈ Ω such that g is unbounded in every neighborhood of z. The pole set of g is by definition P_g = {g = −∞}. It is obvious that L_g is closed and that P_g ⊂ L_g. Recall also that T is said to be closed if dT = 0, and plurisubharmonic (resp. plurisuperharmonic) if dd^cT ≥ 0 (resp. dd^cT ≤ 0).
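For the reader's orientation, we record the standard conventions assumed throughout (the precise normalization of d^c may differ from the original by a positive constant):

```latex
% Standard conventions (assumed; normalizations vary by a positive constant):
\[
d = \partial + \bar\partial, \qquad
d^c = i(\bar\partial - \partial), \qquad
dd^c = 2i\,\partial\bar\partial, \qquad
\beta := dd^c|z|^2 .
\]
% A current T of bi-dimension (p,p) is plurisubharmonic if dd^c T \ge 0
% and pluriharmonic if dd^c T = 0, both in the sense of currents.
```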
Our main concern is to give a meaning to the current gT. Away from L_g, the current gT is clearly well defined; the real challenge is the existence of this product across L_g. In such a situation, the product may have no sense, due to the behaviors of g and T. For example, in C put T = (−log|z|²)^{−3/2} |z|^{−2} and g = log|z|². (1.1) Then T is a positive and integrable current of bi-dimension (1, 1) on B(0, 1) which is plurisubharmonic outside the origin, and g ∈ Psh^-(B(0,1)) ∩ C^∞(B(0,1) \ {0}). Despite the fact that H^{2(1)−2}({0}) = 1, the current gT has infinite mass near the origin. This motivates the main question of the paper: finding sufficient conditions on T and L_g under which gT is well defined. The study is also consistent with the evolution of the subject, as the case when T is closed was considered before in many works. In fact, Demailly [8] (1993) proved the existence of gT and dd^cg ∧ T as soon as H^{2p−1}(L_g ∩ Supp T) = 0. Fornaess and Sibony [10] (1994) generalized the work of Demailly to higher Hausdorff dimension, defining the currents gT and dd^cg ∧ T when H^{2p}(L_g) = 0. In both studies the closedness property played a main role, since it gives the relation between gT and dd^cg ∧ T: namely, dd^cg ∧ T is defined as dd^c(gT) in the sense of distributions. Unfortunately, this relation becomes more complicated once we deal with dd^c-signed currents, as the terms dg ∧ d^cT and g dd^cT contribute. This makes it difficult to define gT, and has led several authors to study the current dd^cg ∧ T separately. In what follows we summarize our main results.
Let T be a positive plurisubharmonic current of bi-dimension (p, p) on Ω and g ∈ Psh^-(Ω) ∩ C¹(Ω \ L_g). Then, in each of the following cases, the current gT is well defined; that is, if (g_j) is a sequence of decreasing smooth plurisubharmonic functions on Ω converging to g in C¹(Ω \ L_g), then g_jT converges weakly-* to a current denoted by gT.
The precautions taken in the previous results on the thickness of L_g and the properties of T are extremely important to guarantee the existence of gT. Actually, the failure to define gT with the choices in (1.1) is due to the fact that both g d^cT and g dd^cT have infinite mass near the origin. In a recent work, Al Abdulaali-El Mir [3] obtained the current g^kT, k > 0, when g is radial and L_g is reduced to a single point.
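The mass computations behind the example (1.1) can be made explicit in closed form. A small numerical sketch (our assumptions: the density is read as T = (−log|z|²)^{−3/2}|z|^{−2}, the outer radius is 1/2, and the angular 2π factor is dropped; the substitution u = −2 log r reduces both integrals to power integrals in u):

```python
import math

U_HALF = 2 * math.log(2)  # value of u = -2*log(r) at the outer radius r = 1/2

def mass_T(eps):
    # mass of T on the annulus eps < r < 1/2:
    # integral of (-2 log r)^(-3/2) dr/r; with u = -2 log r, du = -2 dr/r,
    # this equals (1/2) * integral of u^(-3/2) du
    #           = (2 log 2)^(-1/2) - (-2 log eps)^(-1/2)   (bounded as eps -> 0)
    return U_HALF ** -0.5 - (-2 * math.log(eps)) ** -0.5

def mass_gT(eps):
    # mass of |g|*T on the same annulus:
    # integral of (-2 log r)^(-1/2) dr/r = (-2 log eps)^(1/2) - (2 log 2)^(1/2),
    # which grows without bound as eps -> 0
    return (-2 * math.log(eps)) ** 0.5 - U_HALF ** 0.5

# T has locally finite mass near 0: mass_T stays below (2 log 2)^(-1/2)
print([round(mass_T(10.0 ** -k), 4) for k in (5, 10, 20)])
# |g|*T has infinite mass: mass_gT grows like sqrt(-2 log eps)
print([round(mass_gT(10.0 ** -k), 4) for k in (5, 10, 20)])
```

The bounded first sequence versus the unbounded second one is exactly the dichotomy used in the text: T itself is integrable near the origin while gT is not.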
The second part of the paper is devoted to the current dd^cg ∧ T. As a consequence of the discussions in this part, we show that the quantity dd^cg ∧ T exists under the hypotheses of Theorem 2.6. Furthermore, a counterexample shows that the corresponding results cannot be obtained for positive plurisuperharmonic currents without further hypotheses.
The Current gT
Let us start with a result due to Al Abdulaali [2]; in the following lines we include its proof, adapted to our setting.
Lemma 2.1. Let T be a positive plurisubharmonic current of bi-dimension (p, p) on Ω and g ∈ Psh^-(Ω) ∩ C¹(Ω \ A) for some compact subset A of Ω. Assume that (g_j) is a sequence of decreasing smooth plurisubharmonic functions on Ω converging to g in C¹(Ω \ A). Then:
(1) g dd^cT is a well defined current on Ω, and the trivial extension dd^cg ∧ T exists.
(2) dd^cg ∧ T is a well defined current as soon as A, in addition, is complete pluripolar and p ≥ 2.
(3) dd^cg ∧ T is a well defined current when A is reduced to a single point.
Proof. Let W and W ′ be neighborhoods of A such that W ⋐ W ′ ⋐ Ω, and take a positive function f ∈ C ∞ 0 (W ′ ) so that f = 1 on a neighborhood of W . Then we have Thanks to the properties of f , each term of the first line integrals of (2.2) is uniformly bounded. Therefore, one can infer the existence of both extensions gdd c T and dd c g ∧ T . Notice that, the current gdd c T is well defined by the monotone convergence. And by Banach-Alaoglu the sequence (dd c g j ∧ T ) has a subsequence (dd c g js ∧T ) which converges weakly * to a current denoted by S. To show (2), we first note that dd c S is a well defined current as well.
Hence by [7] the residual current R = dd c S − dd c ( dd c g ∧ T ) is positive and As F is a compactly supported current with bi-dimension (p − 1, p − 1), one can deduce that F ≡ 0. The third statement comes immediately from the fact that the distribution µ := (S − dd c g ∧ T ) ∧ β p−1 is positive and supported in A. Indeed, A can be assumed to be the origin, and hence there exists a positive constant c such that µ = cδ 0 where δ 0 is the Dirac measure. Clearly, the constant c is independent from the choice of j s since In other words, dd c g ∧ T is well defined.
As a consequence of the previous result, the wedge products in [4] and [1] can be generalized to the case when

Proof. Notice first that for every 0 < ε < 1 the function −(−g)^{1−ε} is plurisubharmonic. Hence, by Lemma 2.1, the current −dd^c(−g)^{1−ε} ∧ T has locally finite mass across L_g. A simple computation of dd^c(−(−g)^{1−ε}) then shows the result.
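The "simple computation" invoked in the proof is presumably the standard identity below (a sketch, stated where g is smooth and strictly negative; it follows from d^c(−(−g)^{1−ε}) = (1−ε)(−g)^{−ε} d^c g):

```latex
% For 0 < \varepsilon < 1 and g < 0 smooth:
\[
dd^c\!\left(-(-g)^{1-\varepsilon}\right)
 = (1-\varepsilon)\,(-g)^{-\varepsilon}\, dd^c g
 \;+\; \varepsilon(1-\varepsilon)\,(-g)^{-1-\varepsilon}\, dg\wedge d^c g .
\]
% Both terms on the right are positive, so the local finiteness of
% -dd^c(-g)^{1-\varepsilon} \wedge T yields the local finiteness of
% (-g)^{-\varepsilon}\, dd^c g \wedge T and of
% (-g)^{-1-\varepsilon}\, dg \wedge d^c g \wedge T separately.
```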
Proof. Take W, W′, f and g_j as in the proof of Lemma 2.1, and for 0 < ε < 1 Observe that (u_j)_j is a sequence of negative plurisubharmonic functions where Therefore, where O′(f) consists of all terms involving df, d^cf and dd^cf. The first two line integrals on the right-hand side of (2.6) are uniformly bounded, thanks to Lemma 2.1. Furthermore, the Cauchy-Schwarz inequality shows that where δ is a positive constant chosen so that δ d|z|² ∧ d^c|z|² ≤ dd^c|z|². Combining the last two inequalities, we get (2.8). As M_j and A_j are uniformly bounded, one can conclude the definition of |g|^{1−ε}T. Suppose now that p ≥ 2 and L_g = P_g. The current −T ∧ dd^c(−g)^{1−ε} is positive and plurisubharmonic on Ω. Hence, the preceding part guarantees the existence of the trivial extension of −|g|^{1−ε}T ∧ dd^c(−g)^{1−ε}, and obviously the current (dg ∧ d^cg)/(−g)^{2ε} ∧ T has locally finite mass. Notice also that (2.9). Therefore, by a similar argument as above, one can replace u_j by g_j in (2.6) and deduce that g_jT converges to a current denoted by gT.
Theorem 2.4. Let T be a positive pluriharmonic current of bi-dimension (p, p), p ≥ 2 on Ω and g ∈ P sh − (Ω) ∩ C 1 (Ω \ L g ). If L g is compact and L g = P g , then for all 0 < ε < 1 the current |g| 1+ε T is well defined.
Proof. By the preceding argument, the currents (−g)^ε dd^cg ∧ T and (dg ∧ d^cg)/(−g)^{1−ε} ∧ T have locally finite mass across L_g. On the other hand, This implies that the trivial extension dd^c(−g)^{1+ε} ∧ T exists. Set v_j = (−g_j)^{1+ε}, j ∈ N. By the features of T we have Now, by a discussion analogous to the previous proof, we infer the definition of (−g)^{1+ε}T, since (2.12)
Proof. Without loss of generality, one can assume that Ω is the unit ball and L g is the origin. Take χ ∈ C ∞ 0 (B(0, 1 2 )) so that χ = 1 on a neighborhood of B(0, 1 4 ). First notice that for all 0 < t < 1 we have (2.14) Therefore, By a simple computation, one finds that Now, once again Stokes' formula shows that Thus, we infer that (2.21) Clearly, the current gT is obtained by the monotone convergence of g j T .
Theorem 2.6. Let T be a positive plurisubharmonic current of bi-dimension (p, p), p ≥ 1 on Ω and g ∈ P sh − (Ω) ∩ C 1 (Ω \ L g ). If the current gd c T is well defined and L g is compact, then gT is a well defined current on Ω.
The result generalizes the case when dT = 0. One can also apply it to currents T for which dT has L q coefficients, q > 1.
Proof. We keep the notation of the proof of Lemma 2.1, taking into consideration that L g ⊂ W . It is obvious that dg j ∧ d c T = d(g j d c T ) − g j dd c T . Hence, by Lemma 2.1, the current dg ∧ d c T is well defined. Now, by applying Stokes' formula we have (2.23), which converges because of Lemma 2.1. Meanwhile, the left-hand side of (2.22) involves terms O(f ) in which df , d c f and dd c f appear, and the properties of f make these terms well defined and uniformly bounded, as the locus points of g are avoided. Hence, this yields the current gT .
Next we give conditions on the locus points of g that allow the existence of gT regardless of the compactness property.
Theorem 2.7. Let T be a positive plurisubharmonic current of bi-dimension (p, p) on Ω and g ∈ P sh − (Ω)∩C 1 (Ω\L g ). If the current gd c T is well defined and H 2p−1 (L g ∩ Supp T ) = 0, then the current gT is well defined.
Theorem 2.8. Let T be a positive plurisubharmonic current of bi-dimension (p, p) on Ω and g ∈ P sh − (Ω) ∩ C 1 (Ω \ L g ). If H 2p−2 (L g ∩ Supp T ) is locally finite, then the current gT is well defined.
Proof. For each z ′ we set L g (z ′ ) = (Supp T ∩ L g ) ∩ ({z ′ } × △ ′′ ). Since H 2p−2 (L g ∩ Supp T ) is locally finite, by [11] the set L g (z ′ ) is discrete for a.e. z ′ . Without loss of generality, we may assume that L g (z ′ ) is reduced to a single point (z ′ , 0). On the other hand, T is C-flat on Ω. Thus, the slice T, π, z ′ exists for a.e. z ′ , and is a positive plurisubharmonic current of bi-dimension (1, 1) on Ω, supported in {z ′ } × △ n−p+1 . Now, by Theorem 2.5, the sequence g j T, π, z ′ is weakly * convergent since L g (z ′ ) is a single point. Hence the slice formula implies that (2.26) and the desired current is obtained.
3. The Current dd c g ∧ T
As mentioned earlier in the introduction, for the case under investigation the current dd c g ∧ T stole the show from the current gT . Actually, Alessandrini-Bassanelli [4] and Al Abdulaali [1] studied the definition of dd c g ∧ T for a pluriharmonic current T and g of class C 2 away from its locus points. Al Abdulaali [2] generalized the latter works to the more general case when T is plurisubharmonic and g is of class C 1 with H 2p−2 (L g ) locally finite. In [9], Dinh and Sibony discussed the case when Ω is a compact Kähler manifold. They obtained the desired current when T is pluriharmonic and g is continuous on Ω.
Theorem 3.1. Under the same hypotheses of Theorem 2.6, the current dd c g ∧ T is well defined.
Proof. For any ϕ ∈ C ∞ 0 (Ω) we have This means that By virtue of the previous results, the sequence dd c g j ∧ T converges to a current denoted by dd c g ∧ T .
By a discussion analogous to that in the proof of Theorem 2.7, one can apply the slice formula to obtain the following assertion. For such a number one can obtain the following comparison result. then µ(T, u) ≤ lµ(T, g).
Proof.
We follow a technique similar to that in [8]. Since λµ(T, g) = µ(T, λg) for all λ ≥ 0, it is enough to show the result for l = 1. Set u c = max ε (u − c, g), where c is a positive constant and max ε (x 1 , x 2 ) is a regularized maximum obtained by convolution with a regularization kernel α ε on R 2 depending only on (x 1 , x 2 ). Now take r ′ < b < r < 0. Notice that for c large enough we have u c = g on But the properties of u imply that µ(T, u c ) = µ(T, u − c) = µ(T, u). Hence, We finish the proof by first letting r ′ → −∞ and then r → −∞.
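For concreteness, the regularized maximum invoked in this proof can be taken in its standard convolution form (an assumption; the exact kernel normalization used in the paper may differ):

```latex
% Regularized maximum via convolution with a smoothing kernel
% \alpha_\varepsilon (assumed standard normalization):
\max\nolimits_{\varepsilon}(x_1,x_2)
  = \int_{\mathbb{R}^2} \max(x_1+h_1,\; x_2+h_2)\,
    \alpha_{\varepsilon}(h_1,h_2)\, dh_1\, dh_2 ,
\qquad
\alpha_{\varepsilon}(h) = \varepsilon^{-2}\,\alpha\!\left(h/\varepsilon\right).
```

With an even kernel α supported in the unit ball, max ε is smooth, convex and nondecreasing in each variable, and agrees with max(x 1 , x 2 ) wherever |x 1 − x 2 | ≥ 2ε; this is what makes u c coincide with g for c large enough.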
In Theorem 3.1, if we consider T to be positive plurisuperharmonic, then the statement fails to hold. The next example illustrates this fact. Notice that, based on [2], such a wedge product exists when the obstacle is assumed to be of zero (2p − 2)-Hausdorff measure.
Example 3.4. In C, set T = g = log |z| 2 . Then T is negative and plurisubharmonic on {|z| < 1} where gd c T is well defined. But despite the fact that L g = {0} is of locally finite 0-Hausdorff measure, the mass of dd c g ∧ T explodes across {0}.
However, local potential currents can be very useful in our setting. Recall that, by [5], if T is positive and closed, then locally there exist a negative plurisubharmonic current U of bi-dimension (p + 1, p + 1) and a smooth form R such that T = dd c U + R. The current U is called the local potential of T .
Corollary 3.5. Let T be a positive plurisubharmonic current of bi-dimension (p, p), p ≥ 1 on Ω and g ∈ P sh − (Ω) ∩ C 1 (Ω \ L g ). If L g is a single point, then dd c g ∧S is a well defined current on Ω where S is the potential of dd c T .
Proof. As our problem is local, one can assume that dd c T = dd c S. Now, if we set F = T − S, then we get a positive pluriharmonic current. Hence by Theorem 2.5, both currents dd c g ∧ F and dd c g ∧ T are well defined. Therefore, one can define dd c g ∧ S by dd c g ∧ T − dd c g ∧ F .
We end this paper by showing a case where dd c g ∧ T can be defined without paying any attention to the derivatives of g.
Corollary 3.6. Let T be a positive or negative plurisubharmonic current of bi-dimension (p, p), p ≥ 1 on Ω and g ∈ P sh − (Ω) ∩ L ∞ loc (Ω). If dT is of order zero, then dd c g ∧ T is a well defined current on Ω.
Proof. It is obvious that the currents gT , gd c T and gdd c T are well defined. Therefore, one can define dd c g ∧ T = dd c (gT ) − 2dg ∧ d c T − gdd c T. (3.7) Data Availability Statement. Data sharing not applicable to this article as no datasets were generated or analysed during the current study. | 2021-03-02T02:15:35.302Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "ea173085b6aa018a2778b268b878193638076196",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ea173085b6aa018a2778b268b878193638076196",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
252773342 | pes2o/s2orc | v3-fos-license | Differential Diagnosis of Alzheimer Disease vs. Mild Cognitive Impairment Based on Left Temporal Lateral Lobe Hypomethabolism on 18F-FDG PET/CT and Automated Classifiers
Purpose: We evaluate the ability of Artificial Intelligence with automatic classification methods applied to semi-quantitative data from brain 18F-FDG PET/CT to improve the differential diagnosis between Alzheimer Disease (AD) and Mild Cognitive Impairment (MCI). Procedures: We retrospectively analyzed a total of 150 consecutive patients who underwent diagnostic evaluation for suspected AD (n = 67) or MCI (n = 83). All patients received brain 18F-FDG PET/CT according to the international guidelines, and images were analyzed both Qualitatively (QL) and Quantitatively (QN), the latter by fully automated post-processing software that produced a z score metabolic map of 25 anatomically different cortical regions. A subset of n = 122 cases with a confirmed diagnosis of AD (n = 53) or MCI (n = 69) by 18–24-month clinical follow-up was finally included in the study. Univariate analysis and three automated classification models (classification tree, ClT; ridge classifier, RC; and linear Support Vector Machine, lSVM) were considered to estimate the ability of the z scores to discriminate between AD and MCI cases. Results: The univariate analysis returned 14 areas where the z scores were significantly different between AD and MCI groups, and the classification accuracy ranged between 74.59% and 76.23%, with ClT and RC providing the best results. The best classification strategy consisted of one single split with a cut-off value of ≈ −2.0 on the z score from the temporal lateral left area: cases below this threshold were classified as AD and those above the threshold as MCI. Conclusions: Our findings confirm the usefulness of brain 18F-FDG PET/CT QL and QN analyses in differentiating AD from MCI. Moreover, the combined use of automated classification models can improve the diagnostic process since it allows identification of a specific hypometabolic area involved in AD cases with respect to MCI.
These data improve traditional 18F-FDG PET/CT image interpretation and the diagnostic assessment of cognitive disorders.
Introduction
Alzheimer disease (AD), the most frequent form of neurodegenerative dementia [1,2], has an increasing incidence due to the progressive aging of the population [3]. The disease evolution of AD is a progressive continuum starting from the subclinical phase of Mild Cognitive Impairment (MCI), which is characterized by the absence of objective evidence of damage to functional autonomy [4,5]. This condition is considered high risk for AD [6]: approximately 10% of MCI cases per year progress to AD or other forms of dementia; however, a fraction of MCI patients will not develop clinical dementia, even after 10 years [5,7,8].
It is, therefore, crucial to identify those MCI cases that are more likely to progress to AD or other forms of dementia. This would allow directing patients toward adequate clinical trials or prevention strategies, since no disease-modifying therapy is currently available [1,9].
There are different tools to evaluate the risk of conversion to AD; among them, the assessment of cerebral metabolism by 18 F-fluoro-deoxyglucose-PET/CT ( 18 F-FDG PET/CT) either alone or in conjunction with other procedures is one of the most accurate [10][11][12][13][14][15]. Moreover, this procedure can be strengthened by semi-quantitative evaluation [16,17], which provides reproducible, standardized parameters.
In recent years, Artificial Intelligence (AI) methods-including machine learning, deep learning and radiomics-have been successfully applied to neurological diseases, particularly to contribute to the diagnosis of Parkinson's disease and dementia [18][19][20]. Various authors have advocated the use of automatic classification for discriminating between AD and MCI [21] based on MR images [22][23][24] or 18 F-FDG PET/CT images [21,25,26], also combined with different types of biomarkers.
The aim of our study was to evaluate further the ability of AI applied to semiquantitative data from 18 F-FDG PET/CT of the brain to improve the diagnosis of cognitive disorders and, in particular, AD and MCI.
Study Population
We retrospectively investigated 150 consecutive patients evaluated for cognitive impairment (64 males, 86 females; age = 70.59 ± 9.14 years) who underwent 18 F-FDG brain PET for differential diagnosis of MCI and dementia between November 2017 and January 2021. Of the 150 cases, 67 had suspected AD (29 males and 38 females) and 83 suspected MCI (35 males and 48 females).
All patients were evaluated for familial neurological diseases and for neurological and general comorbidities (hypertension and diabetes mellitus). Laboratory analyses excluded secondary cognitive disorders, and patients with other ascertained neurological diseases were excluded.
Before performing 18 F-FDG brain PET, all patients underwent neurological examination, neuropsychological tests (Mini Mental State Examination, MMSE) and Magnetic Resonance Imaging (MRI), the latter to assess brain morphology, especially to detect the presence of atrophy and gliosis as potential signs of white matter chronic cerebral vasculopathy. A final subset of n = 122 cases with confirmed AD (n = 53) or MCI (n = 69) was eventually retained for the study. The standard of reference for the diagnosis was a clinical follow-up of 18-24 months. Table 1 reports demographic, clinical, MMSE and MRI data of the study population; Figure 1 shows the STARD diagram for patient selection. Figure 2 summarizes the whole workflow of the study. Before the procedure, written informed consent was obtained from all patients, and their data were treated in accordance with the local privacy rules and regulations. In the informed consent, the patients agreed that their data could be used for scientific purposes. The present study was in accordance with the Helsinki Doctrine on Human Experimentation. 18 F-FDG brain PET/CT was performed according to international guidelines [27]. Patients were advised to fast for at least 4 to 6 h to ensure that 18 F-FDG uptake would not be influenced by increased serum glucose levels. The latter were checked before injection, and the radiopharmaceutical administration was allowed only if/when the serum glucose values were <160 mg/dl.
The patients were invited to rest comfortably in a quiet, dimly-lit room for at least 15 min before 18 F-FDG administration and for at least 20 min during the subsequent phase of tracer uptake. They were also instructed not to speak, read, listen to music/sounds or perform any other similar activities during the procedure.
A dose of 3.7 MBq/kg of 18 F-FDG was then injected intravenously using a previously positioned cannula. Images were acquired 45 min after administration for 15 min at a single bed position using a PET/CT GE Healthcare Discovery 710 tomograph. CT parameters were 120 kV, 45 effective mAs and one rotation. Slice thickness was 3 mm, and the reconstruction interval was 1.5 mm.
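The two numeric gates in this protocol, the pre-injection serum glucose check and the weight-based dose, can be sketched as below; the function names and the example values are ours, not part of the guideline.

```python
GLUCOSE_LIMIT_MG_DL = 160  # injection allowed only below this serum level
DOSE_MBQ_PER_KG = 3.7      # 18F-FDG activity administered per kg body weight

def may_inject(serum_glucose_mg_dl):
    """Pre-injection gate on the serum glucose check."""
    return serum_glucose_mg_dl < GLUCOSE_LIMIT_MG_DL

def planned_activity_mbq(weight_kg):
    """Weight-based 18F-FDG activity for a given patient."""
    return DOSE_MBQ_PER_KG * weight_kg

# A 70 kg patient with glucose at 120 mg/dl would receive 3.7 * 70 = 259 MBq.
print(may_inject(120), round(planned_activity_mbq(70), 1))
```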
Image Reconstruction and Processing
The iterative reconstruction technique was applied for image reconstruction in axial, coronal and sagittal planes. Three nuclear medicine specialists (SN, AS and PB), all with more than 20 years of experience in PET neuroimaging, independently interpreted the images. The three specialists were informed about the potentially pathological conditions of the patients but blinded to the neurological evaluations. 18 F-FDG PET/CT images were analyzed both Qualitatively (QL) and Quantitatively (QN). For QL analysis, images were classified as showing normal or pathological metabolism, considering as normal a homogeneous and symmetric tracer uptake in the cortical areas of both hemispheres, and as pathological cortical areas with reduced or asymmetric tracer uptake.
QN analysis was performed with fully automated post-processing software (Cortex ID SUITE, GE Healthcare, Chicago, IL, United States). All scans, spatially realigned and normalized, were sampled at 16,000 predefined cortical locations and projected on a three-dimensional image. The data were further normalized to the pons and compared with a normal, age-matched segmented database. Finally, a three-dimensional stereotactic surface projection and a Z-score metabolic map were produced [28,29]. In particular, the software computed the radiotracer uptake at 25 predefined regions of interest (ROI), compared the values with those of normal subjects and returned the deviation in terms of Z-score. The ROIs corresponded to the following anatomical cortical areas: prefrontal lateral left (L) and right (R), prefrontal medial L and R, sensorimotor L and R, anterior cingulate L and R, posterior cingulate L and R, precuneus L and R, parietal superior L and R, parietal inferior L and R, occipital lateral L and R, primary visual L and R, temporal lateral L and R, temporal mesial L and R, and whole cerebellum. Z-scores ≤ −2.0 were considered significant, and for each patient, the maximum negative values achieved in each ROI were also evaluated [19,20].
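The semi-quantitative step can be illustrated with a minimal sketch: each ROI's uptake is converted to a z score against an age-matched normal database, and z ≤ −2.0 flags the ROI as significantly hypometabolic. The ROI subset, means and SDs below are hypothetical placeholders, not Cortex ID values.

```python
# Hypothetical normal database: roi -> (mean uptake, SD) in matched controls.
NORMAL_DB = {
    "temporal_lateral_L": (6.2, 0.5),
    "posterior_cingulate_L": (7.1, 0.6),
    "cerebellum": (5.8, 0.4),
}

def z_scores(patient_uptake):
    """z score of the patient's uptake for every ROI in the database."""
    return {
        roi: (patient_uptake[roi] - mean) / sd
        for roi, (mean, sd) in NORMAL_DB.items()
    }

def significant_rois(patient_uptake, threshold=-2.0):
    """ROIs whose z score falls at or below the significance threshold."""
    return sorted(r for r, z in z_scores(patient_uptake).items() if z <= threshold)

patient = {"temporal_lateral_L": 5.0, "posterior_cingulate_L": 6.9, "cerebellum": 5.7}
print(significant_rois(patient))  # → ['temporal_lateral_L']  (z = -2.4)
```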
For univariate analysis and automated classification models, 122 of the 150 initial cases with the diagnosis confirmed by 18-24 months of clinical follow-up (53 AD and 69 MCI) were considered. The remaining 28 patients were excluded from the analysis since the initial clinical suspected diagnosis was not confirmed.
We provide the complete anonymous dataset as Supplementary Material (dataset.xls).
Statistical Analysis
Univariate analysis was performed to determine, for each area, whether there were statistically significant differences in the z scores between the AD and MCI groups. The analysis was based on Welch's test at a significance level α = 0.05; Bonferroni correction was also applied to counteract the effects of multiple tests. Pairwise correlation between the significant features was assessed via Pearson's correlation coefficient.
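The statistical step above can be sketched as a pure-Python Welch t statistic with Welch-Satterthwaite degrees of freedom, plus the Bonferroni-adjusted level for 25 per-ROI tests. The study used standard library routines; this re-implementation and the toy samples are for illustration only.

```python
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # unbiased sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

ALPHA = 0.05
N_TESTS = 25                        # one test per cortical ROI
alpha_bonferroni = ALPHA / N_TESTS  # per-test level after correction: 0.002

t, df = welch_t([1, 2, 3, 4], [3, 4, 5, 6])
print(round(t, 3), round(df, 1), alpha_bonferroni)  # → -2.191 6.0 0.002
```

The p-value would then come from the t distribution with df degrees of freedom, and only areas whose p falls below the Bonferroni-adjusted level count as significant.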
Classification
The automated classification was carried out to estimate the ability of the z scores from the areas with significant differences to discriminate between AD and MCI cases. Three classification models were considered for this task: classification tree (ClT), ridge classifier (RC) and linear Support Vector Machine (lSVM). For each classifier, optimal hyper-parameter values (see Table 2) were determined by grid-search and four-fold cross-validation over 50 random splits; the final accuracy was estimated via leave-one-out cross-validation.
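The evaluation protocol (a model refit on each training split, accuracy estimated by leave-one-out) can be sketched in pure Python. A one-feature threshold rule stands in here for the scikit-learn classifiers actually used; the z scores and labels are invented toy data.

```python
def fit_threshold(xs, ys):
    """Pick the cut-off (midpoint between sorted feature values) that best
    separates the training cases. Rule: label 1 (AD-like) if x < cut-off."""
    srt = sorted(set(xs))
    candidates = [(a + b) / 2 for a, b in zip(srt, srt[1:])]
    return max(candidates,
               key=lambda c: sum((x < c) == bool(y) for x, y in zip(xs, ys)))

def loocv_accuracy(xs, ys):
    """Leave-one-out: refit the rule on n-1 cases, test the held-out case."""
    hits = 0
    for i in range(len(xs)):
        cut = fit_threshold(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        hits += (xs[i] < cut) == bool(ys[i])
    return hits / len(xs)

# Invented toy z scores: label 1 = AD-like (hypometabolic), 0 = MCI-like.
z = [-3.1, -2.8, -2.5, -0.9, -0.4, 0.2]
y = [1, 1, 1, 0, 0, 0]
print(loocv_accuracy(z, y))  # → 1.0 on this cleanly separable toy sample
```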
Execution, Data and Code Availability
Data analysis and visualization were based on Python 3.8.4 and functions from the matplotlib, numpy, Pandas, scikit-learn and seaborn packages. The experiments were carried out on an ASUS ProArt Laptop PC with Intel Core TM i7-9750H @ 2.60 GHz CPU, 36 Gb RAM and Windows 10 Pro 64-bit. The total execution time was ≈2 min.
Results
Univariate analysis (Table 3) returned 14 areas where the z scores were significantly different between the AD and MCI groups. These were: prefrontal lateral (L and R), prefrontal medial (L and R), posterior cingulate (L and R), precuneus (L and R), parietal inferior (L and R), occipital lateral L, temporal lateral (L and R) and temporal mesial L. Figure 3 reports the box-plots, strip-plots and p-values for each area. Figure 3. Box-plots and strip-plots of the Z-score for each area with significant difference between AD and MCI. The correlation analysis (Figure 4) identified 19 pairs of areas with very strong positive correlation (Pearson's r ≥ 0.8) [30], 44 with moderately strong positive correlation (0.6 ≤ r < 0.8) and 28 with fair positive correlation (0.3 ≤ r < 0.6). There were no pairs of areas with poor (0.0 ≤ r < 0.3) or negative correlation. The pairs with the strongest correlation (r > 0.85) were: posterior cingulate L and R, parietal inferior R and temporal lateral R, parietal inferior L and temporal lateral L, precuneus L and parietal inferior L, prefrontal lateral L and R, parietal inferior L and R, precuneus R and parietal inferior R, prefrontal lateral L and parietal inferior L, precuneus L and R, and prefrontal medial L and R.
As can be seen from Table 4, the classification accuracy ranged between 74.59% (91/122) and 76.23% (93/122), with ClT and RC providing the best results. As for the classification tree, it is worth noting that the best classification strategy consisted of one single split with a cut-off value of ≈ −2.0 on the z score from the temporal lateral left area: cases below this threshold were classified as AD and those above the threshold as MCI (Figure 5).
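The single-split rule reported above reduces to a one-line classifier; the function name is ours, and the behavior exactly at the cut-off is an arbitrary choice in this sketch.

```python
CUTOFF = -2.0  # cut-off on the temporal lateral left z score

def classify(z_temporal_lateral_left):
    """Single-split decision rule: below the cut-off -> AD, otherwise MCI."""
    return "AD" if z_temporal_lateral_left < CUTOFF else "MCI"

print(classify(-3.2), classify(-1.1))  # → AD MCI
```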
Discussion
In the Amyloid-PET era, 18 F-FDG PET/CT still plays a significant role in the diagnosis of Alzheimer Disease (AD) and Mild Cognitive Impairment (MCI) [31][32][33]. In 2011, the National Institute on Aging and Alzheimer's Association (NIA-AA) proposed separate diagnostic recommendations for the preclinical mild cognitive impairment and dementia stages of Alzheimer's disease based on different biomarkers able to discriminate, in vivo, the different pathological entities, including beta-amyloid deposition, pathologic tau and neurodegeneration. This classification generated an [A/T/N] system where "A" represents the biomarkers of Ab plaques (including cortical amyloid PET ligand binding or low CSF Ab42), "T" the biomarkers of fibrillary tau (including elevated CSF phosphorylated tau (P-tau) and cortical tau PET ligand binding) and "N" the labeled biomarkers of neurodegeneration or neuronal injury as, CSF T-tau, 18 FDG-PET/CT hypometabolism and atrophy on MRI [34]. This interesting system represented a basis for a biological definition of AD [6] and paved the way for subsequent studies [35][36][37].
Iaccarino [38] showed that in a group of 518 MCI subjects and 269 healthy controls from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu), a positive amyloid PET scan was not associated with clinical progression in the majority (≥60%) of subjects, although it represented a significant risk factor, while a negative 18 F-FDG PET/CT scan at baseline strongly predicted clinical stability with high negative predictive values (>0.80) for both groups of subjects. The authors concluded that 18 F-FDG PET/CT brain metabolism or other neurodegeneration measures should be coupled with amyloid-PET to identify clinically stable individuals in order to exclude them from clinical trials.
Ottoy and co-workers [14] showed that patients with MCI progressed to AD at an annual rate of 31%, and this could be best predicted by combining neuropsychological testing with MRI-based Hippocampal Volume and 18 F-FDG PET/CT (specificity = 96%, sensitivity = 92%).
Chatelat et al. [39] reported that 18 F-FDG PET/CT could predict the clinical outcome in patients with MCI who already have an amyloid-PET scan. In their work, a normal 18 F-FDG-PET scan was associated with long-term clinical stability-even in amyloid-positive cases; by contrast, a pathological 18 F-FDG-PET scan was indicative of an increased risk of progressive cognitive decline even in amyloid-negative cases.
Tondo et al. [40] observed 142 subjects with amnestic MCI for 4-19 years and determined that hypometabolism patterns on baseline 18 F-FDG PET/CT could predict long-term outcomes in terms of stability or progression to AD. Specifically, they reported that limbicpredominant hypometabolism pattern was associated with clinical stability, thus making progression to AD very unlikely.
Arbizu et al. [32] investigated the additional value of 18 F-FDG PET/CT beyond clinical neuropsychological examination to support the diagnosis of prodromal Alzheimer's Disease (AD), frontotemporal lobar degeneration (FTLD) and prodromal dementia with Lewy bodies (DLB) in mild cognitive impairment (MCI) subjects. A panel of seven experts (four from the European Association of Nuclear Medicine and three from the European Academy) provided recommendations about the incremental value of 18 F-FDG PET/CT to evaluate the etiology of MCI (AD, FTLD or DLB). The study identified 55 relevant papers, which were obtained through a population, intervention, comparison and outcome (PICO) search string. The meta-analysis indicated that 18 F-FDG PET/CT patterns enabled the correct identification of MCI etiology due to AD with accuracy between 58% and 100% and area under the curve between 0.66 and 0.97; however, no specific data were found in regard to MCI due to FTLD or DLB.
However, the clinical use of 18 F-FDG PET/CT in MCI subjects has reached consensus recommendation owing to its high negative predictive value and the existence of different disease-specific patterns of hypometabolism. Worthy of mention is that 123I-ioflupane and 123I-MIBG can be useful in the case of prodromal DLB [41].
In this scenario, our findings confirm the ability of 18 F-FDG PET/CT to differentiate AD from MCI. Specifically, the univariate analysis (Table 3, Figure 3) indicates that AD and MCI cases had significantly different readings in many strategic areas, in particular, precuneus, posterior cingulate and parietal and temporal regions.
We also demonstrated the ability of automated classifiers (ClT, lSVM and RC) to discriminate between AD and MCI based on the z scores of the significant areas. Our classification accuracy ranged between 74.59% (91/122) and 76.23% (93/122), with ClT and RC providing the best results. As for the classification tree, it is worth noting that the best classification strategy consisted of one single split with a cut-off value of ≈ −2.0 on the z score from the temporal lateral left area: cases below this threshold were classified as AD and those above the threshold as MCI (Figure 5). This suggests that radiopharmaceutical uptake in the temporal lateral left area is the "marker" region able to discriminate between AD and MCI in our series of patients. This is consistent with the hierarchical pathologic progression of neurofibrillary tangles that spread from the middle temporal to the lateral temporal areas. Indeed, tau pathology initially affects the transentorhinal cortex, followed by the entorhinal cortex, then the fusiform and lingual areas, and later reaches the lateral temporal association areas [42][43][44].
Finally, we wish to underline the usefulness of semi-quantitative analysis to assist visual reading, as it has been widely evidenced by international literature [15,37]. We used Cortex ID suite, a fully automated post-processing software for quantifying 18 F-FDG PET/CT and beta-amyloid brain scans that used three-dimensional stereotactic surface projections (3D-SSP) for statistical image analysis [45,46]. Future works may focus on the combination of semi-quantitative analysis with direct extraction of traditional and/or deep learning imaging features from the brain scans, as discussed in [47][48][49].
In conclusion, our study indicates that the combination of semi-quantitative analysis and automatic classification can improve the diagnostic process through the identification of the metabolically impaired areas specific to the different disorders. The development of computer-aided diagnosis (CAD) systems is also receiving attention not only as a means to support the diagnostic process by the calculation of cut-off values but also to assess correlations between clinical data and pathologies. This improves traditional image interpretation and diagnostic assessment in many neurodegenerative diseases [49][50][51]. The role played by artificial intelligence techniques, i.e., Machine Learning, Radiomics and Deep Learning, is pivotal to building diagnostic models for personalized care. This is of particular importance in neurodegenerative diseases such as AD and MCI, as they fall in a sort of "grey area" where a clear diagnosis is often difficult. Our paper confirms the clinical value of 18 F-FDG brain PET/CT as an essential diagnostic first step to contribute to the differential diagnosis of dementia disorders also in the amyloid PET and biological markers (i.e., amyloid and tau protein) era since, as is well known, accurate and early diagnosis of Alzheimer disease with respect to Mild Cognitive Impairment is crucial for improving the condition of patients. The combined use of semi-quantitative analysis of 18 F-FDG PET and automatic classification seems to be a supportive tool for clinical diagnosis in order to consider effective early preventive measures to delay the appearance of the full-blown disease or to suggest further investigations such as amyloid PET or more advanced treatments and therapeutic approaches [52,53].
Limitations and Future Work
A number of limitations apply to this paper, among which are the retrospective nature of the study and the relatively small sample size. Furthermore, it is to be noted that our method relies on semi-quantitative data, the calculation of which is delegated to an external software package (Cortex ID). Extraction of custom imaging features directly from the PET/CT scans via hand-crafted methods and/or Deep Learning (as discussed in [54,55]) is an interesting subject for future studies. Institutional Review Board Statement: This retrospective study was conducted in accordance with the Declaration of Helsinki. Ethical review and approval were not required for this study due to the retrospective nature and the analysis of data in anonymous form. | 2022-10-10T15:43:05.503Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "7759d96ab3e12792385c5c30b4411ea1f0603b0c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4418/12/10/2425/pdf?version=1665136881",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ea97ee790c5fa251383b3a110d855c9d1bdbb9cc",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Local Gene Regulation Details a Recognition Code within the LacI Transcriptional Factor Family
The specific binding of regulatory proteins to DNA sequences exhibits no clear patterns of association between amino acids (AAs) and nucleotides (NTs). This complexity of protein-DNA interactions raises the question of whether a simple set of wide-coverage recognition rules can ever be identified. Here, we analyzed this issue using the extensive LacI family of transcriptional factors (TFs). We searched for recognition patterns by introducing a new approach to phylogenetic footprinting, based on the pervasive presence of local regulation in prokaryotic transcriptional networks. We identified a set of specificity correlations, determined by two AAs of the TFs and two NTs in the binding sites, that is conserved throughout a dominant subgroup within the family regardless of the evolutionary distance, and that acts as a relatively consistent recognition code. The proposed rules are confirmed with data of previous experimental studies and by events of convergent evolution in the phylogenetic tree. The presence of a code emphasizes the stable structural context of the LacI family, while defining a precise blueprint to reprogram TF specificity with many practical applications.
Introduction
The search for principles describing how specific nucleotide sequences are recognized by proteins remains one of the most fundamental problems to be solved in Biology [1][2][3][4][5]. The relevance of this question is linked to the wide breadth of basic cellular processes to be better understood with its resolution, like how genomes respond to stress by accurately activating/inactivating groups of genes, or how cells differentiate into separate classes following a program of precise spatio-temporal gene expression. Additionally, these principles could turn into genuine rules to engineer protein production, either in isolation or as part of elaborated molecular circuits or networks, with many practical applications.
Given the relevance of this search, when could one say that principles have been actually identified, or that this goal failed? Answers to these questions changed over the years, e.g., [2,[6][7][8][9][10], as the knowledge of how transcriptional factors (TFs) recognize their cognate binding sites (BSs) did. Two mechanistic aspects of this recognition are relevant in this regard [11][12][13], i.e., how selected AA/NT binding partners determine specificity (direct readout) and how specificity could be influenced by additional structural features (indirect readout). Within this second aspect, both the protein structural context in which the contacting AAs are embedded [7,14,15] and the conformational characteristics of DNA upon TF binding [11,12] appear as particularly important modifiers.
In fact, the relative strength of direct and indirect readouts can greatly influence the nature of the recognition rules to be identified. The most simplistic situation could be one in which (simple) direct readouts for the contacting positions were dominant specificity determinants. In this case, one could conceive the presence of deterministic codes of wide applicability. However, the rich repertoire for AA/NT interactions, which includes hydrogen or water-mediated bonds and also van der Waals contacts [16], and the context dependence of these interactions rule out the appearance of deterministic codes [6,7,17]. Instead, one should rather look for probabilistic recognition codes restricted to similar protein structures [3,8,14,18]. The applicability of these principles to large protein groups might ultimately depend on the conservation of the modifiers linked to indirect readouts.
Interestingly, some of these issues can be studied with the use of mutational experiments -either in vivo [19,20] or in vitro [8,10,21,22] -which start with a known TF/BS relationship to characterize changes in specificity once selected AA and/or NT positions are mutated. Since the number of possible sequences grows exponentially with the number of positions to be explored, this approach usually requires the use of large mutant libraries. Consequently, even when the sequence space is explored in a random way [8], or by screening methods [20], the positions to be mutated are always selected among those corresponding to direct readouts. Since the rest of positions remains fixed, the conservation of the structural context within the library directly follows. This implies that any set of recognition rules deduced from the mutational approach is restricted, in principle, to the library elements.
The existence of a natural version of such a synthetic code would require a strong conservation of the mode of binding within the family of proteins to which the focal mutated protein belongs, despite the variability in the non-contacting positions [23,24]. Mutational studies can estimate this conservation only in an indirect manner, by finding natural correspondences of some of the synthetic AA/NT relationships studied [8,19]. Regardless of the existence or absence of such correspondences, those mutants with differential specificities could constitute useful tools for Synthetic Biology [20,25].
An alternative approach to this problem, in which the role of indirect readouts is evaluated, deduces the recognition rules by using genomic tools applied to natural sequences of both TFs and BSs [14,15,[26][27][28]. In this case, each residue/base contact is embedded in its own structural context and the possibility of family codes can be explicitly examined. The finding of consistent recognition rules, whereby the sequences of the contacting AAs and NTs correlate, would imply that variations on the rest of residues do not compromise the conservation of the binding mode within the considered set. Moreover, such natural recognition code would suggest that the evolution of new specificities is mainly achieved by alteration of base contacting residues (direct readouts) [14]. Recognition rules following this approach were formulated for several sets which, in each case, involved a limited number of related TFs [14,27,28].
In this work, we asked to what extent a natural wide-coverage recognition code could exist. From the arguments before, this code could be considered as such when it fulfills two important requirements. First, the determinants of the indirect readout should not prevent the identification of consistent sequence correlations between the contacting AAs and NTs for a given regulator family (or a substantial fraction of it). Second, most of these natural associations should be reproducible by mutating the specificity-associated AAs of a particular focal member of the family. Note that these features do not include that the recognition correlations should be expressed in terms of a few deterministic rules -although strong general trends are expected.
We considered as a model system to approach this question the extensive LacI family of transcriptional regulators [26], whose helix-turn-helix (HTH) domain (Fig. 1.A) interacts with a set of cognate BSs [29]. Within this set, we examined a dominant group (involving more than half of the LacI family members) composed of regulators exhibiting the sequence threonine-valine-serine-arginine (TVSR) in the recognition helix of the HTH domain. We searched for recognition rules by introducing a new strategy based on comparative genomics and the use of a pervasive characteristic of prokaryotic regulation: the local control of gene expression [30][31][32][33][34].

Figure 1. A) X-ray model for a LacI dimer bound to a palindromic BS (plotted with Jmol from the PDB structure 1lbg). Only the binding domain of each monomer is shown (in light/dark purple, respectively). The hinge-helix and the recognition helix of each monomer are colored in yellow and red, respectively. B) Logo for the alignment of 2639 non-redundant HTH-LacI domains. The AA coordinates of any particular domain will be referred to by their position in this alignment; they match the numbering of the first 71 AAs of Escherichia coli's GalR and GalS regulators. Helix-1, helix-2 (or recognition helix) and the intermediate residues constitute the HTH motif itself. C) Logo for the alignment of the set of BSs associated to 370 LacI family members (BS sequences from RegTransBase [46]). In BS logos we avoided subscripts for left and right half site coordinates.
Author Summary
Transcriptional factors (TF) are proteins that bind specific short DNA sequences adjacent to the genes whose transcription they regulate. Although the nucleotide sequence recognized by a given regulator depends on the amino acids contacting the DNA, the mode in which amino acids and nucleotides interact is strongly influenced by the overall protein structure. This prevents the existence of a universal amino acid/nucleotide recognition code. However, recognition rules could be formulated for regulators sharing a similar structure, i.e., for a family or subfamily of TFs. In fact, such rules have already been described for several sets which, in each case, involved a limited number of related TFs. In this study, we ask to what extent a wide-coverage recognition code might actually be found. To answer this question, we use the extensive LacI family of transcriptional regulators. Our analysis suggests that a set of relatively consistent recognition rules does apply within a major subset of this family. These rules could ultimately act as a blueprint for the synthetic redesign of TFs with new specificities.
Our analysis suggests that the determinants of the indirect readout are substantially conserved throughout the TVSR group, in which a set of relatively consistent recognition rules applies. Moreover, the phylogenetic tree associated to this group exhibited several convergence events for the recognition relationships, i.e., distant proteins in the tree sharing the same recognition AA sequence tend to bind similar NT sequences. The natural recognition correlations identified could be reproduced with a synthetic approach, as suggested by comparing the theoretical predictions with previous mutational experiments [19,20] and by the finding of natural BSs previously considered as simple laboratory constructs [35].
Results/Discussion
Same binding patterns could be pervasive to the whole LacI family

We aligned non-redundant HTH-LacI domain sequences using information from MicrobesOnline [36], a database that contains approximately one thousand prokaryotic genomes (Methods). The resulting sequence logo (Fig. 1.B) suggested that the binding patterns previously identified with structural studies could potentially apply to the whole LacI family. Specifically, these studies solved the binding-domain/DNA complex of Escherichia coli's LacI [37][38][39] and PurR [40,41], and Bacillus megaterium's CcpA [42], clearly distinguishing a contrast between structural and DNA-binding residues in the corresponding domains.
Indeed, positions exhibiting a strong conservation in our comparative analysis corresponded to proposed structural residues. In particular, the conservation of the hydrophobic residues in AA-54 (mostly leucine, 82%) indicated that the BS pattern in the family could be dominated by a conserved central CG group (although we did not use this prior knowledge in our analysis). In every structural study, this residue of the hinge-helix inserts into a central CG group located in the minor groove and bends the DNA ( Fig. 1.A). The conserved alanine in AA-51 is similarly related in these analyses to the hinge-helix/CG union by non-specific interactions with the phosphate groups, or through direct contacts with the bases [43]. Exceptions to this union are rare [44,45].
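The contrast between conserved structural residues (such as the 82% leucine at AA-54) and variable DNA-contacting residues can be read directly off per-column conservation fractions of the alignment. A minimal sketch of that computation, on toy sequences rather than the actual HTH-LacI alignment:

```python
from collections import Counter

def column_conservation(alignment):
    """Fraction of the most common residue at each column of an alignment."""
    ncols = len(alignment[0])
    fractions = []
    for i in range(ncols):
        column = [seq[i] for seq in alignment]
        top_count = Counter(column).most_common(1)[0][1]
        fractions.append(top_count / len(column))
    return fractions

# Toy alignment of short domain fragments (hypothetical sequences):
aln = ["ALKAL", "ALRAL", "GLKAL", "ALKAI"]
print(column_conservation(aln))   # [0.75, 1.0, 0.75, 1.0, 0.75]
```

Columns near 1.0 would be candidate structural residues; low-fraction columns are candidates for specificity tasks.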
To identify the potential DNA-binding residues resolving BS specificity, we selected those domains in the alignment which were univocally associated to BSs in the RegTransBase v5 [46] (370 domains). These BSs were aligned to produce the logo in Fig. 1.C. Note the palindromic nature of this logo, which manifests the symmetrical contacts made by the monomers that constitute the dimeric regulators on the corresponding left (L)/right (R) half site location of the BSs [in the following, we usually simplify the notation of symmetrical positions, and palindromic sequences, by those in the left half site, e.g., (NT-5_L, NT-4_L; NT-4_R, NT-5_R) = (TG; CA) as (NT-5, NT-4) = TG].
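The half-site simplification works because, in a palindromic BS, the right half is the reverse complement of the left half, so only left-half coordinates need to be reported. A small helper illustrating the convention, on hypothetical four-base sites:

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(COMPLEMENT)[::-1]

def left_half_notation(site):
    """Collapse a site into its left half; flag whether it is palindromic.

    For an even-length site, e.g. (NT-5_L, NT-4_L; NT-4_R, NT-5_R),
    both strands read the same iff the right half is the reverse
    complement of the left half.
    """
    half = len(site) // 2
    left, right = site[:half], site[half:]
    return left, revcomp(right) == left

# (NT-5, NT-4) = TG on the left corresponds to CA on the right:
print(left_half_notation("TGCA"))   # ('TG', True)
print(left_half_notation("TGGG"))   # ('TG', False)
```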
We then calculated the mutual information (covariance dependency) between the alignment of these 370 domains and that of their corresponding BSs [47] (Fig. S1). This computation identified three main patterns. First, the extensive linkage between the non-conserved nucleotide pair (NT-5, NT-4) and the (AA-15, AA-16) residues located in the recognition helix (this helix includes residues AA-15 to AA-22, see Fig. 1.B). Second, the presence of a strong connection between NT-6 and AA-20 (also in the recognition helix); these coordinates exhibited no other appreciable interdependences suggesting a mode of recognition relatively independent to the previously discussed pair. Finally, the correlation of NT-2 with AA-55, AA-15 and AA-5, in decreasing order of importance.
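The covariance dependency between the two alignments can be quantified column by column; a minimal sketch of a mutual-information calculation on paired columns (toy data, not the actual 370-domain set):

```python
import math
from collections import Counter

def mutual_information(col_x, col_y):
    """Mutual information (bits) between two paired alignment columns,
    e.g. an AA column of the domain alignment and an NT column of the
    corresponding BS alignment."""
    n = len(col_x)
    px = Counter(col_x)
    py = Counter(col_y)
    pxy = Counter(zip(col_x, col_y))
    mi = 0.0
    for (x, y), nxy in pxy.items():
        p_joint = nxy / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy paired columns: AA-16 residues vs NT-4 bases for four TF/BS pairs
aa16 = ["A", "A", "S", "S"]
nt4  = ["G", "G", "T", "T"]
print(mutual_information(aa16, nt4))   # 1.0 bit: perfectly correlated
```

High-MI pairs of columns, such as (AA-15, AA-16) with (NT-5, NT-4) in the text, are the candidate specificity determinants.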
The mutual information analysis also generalized previous experimental results obtained with a few members of the LacI family, this time with respect to the proposed specificity residues. In particular, the association of the pair (NT-5, NT-4) to (AA-15, AA-16) was demonstrated by structural models [29] and mutational studies [19]. The independent nature of the recognition interaction between NT-6 and AA-20 was also suggested by previous mutational studies of E. coli's LacI [19,29]. In addition, the link between NT-2 and the hinge-helix residue AA-55 (Fig. 1.B) was proposed in [41]. Moreover, although AA-20 was related to recognition processes, it is a strongly conserved residue, with arginine (R) linked to the presence of a guanine in NT-6 (χ² = 405.2, p < 0.0001, Yates-corrected χ²-test). This resulted in the same AA sequence (a TVSR sequence for the range AA-17 to AA-20) in 1490 instances of a total of 2639 included domains (56.5%, Fig. S2). We thus restricted the following analysis to the TVSR dominant subgroup.
From all the above, we hypothesized that the distinction among the different BSs associated to the TVSR set would rely mostly on the (AA-15, AA-16) pair. We further considered a stronger version of this hypothesis assuming that regulators sharing the same (AA-15, AA-16) sequence would tend to bind similar BSs regardless of their evolutionary distance. In the following, we tried to confirm these conjectures by analyzing the possible presence of a recognition code assigning specific nucleotides (NT-5, NT-4) to residues (AA-15, AA-16).
Autoregulation helps identify a recognition code
The search of a wide-coverage recognition code required a large scale identification of the native BSs for each TF, with independence of its location in the LacI family phylogenetic tree. This requirement might become problematic if we were to apply the standard protocols of BS search. These methods often rely on the identification of orthologs of experimentally determined target genes to look for conserved upstream BSs -for example, by applying phylogenetic footprinting (PF) techniques [48]. As evolutionary distance between TFs increases, this approach lacks precision because of the complications to define orthologs, e.g., due to events of duplication and loss of genes [49].
We decided to use a complementary strategy to search for BSs. This strategy was based on the hypothesis of the conservation of binding mode and also on the widespread presence of local transcriptional control in bacteria (including both auto- and neighbor-regulation [34]). Thus, we first grouped regulators sharing the same sequence of recognition residues (AA-15, AA-16), regardless of the evolutionary distance among the full TF sequences. Within each of these groups, or recognition classes, we looked for potential BSs in the intergenic regions located before the operon encoding the TF itself, and before the downstream operon, respectively (Fig. 2.A). We applied PF for BS search on these sequences with a subsequent refinement based on iterated position weight matrices (PWMs); this protocol was aimed at minimizing the rate of false positives linked to bioinformatic BS searches [49] (see Methods).
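The two-step protocol (a PF-derived seed, then iterated PWM refinement over the intergenic regions) can be sketched as follows; the scoring, pseudocounts and fixed iteration count here are simplified placeholders for the Methods, and the regions are toy sequences:

```python
import math

BASES = "ACGT"

def build_pwm(sites, pseudo=0.5):
    """Log-odds position weight matrix (vs uniform background) from aligned sites."""
    width = len(sites[0])
    pwm = []
    for i in range(width):
        counts = {b: pseudo for b in BASES}
        for s in sites:
            counts[s[i]] += 1
        total = sum(counts.values())
        pwm.append({b: math.log2(counts[b] / total / 0.25) for b in BASES})
    return pwm

def best_hit(pwm, region):
    """Highest-scoring window of the region under the PWM."""
    w = len(pwm)
    scored = [(sum(pwm[i][region[j + i]] for i in range(w)), region[j:j + w])
              for j in range(len(region) - w + 1)]
    return max(scored)

def iterate_pwm(seed_sites, regions, rounds=5):
    """Refine the site set: rebuild the PWM, rescan each region, repeat."""
    sites = list(seed_sites)
    for _ in range(rounds):
        pwm = build_pwm(sites)
        sites = [best_hit(pwm, r)[1] for r in regions]
    return sites

# Toy intergenic regions with a buried TGAC motif (hypothetical data):
regions = ["AATTGACGA", "CCTGACTTA", "GGTTGACCA"]
print(iterate_pwm(["TGAC"], regions))
```

In the real protocol the seed comes from phylogenetic footprinting within a recognition class, and candidate sites failing affinity checks are discarded between iterations.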
We obtained in this way a nucleotide logo from each alignment of BSs associated to a recognition class (Figs. 2.B-D and Appendix in Text S1 for the complete set). We also computed the consensus logo of the full TVSR group (Fig. 2.E), where the contrast between conserved and non-conserved NTs is especially apparent. Although we used uninformed priors in the BS-finding algorithms to avoid circularity biases, the obtained consensus logo corresponded to the one expected from a situation where the TF binding mode is conserved (compare Fig. 1.C, computed from a previously known BS set [46], to Fig. 2.E). Note the conservation of G in NT-6_L (and C in NT-6_R), since we considered a group of domains with arginine in AA-20. Computation of the familial binding profile [23,24], a method that can also suggest the conservation of the binding mode within a TF family, for the TVSR set produced the same qualitative patterns in the consensus logo.
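The information content displayed at each logo position is 2 − H bits, with H the Shannon entropy of the column; a minimal sketch on toy sites:

```python
import math
from collections import Counter

def information_content(sites):
    """Per-position information (bits) of a BS alignment, as in a sequence
    logo: IC = 2 - H, where H is the Shannon entropy of the column."""
    ic = []
    for i in range(len(sites[0])):
        counts = Counter(s[i] for s in sites)
        n = sum(counts.values())
        entropy = -sum(c / n * math.log2(c / n) for c in counts.values())
        ic.append(2.0 - entropy)
    return ic

# A fully conserved G, a fully conserved A, and a uniform position:
sites = ["GA" + b for b in "ACGT"]
print(information_content(sites))   # [2.0, 2.0, 0.0]
```

Positions near 2 bits correspond to the conserved backbone of the consensus logo; positions near 0 bits are the non-conserved, potentially specificity-determining NTs.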
Two contrasting scenarios to test for a wide-coverage recognition code
Once we obtained the BS logos associated to each AA recognition class, we could naively suppose that the presence of logos with high information content in both NT-4 and NT-5 would confirm the hypothesis of a recognition code. In the same vein, ambiguities in these nucleotides would reject the hypothesis (for example, in the set V15A16, Fig. 2.B, where both T and A are found in NT-5_L). However, low-information positions could alternatively be explained by degeneracies in the recognition process, an expected attribute of extant codes [3]. In this latter case, the code conjecture would still hold true.
How can we distinguish these contrasting situations? Imagine a simplistic scenario in which a particular recognition AA sequence corresponds to a (recognition) class uniquely constituted by two different TFs. Imagine also that there were only two types of half site with different (NT-5, NT-4) sequences in the BSs observed for this TF class. Consequently, the corresponding BS logo would exhibit low-information (NT-5, NT-4) positions. This ambiguity could be caused because the particular (AA-15, AA-16) sequence for this class showed some degeneracy in recognition (as discussed above; we termed this intrinsic degeneracy), or because each TF exhibited a precise specificity to either type of half site, i.e., the recognition AA pair is not acting as the only determinant of specificity.
We can further illustrate this with the help of Figure 3. In principle, the two species of half sites involved could be combined into palindromic (P1, P2 in Fig. 3.A) or non-palindromic architectures (M1, M2 in Fig. 3.A). When each TF monomer had a high affinity for both half sites (Fig. 3.B left), they could bind efficiently to P1, P2 and either mixture (we considered both mixtures to have the same binding energy). In a second situation ( Fig. 3.B, center) both TFs had again similar affinities, but this time the monomers bound preferentially to one type of half site and, consequently, to one palindrome. Although a mixed configuration could still be compatible with (weaker) regulatory tasks, the probability of binding to the other palindrome strongly decreased. These are two instances of intrinsic degeneracy. Finally, in a third scenario each TF was very specific to a single half site type; so that only P1 or P2 were accessible (no mixtures), an example of logo ambiguity due to an extrinsic degeneracy ( Fig. 3.B, right).
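Under an additive-energy toy model, the three scenarios of Fig. 3.B differ only in the monomer half-site energies; the sketch below uses arbitrary units and is not fitted to any measured affinities:

```python
import math

def binding_weight(e_left, e_right, beta=1.0):
    """Boltzmann weight of a dimer whose two monomers contribute additive
    half-site binding energies (toy model of the Fig. 3 scenarios)."""
    return math.exp(-beta * (e_left + e_right))

# Scenario: each monomer strongly prefers half-site type 1 (energy 0)
# over type 2 (energy 3). Sites P1 = (1,1), P2 = (2,2), mixed M = (1,2).
energy = {1: 0.0, 2: 3.0}
for name, (left, right) in {"P1": (1, 1), "P2": (2, 2), "M": (1, 2)}.items():
    print(name, round(binding_weight(energy[left], energy[right]), 3))
```

With equal half-site energies the three weights coincide (first scenario); with a mild preference the mixture retains an intermediate weight (second scenario); with a strong preference only one palindrome is effectively accessible (third, extrinsic scenario).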
Ambiguities explained as intrinsic degeneracies are compatible with our starting hypothesis and would only reflect a degenerate code. The code hypothesis must be revised or even rejected when extrinsic degeneracies are common. This would presumably reflect critical changes in the determinants of the indirect readout.
Comparative data suggests the presence of a wide-coverage code

A BS logo can thus be degenerate because i) the recognition process is degenerate in itself (intrinsic degeneracy) or ii) the logo is computed from BSs recognized by TFs with different specificities (extrinsic degeneracy). To distinguish between these two scenarios, we identified and classified degeneracies (Methods). Fig. 3.C shows the notation used for the different degeneracies. One could simultaneously observe several of these degenerate scenarios for any alignment involving more than two different types of half sites.
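The intrinsic/extrinsic distinction can be operationalized in code; the function below is a deliberately simplified toy of the Methods criterion, keyed only on whether single TFs accept several half-site types (intrinsic) or different TFs are each locked to distinct types (extrinsic):

```python
def classify_degeneracy(tf_to_halfsites):
    """Classify a recognition class given, per TF, the set of (NT-5, NT-4)
    half-site types observed among its BSs (toy model, not the full Methods)."""
    types = set().union(*tf_to_halfsites.values())
    if len(types) <= 1:
        return "non-degenerate"
    labels = []
    if any(len(v) > 1 for v in tf_to_halfsites.values()):
        labels.append("intrinsic")        # one TF accepts several types
    singles = [next(iter(v)) for v in tf_to_halfsites.values() if len(v) == 1]
    if len(set(singles)) > 1:
        labels.append("extrinsic")        # distinct TFs, distinct fixed types
    return "+".join(labels)

print(classify_degeneracy({"tf1": {"TG"}, "tf2": {"TG"}}))        # non-degenerate
print(classify_degeneracy({"tf1": {"TG", "AT"}, "tf2": {"TG"}}))  # intrinsic
print(classify_degeneracy({"tf1": {"CA"}, "tf2": {"GG"}}))        # extrinsic
```

A class can report both labels at once, matching the observation that some classes in Table S1 exhibit intrinsic and extrinsic degeneracies simultaneously.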
Table S1 included all correlations obtained between the pair of residues (AA-15, AA-16) and the nucleotides NT-4 and NT-5, together with the corresponding degeneracies when observed. This table contains 48 different recognition classes, involving a total of 38 intrinsic and 6 extrinsic degeneracies (some classes exhibiting both). The different types of identified degeneracies corroborated the potential of this protocol to detect distinct BSs within a TF class. The extrinsic degeneracies observed constitute a small number of exceptions to an otherwise consistent confirmation of the code conjecture.
We showed a subset of these results, with only significant palindromic combinations, in Fig. 4.A. Recognition sequences were sorted by the left half-site sequence of the palindromes they recognize, and connected according to their resolved degeneracies. For instance, R15S16 shows an extrinsic degeneracy between (NT-5, NT-4) = CA and (NT-5, NT-4) = GG. The variability of the recognition correlations in AA-15 also became manifest in this figure, a flexibility previously pointed out by mutational studies [19]. Our genomic approach thus confirmed that the role of AA-16 as the strongest determinant of specificity applies throughout the TVSR group [19].
Since the general mode of binding in the LacI family involves DNA bending, one could expect that the direct readout of the contacting residues would be strongly conditioned by the characteristics of this specific type of indirect reading [11,12,50]. This would directly imply that TFs with the same contacting residues could recognize different NT sequences. However, the small number of extrinsic degeneracies found suggests that the degree of bending remains substantially conserved throughout the TVSR group.
The consistent next step after proposing an AA/NT recognition code was to validate its predictions. We approached this issue in the next sections in three complementary ways. First, we compared the theoretical predictions with experimental data from LacI mutants (Fig. 4.B and Fig. S3) [19,20]. Second, we confirmed the existence of natural counterparts of BSs previously interpreted only as synthetic constructs (Fig. 4.C). Finally, by computing a gene tree including all TFs with at least one BS in Table S1, we identified several convergence events in the recognition processthe same AAs/NTs association in different tree locations (Fig. 5) that additionally supported the hypothesis of the conservation of the mode of binding, and that overall indicated the presence of a relatively consistent recognition code.
Mutational studies support code predictions
We compared the theoretical predictions with two experimental studies analysing the DNA binding specificities of Escherichia coli's LacI repressor [19,20]. Fig. 4.B shows a comparison between the recognition rules in Table S1 and data from the first of these studies, the pioneering work of Müller-Hill and colleagues [19], in which several repressor mutants were isolated and characterized. In this figure, the experimentally measured repression of (NT-5, NT-4)-palindromes by different (AA-15, AA-16)-LacI mutants is shown in boxes (with TG=Y15Q16 being the wild type interaction), where the theoretical predictions are superimposed. These predictions are indicated by arrows, following Table S1, with dots denoting non-degenerate associations [links to a single (NT-5, NT-4) pair]. The agreement between theory and experiments emphasizes the presence of an intrinsically degenerate code, the only discrepancy being the wild type Y15Q16.

Figure 4. A) AA sequences recognizing a same sequence of NTs were grouped. Here, we only considered significant palindromic NT sequences; for example, (NT-5, NT-4) = TG means (NT-5_L, NT-4_L; NT-4_R, NT-5_R) = (TG; CA). We included the case for (AA-15, AA-16) = YQ corresponding to the synthetic SymL site in C). Recognition degeneracies are represented as unidirectional arrows (asymmetrical intrinsic), bidirectional divergent arrows (symmetrical intrinsic), and bidirectional convergent arrows (extrinsic). Colors for polar (green), basic (blue), acidic (red) and hydrophobic (black) amino acids. B) Agreement between synthetic and natural data. Recognition of (NT-5, NT-4)-palindromes by different (AA-15, AA-16)-LacI mutants (YQ is the wild type). Data from [19], from which we only considered those (AA-15, AA-16) sequences with a natural correspondence in Table S1. Rest of BS positions as in SymL. The larger the TF/BS affinity, the stronger the repression of the β-galactosidase activity. Experimental conditions limited repression to a factor of 200. Arrows again indicate degeneracy classes. Predictions for wild type YQ correspond to asymmetric natural BSs (see text). The (NT-5, NT-4)-palindromes involved in the predicted correlations for PM (AT→GT, see Table S1) lack an experimental test; accordingly, PM does not exhibit a strong affinity for any of the tested palindromes (see Fig. S3). C) Natural and synthetic operators. A dot distinguishes the half sites. Flanking nucleotides are separated by a space to help visualization of the highly conserved central region of the BSs. Colors identify different palindromic or mixed combinations in the specificity nucleotides (see Table S2 for more details).
This inconsistency of the wild type class is due to the difference between the BSs considered in our study and those examined experimentally. Theoretical correlations were derived from natural BSs exhibiting variations over the asymmetric O1 site for E. coli's LacI (Fig. 4.C). This specific BS presents an intervening base (NT-2bis, Fig. 4.C) which introduces an asymmetry between the protein contacts made over the left and right half sites [29,38]. However, LacI can bind a palindromic BS lacking the intervening nucleotide. This BS is called SymL (Fig. 4.C) because it is synthetically built from the symmetrization of the left half site of O1 [29]. The mutational studies were based on variations over SymL [19], for example, the SymL' site in Fig. 4.C. In such synthetic constructs the palindromic affinity of LacI is severely restricted to (NT-5, NT-4) = TG. Moreover, LacI is unable to bind the SymL/SymL'-like mixture (Table S2) obtained from the deletion of NT-2_R in the natural O1 site [51].
In a more recent work, Lewis and colleagues [20] characterized the associations between a set of 20³ E. coli LacI mutants for the triplet (AA-15, AA-16, AA-20), corresponding to the AA coordinates 17, 18 and 22 of LacI, respectively, and the 4³ palindromic (NT-6, NT-5, NT-4)-variants of the SymL operator. We plotted in Fig. S3 a comparison between the recognition pairs obtained in these experiments (corresponding to the TVSR group) and the theoretical predictions involving significant NT palindromic combinations (Fig. 4.A). We noticed again a strong agreement between theory and experiment, which becomes more evident when considering that regulators sharing the same AA-16 sequence tend to bind similar NT sequences. Note also that some of the theoretical correlations could remain untested due to the specific mutant sampling of the screening protocol.
Our predictions appeared nevertheless at odds with some experiments done with lac family members in the latter study [20]. In this case, the recognition triplet (AA-15, AA-16, AA-20) of LacI was swapped to that of nine different members of the family, i.e., MalR, RbtR, FruR, PurR, RbsR, GalR, CytR, RafR and ScrR (the last four in the TVSR group). The sequence of (NT-6, NT-5, NT-4) in SymL was changed accordingly for these regulators to that of a natural BS to which they were known to bind. Only the mutants associated to GalR and FruR worked [20]. This seeming contradiction is partly linked to the presence of members outside the TVSR group (see below) and the use of single BSs in the repressor-operator characterization (see Text S1, section 3 for a detailed discussion).
The agreement between the familial (genomic-based) specificity predictions and the corresponding mutational experiments in the TVSR set (Fig. 4.B and Fig. S3), this set being 56.5% of the whole family, suggests that the preferential binding of arginine in AA-20 to guanine in NT-6 strongly stabilizes the structural environment under which the recognition partners (AA-15, AA-16)/(NT-5, NT-4) operate, so that indirect readouts did not prevent the emergence of a consistent recognition code.
Code predictions help identify natural correspondences of a synthetic binding mode
The binding of LacI to the synthetic site SymL was believed to be a laboratory construct, not representative of the characteristic binding mode of this regulator [35]. However, two observations from our study supported the presence of a natural counterpart of this synthetic binding mode. First, the natural BSs for the related recognition sequence H15Q16 resembled either the perfect palindromic sequences of SymL and SymL', or their mixture (Table S2, see the corresponding logo in Fig. 2.D). Second, although every BS involved in the Y15Q16 logo in Fig. 2.C incorporated the inserted nucleotide, we also found several BSs related to the synthetic SymL construction (Fig. 4.C and Table S2) in the first BS search based on PF. In agreement with the mutant model [19,51], neither natural SymL'-like BSs nor mixtures were detected for Y15Q16 in this PF scan.
That the recognition sequences of Y15Q16- and H15Q16-TFs are highly related was also suggested by their location in the gene tree. Fig. 5 shows the gene tree of all TFs with at least one BS in the table of correlations (623 TFs for 811 BSs in Table S1) and the three TFs with Y15Q16 binding to SymL-like BSs. In this tree, branches corresponding to these two recognition classes appeared closely located. In fact, a recent mutational work [52]
Recognition convergence strengthens structural stability hypothesis
If only a restricted number of specificity determinants (AA to NT pairs) were possible within a particular regulatory family, we should expect instances of convergent evolution for the same recognition AAs in divergent backgrounds. This is indeed what we observed. In the gene tree plotted in Fig. 5 (see also Fig. S4), branches corresponding to several of the largest recognition classes were highlighted. We identified convergence events in the recognition process (i.e., same AAs associated to the same NTs throughout the tree). These findings validated the initial hypothesis that the binding mode was highly conserved and that, as a consequence, evolution finds the same solutions repeatedly (the presence of relatively consistent recognition rules). Such structural stability of the TVSR set could apply to other regulator families.
Conclusions
This work reveals the first comprehensive resolution of a recognition code for a large group of proteins within a family of transcriptional regulators. This resolution is based on the use of comparative genomics [15], the identification of local transcriptional regulation as a fundamental regulatory architecture in prokaryotes [30][31][32][33][34] and the hypothesis of the stability -in the large phylogenetic distances considered-of the domain structure around the recognition sites [10,14].
This last hypothesis is confirmed by the patterns of differential residue and BS conservation obtained. Indeed, we only found a few instances of TFs that would invalidate our conjecture, i.e., TFs with the same sequence in the specificity pair (AA-15, AA-16) but recognizing incompatible BSs (extrinsic degeneracies). Moreover, the convergence events and the agreement of the correlations with mutational data (including the extension of the rule of the AA-15 flexibility to become a dominant family attribute) support the assumption that the mode of binding is conserved for a large fraction of the family.
A few caveats to our approach should be noted. First, we considered a stringent protocol to select for BSs. This method combined PF, iterated PWM refinement, and further removal of BSs with potential spurious nucleotides exhibiting no special affinity (see Text S1, section 2). In this way, those AA/NT relationships incorporated into the code should exhibit at least a moderate affinity. Of course, any false positive removal comes at the cost of losing some true positives. An example of this was the loss of the BS for RafR [26], which was detected in the initial PF search but removed after the processing protocol. In any case, this was a consequence of the dominance within the TVSR set of a canonical mode of binding associated with an ideal BS backbone given by the conserved pattern (T)G-A-CG-T-C(A) in Fig. 2.E. A second limitation of our approach is the reliability of the extrinsic/intrinsic degeneracy analysis. The most reliable cases correspond to TF classes with many members and many detected BSs, e.g., the TF class corresponding to V15A16 (see the Appendix in Text S1). This second limitation could be overcome as more genomes become available.
In contrast to what appears to happen with the LacI family as a whole [20], the natural recognition correlations within the TVSR subfamily could be largely reproduced by mutational experiments. Thus, the genomically derived correlations will be useful to complete the specificity map derived from mutational approaches alone [19,20]. Moreover, the use of natural correlations will probably be essential to guide the redesign of a library of regulators that can target the maximal number of arbitrary sequences in the nonconserved positions of the consensus sequence. Note that, beyond the code established between the pairs (AA-15, AA-16) and (NT-5, NT-4), the mutual information analysis of Fig. S1 suggested that alternative AA and NT positions are also involved in specificity tasks. In particular, the sequence at NT-2 was associated in this analysis with those of AA-5, AA-15 and AA-55. The same applies to a mutual information analysis restricted to the TVSR set (data not shown). This specificity role of AA-55 was demonstrated in the particular case of the purine repressor [41]. As AA-15 could be coupling the recognition of NT-2 to that of the pair (NT-5, NT-4), the resolution of the specificity map for the triad (NT-5, NT-4, NT-2) could be beyond the scope of any mutational approach without a previous genomic blueprint.
In summary, the main advantage of the BS search based on local regulation is its potential applicability to any annotated genome and TF family, without the limitations linked to orthology and functionality definitions, i.e., the functional relationship between the TF and the regulated operon trivially exists in the case of autoregulation. The explicit correlations obtained in this analysis can thus be refined with sequence data from newly sequenced genomes, and could ultimately act as a blueprint for the synthetic redesign of TFs with new specificities. These correlations constitute the first candidate for a relatively consistent recognition code applicable to an extensive subfamily of transcriptional regulators.
Selection of sequences for HTH-LacI domains
5597 AA sequences for HTH-LacI domains (Smart SM00354) were obtained from MicrobesOnline [36]. The median length of this domain (including both the HTH and hinge-helix regions) is 71 ± 3.5 AAs. To guarantee the functionality of the domains, we selected from the starting set every sequence whose length falls within the range of 71 ± 7 AAs, and removed those lacking the 26-AA Pfam domain PF00356 (this label corresponds to the HTH core of the HTH-LacI domain). We also discarded three cases of proteins containing two SM00354 domains. Finally, we removed sequences overrepresented due to strain variations in the database, to obtain a final set of 2639 sequences.
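The length filter and duplicate removal above can be sketched in a few lines. This is a minimal illustration with toy data, not the authors' pipeline; the Pfam PF00356 check and the removal of two-domain proteins are omitted.

```python
def select_domains(domains, median_len=71, tol=7):
    """Keep sequences within median_len +/- tol residues, then drop exact
    duplicates (e.g., strain variants). The PF00356 check and removal of
    two-domain proteins described in the text are omitted in this sketch."""
    in_range = [s for s in domains if median_len - tol <= len(s) <= median_len + tol]
    seen, unique = set(), []
    for s in in_range:
        if s not in seen:
            seen.add(s)
            unique.append(s)
    return unique

# Toy input: two identical 70-mers, a 64-mer, a too-short 63-mer, a 78-mer.
toy = ["A" * 70, "A" * 70, "A" * 64, "A" * 63, "A" * 78]
print(len(select_domains(toy)))  # 3: one 70-mer kept, the 63-mer excluded
```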
Domain alignment
We used Muscle [53] to add each of the HTH-LacI domains to a previous Smart curated alignment involving 49 SM00354 domains [54]. After the removal of columns exhibiting gaps in more than 80% of their sequences, we obtained a seed alignment with 71 AA positions. Then, for each of the 2639 sequences we applied the following protocol: i) the sequence was added to the seed alignment using the aforementioned Muscle option; ii) all positions implying the insertion of a gap in the seed alignment were removed from the sequence; and iii) the sequence (in its aligned configuration) was removed from the seed alignment and saved. After the process was completed, none of the 71 positions in the final alignment of the 2639 domains (Fig. 1.C) exhibited gaps in more than 5% of sequences. We extracted all the recognition helix sequences from the alignment. 1490 out of 2639 domains belonged to the TVSR group (Dataset S1).
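The gap-column filter used to build the seed alignment (drop columns gapped in more than 80% of sequences) can be sketched as follows; the function name and toy alignment are illustrative, not the authors' code.

```python
def drop_gappy_columns(alignment, max_gap_frac=0.8):
    """Remove alignment columns whose gap fraction exceeds max_gap_frac.
    `alignment` is a list of equal-length gapped sequences ('-' = gap)."""
    n = len(alignment)
    keep = [
        c for c in range(len(alignment[0]))
        if sum(seq[c] == "-" for seq in alignment) / n <= max_gap_frac
    ]
    return ["".join(seq[c] for c in keep) for seq in alignment]

# Column 2 is gapped in all five sequences (100% > 80%), so it is removed.
print(drop_gappy_columns(["AC-G", "A--G", "-C-G", "AC-G", "AC-G"]))
```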
Selection of intergenic regions for BS search
From the operon predictions included in MicrobesOnline [36] we extracted the non-coding region located upstream of the operon encoding the HTH-LacI domain (up to 200 bp), and also the non-coding region located before the downstream neighbor operon (Fig. 2.A, Dataset S1). When the regulated operon is located downstream of the regulator, both operons are usually encoded in the same strand (unidirectional architecture [31]). Thus, in the case of downstream regulation we only considered the unidirectional orientation, which occurs in ~56% of domains. We did not include the alternative convergent orientation (downstream operon encoded in the opposite strand) because under this architecture neighbor regulation is much less common [31]. Sequences were truncated if the next upstream coding region was reached (Fig. 2.A, red lines). From every region we also obtained an extended version of 250 bp that includes the range of coding positions from +1 to +50. These extended regions were never truncated (Fig. 2.A, green lines).
Recognition TF classes and first BS search by PF
Within the TVSR group we divided the intergenic regions into groups associated with domains sharing the same (AA-15, AA-16) sequence. On each group (recognition class), we performed a first BS scan using PF techniques as implemented in the Gibbs Motif Sampler [55], with the following parameters: the estimated total number of BSs in a given group of regions equals the number of these regions; at most one BS per region; palindromic BSs of 14 bp without fragmentation. Results were robust to changes in these parameters, including the estimated BS length and the palindromic nature of the sites. To avoid circularity, we used uninformed priors based on the average background composition [56]. The PF scan was applied over the truncated version of the intergenic regions to avoid coding zones, which, like BSs, are more conserved than the non-functional intergenic sequences. Finally, we discarded BSs with confidences below 40%.
Second BS search by PWM
After the first BS scan we had at most one BS per intergenic region. We refined and extended our results through an iterative process of PWM construction and BS selection. This time, we allowed for multiple BSs per intergenic region and for BSs located in the coding zone. First, we built a PWM from the BSs found in the PF scan using a constant pseudocount σ = 0.5 [49] (results were robust under variations of this parameter). Second, we slid this PWM over the extended version of the intergenic regions and generously selected all sites with a score above the minimal one in the starting BS set; the sites selected in this search are what we call the candidate sites. Finally, we applied the following protocol to find the most significant candidates: i) generation of a null set of 10^7 scores, obtained by sliding the PWM over random versions of the intergenic regions; ii) selection of every candidate whose score had a p-value below 10^-5 when compared to the null set; iii) construction of a new PWM from the candidates selected in ii); and iv) computation of the score for all candidates under the new PWM.
Using the new PWM to generate a new null set, these four steps were iterated until convergence. The resulting set of 942 BSs was the end product of the whole search process (Dataset S1). All the found BSs exhibited Z-scores above 4. Each BS was read in the sense strand; consequently, its left and right semi-sequences were univocally determined. See Text S1, section 1 and Fig. S5 for a comparison with more standard approaches to BS search.
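The core of this refinement loop, building a pseudocount PWM, scoring sites, and filtering against a shuffled-sequence null set, can be sketched as below. This is a minimal illustration under simple log-odds scoring with a small null set (the paper used 10^7 null scores); the function names and toy sequences are not from the paper.

```python
import math
import random

BASES = "ACGT"

def build_pwm(sites, pseudo=0.5, bg=0.25):
    """Log-odds PWM from aligned binding sites with a constant pseudocount."""
    length, n = len(sites[0]), len(sites)
    pwm = []
    for c in range(length):
        col = [s[c] for s in sites]
        pwm.append({b: math.log((col.count(b) + pseudo) / (n + 4 * pseudo) / bg)
                    for b in BASES})
    return pwm

def score(pwm, seq):
    """Sum of per-position log-odds for a sequence of the PWM's length."""
    return sum(col[b] for col, b in zip(pwm, seq))

def empirical_pvalue(pwm, candidate_score, regions, n_null=10_000, seed=0):
    """P(null score >= candidate), estimated from scores of the PWM over
    shuffled versions of the intergenic regions (the null-set idea above)."""
    rng = random.Random(seed)
    width = len(pwm)
    null = []
    for _ in range(n_null):
        region = rng.choice(regions)
        shuffled = "".join(rng.sample(region, len(region)))
        start = rng.randrange(len(shuffled) - width + 1)
        null.append(score(pwm, shuffled[start:start + width]))
    return sum(s >= candidate_score for s in null) / n_null
```

A candidate would be kept when its empirical p-value falls below the chosen threshold (10^-5 in the paper), and the retained candidates feed the next PWM iteration.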
Consensus logo
We extracted the consensus sequence of the BSs associated with each recognition class and then aligned the whole set of consensus sequences to obtain the consensus logo (Fig. 2.E). Using the alignment of consensus sequences instead of the raw alignment of all found BSs avoids the over-representation of BSs corresponding to the most populated classes. The raw alignment exhibited the same qualitative behavior as that of Fig. 2.E.
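Extracting one consensus per recognition class before building the logo can be sketched as a simple majority vote per column; the toy classes and binding sites below are hypothetical.

```python
from collections import Counter

def consensus(sites):
    """Majority-rule consensus of a set of equal-length binding sites."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*sites))

# Hypothetical recognition classes mapped to their (toy) binding sites;
# one consensus per class, so large classes do not dominate the logo.
classes = {
    "VA": ["TGTGAGCGCTCACA", "TGTGAGCGCTCACT"],
    "YQ": ["AATTGTGAGCGGAT", "AATTGTTAGCGGAT"],
}
per_class_consensus = [consensus(bss) for bss in classes.values()]
print(per_class_consensus)
```

The logo in Fig. 2.E would then be built from the alignment of these per-class consensus sequences rather than from all BSs pooled together.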
Identification and classification of degeneracies
We successively applied the following protocol to each set of BSs associated with the same recognition AAs (see section 2 of Text S1, Table S3, Fig. S6, and Fig. S7 for a more detailed description). First, a triangular matrix F containing the frequencies of the 136 possible combinations for the quartet of positions (NT-5L, NT-4L; NT-4R, NT-5R) was computed. Second, a matrix S was extracted from F by selecting combinations found to be statistically significant (with respect to those observed in the genomic background). Third, significantly under-represented mixtures were identified in S, as the absence of mixed combinations is linked to extrinsic degeneracies (Fig. 3.B, right). Finally, each extrinsic degeneracy partitioned S into two submatrices in which the two types of intrinsic degeneracy were resolved. In the absence of any significant high frequency in a submatrix we kept the symmetrical recognition scenario of the null model (Fig. 3.B, left). Moreover, the presence of a significant frequency usually corresponded to a palindromic combination. In this case, we considered an asymmetrical recognition process with a dominant palindrome (Fig. 3.B, center).
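The 136 cells of the triangular matrix F correspond to the unordered pairs of the 16 possible dinucleotides, one read from each half site (16 identical pairs plus 120 mixed pairs). A minimal counting sketch follows; the BS coordinates used here are illustrative and not the paper's exact NT numbering.

```python
from collections import Counter
from itertools import product

DINUCS = ["".join(p) for p in product("ACGT", repeat=2)]  # 16 dinucleotides

def quartet_counts(bss, left=(0, 1), right=(12, 13)):
    """Count unordered combinations of the left and right half-site
    dinucleotides of each 14-bp BS. The right half is read reversed on the
    complementary strand, so a perfect palindrome yields identical halves.
    Positions `left`/`right` are illustrative placeholders."""
    comp = str.maketrans("ACGT", "TGCA")
    F = Counter()
    for bs in bss:
        l = bs[left[0]] + bs[left[1]]
        r = (bs[right[1]] + bs[right[0]]).translate(comp)
        F[tuple(sorted((l, r)))] += 1  # unordered pair -> triangular matrix
    return F

# Sanity check: unordered pairs of 16 dinucleotides give the 136 cells of F.
assert len({tuple(sorted((a, b))) for a, b in product(DINUCS, repeat=2)}) == 136
```

Significance of each cell against the genomic background (the step producing S) would then be assessed on top of these raw counts.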
Gene tree
The full AA sequences of the 626 TFs, i.e., the 623 TFs with at least one BS in Table S1 plus the 3 TFs with Y15Q16 binding to natural SymL-like BSs (Fig. 4.C and Table S2), were aligned and refined with Muscle. This alignment was trimmed with Gblocks [57]. Finally, we used PhyML to build the tree in Fig. 5. Supplementary Fig. S4 contains a more detailed version of this tree in which each protein is labeled with its VIMSS ID plus the recognition AA pair. In this larger version, we plotted all the BSs associated with each TF (we found four BSs per TF at most).
Supporting Information
Dataset S1 Sequences of proteins, domains, intergenic regions and binding sites. Found at: doi:10.1371/journal.pcbi.1000989.s001 (0.67 MB ZIP)

Figure S1 Mutual information (covariance dependency) values between 370 domains in our alignment for which we could univocally associate BSs in RegTransBase v5 (reference [46], main text) and the alignment of these BSs (see Text S1, section 1 for details on the use of RegTransBase data). Logos for these alignments are explicitly shown. The global mutual information pattern reflects the symmetrical nature of the contacts made by the monomers over the corresponding half site. Mutual information analyses cannot resolve interactions between highly conserved NT and/or AA positions; note how they correspond to the darkest rows and columns (see reference [47], main text). This is the case of the links between the hinge-helix AA-51 and AA-54 with the central CG group. On the other hand, the largest covariances for NT-5 corresponded to AA-15 and AA-16. Although several AAs (like AA-15) exhibited appreciable scores for NT-4, the maximal mutual information is obtained with AA-16. NT-6 is strongly correlated with AA-20, with no further appreciable correlations for these NT and AA coordinates. Finally, NT-2 is correlated, in decreasing order of importance, with AA-55, AA-15 and AA-5.

Figure S3 Comparison of theoretical predictions with experimental data. Black boxes correspond to (AA-15, AA-16)/(NT-5, NT-4) binding partners in a protocol of phenotype screening of binding mutants (reference [20], main text). Vertical gray lines separate groups of amino acid sequences sharing the same AA-16. Green dots indicate the theoretical sequence correlations [involving significant (NT-5, NT-4) palindromes, see Fig. 4.A in main text].
One should consider in this comparison that: i) regulators sharing the same AA-16 sequence tend to bind similar nucleotide sequences, and ii) due to the sampling effects of the screening method, some of the theoretical recognition predictions possibly remained untested. The main discrepancy observed corresponded to those regulators with a methionine in AA-16, where we found a consistent signal of binding to (NT-5, NT-4) = TT which is absent in the mutational experiment. This trend was, however, in agreement with the experimental data reported in reference [19], main text. Note also here the considerable number of mutants that were still able to bind the wild type sequence of SymL, (NT-5, NT-4) = TG. Found at: doi:10.1371/journal.pcbi.1000989.s004 (0.16 MB TIF)

Figure S4 Full version of the gene tree in Figure 5, main text. This tree involves the same transcriptional factors (TFs) of the simplified tree; however, we now plotted all the binding sites (BSs) associated with each TF (we found four BSs per TF at most). Each external quartet of colored boxes corresponds to the specificity-
"year": 2010,
"sha1": "c17423e12ac979e9c60059efff8e9228a585ff94",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1000989&type=printable",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ce2db9bd9716da61152bf1f28f180b5e181ff619",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Engineering",
"Medicine",
"Computer Science"
]
} |
Co-Design Practices in Diet and Nutrition Research: An Integrative Review
Co-design, the method of involving users, stakeholders, and practitioners in the process of design, may help to improve the translation of health evidence into tangible and acceptable intervention prototypes. The primary objective of this review was to identify and describe co-design techniques used in nutrition research. The secondary objective was to identify associations between co-design techniques and intervention effectiveness. An integrative review was performed using the databases Emcare, MEDLINE, PsycINFO and Google Scholar. Eligible studies included those that: (1) utilised participatory research or co-design techniques, (2) described development and/or evaluation of interventions aimed at improving dietary behaviours or nutrition, and (3) targeted community-dwelling adults aged ≥18 years. We identified 2587 studies in the initial search and included 22 eligible studies. There were 15 studies that utilised co-design techniques, with a strong focus on engagement of multiple stakeholder types and use of participatory research techniques. No study implemented a complete co-design process. Most studies (14/15) reporting outcomes reported positive health (maximum p < 0.001) or health behaviour outcomes attributed to the intervention; hence, associations between co-design techniques and effectiveness could not be determined. Currently published intervention studies have used participatory research approaches rather than co-design methods. Future research is required to explore the effectiveness of co-designed nutrition interventions.
Introduction
Over the past half-century, dietary intakes have changed dramatically, with increased consumption of processed foods containing added sodium, unhealthy fats, and refined carbohydrates/sugars [1]. Men and women across all age groups consume high amounts of discretionary (unhealthy) food with underconsumption of fruits and vegetables relative to health guidelines. These dietary factors are major drivers for common chronic conditions including cancer, heart disease and Type 2 diabetes [2], which are leading contributors to early death, illness, and disability [3]. Improving dietary behaviour is a cornerstone in the prevention and treatment of chronic diseases [4], but remains a significant challenge [4]. To effectively achieve dietary behaviour change, interventions must be embedded in best practice, associated with effectiveness, and be relevant and appealing to target populations to facilitate successful translation into practice. Hence, nutrition interventions are increasingly focused on patient-centred models [4].
Person-or community-centred care is the foundation of dietetic practice [5]. This refers to healthcare providers building relationships with people and their communities to manage health conditions in a personalised approach that provides equal sharing of power [6]. In this manner, public health (nutrition) research interventions should consider a similar approach, including "bottom-up" participatory research or participatory action research (PAR) designs. The benefits of PAR are widely acknowledged and include the development of research outputs closely aligned to community needs, while helping to build community capacity and promoting research equity [7]. Notably, PAR defies traditional "top-down" research methods to disassemble traditional power imbalances between participants and researchers.
Co-design, also known as co-creation, co-production, or participatory design, in a healthcare setting, refers to the integration of design thinking, stakeholder experiences, scientific evidence and participatory principles in the collaborative design of local solutions to local problems [8][9][10]. Co-design is considered to produce solutions based on understanding of the local context to meet the needs of all stakeholders [11], offers insights into the lived experience of the public and helps to answer the "why" questions, as opposed to science-based research, which predominantly looks at "what" is happening. Therefore, co-design may have greater acceptance by providers and target users [9], and offer a more sustainable and effective translation approach into clinical practice. Furthermore, controlled trials provide rigorous evidence of the inherent value of community inclusion in public health research processes, particularly for increasing the effectiveness of interventions, achieving local customisation and strong community engagement [12], and improving the quality and appropriateness of study design [13].
Co-design research methods may also help to overcome barriers to translation and improve the uptake and effectiveness of nutrition interventions; however, it is unclear to what extent co-design has been incorporated into nutrition research, or what its benefits are. Studies evaluating the effectiveness of co-design appear to be scarce [8,14], and the effectiveness of co-design methodological techniques used in different research disciplines remains unestablished. Co-design and PAR approaches have also been the subject of intense debate, with key criticisms including poor reporting practices [15] and tokenism: "small-scale, poorly funded and with limited incentives" in co-design activities [16]. Furthermore, few published studies have systematically reviewed the participatory design of nutrition/diet-based interventions. Hence, the purpose of this study was to conduct an integrative, systematic review to identify and describe participatory and co-design methodological techniques previously used in nutrition research, and to identify any associations between the use of participatory or co-design techniques and intervention effectiveness. This will help guide future nutrition research that deploys co-design or participatory research methods.
Methods
Ethics approval was not obtained for this study since human participants were not involved. An integrative review using a systematic review search approach was undertaken following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A protocol for this review was written, agreed upon by all co-authors and registered with Open Science Framework (osf.io/s8cv7). All stages of literature searching and screening were conducted by the first author (B.S.J.T.) with assistance from the co-authors as specified.
An integrative review approach was used as it enables systematic and rigorous review of studies that contain diverse methodologies, for example, experimental, nonexperimental, quantitative, and qualitative work [17]. Integrative reviews share common search strategies for promoting rigor associated with systematic reviews, but diverge at the point of data analysis, with integrative reviews drawing upon inductive techniques such as the identification of noting patterns and themes, seeing plausibility, clustering, counting, making contrasts and comparisons, discerning common and unusual patterns, subsuming particulars into general, noting relations between variability, finding intervening factors, and building a logical chain of evidence [17].
Eligibility Criteria
Eligibility criteria are outlined below, according to the PICOS structure.
Population
The target population was restricted to community-dwelling adults who were 18 years old or older. Studies including children and adolescents were excluded due to vast differences in characteristics, learning and behavioural issues compared with adults. Studies conducted in any geographical location, including metropolitan, rural, and remote areas and online settings, were included. Gender and health status of the study population were not limited, i.e., healthy populations and individuals with a specific health condition were included. Excluded were ex vivo, in vivo, and in vitro studies, and studies where the participants were not humans (e.g., animal models).
Intervention
Eligible studies reported an intervention that aimed to improve dietary behaviours or any aspect of nutrition, for example, interventions that aim to increase vegetable consumption, examine the effects of a specific food, ingredient, or compound on a health-related outcome, or encourage compliance with an entire diet. Mixed interventions (interventions targeting dietary components and other health risk factors, such as physical activity) were also included. Interventions could be delivered via any format, including digital, face-to-face, or mixed. To be included, co-design or participatory research methods must have been used to develop the intervention. The definition of co-design varies with different authors; however, the general rule is that the research methods should include active collaboration between participants, researchers and other relevant stakeholders in the process of intervention design [18]. To exclude interventions with limited participant involvement in their development, only studies classified as having involved "collegiate", "collaborative", or "consultative" participation were included. Classification was based on the seminal definitions of the four modes of participation described by Cornwall and Jewkes [19], built upon by Biggs [20]. Contractual participation is considered to reflect shallow participation, consultative research approaches genuine participation, and collaborative and collegiate participation meet standards for genuine participation [19].
Control or Comparator
Studies were not limited based on control group or comparator.
Outcomes
Studies that described either the design, development or evaluation of a co-designed dietary behaviour intervention were eligible. Therefore, eligible outcomes included the following. Description of characteristics of an intervention: studies that described the participatory, co-design, or stakeholder engagement techniques applied in the development of a diet or nutrition intervention were included; hence, papers describing study protocols were eligible.
Outcomes: Studies were included if they reported dietary behavioural outcomes (e.g., increased fruit and vegetable intake) and/or health outcomes (e.g., weight loss). Study outcomes could also be the development of a co-designed intervention, a new tool, or specific intervention components. Qualitative studies undertaken to directly inform the design of a specific intervention were included, while qualitative or consultative research for general knowledge purposes was ineligible (e.g., identifying barriers and facilitators to dietary behaviours).
Study Design
As this is an integrative review, any type of primary study (qualitative or quantitative) was eligible. Studies of any sample size, protocols for planned studies and studies that reported descriptions of the co-design process or methods without outcome evaluations were included. Studies that reported process evaluations of the intervention were also eligible. Other publication types such as review articles, opinions or editorials and conference abstracts of less than 1000 words were excluded.
To be included, study methodologies had to meet collegiate, collaborative, or consultative levels of participation [18]. Studies that met just the contractual level were excluded (see Table 1). Table 1. Definitions, explanations, and eligibility of different levels of participation as described by Cornwall and Jewkes [19] and Biggs [20].
Collegiate (deepest form of participation). Definition: researchers and local people work together as colleagues with different skills to offer, in a process of mutual learning where local people have control over the process. Further explanation: deepest level of participation; the researcher's role shifts from director to facilitator and catalyst. Eligible for review: yes.

Collaborative. Definition: researchers and local people work together on projects designed, initiated, and managed by researchers. Further explanation: collegiate techniques are applied but are influenced by institutional agendas; genuine participation occurs within the confines of a larger, pre-designed research process. Eligible for review: yes.

Consultative. Definition: people are asked for their opinions and consulted by researchers before interventions are made. Further explanation: people are involved as informants for the purposes of verifying and amending research findings. Eligible for review: yes.

Contractual. Definition: people are contracted into the projects of researchers to take part in their enquiries or experiments. Further explanation: people are involved to fulfil a data collection role and have no control or input into projects that are scientist-led, designed, and managed. Eligible for review: no.
Data Sources and Search Strategy
The search strategy centred upon two concepts: (1) co-design and (2) dietary intervention, and was developed with input from an academic librarian at Flinders University (Adelaide, South Australia). After conducting experimental searches, the proximity searching technique was used for the "dietary intervention" concept to improve search precision and reduce the number of results produced. In Google Scholar, where proximity searching is not applicable, "dietary intervention" was collapsed into two distinct concepts, i.e., "diet" and "intervention". Hence, the search strategy used in Google Scholar was the combination of synonyms of "co-design", "participatory action research", "diet" and "intervention".
The systematic search was conducted on 11 August 2020 using four electronic databases (Emcare, MEDLINE, PsycINFO and Google Scholar). Consistent with evidence regarding optimal coverage for health and medical topics, the Emcare, MEDLINE and PsycINFO databases were searched with the following filters applied: (1) humans, (2) full text available, (3) published date 2010-2020 and (4) English language [21]. Search terms included synonyms of the search concepts "co-design" and "dietary intervention" (see Table 2) and were searched in all fields. Screening was undertaken in duplicate, independently, by the same individuals, and any disagreements were resolved through discussion with an independent adjudicator. Table 2. Search concepts and synonyms included in searches.
Concept 1 (co-design): co-design* OR codesign* OR co-creat* OR cocreat* OR "participatory design" OR "design research" OR "collective creativity" OR "user-centred design" OR design* OR "consumer participation" OR pre-design* OR participatory OR "participatory action research" OR "action research" OR "community-based participatory research" OR "co-production" OR "user-centred" OR "human-centred" OR "human-centred design" OR "design thinking" OR "experience based design" OR "experience-based design" OR "experience based co-design" OR "experience-based co-design" OR "experience based codesign" OR "experience-based codesign"

Concept 2 (dietary intervention): (diet* OR nutrition* OR eat OR eating OR food* OR meal* OR "meal plan*" OR menu*) adj1 (intervention* OR activit* OR strateg* OR program* OR service* OR plan* OR advice OR regime* OR therap* OR provision)

The synonyms within each concept were combined with OR, and the two concepts were combined with AND.
Classification of Studies Based on Modes of Participation
Due to variation in participation across different studies, further screening was conducted to classify the extent to which participatory techniques were utilised in each study according to the four modes of participation described by Cornwall and Jewkes [19]. Since contractual research involves only minor and superficial consultation with participants, articles were only included in this review if the intervention design reached collegiate, collaborative, or consultative participatory standards. Two authors (B.S.J.T. and J.C.R.) classified the studies independently in duplicate, and any conflicts were discussed and resolved.
Data Extraction and Management
Data were extracted into a purpose-developed data extraction table. Information regarding the studies' characteristics (aim, participants, inclusion of other stakeholders, setting, intervention, main outcome or finding, PAR standard reached) and co-design methods (theoretical framework, co-design approach, data collection/analysis techniques, research stage at which participant feedback was sought, and extent of engagement) were included.
Sufficiency of Reporting
An assessment of sufficiency of reporting was undertaken using an adapted version of the eight-item checklist for reporting non-pharmacological interventions, originally adapted from the Consolidated Standards of Reporting Trials (CONSORT) checklist [14,22,23]. Studies were scored against items relating to the (1) setting, (2) stakeholders, (3) facilitators, (4) co-design methods, (5) materials, (6) length of design and sessions, (7) interval and frequency of sessions and (8) description of the overall co-design process.
Results
Identification and selection of studies is summarised in Figure 1. After full-text screening, 36 studies were eligible. Following further screening to exclude contractual modes of participation, 22 studies were included in this review.
Characteristics of Included Studies
Study characteristics are depicted in Table 3.
Theoretical Frameworks
Twelve studies reported using a theory-based framework in developing the intervention. Social Cognitive Theory [52] was the most common theory used (n = 3), followed by PAR (n = 2) and Intervention Mapping (n = 2).
Recruitment Methods
Recruitment methods varied. Three studies [36,44,48] did not report methods of participant recruitment and only two studies [33,47] reported providing reimbursement to participants for their contribution.
Extent of Participation
The extent to which participants had input was classified according to the six intervention development phases identified by Eyles et al. [14]. All but two studies assessed user needs to inform the intervention focus. Six studies sought end-user input during pilot/real-world testing, whereas four studies included end-users in prototype testing. Four studies assessed background knowledge and evidence, two assessed user needs to inform technology, and two involved participants in developing intervention content.
Intervention Effectiveness
Fourteen of the 22 studies reported outcomes, with 13 reporting statistically significant changes in diet- or nutrition-related outcomes or behaviour attributed to the intervention [29,30,33,35,40,41,43,45-50]. No studies empirically evaluated the effect of participant engagement on the results of the study or the effectiveness of the intervention. Since nearly all studies that reported outcomes reported positive ones, the relationship between the stage of end-user consultation and intervention effectiveness was explored. The highest proportion of studies showing positive outcomes (75%) was found among studies that involved end-users in prototype testing, followed by studies that assessed user needs to inform the intervention focus (67%) and those that involved end-users in pilot testing (67%). None of the studies that assessed user needs to inform the technology used for the intervention found positive outcomes. Six (55%) of the 11 studies that involved multiple phases reported positive outcomes [29,32,33,35,44,45].
Discussion
This integrative review identified 22 original research studies or protocols for nutrition/diet intervention studies that featured participant engagement in the design or development and were published in the last decade. Within the participatory design methods and processes used, there was no evidence of the explicit use of co-design; however, some studies utilised co-design techniques and 11 studies engaged participants to a collegiate or collaborative level, indicating genuine partnership and meeting PAR requirements. No studies empirically evaluated the specific impact of participant engagement on intervention effectiveness.
To our knowledge, only one published study has reviewed the use of co-design practices within digital health research [14], and the current study is the first to review co-design practices within nutrition research specifically. As in Eyles et al., the participatory techniques reviewed here were varied, ranging from conventional methods such as focus groups and surveys to less conventional methods such as photo-voice. Different methods were used for different research contexts. For example, the photo-voice method, a visual technique for capturing participants' concerns, which may be sensitive, is pertinent to the needs of Indigenous populations [53]. The interventions included in the current review varied but largely focused on community-based programs seeking to provide tools and resources to help people, families, or workplaces adopt healthier dietary behaviours.
The research designs used were also varied. Qualitative research was common for studies at the earlier stages of design (i.e., to inform intervention development), while pre-post designs and randomised controlled trials were used to evaluate co-designed interventions. Finally, the research populations and samples included a range of end-user stakeholders from specific communities (e.g., ethnic or cultural groups), typically consisting of intervention end-users, although it was not uncommon to include other stakeholders such as health practitioners or other professionals. Promisingly, a substantial number of studies reached a sufficient level of participation whereby power over decision making was shared, suggesting genuine inclusion in the research process [54]. Overall, the body of research demonstrates a heterogeneous application of participatory and co-design research techniques, adapted to the unique needs and characteristics of the health problem or population at hand. Hence, future participatory research could adopt methods suited to similar contexts and evaluate their suitability where necessary.
Methodological Considerations of the Included Research
A strength of the included studies was that details of the participatory design methods and stakeholders involved in intervention development were sufficiently reported in many key areas. However, it was challenging to determine the timeframes of the intervention development process, including the total number of sessions and the time interval between them. Although the materials used in the design process were named, most were not adequately described. Insufficiently detailed reporting of methodological considerations is a previously identified limitation of PAR research [14,55,56], highlighting the need for more detailed methodological reporting in this field. Notably, collegiate participation (the highest form) did not necessarily translate into sufficient reporting; similarly, consultative participation (the lowest form) did not necessarily equate to insufficient reporting.
In this review, only four studies involved a minority population group (African Americans; Bangladeshi migrants) and only two studies involved Indigenous population groups. These findings are contrary to the general acceptance that participatory design is common and best practice in research involving under-served and/or Indigenous populations [57]; however, absence of reporting in the peer-reviewed literature could be a factor and future reviews should include grey literature. Encouragingly, variation in population groups in the current review suggests participatory approaches are applied broadly across populations.
The Effectiveness of Co-Design in Nutrition Research
A secondary objective of this study was to understand the effectiveness of co-design in nutrition/diet-based interventions. However, no identified studies empirically evaluated the effect of participant engagement on these outcomes. Of the 14 studies that assessed intervention outcomes, all but one reported a statistically significant effect in the desired direction, which may reflect publication bias towards successful studies [58]. Nonetheless, this review found that a higher percentage of studies reported positive health or health behaviour outcomes if they involved end-users in prototype testing or pilot testing, or assessed their needs to inform the intervention's focus. For example, Adams et al. utilised photo-voice techniques to guide their intervention design and ensured that it was appropriate for the social contexts and met the cultural and practical needs of local Australian Aboriginal people [24]. In future, research may benefit from greater inclusion of end-users at early stages of research design to preliminarily identify the optimal direction.
There is a dearth of evidence assessing the association between different modes of participation and nutrition intervention effectiveness. Future studies should include controlled trials comparing interventions developed with varying levels of participation and co-design against interventions developed without participation. Future studies should also improve reporting, addressing the deficiencies identified in the current review (see below), to facilitate future best practice. Additional reporting should seek to cost participation in design and estimate return on investment through long-term follow-up. To date, to our knowledge, no study has examined whether co-design is more effective than traditional approaches to intervention development, with evaluations tending to be descriptive rather than experimental in nature [8,14]. This highlights the need for robust, empirical evaluations of these effects. While a randomised controlled trial would help to establish the efficacy of a co-designed intervention, a RE-AIM approach that considers translational as well as efficacy outcomes [59] is likely more appropriate for assessing the effect of co-design.
A lack of robust evaluations of the impact of co-design has also been noted in related fields. Similar to our findings, a recent rapid review identified 11 studies reporting on the use of co-design within acute healthcare settings and found that, while many provided qualitative and descriptive data on the perceived value of co-design, robust evaluations were limited [60].
Strengths and Weaknesses of the Review
A strength of this review was that systematic approaches were used across three different scientific databases, with studies independently reviewed in duplicate by two co-authors. The integrative review approach also enabled inclusion of both quantitative and qualitative research, including studies at different stages of the research process (e.g., protocols, intervention development papers). The addition of Google Scholar to the search also identified a further study that was included. Despite this, by limiting the search to the last 10 years and to articles published in English only, it remains possible that eligible studies were missed. For example, our search strategy did not capture work reflecting a Kaupapa Māori approach to co-design of health or diet interventions [61], potentially because these approaches and cultural groups conceptualise dietary behaviours within a holistic, whole-of-health model that our search strategy did not detect. Therefore, future reviews should consider cultural and local differences in language and conceptualisation of health to ensure coverage of different groups. The exclusion of grey literature, where co-design work may be more commonly published, is another limitation of this review. Additionally, although intervention effectiveness was examined, risk of bias could not be assessed due to variation in the outcome measures reported in the included studies. However, reporting practices were analysed, which is a valuable outcome of this review.
Implications for Future Research
This review has several implications for future research. Reporting practices around participatory research have previously been reported to be poor, highlighting the need for researchers to use standard checklists for reporting interventions designed using participatory or co-design methods. Eyles et al. highlighted that checklists can be adapted from existing relevant and appropriate checklists [14], such as those described by Hoffman et al. [62] or Borek et al. [63]. Sufficient reporting can provide clearer guidance for future studies to employ methods that are replicable and consistent. Furthermore, it is important to note that co-design techniques and tools are often adapted to the specific research questions, contexts, and populations at hand. Future research would benefit from more open and detailed descriptions of these adaptation processes and the rationales that underpin co-design decision making. Taking this further, it would ultimately be beneficial if researchers began to openly publish their co-design techniques, much as datasets and survey instruments, for example, are increasingly published in open-source libraries, where they can be accessed by researchers and amended for other research purposes.
In addition, it is suggested that researchers give greater consideration to the time and resources required to design interventions within participatory research. It is important to consider the range of multilevel stakeholder representatives that researchers plan to invite to a co-design activity and to consider carefully what their drivers and motivations to participate might be. Co-design does not have to be undertaken independently from other research methods; in fact, it works well alongside quantitative methods as part of a mixed-methods model. Lastly, to determine whether co-design is more effective than traditional approaches to intervention development, high-quality process evaluations and randomised controlled trials should be conducted to assess intervention effectiveness compared with non-co-designed comparator interventions or waitlist control groups.
Conclusions
Reviews summarising the methods and processes used in participatory and co-design of dietary interventions remain limited. The 22 studies included in this review used participatory research, but not co-design, methods. More studies reported positive health or health behaviour outcomes if they involved participants in prototype testing, pilot testing or needs assessment to inform the intervention focus. Most of the studies did not achieve an adequate level of reporting for their intervention development processes. Further research to explore co-designed nutrition/diet interventions and their effectiveness is warranted.
The Role of Ionotropic Glutamate Receptors in Childhood Neurodevelopmental Disorders: Autism Spectrum Disorders and Fragile X Syndrome
Autism spectrum disorder (ASD) and Fragile X syndrome (FXS) are relatively common childhood neurodevelopmental disorders with increasing incidence in recent years. They are currently accepted as disorders of the synapse, with alterations in different forms of synaptic communication and neuronal network connectivity. The glutamatergic system, the major excitatory neurotransmitter system in the brain, is implicated in learning and memory, synaptic plasticity, and neuronal development. While much attention has been given to the role of metabotropic glutamate receptors in ASD and FXS, studies indicate that the ionotropic glutamate receptors (iGluRs) and their regulatory proteins are also altered in several brain regions. The role of iGluRs in the neurobiology of ASD and FXS is supported by a body of evidence that ranges from human genetics to in vitro cultured neurons. In this review we discuss clinical, molecular, cellular and functional changes in NMDA, AMPA and kainate receptors, and in the synaptic proteins that regulate them, in the context of ASD and FXS. We also discuss the significance of these findings for the development of translational biomarkers and treatments for the core symptoms of ASD and FXS.
I. INTRODUCTION
The mechanisms that underlie the learning and memory, cognitive and social deficits associated with autism spectrum disorders (ASD) and Fragile X syndrome (FXS) are complex and depend to a large extent on glutamate receptors. Glutamate receptors are comprised of ionotropic glutamate receptors (iGluRs) and metabotropic glutamate receptors (mGluRs). Findings from various experimental systems implicate iGluR dysfunction in ASDs and FXS. They include human genetic studies, clinical drug trials, human neuroimaging and postmortem brain studies, animal models of ASD and FXS, and in vitro cell cultures, which will be discussed in the present review. The metabotropic glutamate receptors (mGluRs) are modulators of brain function and synaptic plasticity and have recently been shown to play a significant role in syndromic forms of ASD such as FXS, through enhanced protein synthesis-dependent synaptic plasticity and perhaps peripheral mechanisms such as modulation of gastrointestinal functions. Therefore, mGluRs have been explored as therapeutic targets in FXS. However, clinical trials with group I mGluR (mGlu5) antagonists show that these drugs are not effective in all FXS subjects. For example, the Novartis mGlu5 antagonist AFQ056 improves behavioral symptoms in a fraction of FXS subjects (7 of 30) who have methylation of the FMR1 gene promoter and no detectable FMR1 messenger RNA, but not in individuals with partial promoter methylation. In addition to FXS, there are many other forms of ASD which have varied causes and neurobiology.

*Address correspondence to this author at the Autism and Obsessive Compulsive Spectrum Program, Department of Psychiatry, Montefiore Medical Center, Albert Einstein College of Medicine, 111 East 210th St, Bronx, New York 10467-2490; Tel: 847-975-0364; E-mail: genoveva_uzunova@msn.com
Interestingly, different forms of ASD are associated with either hypo- or hyperfunction of the glutamatergic system despite presenting with similar symptoms, providing further evidence for the involvement of different mechanisms and receptors. Early studies indicated that direct targeting of NMDA receptors with pharmacologic antagonists was associated with undesired side effects on cognition and with neurotoxicity. However, it may be possible to target the functions of iGluRs in synaptic plasticity and ASD by modulating regulatory proteins and/or signaling pathways, with less potent or more specific NMDAR antagonists, and with pharmacologic drugs targeting AMPA or kainate receptors. The purpose of this review is to describe key studies that implicate iGluRs and their interacting proteins and signaling pathways in ASD and FXS. Moreover, we highlight mechanisms that may be important for the development of new neuropharmacological treatments.
ASDs are diagnosed according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) [1]. ASDs are characterized by impairments in two core symptom domains: social/communication deficits and the occurrence of repetitive behaviors, with symptoms forming a continuum from mild to severe in core and associated symptom domains. Within the social/communication domain there may be problems in social-emotional reciprocity, nonverbal communication behaviors, and developing and maintaining relationships. Within the area of restricted/repetitive behaviors, interests or activities there may be stereotyped, repetitive speech or motor movements, excessive adherence to routines or resistance to change, highly restricted, fixated interests, and hypo- or hyper-reactivity to sensory input. There may be a varying degree of intellectual disability, along with accompanying symptoms such as seizures, anxiety, mood swings, aggression, sleep problems, attention problems and hyperactivity, which are shared with other psychiatric disorders, and gastrointestinal complaints [4]. ASD is relatively common, occurring in 1 in 88 individuals, with a reported increase in incidence in recent years [2,3]. ASD is about 4 times more common in boys than in girls. It is possible that the increased incidence is due to an ascertainment bias associated with greater awareness and more systematic screening for ASD, and perhaps also with changes in the diagnostic criteria [4].
FXS is a common monogenic cause of autism which has been invaluable in understanding the neurobiology of ASD and the development of "targeted" drug treatments for the core symptoms [5-8]. FXS is caused by CGG repeats in the 5' untranslated region (UTR) of the FMR1 gene, which result in a varying degree of loss of the Fragile X Mental Retardation Protein (FMRP), an RNA-binding protein that regulates the synthesis and trafficking of many brain RNAs involved in synaptic plasticity and learning and memory [9]. Besides FXS, monogenic forms of ASD include tuberous sclerosis complex (TSC) [10], Rett syndrome [11,12], neurofibromatosis (NF1) [13], and Shank 2 and 3 deletion syndromes [14,15]. Interestingly, a recent study found that genes causing syndromic forms of ASD, such as TSC1/2 and SHANK3, can also be implicated in non-syndromic forms of ASD [16]. Genetic factors are important in the etiology of ASDs, with monozygotic twins exhibiting a very high concordance rate [17]. Many neuronal genes are implicated in ASD and it is difficult to specify a shared genetic cause [18,19]. Several excitatory and inhibitory neurotransmitter systems are implicated: glutamate [20-23], GABA [24], serotonin [25], norepinephrine [26,27], dopamine [26,28], acetylcholine [29,30], endocannabinoids [31,32] and neuropeptides [33-35]. A current understanding is that ASD and FXS are neurodevelopmental disorders of the synapse with abnormal synaptic connectivity, and that there is an imbalance between excitatory and inhibitory neurotransmission [36]. The main excitatory neurotransmitter system in the CNS, the glutamatergic system, plays a very important role in neuronal development and cognition. It should be noted that the pathophysiology of ASD is very complex, and many interacting factors may contribute to disease manifestation and progression, such as immunological and environmental factors, neuronal loss, and glial factors.
For example, immunological factors may arise from maternal exposure to various viruses, and environmental factors may include stress and toxins. The precise causal role of these factors, however, is heavily debated. Here, we restrict our review to the role of iGluRs in ASD.
The treatments of ASD and FXS are complex and depend on the presenting symptoms. They combine applied behavioral analysis, medications, occupational therapy, physical therapy and speech-language therapy (PubMed Health Information). Currently very few medications are approved for the treatment of ASD, none of which target the core symptoms. Two atypical antipsychotic drugs, risperidone and aripiprazole, are approved by the US Food and Drug Administration for the treatment of aggression and irritability in children ages 5-16 with autism. These drugs are approved for the treatment of schizophrenia, which is also a neurodevelopmental disorder and has features in common with ASD, such as social deficits and neurobiological changes involving NMDA, GABA and dopamine receptors. Other medications used in clinical practice for the treatment of patients with ASD are serotonin reuptake inhibitors such as fluoxetine, approved for the treatment of depression and obsessive-compulsive disorder (OCD) in children 7 years and older; divalproex sodium, used to treat manic symptoms and epilepsy; and the psychostimulant drug methylphenidate, used to treat attention-deficit hyperactivity disorder (ADHD). There are several reviews on the pharmacological treatments of ASD and FXS for further reading [37-41].
Owing to research advances in understanding the neurobiology of FXS, a new group of drugs that are antagonists of group I metabotropic glutamate receptors (gp I mGluRs) has been developed and has shown therapeutic efficacy in human FXS clinical trials [42,43]. Importantly, studies in animal models show that drugs which reduce gp I mGluR signaling in the brain target the core symptoms of ASD [44-46]. Pharmacological enhancement of GABAergic neurotransmission has also shown potential to improve social function in an FXS clinical trial [47]. In addition to mGlu and GABA receptors, there are pathological changes in FXS and ASD involving iGluRs. In some patients it may be more beneficial to target iGluRs, given the heterogeneous etiology, presentation, genetics and molecular neurobiology of ASD, individual drug sensitivity and other pharmacological factors [48]. There is evidence for this from human clinical trials and animal models. It is also possible that a combination of several drugs acting at mGluR and iGluR targets may be beneficial for the treatment of ASD. This notion is supported by evidence from clinical trials with mGlu5 antagonists, which show partial therapeutic effectiveness in FXS [42,43]. As mentioned, ASDs are neurodevelopmental cognitive disorders, and brain glutamate receptors are critical in the regulation of neuronal development, synaptic plasticity, and learning and memory. The changes in iGluRs in ASD and FXS may be a result of altered gp I mGluR signaling, as has been shown for AMPARs in FXS, but they may also occur as a result of changes in other neuronal proteins and signaling pathways, such as proteins regulating iGluR expression and trafficking (Arc, MAP1B, GRIP1, STEP, Shanks, neuroligins). Therefore, it is important to understand the molecular and cellular changes in iGluR expression and signaling, as this may provide additional drug targets.
III. STRUCTURE, EXPRESSION AND FUNCTIONS OF IGLURS
The actions of the major excitatory neurotransmitter glutamate in the mammalian central nervous system (CNS) are mediated by iGluRs and mGluRs.
Structure of iGluRs
iGluRs are ligand-gated ion channels classified into NMDA (N-methyl-D-aspartate), AMPA (α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid) and kainate receptors based on structural, pharmacological and physiological properties [49-54]. iGluRs are tetramers encoded by 18 genes, several of which undergo alternative splicing or RNA editing, conferring different physiological properties on the receptor proteins. iGluRs have four distinct domains: an extracellular N-terminal domain, an extracellular ligand-binding domain, a transmembrane domain and an intracellular carboxy-terminal domain [51]. NMDA receptors (NMDARs) are obligate heteromers formed as tetramers from co-assembly of GluN1, GluN2A-GluN2D, GluN3A or GluN3B subunits. Each NMDAR channel contains a combination of two GluN1 and two GluN2A-GluN2D subunits, or two GluN1 subunits with one GluN2 and one GluN3 subunit. They form Ca2+-permeable ion channels, require both glutamate and glycine for activation, and are blocked by Mg2+ ions. The non-NMDA iGluRs are AMPA and kainate receptors. AMPARs may be homo- or hetero-tetramers formed from the GluA1-GluA4 subunits. They are Mg2+-insensitive. The GluA2 subunit confers impermeability to Ca2+ on the AMPAR channel as a result of RNA editing of the Q/R site of the GluA2 mRNA. AMPARs are further distinguished by having long C-terminal tails (GluA1 and GluA4) or short C-terminal tails (GluA2 and GluA3). The C-termini of AMPARs have sites for phosphorylation by different enzymes such as protein kinase A (PKA) and protein kinase C (PKC), and interact with PDZ-domain-containing proteins such as protein 4.1N, which interacts with GluA1, and PKC alpha binding protein (PICK1) and glutamate receptor interacting proteins 1 and 2 (GRIP1/2), which interact with the GluA2 and GluA3 subunits and are important for AMPA receptor trafficking.
AMPARs co-assemble with auxiliary subunits, the transmembrane AMPAR regulatory proteins (TARPs) such as stargazin, which are important for AMPAR expression, trafficking and function. Kainate receptors are tetramers formed from combinations of the GluK1-GluK5 subunits. They have quite different synaptic roles in comparison with the other iGluRs despite structural and functional similarities, acting mainly as modulators of synaptic transmission and neuronal excitability [54]. The proteins PICK1 and GRIP that bind to AMPA receptors appear to play a role in stabilizing kainate receptors at synapses. Another kainate receptor interacting protein, KRIP6, is important for modulation of receptor channel gating. iGluRs play key roles in basal synaptic transmission, different forms of synaptic plasticity, learning and memory, and neuronal development, and are involved in the neurobiology of many neurodevelopmental and neuropsychiatric disorders such as autism, FXS, schizophrenia, epilepsy, ADHD, Tourette syndrome, Alzheimer's disease and Huntington's disease [54-56].
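The NMDAR stoichiometry rule stated above can be expressed as a small validity check. The sketch below is a deliberate simplification for illustration, not a model of receptor assembly: it only encodes the stated constraint that a tetramer combines two GluN1 subunits with either two GluN2 subunits or one GluN2 plus one GluN3 subunit, and the function name is ours.

```python
# Simplified sketch of the NMDAR stoichiometry rule stated in the text:
# a tetramer of two GluN1 subunits plus either two GluN2 subunits, or
# one GluN2 and one GluN3 subunit. Illustrative only.
GLUN2 = {"GluN2A", "GluN2B", "GluN2C", "GluN2D"}
GLUN3 = {"GluN3A", "GluN3B"}

def is_valid_nmdar(subunits):
    """Check a 4-subunit composition against the stoichiometry rule."""
    if len(subunits) != 4 or subunits.count("GluN1") != 2:
        return False
    rest = [s for s in subunits if s != "GluN1"]
    n2 = sum(s in GLUN2 for s in rest)
    n3 = sum(s in GLUN3 for s in rest)
    # either two GluN2 subunits, or one GluN2 with one GluN3
    return (n2, n3) in {(2, 0), (1, 1)}

print(is_valid_nmdar(["GluN1", "GluN1", "GluN2A", "GluN2B"]))   # True
print(is_valid_nmdar(["GluN1", "GluN1", "GluN2A", "GluN3A"]))   # True
print(is_valid_nmdar(["GluN2A", "GluN2A", "GluN2B", "GluN2B"]))  # False
```

Note that, per the rule as stated, compositions pairing GluN1 with two GluN3 subunits are rejected, since a GluN2 subunit is always required.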
The functions of iGluRs are determined by their synaptic expression, trafficking, posttranslational modifications and signaling, and regulation by interacting proteins [57-61]. Different neuronal postsynaptic density (PSD) proteins, cell adhesion molecules, and cytoskeletal proteins interact with and regulate the expression and trafficking of iGluRs. GluA1-containing AMPARs interact with the PDZ-domain-containing proteins synapse-associated protein-97 (SAP97) [62] and protein 4.1 [63], and GluA2/3 interact with PICK1 [64], the GRIP1/2 proteins [65] and N-ethylmaleimide-sensitive factor (NSF) [66], which regulate the expression and trafficking of these receptors. The membrane-associated guanylate kinase (MAGUK) SAP97 has several splice isoforms and regulates the trafficking and localization of AMPARs, NMDARs and potassium channels [62,67,68]. Postsynaptic density protein-95 (PSD-95) is important in the regulation of localization and trafficking of AMPARs [69-72] and NMDARs [73,74]. Regulation of the expression and trafficking of iGluRs is complex, and some of the interacting PSD proteins have redundant functions. These mechanisms are important for synaptic plasticity and the neurobiology and treatment of ASD and FXS.
Expression of iGluRs
The regulation of the expression and trafficking of AMPA, NMDA and kainate receptors is described in excellent reviews [54,57-60,75-80] and will not be presented in detail here. There is constitutive trafficking (cycling) of AMPA and NMDARs, which is rapid; regulated trafficking during synaptic plasticity, i.e., various forms of long-term potentiation (LTP) or long-term depression (LTD); and trafficking during homeostatic plasticity [59]. The subunit composition of AMPA and NMDARs may change during development, in response to synaptic activity, and in response to pathological processes in the central nervous system (CNS). The molecular mechanisms of iGluR expression and trafficking are complex and may be cell-type-specific. AMPARs are highly dynamic and undergo constant trafficking in and out of synapses by a combination of endo-/exocytosis and lateral diffusion [81]. AMPARs can diffuse at such high rates within the PSD that their surface trafficking is presumed to participate not only in setting receptor numbers at individual synapses but also in fine-tuning synaptic transmission during short-term plasticity. How these events are altered in ASD and neurodevelopmental disorders is not well understood. NMDARs, once considered not as mobile as AMPARs, also undergo trafficking during development and in response to neuronal activity and sensory experience, through interaction with PSD proteins such as PSD-95 and SAP-102 [56]. Besides receptor protein trafficking, protein synthesis and proteasomal degradation, alternative splicing, and mRNA trafficking are important for the expression of iGluRs and their synaptic function. iGluRs play roles in different forms of synaptic plasticity: Hebbian forms of plasticity (LTD and LTP), which are considered cellular models of learning and memory, and homeostatic plasticity. In general, during LTP AMPARs are delivered to the synapse by exocytosis, whereas during LTD AMPARs are trafficked away from the synapse by endocytosis.
The mechanisms of trafficking differ between different forms of LTP and LTD [82]. One form of synaptic plasticity enhanced in mouse models of FXS and ASD is hippocampal gp I mGluR-LTD, which is triggered by activation of gp I mGluRs, dependent on rapid dendritic protein synthesis, and expressed through AMPAR internalization [83]. It is notable that stress, which is important in the pathogenesis of ASD and FXS, plays a role in AMPAR trafficking [84]. Some molecules implicated in AMPAR trafficking during homeostatic plasticity are implicated in ASD and FXS, among them tumor necrosis factor alpha (TNFα) [85], retinoic acid [86], PICK1 [87], the activity-regulated cytoskeletal gene and protein (Arc/Arg3.1) [88] and phosphatidylinositide 3-kinase (PI3K) signaling [89].
Roles of iGluRs in neuronal development
The maturation of glutamatergic synapses is associated with changes in the composition and functional properties of iGluRs and with synaptic morphological changes. The expression of NMDA and AMPARs follows a specific sequence during development; immature synapses contain only NMDARs, followed by the appearance of AMPARs and "unsilencing" of synapses with the appearance of LTP [90,91]. Early in development neurons express GluN2B-containing receptors, which are later substituted by GluN2A-containing receptors [92]. The findings on the precise mechanisms of synapse unsilencing with neuronal activity and maturation are somewhat contradictory. Earlier studies establish that NMDAR blockade in hippocampal neurons decreases AMPARs and increases silent synapses, whereas AMPAR blockade increases the appearance of AMPARs [93]. However, another study finds that the primary role of NMDARs during neuronal development appears to be limiting the number of functional synaptic inputs and synapse maturation [94]. Thus, postnatal GluN1 deletion in hippocampus increases the number of synapses containing AMPARs [94]. The roles of NMDARs in the structural properties of synapses are also a matter of debate. It is reported that NMDARs may influence the generation of new spines [95], but another study finds no effect of GluN1 deletion on spine number [94]. These differences may be attributed to the method of NMDAR blockade: NMDAR antagonists [94], siRNA [96] or mosaic genetic deletion in single neurons [94]. Moreover, it is hypothesized that the role of NMDARs in the regulation of AMPARs depends on the maturational state of the circuit. Thus, early in neuronal development NMDARs negatively regulate AMPARs, whereas in adult neuronal circuits NMDARs have a positive effect on AMPAR numbers and synaptic function [96]. It is not well established whether the expression of iGluRs during development is altered in ASD and FXS, but there is some experimental evidence for this in the Fmr1 knockout (KO) mouse, which will be discussed here.
Such changes in iGluR expression may cause altered synaptic development, plasticity and neuronal connectivity, all of which are important in ASD and FXS.
Role of iGluRs in learning and memory and cognitive processes
iGluRs are involved in different forms of synaptic plasticity: LTP, LTD and homeostatic plasticity (synaptic scaling) [82,97,59]. LTP and LTD are cellular mechanisms of learning and memory and are important for information storage in the brain and for cognitive processes. Homeostatic plasticity is believed to maintain neuronal activity within a physiological range in the face of perturbations and participates in the refinement of neuronal circuits [98]. Activation of NMDARs may trigger LTP or LTD, both of which may be accompanied by trafficking of postsynaptic AMPA and NMDARs. Generally, during NMDAR-dependent LTP, AMPARs are inserted into synapses by exocytosis, whereas during NMDAR-LTD, AMPARs are trafficked out of synapses by endocytosis [79,99,80,100]. Activation of gp I mGluRs may trigger LTD which may be expressed as endocytosis of AMPARs, similarly to NMDAR-LTD [101][102][103]. A key distinction between NMDAR-LTD and mGluR-LTD at CA1 hippocampal synapses is that mGluR-LTD and its expression in the form of AMPAR endocytosis depend on rapid dendritic protein synthesis [83]. Another important postsynaptic expression mechanism of LTP and LTD is phosphorylation of AMPARs during LTP [104,105] and dephosphorylation during LTD [104,106]. For example, phosphorylation of GluA1 is observed during CA1 hippocampal LTP [107,108], and phosphorylation at S831 and S845 is sufficient to lower the threshold for LTP induction, increasing the probability of synaptic plasticity [105]. NMDAR-LTD at CA1 hippocampal synapses is associated with dephosphorylation of the GluA1 subunit at serine 845, a cAMP-dependent PKA substrate [106]. In addition, presynaptic expression mechanisms involving changes in neurotransmitter release and cleft glutamate concentration may contribute to LTP and LTD [109,82,110]. LTP and LTD are usually accompanied by structural changes, such as synaptic growth during LTP and synapse elimination as a result of LTD.
The classical forms of LTP and LTD triggered by activation of NMDARs or mGluRs have been expanded with additional mechanisms. For example, activation of muscarinic acetylcholine receptors [111,112] may also trigger LTD in hippocampus that is expressed as AMPAR endocytosis. These types of synaptic plasticity may also have relevance for ASD, since muscarinic cholinergic receptors are implicated in FXS [112]. Hebbian forms of plasticity (LTP and LTD) and homeostatic plasticity appear to involve different molecular mechanisms of AMPAR trafficking [57,87,98,113]. The molecular and cellular mechanisms of several forms of synaptic plasticity involving iGluRs have been studied in animal models of ASD and FXS in the hope of understanding the neurobiological mechanisms responsible for the cognitive deficits, and they will be summarized in this review.
IV. HUMAN STUDIES IMPLICATING IGLURS IN ASD AND FXS
Human studies provide contrasting evidence that autism may be either a hypo- [114] or a hyper-glutamatergic [115] disorder. Evidence for a hypoglutamatergic state in autism is provided by the therapeutic effects of piracetam, a positive AMPAR modulator [116]. In contrast, significantly higher concentrations of glutamate are reported in the serum of ASD patients in comparison with normal controls [22]. Memantine, which is an uncompetitive NMDAR antagonist [117], has shown efficacy in autism [118]. These contradictory findings may be due to the heterogeneity of ASD, the patient population (e.g., pediatric versus adult), and to differences in the ontogenetic period investigated, brain regions studied, experimental methods, and types of glutamate receptors involved. For example, increased activity of NMDARs may be associated with increased NMDAR-LTD and increased AMPAR internalization, which is expressed as decreased AMPAR activity. Therefore, both an NMDAR antagonist (memantine) and an AMPAR potentiator (piracetam) may have therapeutic effects. Interestingly, a recent analysis of gene expression patterns in autistic postmortem brain at two developmental time points finds strikingly different patterns of gene expression between young and more mature brains of autistic individuals [119]. Among the aberrantly expressed glutamate-receptor-related genes are protein synthesis genes such as mTOR in young autistic prefrontal cortex (PFC), and signaling genes involved in glutamatergic regulation of D1 signaling in mature autistic PFC.
Defects in Genes Encoding iGluRs and their Interacting/Regulating Proteins in ASD
Genetic studies strongly support involvement of iGluRs in ASD, with the most frequent reports so far concerning changes affecting NMDARs. Sequencing of candidate genes in ASD probands identifies disruptive mutations in the GluN2B gene (GRIN2B) which may contribute, together with other genes such as CHD8, DYRK1A, PTEN and TBR1, to 1% of sporadic ASDs [120]. There are reports of an association between haplotypes in the GRIN2B gene and ASDs in Korean families [121], and between the GRIN2A gene and autism [122]. Support for a role of NMDARs in autism is provided by a study which identifies de novo mutations in GRIN2A and GRIN2B in patients with sporadic schizophrenia and autism, respectively [123]. Interestingly, these NMDAR subunits have differential expression during development, with GluN2B expressed early in development, followed by GluN2A later in development and as synapses mature [61]. This sequence in the expression of GluN2 receptor subunits seems to be reflected in the genetic changes found in this study. Further support for NMDAR involvement in autism is provided by a study which finds de novo mutations in GRIN2B in individuals with mental retardation: a frame shift, a missense and two splice-site mutations [124]. In a cohort of subjects with idiopathic epilepsy and/or mental retardation, a GRIN2A nonsense mutation is identified in a three-generation family. It is speculated that mutations disrupting different GluN2 subunits may have differential effects on the physiological properties of the receptors, affecting the Mg2+ block and Ca2+ permeability of the receptor channel [124].
There are isolated reports of genetic alterations in AMPA and kainate receptors in human ASD. One genetic study finds that a child with autism has an interstitial deletion of chromosome 4q which results in hemizygosity for the GluA2 AMPAR subunit, the glycine receptors GLRA3 and GLRB, and the neuropeptide receptors NPY1R and NPY5R [125]. Another study identifies chromosome 6q21 as a candidate region for autism, and the kainate subtype glutamate receptor 6 (GluR6 or GRIK2) gene within this region as a functional candidate [126]. Studies report that the GluK2 gene is in linkage disequilibrium with ASD [126,127]. A complex mutation in the GluK2 gene, which cosegregates with moderate-to-severe nonsyndromic autosomal recessive mental retardation in a large, consanguineous Iranian family [128], results in loss of function of the GluK2 protein.
An association is found between autism and single nucleotide polymorphisms within the mitochondrial aspartate/glutamate carrier SLC25A12 gene located on the autism susceptibility locus chromosome region 2q24-q33 [129]. However, an association of the SLC25A12 gene with autism is not established in another set of 327 families with autistic offspring, pointing to the genetic heterogeneity of ASD. Genetic changes are also reported in synaptic proteins regulating the expression and functions of iGluRs. In a very large genetic study of 1181 autism families with at least 2 affected individuals, linkage and copy number variation analyses implicate chromosome 11p12-p13 and neurexins, respectively, among other candidate loci. Neurexins and their postsynaptic interacting partners, the neuroligins (NLGN), are implicated in glutamatergic synaptogenesis and the expression of AMPA and NMDARs, highlighting glutamate-related genes as promising candidates that contribute to ASDs [130]. Mutations are identified in human GRIP1 [131], Shank3 [132,133], Shank2 [15,134], and E3 ubiquitin ligase (Ube3A) [135,136], which may have effects on the expression and functions of AMPA and NMDARs in ASD. Mutations are reported in the NLGN1-4 genes in human ASD, making them very strong candidate genes for these disorders [137][138][139][140][141][142][143][144]. Interestingly, mutations in the X-linked NLGN4 may be associated with a wide range of neuropsychiatric conditions such as autism, Asperger's syndrome, mental retardation, Tourette syndrome, attention deficit hyperactivity disorder, depression and anxiety [141].
Clinical Pharmacological Studies
Clinical trials with pharmacological agents provide invaluable evidence for the involvement of iGluRs in ASD and FXS. Oxytocin has shown some efficacy in humans with ASD, including enhancement of social cognition [145,34,146]. Electrophysiological studies indicate that the molecular and cellular mechanisms of oxytocin in the infralimbic medial prefrontal cortex (IL-mPFC), a region important for social cognition, act through synaptic plasticity and glutamatergic neurotransmission [147]. In oxytocin-treated brain slices, suppression of basal glutamatergic neurotransmission in IL-mPFC layer V pyramidal neurons is observed, which may be mediated by a reduction in glutamate release. Treatment of brain slices with oxytocin for 1 hour converts long-lasting depression into long-lasting potentiation of glutamatergic neurotransmission. It is concluded that the suppression of basal glutamatergic neurotransmission and facilitation of activity-dependent synaptic plasticity in the IL-mPFC might be critical for the effect of oxytocin on social cognition [147].
Evidence for a role of iGluRs in ASD is provided by the therapeutic effectiveness of topiramate, which antagonizes AMPA/kainate receptors, in children with pervasive developmental disorder [148], autism [149,150] and adults with OCD [151]. Clinical studies using the NMDAR antagonist memantine in ASD show that it has efficacy in improving social withdrawal and inattention [152], as well as memory, hyperactivity, lethargy, and irritability [153,154], providing support for a role of NMDARs in ASD. In one study, significant improvements in language function, social behavior, and self-stimulatory behaviors are observed in ASD subjects treated with memantine [154]. It should be noted that memantine, besides being a noncompetitive NMDAR antagonist, is an antagonist at 5-HT3 [155,156] and nicotinic acetylcholine receptors [157], and an agonist of D2 receptors [158]. The clinical studies with memantine so far are valuable but need to be expanded to larger ASD patient populations. Further support for the involvement of NMDARs in ASD is obtained from the modest effect of amantadine, another NMDAR antagonist [159], in a placebo-controlled clinical trial of 39 children with ASD [160]. When assessed on the basis of the parent-rated Aberrant Behavior Checklist-Community Version (ABC-CV), amantadine does not show improvement of irritability and hyperactivity over the placebo response. However, in the amantadine-treated group there are statistically significant improvements in absolute changes in clinician-rated ABC-CV scores for hyperactivity and inappropriate speech. Clinical Global Impression (CGI) scale ratings are higher in the amantadine group, suggesting that further studies with this or other drugs acting on the glutamatergic system are warranted.
Another drug supporting a role of NMDARs in autism is acamprosate, which is believed to attenuate hyperglutamatergic states by antagonizing NMDA and mGlu5 receptors and by modulating intracellular calcium release [161][162][163]. In a small study, acamprosate is effective in improving social impairment in youth with autism [164]. In another study, acamprosate use is associated with significant improvement in social behavior and reduction in inattention/hyperactivity in 9/12 youth FXS subjects [165]. D-cycloserine, an NMDAR glycine site partial agonist that is effective in the treatment of schizophrenia [166], has shown effectiveness in improving social withdrawal, a core symptom, in children with ASD [167].
Postmortem Brain Studies
Postmortem brain studies find that humans with ASD have specific abnormalities in AMPARs and glutamate transporters in the cerebellum that may be directly involved in disease pathogenesis [168]. Gene expression patterns in postmortem cerebella from autism subjects and healthy controls are examined using cDNA microarrays followed by PCR validation, Western blotting and receptor autoradiography. The mRNA levels of several genes are significantly increased in the autism subjects, including excitatory amino acid transporter 1 and the glutamate receptor AMPA1 (GluA1). Abnormalities in the protein or mRNA levels of several additional molecules in the glutamate system are identified, including glutamate receptor binding proteins. AMPAR density is decreased in ASD cerebellum.
Human Neuroimaging Studies
Human neuroimaging studies provide evidence for abnormal glutamate levels in ASD. Concentrations of glutamate+glutamine (Glx) are determined by 3T proton magnetic resonance spectroscopy (¹H MRS) in high-functioning, medication-free adults with ASD and age- and Intelligence Quotient (IQ)-matched healthy controls (HC) in the anterior cingulate cortex (ACC), thalamus, temporoparietal junction (TPJ), and areas near or along the intraparietal sulcus (IPS), which are associated with networks subserving alerting, orienting, and executive control of attention in ASD. Compared to the HC group, the ASD group shows significantly lower Glx concentration in the right ACC [169]. In another study, use of in vivo single-voxel ¹H MRS and proton magnetic resonance spectroscopic imaging (¹H MRSI) shows that there is hyperglutamatergia in the pregenual anterior cingulate cortex (pACC) in middle to late childhood and adolescence in ASD [115]. These findings are interpreted as corresponding to an abnormal increase of excitation relative to inhibition in key neural systems in autism [170].
V. NMDAR ALTERATIONS IN ANIMAL MODELS OF ASD AND FXS, AND IN VITRO IN CELL CULTURE
In FXS there are changes involving mGluRs, AMPARs and NMDARs which may be brain-region-specific [171][172][173] and may be expressed differently in ontogeny [174,175]. In Fmr1 KO mice there is enhanced mGlu5 receptor signaling and enhanced CA1 hippocampal mGluR-LTD that can be corrected using mGlu5 pharmacological inhibitors [176][177][178][179][180], which restore the normal behavioral, biochemical and neuroanatomical phenotype. However, in FXS and ASD there are additional alterations in iGluRs and in proteins regulating glutamate receptors. This may be a reason why in some FXS and ASD studies mGlu5 antagonists have limited efficacy in reversing abnormal changes [181,182]. Here we will review the significant findings in the scientific literature on changes in NMDAR expression, signaling and functions in animal models of ASD and FXS and in cell culture. Some studies detect changes in the function of iGluRs and autistic-like behaviors without directly measuring the expression levels of the receptors.
The neurobiological effects of the NMDAR antagonist memantine have been explored in cultured cerebellar granule cells (cGC) from the Fmr1 KO mouse as a model of monogenic autism [183]. In the Fmr1 KO there is delayed maturation of dendritic spines and fewer excitatory synapses. Memantine treatment of cGC has a stimulatory effect on dendritic spine maturation and excitatory synapse formation, and promotes the adhesion of cGC [183]. These processes are important for proper neuronal development and connectivity and have been linked to NMDARs. NMDARs inhibit the expression of AMPARs during development [94]. Early expression of GluN2A in organotypic hippocampal slices reduces the number of synapses formed and decreases the spine density and frequency of AMPAR mEPSCs [184]. This is attributed to the low-affinity interaction of GluN2A with calcium-calmodulin kinase II (CaMKII) and the effect of GluN2A to reduce LTP. In contrast, overexpression of GluN2B does not affect synapse number and growth; however, it increases spine motility, adding and retracting spines at a higher rate. The C terminus of GluN2B and its ability to bind CaMKII are sufficient to allow proper synapse formation and maturation. The switch from GluN2B to GluN2A content in synaptic NMDARs observed in hippocampus during development may contribute to reduced plasticity by controlling the binding of active CaMKII [185]. These studies provide a likely explanation for the effects of memantine on excitatory synapses in cGC, and suggest that there is aberrant NMDAR expression and function in the Fmr1 KO mouse, resulting in fewer and less mature excitatory synapses and fewer AMPARs.
Several studies describe alterations in NMDAR-dependent synaptic plasticity, NMDAR levels, phosphorylation and signaling in mouse models of FXS or ASD which are brain-region-specific (Table 1). In the prefrontal cortex (PFC) of the Fmr1 KO mouse, a region important for cognitive processes, there is decreased LTP due to deficient D1 receptor facilitation, accompanied by decreased insertion of GluA1 and deficient GluN2B phosphorylation [186,187]. Expression of mGlu5, GluN2A and GluN2B subunits is not different in the PFC between WT and Fmr1 KO mice. LTP in PFC, GluN2B phosphorylation and insertion of GluA1 can be restored in the Fmr1 KO by administration of the D1 agonist SKF81297 in combination with the gp I mGluR antagonist DL-AP3, but not by treatment with either drug alone. Behavioral tests indicate that the simultaneous treatment with a D1 agonist and a gp I mGluR antagonist inhibits hyperactivity and improves learning in the Fmr1 KO mice. Thus, a combination of a D1 agonist and a gp I mGluR antagonist influences the properties of iGluRs and is a potential drug therapy for the learning and memory deficits in FXS.
Individuals with FXS have deficits in attentional function, inhibitory control, and cognitive flexibility which are thought to be associated with the PFC. In the Fmr1 KO mice a robust cognitive impairment is identified which may correspond to the deficits in cognitive flexibility in individuals with FXS. Importantly, the levels of proteins involved in synaptic function, including the NMDAR subunits GluN1, GluN2A, and GluN2B, the scaffolding proteins PSD-95 and SAPAP3, and the plasticity-related gene Arc, are decreased in the prefrontal cortex of Fmr1 KO mice and are partly correlated with behavioral performance [188].
The clinical hallmarks of autism include excessive adherence to patterns and impaired detection of socially important patterns, and the dentate gyrus (DG) region of the hippocampus has a putative role in pattern separation (for time, space, and features) and pattern completion. There is diminished medial perforant path-granule cell LTP in the DG of Fmr1 KO mice [189]. In addition, a smaller peak amplitude of NMDAR-mediated excitatory postsynaptic currents (EPSCs) is observed, whereas AMPAR-mediated EPSCs are comparable, yielding a lower NMDA/AMPA ratio.
The Fmr1 KO2 mice are a newer generation of Fmr1 KO mice created by deletion of both the promoter and exon 1 of the Fmr1 gene; as a result they are deficient in both the Fmr1 mRNA and FMRP [190], unlike the previous Fmr1 KO which has remaining levels of Fmr1 mRNA and FMRP [191]. In acute hippocampal slices from Fmr1 KO2 mice early in development (14 days), CA1 neurons show reduced AMPAR-mediated currents and increased NMDAR-mediated responses, which reduce the AMPA/NMDA ratio. The reduction in AMPA/NMDA ratio is not observed at 6-7 weeks. The changes in iGluR currents are accompanied by corresponding decreases in synaptic GluA1 and GluA2 and an increase in synaptic GluN1 receptors in hippocampal synaptoneurosomes at 14 days, but not at 6-7 weeks [174]. Correspondingly, NMDAR-LTP induced by low-frequency stimulation (LFS) is significantly enhanced in the Fmr1 KO2 early in development at 14 days but not at 6-7 weeks. NMDAR-LTD and short-term depression are normal in the Fmr1 KO2. Interestingly, MPEP (2-methyl-6-(phenylethynyl)pyridine) does not have an effect on the enhanced NMDAR-LTP. The authors propose that these hippocampal changes in iGluRs and synaptic dysfunction early in development cause learning and memory deficits that contribute to the Fragile X phenotype, together with parallel developmental irregularities in the cortex. They speculate that whether NMDAR antagonists will be a suitable therapeutic option for FXS early in development will depend on the nature of the NMDAR increase; for example, if it is compensatory, NMDAR antagonists may have a worsening effect.
Another study does not find significant differences in the expression of GluN1, GluN2A, GluN2B, and GluA1 in hippocampal slices from 1- or 2-week-old Fmr1 KO and WT mice using Western blot [192]. However, upon further analysis in vitro, it is found that knockdown of FMRP (which resembles the Fmr1 KO condition) strongly stimulates the translation of a GluN2A 3'UTR reporter but not the GluN1 or GluN2B 3'UTR reporters [192]. Therefore, using this in vitro assay the authors determine somewhat indirectly that there is increased hippocampal expression of GluN2A in Fmr1 KO neurons, which may be obscured in vivo by factors such as compensatory mechanisms during development or assay sensitivity. In another study, immunofluorescence staining of primary hippocampal neurons from Fmr1 KO mice finds a significant increase in dendritic GluN2A early in development, at 7 days in vitro (DIV), which corresponds approximately to mouse age 14 days, in comparison with neurons from WT mice [193]. These findings are similar to those in the Fmr1 KO2 mouse in that there is increased NMDAR expression early in development, although the GluN2A subunit is characterized instead of the obligatory GluN1 subunit [174]. What effects these expression changes have on NMDAR channel properties and synaptic plasticity remains to be determined. However, they differ from the results obtained in the Fmr1 KO mouse using Western blot [192]. The difference may arise because immunofluorescence assays are more powerful in detecting small local differences in receptor expression in dendrites and at synapses than Western blotting using total protein extracts; discrepancies between studies may therefore occur due to the experimental technique used. It is thus advisable to ascertain changes in receptors using several complementary techniques.
Nevertheless, the elevated levels of GluN2A early in development in the Fmr1 KO may have effects on synaptic development and plasticity, and may be attributed in part to regulation by microRNAs such as miR-125b [192], which binds to FMRP and has inhibitory effects on synapse development and synaptic plasticity.
Another study does not find differences in the AMPA/NMDAR ratios in CA1 hippocampal slices from 2-week-old Fmr1 KO mice [194], similarly to earlier studies that do not report changes in LTP in hippocampus from Fmr1 KO mice 3-4 weeks old [172,83]. The investigators find reduced LTP due to a selective decrease in the synaptic delivery of GluA1 caused by deficient Ras-PI3K-Protein kinase B (PKB) signal transduction [195]. This discrepancy between studies in AMPA and NMDA responses may be due in part to the remaining levels of Fmr1 mRNA and FMRP in the Fmr1 KO. In human FXS there are varying degrees of changes in the levels of FMR1 mRNA and FMRP, which may result in varying changes in iGluR expression and functions. Thus, FXS subjects with a premutation in the 5' region of the FMR1 gene (~55-200 CGG repeats) have decreased levels of FMRP and milder cognitive impairment than individuals with full mutation alleles (>200 CGG repeats), in whom FMRP is absent [185,196]. Individuals with the premutation are also less likely to develop ASD than those with the full mutation [196]. It should be noted that in humans the FMR1 premutation may result in FXS, ASD and cognitive impairment through two mechanisms: a decrease in FMRP [197] and an increase in FMR1 RNA, which may result in RNA toxicity, effects on the miRNA pathway, cell death and mitochondrial pathways [196]. This may have implications for using the Fmr1 KO mouse, which still has residual levels of FMRP, as a model for FXS. Levels of FMRP and Fmr1 mRNA should be measured when determining changes in glutamate receptors and synaptic plasticity. An additional reason for the observed discrepancy in NMDAR expression across studies may be the difference in maturation of neurons in cell culture [193] and in vivo [192], which may result in differences in receptor expression.
The Fmr1 KO mouse exhibits age-dependent deficits in LTP at association (ASSN) synapses in the anterior piriform cortex (APC). To investigate the mechanisms for this, whole-cell voltage-clamp recordings of ASSN stimulation-evoked synaptic currents are made in APC slices from adult Fmr1 KO and WT mice, using the competitive NMDAR antagonist 3-(2-carboxypiperazin-4-yl)propyl-1-phosphonic acid (CPP) to distinguish currents mediated by NMDA and AMPARs. NMDA/AMPA ratios are lower in Fmr1 KO than in WT mice at ages ranging from 3 to 18 months. Since the amplitude and frequency of AMPAR mEPSCs are not found to be different between Fmr1 KO and WT mice at these ages, the results suggest that NMDAR-mediated currents are selectively reduced in the Fmr1 KO [173]. Analyses of the voltage-dependence and decay kinetics of NMDAR-mediated currents do not reveal differences between Fmr1 KO and WT mice, suggesting that reduced NMDA currents in Fmr1 KO mice are due to fewer synaptic receptors rather than to differences in receptor subunit composition. Evoked currents and mEPSCs are also examined in senescent Fmr1 KO and WT mice at 24-28 months of age. NMDA/AMPA ratios are similar in senescent Fmr1 KO and WT mice, due to a decrease in the ratio in the WT mice, without significant change in AMPAR-mediated mEPSCs. These findings of age-dependent changes in NMDARs suggest that pharmacological treatments in FXS may have age-specific effects.
Mutations in the X-linked gene Methyl CpG binding protein 2 (MeCP2) are linked to Rett syndrome, a severe form of autism, and it is believed that MeCP2 is involved in brain maturation. Differences in NMDAR expression and function are noted in studies using Rett syndrome mouse models. However, the findings are inconsistent, possibly due to the different animal models and brain regions studied. In one study, mapping expression of the immediate-early gene Fos as a marker of neuronal activation in the brains of WT and MeCP2 Null mice before and after the appearance of overt symptoms (3 and 6 weeks of age, respectively) reveals significantly less Fos labeling at 6 weeks in Null compared with WT mice in limbic cortices and subcortical structures [199]. In contrast, Null mice have significantly more Fos labeling than WT mice in hindbrain, most evident in cardiorespiratory regions of the nucleus tractus solitarius (nTS). Using nTS as a model, whole-cell recordings demonstrate that the increased Fos expression in Nulls at 6 weeks is associated with synaptic hyperexcitability, including increased frequency of spontaneous and miniature EPSCs and increased amplitude of evoked EPSCs. No such effect of genotype on Fos or synaptic function is seen at 3 weeks. In the mutant forebrain, reduced Fos expression, as well as abnormal sensorimotor function, is reversed by the NMDAR antagonist ketamine, which upregulates Fos expression in the limbic forebrain of mice [198]. In another study, removing MeCP2 from mouse brains at an early developmental stage coinciding with the onset of Rett syndrome, or at late juvenile or adult stages, results in active shrinking of the brain and higher than normal neuronal cell density. Deletion of MeCP2 in juvenile or adult mice results in Rett-like behavioral deficits. The mature dendritic arbors of pyramidal neurons are severely retracted and dendritic spine density is dramatically reduced. Hippocampal astrocytes have significantly less complex ramified processes.
There is a striking reduction in the levels of several synaptic proteins, including CaMKIIα/β, AMPA and NMDARs, and the synaptic vesicle proteins vesicular glutamate transporter (Vglut) and synapsin, which represent critical modifiers of synaptic function and dendritic arbor structure. Since the mRNA levels of these proteins remain unchanged, it is likely that MeCP2 regulates these synaptic proteins posttranscriptionally, directly or indirectly. It is concluded that genetic changes in MeCP2 lead to changes in glutamatergic synaptic receptors and proteins which influence neuronal development and networks [199].
Mutations in the tuberous sclerosis complex 1 or 2 genes (TSC1 or TSC2) cause tuberous sclerosis complex (TSC) in humans, a disease characterized by multiple benign tumors, neurological deficits, autism, cognitive dysfunction, and epilepsy. Deletion of the TSC1 or TSC2 genes disrupts the TSC1/2 complex, resulting in aberrant activation of the mammalian target of rapamycin complex 1 (mTORC1) and upregulation of protein translation. Deletion of Tsc1 in mouse CA1 hippocampal neurons causes enhancement of glutamatergic synaptic function, evidenced by larger evoked AMPAR and NMDAR currents and increased frequency of mEPSCs [200]. Protein translation-dependent mGluR-LTD is absent in Tsc1 KO neurons, whereas NMDAR-LTD is not affected [200]. These changes occur in the absence of changes in dendritic spine number, morphology, or presynaptic release probability. They suggest that loss of Tsc1 in hippocampal neurons impairs the ability to activate the signaling pathways necessary for mGluR-LTD and causes enhanced excitatory drive, which may have consequences for circuit information processing and network excitability. This is in contrast to FXS, in which there is upregulation of basal protein translation and enhancement of hippocampal mGluR-LTD [83,201]. These studies imply that synapses are the neuroanatomical substrates in FXS and ASD and that the nature of the changes involving excitatory neurotransmission is disease-specific.
Another study uses rats selectively bred for low rates of play-induced prosocial ultrasonic vocalizations (USVs) to model certain core symptoms of autism and to understand the role of NMDARs in autism [22]. Low-line animals exhibit autism-like behaviors: they engage in less social contact time with conspecifics, show lower rates of play-induced prosocial USVs, and show an increased proportion of non-frequency-modulated (i.e., monotonous) USVs compared to non-selectively bred random-line animals. Gene expression patterns in the low-line animals show significant enrichment in autism-associated genes, particularly the NMDAR family. Treatment of low-line animals with the NMDAR glycine-site partial agonist GLYX-13 rescues both the deficits in play-induced prosocial 50-kHz USVs and the increased proportion of monotonous USVs. Thus, the NMDAR is shown to play a functional role in autism, and enhancement of NMDAR activity with GLYX-13 shows promise for the treatment of autism in this particular model [22].
The role of NMDARs in ASD is also shown in Balb/c mice, a model of impaired sociability and social motivation relevant to ASDs [202,203]. Impaired sociability of 8- or 4-week-old Balb/c mice is attenuated by agonists of the glycine(B) site on the NMDAR, such as d-cycloserine. Because stereotypies can compete with the salience of social stimuli, the investigators compare Balb/c and Swiss Webster mice on several spontaneous stereotypic behaviors emerging during interaction with a social stimulus mouse. As in 8-week-old mice, spontaneous stereotypic behaviors during social interaction are more intense in 4-week-old Swiss Webster mice; furthermore, d-cycloserine reduces their intensity. Thus, d-cycloserine improves both sociability and stereotypic behaviors, and these effects may lack strain selectivity. The data suggest that targeting the NMDAR can have promising therapeutic effects on two prominent domains of psychopathology in ASDs: impaired sociability and spontaneous stereotypic behaviors.
Based on the effectiveness of mGlu5 antagonists in FXS, examination of the effects of the mGlu5 antagonist 2-methyl-6-(phenylethynyl)pyridine (MPEP) on sociability and stereotypic behaviors in Balb/c and Swiss Webster mice shows a mixed picture: MPEP has complex effects on sociability, impairing some measures of sociability in both strains, while reducing the intensity of some spontaneous stereotypic behaviors emerging during free social interaction in Swiss Webster mice [204]. Conceivably, mGlu5 antagonism exacerbates a diminished endogenous tone of NMDAR-mediated neurotransmission in neural circuits relevant to some measures of sociability in Balb/c mice; the mGlu5 receptor contributes to regulation of the phosphorylation state of the NMDAR. It is concluded that medication strategies aimed at attenuating the severity of stereotypies in ASDs via antagonism of mGlu5 receptors must be pursued cautiously because of their potential to worsen some measures of sociability, providing a rationale to develop drugs targeting NMDARs, in addition to mGluRs, in ASD.
Mouse models with NMDAR hypofunction, produced by administration of the NMDAR antagonist MK801 [205] or by constitutive [206,207] or parvalbumin-interneuron-selective [208] deletion of the obligatory GluN1 subunit, reproduce behavioral, cellular, and electrophysiological abnormalities observed clinically in ASD. Adult constitutive GluN1 KO mice show behavioral deficits relevant to all core ASD symptoms, including decreased social interactions, altered ultrasonic vocalizations, and increased repetitive behaviors. NMDAR disruption recapitulates clinical endophenotypes such as reduced prepulse inhibition (PPI), delayed auditory-evoked response N1 latency, and reduced gamma synchrony. The auditory electrophysiological abnormalities closely resemble those seen in clinical studies of autism. The γ-aminobutyric acid type B (GABAB) receptor agonist baclofen improves excitatory/inhibitory balance and the auditory-evoked gamma signal-to-noise ratio, and broadly reverses the behavioral deficits in the constitutive GluN1 KO mouse [207]. Therefore, a molecular defect in NMDARs results in increased intrinsic pyramidal cell excitability and selective disruption of parvalbumin-expressing GABAergic interneurons, which may provide a useful translational biomarker for ASD (Fig. 1). These ASD mouse models are characterized by NMDAR hypofunction, unlike the Tsc1 KO mouse, which has increased hippocampal NMDAR excitatory neurotransmission. However, there may be brain-region-specific, species-specific, and developmental differences in NMDARs in different subtypes of ASD.
The connection between GABAB receptors and iGluRs seen in the GluN1 KO mouse is also established in the Fmr1 KO mouse [209]. Pharmacologic activation of the GABAB receptor in neurons from Fmr1 KO mice with the selective GABAB receptor agonist STX209 (arbaclofen, R-baclofen) reduces the elevated basal protein synthesis and elevated AMPAR internalization. Acute administration of STX209 in vivo, at doses that modify behavior, decreases mRNA translation in the cortex of Fmr1 KO mice. Chronic administration of STX209 in juvenile mice corrects the increased spine density in Fmr1 KO mice without affecting spine density in WT mice. Thus, activation of the GABAB receptor corrects synaptic abnormalities central to FXS pathophysiology, suggesting that STX209 is a potentially effective therapy for the core symptoms of FXS and ASD. In a clinical trial with human FXS subjects, arbaclofen showed promising effects in improving social function and behavior but not on the primary endpoint measure of irritability, as determined using the Aberrant Behavior Checklist-Irritability subscale [47,210]. Moreover, the results from clinical trials with STX209 in FXS and ASD subjects indicate that it is overall well-tolerated and may ameliorate some symptoms of FXS and ASD in a fraction of patients. Future challenges for the field are to choose suitable primary outcome measures for ASD and FXS clinical trials and to predict which patients will respond favorably to the drug.
VI. AMPAR ALTERATIONS IN ANIMAL MODELS OF ASD AND FXS AND IN VITRO IN CELL CULTURE
There are many studies in animal models and primary neurons suggesting that AMPARs are important in the neurobiology of ASD and FXS (Table 1). AMPARs mediate the expression of several forms of synaptic plasticity related to cognitive processes, such as LTP, LTD, and homeostatic plasticity. AMPAR alterations in animal models of ASD and FXS usually manifest as changes in the expression and trafficking of receptors. Other parallel mechanisms are AMPAR phosphorylation/dephosphorylation, alterations in the trafficking of AMPAR mRNAs, and synthesis/degradation of the receptor proteins. AMPARs are also involved in the neurobiology of ASD and FXS through their role in stress, which is known to exacerbate neuropsychiatric disorders including ASD [211,212] and FXS [213][214][215][216], and which affects hippocampal synaptic plasticity and AMPAR trafficking and phosphorylation [217,218,84,219,220,221]. AMPAR involvement in ASD and FXS affects mainly the cerebellum, hippocampus, amygdala, and prefrontal cortex.
The cerebellum is a brain region frequently implicated in the neurobiology of ASD and FXS [222] and is important for cognition and learning during development [223]. AMPARs play a significant role in synaptic plasticity in this brain region. As pointed out, human postmortem brain studies report decreased AMPAR expression in the cerebellum in ASD [168]. Correspondingly, in the Fmr1 KO mouse, cerebellar synaptic plasticity (mGluR-dependent LTD) is enhanced, resulting in deficits in learning [224]. One study using a mouse model with deletion of the neuronal Islet Brain-2 (IB2) protein, which is an integral part of the PSD, is positioned near Shank3, and is often deleted in Phelan-McDermid syndrome, a cause of autism, finds a significant decrease in AMPAR-mediated and an increase in NMDAR-mediated glutamatergic transmission at cerebellar mossy fiber to granule cell synapses [225]. This is accompanied by motor and cognitive deficits in IB2 KO mice suggestive of an autism phenotype. However, it should be noted that in addition to glutamate, endocannabinoids may be involved in cerebellar synaptic plasticity and in learning and memory [226,227]. Another brain region important for ASD and FXS that exhibits changes in iGluRs is the hippocampus. The Fmr1 KO mouse has enhanced gp I mGluR signaling and enhanced hippocampal gp I mGluR-LTD [83], which have provided the basis for the mGluR theory of FXS [228]. This theory has guided the development of mGlu5 antagonists for treatment of FXS, which have shown effectiveness in animal models and human clinical trials [42,43,[176][177][178]. In the Fmr1 KO mouse, enhanced hippocampal gp I mGluR-LTD is characterized by enhanced internalization of AMPARs [229][230][231][232]. Unlike normal hippocampal mGluR-LTD [101,103], mGluR-LTD in the Fmr1 KO does not require new protein synthesis, because proteins regulating AMPAR trafficking are basally elevated as a result of the absence of translational repression by FMRP.
Proteins regulating AMPAR endocytosis, such as Arc, microtubule-associated protein 1B (MAP1B), STriatal-Enriched protein tyrosine Phosphatase (STEP), and amyloid precursor protein (APP), termed "LTD proteins", are basally upregulated in neuronal dendrites from Fmr1 KO mice and do not increase further during mGluR-LTD. The role of FMRP in excessive mGluR-dependent internalization of AMPARs is shown in normal rat hippocampal neuronal cultures using FMRP siRNA [233]. In the Fmr1 KO mouse, in addition to decreased AMPARs during hippocampal mGluR-LTD [229,230,234], the expression of AMPARs in the hippocampus is decreased at the basal state during early development, 7-12 DIV [235]. This is likely due to the basally elevated levels of LTD proteins. At early developmental time points (hippocampal neurons at 7-12 days in vitro) a decrease in GluA2 is reported [235], whereas at later developmental time points decreases in both GluA1 [234,229] and GluA2 [234] are reported. Although both GluA1 and GluA2 levels may be measured to determine AMPAR internalization in the hippocampus, the leading model suggests that the GluA2 AMPAR subunit controls endocytosis during mGluR-LTD [80,236]. The molecular mechanisms of AMPAR endocytosis in the Fmr1 KO mouse during hippocampal mGluR-LTD have been studied in detail [234,229,237,231]. The regulation of dendritic protein synthesis, and therefore AMPAR internalization, during hippocampal gp I mGluR-LTD is achieved through the eukaryotic elongation factor 2 (eEF2) kinase pathway [229,103], and this mechanism is altered in the Fmr1 KO mouse [229]. The neurodevelopmental cytoskeletal protein MAP1B is synthesized in neuronal dendrites during hippocampal mGluR-LTD [103,238] and is elevated in the hippocampus of Fmr1 KO mice [239]. MAP1B plays a role in AMPAR endocytosis during mGluR-LTD through interaction with GRIP1, keeping internalized GluA2 away from the synaptic surface [103], and this mechanism is likely enhanced in the Fmr1 KO mouse.
Since MAP1B siRNA blocks the endocytosis of AMPARs during normal hippocampal mGluR-LTD [103], it may be possible to block the enhanced mGluR-LTD and restore the decreased levels of AMPARs in FXS by decreasing MAP1B protein levels using small interfering RNA (siRNA). STEP is a brain-enriched tyrosine phosphatase that normally opposes synaptic strengthening by dephosphorylating key neuronal signaling molecules, including NMDARs, AMPARs, extracellular signal-regulated kinases 1 and 2 (ERK1/2), stress-activated protein kinase p38 (p38), and the tyrosine kinase Fyn [240,241]. STEP regulates AMPAR internalization by modulating dephosphorylation of the GluA2 subunit. Reducing STEP in the Fmr1 KO mouse diminishes seizures and restores select social and nonsocial anxiety-related behaviors [242]. Taken together, these studies suggest that restoring normal levels of AMPARs in hippocampal neurons has therapeutic potential in FXS. Interestingly, one study does not find a decrease of GluA1 protein in the hippocampus or cerebellum of Fmr1 KO mice, but only in the PFC, which is accompanied by reduced cortical LTP [171]. This difference may be due to the experimental approaches used, which may not detect subtle changes in AMPAR endocytosis, and to the age of the mice (8-10 weeks), underscoring the importance of methodology. This study also does not find changes in the expression of NR2 subunits in the PFC, cerebellum, or hippocampus of Fmr1 KO mice [171].
Another molecular mechanism by which FMRP may regulate the levels of synaptic AMPARs in neurons is through control of the local synthesis of AMPAR subunits, PSD-95, and CaMKIIα downstream of mGluR activation [243]. Besides activation of gp I mGluRs, activation of other G-protein-coupled receptors, such as Gq-coupled M1 muscarinic acetylcholine receptors (mAChRs), can trigger LTD that shares similar expression mechanisms with mGluR-LTD: it also requires protein synthesis, activation of the ERK and mammalian target of rapamycin (mTOR) translational pathways, and stimulation of translation of FMRP and FMRP-binding mRNAs, and is expressed with AMPAR internalization. Both mGluR- and mAChR-dependent protein synthesis and LTD are enhanced in Fmr1 KO mice [112]. Therefore, mAChR antagonists may have therapeutic potential in FXS, in addition to gp I mGluR antagonists. Additional regulation of AMPAR levels and synaptic plasticity in FXS may be achieved through regulation of FMRP ubiquitination and degradation [244,238]. For example, during hippocampal mGluR-LTD there is a brief increase in synthesis of FMRP, which is degraded rapidly by the ubiquitin-proteasome pathway [238]. This mechanism is lacking in the Fmr1 KO mouse and further contributes to the reduced AMPARs.
Another form of hippocampal synaptic plasticity altered in Fmr1 KO mice is homeostatic plasticity dependent on retinoic acid (RA) [86]. Suppression of synaptic activity increases synaptic strength by inducing synthesis of RA, which activates postsynaptic synthesis of AMPARs in dendrites and promotes synaptic insertion of newly synthesized AMPARs. FMRP is essential for this process and RA-dependent dendritic translation of GluA1 is impaired in Fmr1 KO mice.
The amygdala is implicated by several studies in FXS and ASD, and the strong emotional symptoms of FXS likely involve the amygdala [197]. Synaptic plasticity in the amygdala is investigated using whole-cell recordings in brain slices from adult Fmr1 KO mice [182]. mGluR-dependent LTP at thalamic inputs to principal neurons in the lateral amygdala is impaired, resulting in reduced surface expression of GluA1 AMPARs in Fmr1 KO mice. Additionally, there is lower presynaptic release, manifested by a decrease in the frequency of spontaneous miniature excitatory postsynaptic currents (mEPSCs), an increased paired-pulse ratio, and slower use-dependent block of NMDAR currents. Surprisingly, pharmacological inactivation of mGlu5 with MPEP fails to rescue either the deficit in LTP or surface GluA1. However, the same acute MPEP treatment reverses the decrease in mEPSC frequency, a finding of potential therapeutic relevance.
As discussed earlier, in the PFC, facilitation of synaptic LTP by the D1 receptor is impaired in Fmr1 KO mice, and this correlates with decreased surface GluA1 [187]. Surface GluA1 insertion in Fmr1 KO mice is increased by administration of the D1 agonist SKF81297 in combination with the gp I mGluR antagonist DL-AP3. This treatment inhibits hyperactivity and improves the learning ability of the Fmr1 KO mice. Interestingly, this study does not find consistent D1 modulation of basal AMPAR transmission in adult slice recordings, while in cultured PFC neurons D1 receptor activation causes GluA1 subunit surface expression and synaptic insertion. This inconsistency between adult slices and cultured neurons may be explained by developmental differences (cultured neurons may express proteins not expressed in vivo) or by differences between in vivo and in vitro neuronal networks. The investigators use PFC slices from mice 5-6 weeks of age and cultures prepared from Sprague-Dawley rats (E18) at 10-14 DIV. This study emphasizes the need for careful comparison of animal and cell culture studies.
Alterations involving AMPARs in other animal ASD models besides the Fmr1 KO mouse are largely due to defects in proteins regulating the expression and trafficking of AMPARs and will be discussed in a following section of this review. A study using a KO mouse model of Angelman syndrome shows that there is dysfunction in AMPAR trafficking, enhanced AMPAR endocytosis, and decreased hippocampal synaptic AMPARs [245]. Mutations of the E3 ubiquitin ligase Ube3A, located within chromosome 15q11-q13, are reported in individuals with Angelman syndrome. This neurological disorder manifests with autism, motor dysfunction, mental retardation, speech impairment, and seizures. Ube3A KO mice lacking the Ube3A ligase gene display a number of features of Angelman syndrome such as a high frequency of seizures, general ataxia, abnormal EEGs, and poor performance on tests of learning and memory [245]. Analysis of the synaptic expression of AMPARs in cultured hippocampal neurons from Ube3A KO mice and their WT littermates reveals that Ube3A KO neurons have significantly reduced synaptic and surface GluA1 expression compared to WT neurons, without any changes in the surface expression of NMDARs. The reduced GluA1 expression is due to elevated levels of Arc in the KO neurons, since small hairpin RNA (shRNA) targeting Arc in Ube3A KO neurons restores surface GluA1 expression. These experiments suggest that the excessive internalization of AMPARs in Ube3A KO neurons is likely a result of failure to ubiquitinate and degrade Arc, and may be a mechanism through which Ube3A deficiency is responsible for cognitive deficits in Angelman syndrome. However, the defect in synaptic GluA1 AMPARs is unlikely to be the only abnormality in Angelman syndrome. It is possible that Ube3A substrates in addition to Arc play roles in nervous system development and contribute to the development of the neurological and psychiatric disturbances.
Postnatal Tsc1 loss in mouse hippocampal cultures is associated with increased levels of Arc protein, significantly decreased GluA1 and GluA2 receptor levels, and a functional reduction of glutamatergic synaptic strength. This is a homeostatic compensatory mechanism due to suppression of hippocampal inhibitory neurotransmission associated with loss of Tsc1; it is insufficient and is accompanied by excitatory-inhibitory synaptic imbalance [246]. These changes are accompanied by hyperexcitability in Tsc1 KO mice, are expressed through the mTOR pathway, and are ameliorated by chronic treatment with rapamycin.
Another mechanism through which AMPARs are involved in ASD and FXS is their role in excitotoxicity and cell death [247]. This is determined by the permeability of the receptor to Ca2+ ions and is dependent on the expression of the GluA2 subunit, which limits Ca2+ permeability [248]. Epilepsy is common in ASD and FXS, and AMPARs are implicated in its pathophysiology and treatment [249,250]. This broad topic warrants a review of its own.
VII. KAINATE RECEPTOR ALTERATIONS IN ANIMAL MODELS OF ASD AND FXS AND IN VITRO IN CELL CULTURE
Due to a report of a complex mutation in GluK2 that cosegregates with nonsyndromic autosomal recessive mental retardation [128], studies have been carried out in mice to understand its significance for ASD. Electrophysiological data show that this mutation causes loss of function of the GluK2 protein. Subsequently, a study in GluK2(-/-) KO mice shows that the GluK2 receptor plays a role in ASD by regulating the maturation of synaptic circuits involved in learning and memory [251]. The functional and morphological maturation of hippocampal mossy fiber to CA3 pyramidal cell synapses is delayed in GluK2(-/-) KO mice, and this deficit is manifested by a transient reduction in the amplitude of AMPA EPSCs at a critical time point of postnatal development, whereas the NMDA component is spared. A decrease in GluA1 expression is observed. Analysis of the time course of structural maturation of CA3 synapses by confocal imaging of yellow fluorescent protein (YFP)-expressing cells shows that major changes in synaptic structures occur subsequent to the sharp increase in synaptic transmission, and that the course of structural maturation of synaptic elements is impaired in GluK2(-/-) KO mice [251] (Table 1). In another study, the autism-linked M836I mutation of the GluK2 gene is studied in cell culture (oocytes and COS-7 cells). This is established to be a gain-of-function mutation leading to enhanced plasma membrane expression and increased current amplitudes of GluK2(M836I) compared to wild-type GluK2, which appears to be regulated by Rab11 [252]. These studies show that kainate receptor mutations found in ASD affect synaptic maturation and may simultaneously affect the expression and functions of kainate receptors and AMPARs.
VIII. ALTERATIONS IN IGLUR RECEPTOR-INTERACTING PROTEINS IN ANIMAL MODELS OF ASD AND FXS AND IN VITRO IN CELL CULTURE
As emphasized, mutations in neuronal synaptic proteins which interact with and regulate the expression and functions of iGluRs are found in human subjects with ASD. These proteins include postsynaptic neuroligins (NLGN) 1-4 and their presynaptic binding partners neurexins (NRXN) [253], Shank 1-3 proteins [134,254], and GRIP1 [131]. Animal models have been created with the affected genes knocked out or mutants knocked in [255][256][257]. These mice reliably reproduce certain molecular, cellular and behavioral characteristics of ASD and have been instrumental in increasing our understanding of the disease mechanisms, and hopefully will aid in development of new drug treatments.
The NLGNs are a family of transmembrane cell-adhesion molecules, expressed mainly postsynaptically, which regulate excitatory and inhibitory synapse development, validation, maturation, and function [142,258] (Fig. 2). NLGNs regulate synapse formation and synaptic transmission through interactions with NRXNs, PSD-95, AMPA and NMDARs, and Shanks, and cooperate with leucine-rich repeat transmembrane neuronal proteins (LRRTMs) in their interaction with NRXNs and in their excitatory synapse-regulating functions [259][260][261]. NLGN1 is associated with excitatory glutamatergic synapses, NLGN2 with inhibitory GABAergic synapses [262], NLGN3 is found at both glutamatergic and GABAergic synapses [263], and NLGN4 is associated with glutamatergic and inhibitory glycinergic synapses [264]. Studies in KO mice suggest that similar synaptic plasticity and behavioral mechanisms involve NLGNs in nonsyndromic forms of ASD and in syndromic forms such as FXS [265].
The molecular and cellular mechanisms by which NLGNs are involved in the pathogenesis of ASD and FXS and bring about changes in iGluRs are complex. Overall, they involve alterations in synapse development, expression of iGluRs and resulting alterations in synapse formation and synaptic plasticity, leading to deficits in learning and memory and neuronal communication. It is shown in vitro that interaction of NRXN1 and NLGN1 is important for the formation of excitatory synapses and the recruitment of PSD-95 and NMDAR scaffolds and subsequently, recruitment of GluA2 AMPARs [266,267]. The levels of AMPARs and AMPAR-mediated synaptic transmission are decreased in hippocampal slices from NLGN1 KO mice, confirming an important role of NLGN1 in driving AMPARs to nascent synapses through a diffusion trap mechanism, involving PSD-95 scaffolds (NLGN1/PSD-95 clusters assembled by NRXN-1 multimers) in competition with existing synapses [268]. NLGN1 KO mice also display alterations in NMDARmediated transmission. A study characterizes the effects of NLGN1 on synapse formation and AMPAR recruitment in primary cortical cultures from NLGN1 KO mice and WT littermates using live imaging of fluorescently tagged PSD-95, GluA2 and the presynaptic vesicle molecule SV2A to follow the constancy of the contents of these molecules at individual synapses over time. It is shown that loss of NLGN1 is associated with larger fluctuations in the synaptic contents of these molecules and a poorer preservation of their contents at individual synapses [269]. In turn, AMPARs may also play a role in the stabilization of synapses and elimination of inappropriate connections [270]. Thus, removal of surface AMPARs leads to a decrease in the number and stability of excitatory presynaptic inputs, whereas overexpression increases synapse number and stability. 
Overexpression of GluA2 AMPARs along with NLGN1 in 293T cells is sufficient to stabilize presynaptic inputs from cortical neurons onto heterologous cells; this effect is not dependent on receptor-mediated current and instead relies on structural interactions mediated by the N-terminal domain of GluA2 [270]. Therefore, in ASD, loss of AMPARs may contribute to the destabilization and loss of excitatory synaptic connections.

[Table 1 legend] The table summarizes studies of animal models of ASD and FXS that are discussed in the text involving changes in iGluRs, and presents the associated molecular/neuroanatomical, functional, and behavioral alterations. The numbers in the "References" column refer to the citation numbers in the text. —, no change; ↓, decrease; ↑, increase. For ease of comparison of the results between the studies, each study is presented separately in a row, even if the same animal model is used (e.g., the Fmr1 KO mouse). It is evident that in one animal model changes may affect more than one iGluR subtype, such as AMPA and NMDA receptors. If a treatment approach (pharmacological drug or genetic approach) is used in a study to correct the iGluR levels or the functional and/or behavioral changes, it is indicated in the table (e.g., MPEP fails to rescue the LTP deficit in the lateral amygdala but restores deficits in presynaptic release [182]). In the Fmr1 KO mouse, administration of mGlu5 receptor antagonists such as MPEP, MTEP, fenobam, and CTEP has shown therapeutic promise in reversing biochemical, neuroanatomical, synaptic plasticity, and behavioral aberrations associated with FXS, but these studies are not indicated here because the focus of the review is on iGluRs.

The extracellular domains of NLGNs may also be important in the recruitment of AMPA and NMDARs at synapses and in synaptic plasticity [271,272]. Recently, an NLGN1 isoform-specific cis interaction of the extracellular domain with the GluN1 NMDAR subunit has been reported [272], which regulates the postsynaptic expression of NMDARs. The regulatory effects of NLGN1 on NMDA and AMPAR expression at excitatory synapses may explain in part the altered expression of NMDA and AMPAR subunits in the mouse models of FXS discussed earlier. NLGN1 deficiency is reported in Fmr1 KO mice, and overexpressing hemagglutinin (HA)-tagged NLGN1 in Fmr1 KO mice improves social behavior but does not have a positive effect on learning and memory [273,274]. However, a more significant mechanism for the decreased AMPARs in the Fmr1 KO mouse is likely the enhanced mGluR-LTD and the associated increased rates of AMPAR internalization due to increased basal levels of "LTD proteins" [232]. It is also possible that the mechanisms of AMPAR expression vary during development, with NLGN-dependent mechanisms having greater significance during early development.
NLGN1 KO mice display deficits in spatial learning and memory that correlate with impaired hippocampal LTP [275]. NLGN1 KO mice also exhibit a dramatic increase in repetitive, stereotyped grooming behavior, a potential autism-relevant abnormality. This repetitive grooming abnormality is associated with a reduced NMDA/AMPA ratio at corticostriatal synapses. The increased repetitive grooming phenotype can be rescued in adult mice by administration of the NMDAR partial co-agonist d-cycloserine [275]. Interestingly, the NLGN1 KO mice do not exhibit changes in the expression of NMDAR subunits in brain as measured by Western blot. However, there is an approximately 30% compensatory increase in the expression of NLGN3 and an approximately 20% decrease in the expression of α- and β-NRXNs. Furthermore, there may be local changes not detected by Western blot of total brain proteins but detectable with a more sensitive technique such as indirect immunofluorescent staining. In turn, NMDAR activity may regulate the structure and function of NLGN1 and synaptic transmission [276]. NMDAR activation is shown to trigger NLGN1 cleavage at single spines, which requires Ca2+/calmodulin-dependent protein kinase and is mediated by the proteolytic activity of matrix metalloprotease 9 (MMP9). Cleavage of NLGN1 causes rapid destabilization of its presynaptic partner neurexin-1 and depresses synaptic transmission by abruptly reducing presynaptic release probability.
Different mutations in NLGNs are found in human autism that may have brain-region-specific effects on iGluRs and synaptic transmission. These mutations have been reproduced in mice to understand their effects. For example, the R451C substitution in NLGN3 in knock-in mice results in behaviors consistent with a shift of synaptic transmission toward inhibition: impaired social behaviors, enhanced learning in the water maze test, and increased synaptic inhibition in somatosensory cortex [257]. However, in the hippocampus this mutation causes an approximately 1.5-fold increase in AMPAR-mediated excitatory synaptic transmission, induces upregulation of GluN2B, and increases LTP. The NLGN3 KO mice do not exhibit any of these changes. Another original approach is to introduce the R704C mutation, which targets a conserved arginine residue in the cytoplasmic tail of all NLGNs and is associated with autism in human genetic studies (the genetic defect affects NLGN4 in humans with ASD), into mouse NLGN3, and examine its effect on synapses in vitro and in vivo [256]. Electrophysiological and morphological studies reveal that the NLGN3 R704C mutation does not significantly alter synapse formation, but causes a marked, selective decrease in AMPAR-mediated synaptic transmission in pyramidal neurons of the hippocampus, without similarly changing NMDA or GABA receptor-mediated synaptic transmission, and without altering presynaptic neurotransmitter release. These results suggest that the cytoplasmic tail of NLGN3 has a central role in synaptic transmission by modulating the recruitment of AMPARs to postsynaptic sites at excitatory synapses. Another line of NLGN3 KO mice shows a lack of mGluR-LTD at parallel fiber synapses in the cerebellum, a result of occlusion of gp I mGluR-LTD due to constitutive activation, increased basal GluA2 S880 phosphorylation, and increased expression of mGlu1 [265].
These deficits can be rescued by overexpression of NLGN3, suggesting that they are reversible and potentially amenable to pharmacologic treatment.
ProSAP/Shank proteins are a family of multi-domain scaffolding proteins enriched in the PSD of excitatory synapses that play an important role in the formation, maturation, and maintenance of synapses by linking postsynaptic mGluRs, NMDARs, AMPARs, and NLGNs to the actin cytoskeleton [277,278]. Genetic studies have identified mutations in the human SHANK1, 2, and 3 genes in ASD [254]. Studies in KO mouse models shed light on the roles of Shanks in ASD and on how they affect glutamatergic neurotransmission and iGluRs. Genetic deletion of Shank2 in mice results in an early, brain-region-specific upregulation of iGluRs at the synapse and increased levels of Shank3 [279]. Shank2 homozygous KO mice have fewer dendritic spines and, at the physiological level, show reduced basal synaptic transmission, reduced frequency of mEPSCs, and enhanced NMDAR-mediated excitatory currents. The mutants are hyperactive and display profound autistic-like behaviors, including repetitive grooming and abnormalities in vocal and social behaviors [279].
In another study, Shank2 homozygous KO mice carrying a mutation identical to the ASD-associated microdeletion in the human SHANK2 gene exhibit ASD-like behaviors such as reduced social interaction, reduced social communication by ultrasonic vocalizations, and repetitive jumping [280]. These mice show a marked decrease in NMDAR function. Direct stimulation of NMDARs with D-cycloserine, a partial agonist of NMDARs, normalizes NMDAR function and improves the social interactions in Shank2 KO mice. Treatment of Shank2 KO mice with a positive allosteric modulator of mGlu5, which enhances NMDAR function via mGlu5 activation, also normalizes NMDAR function and enhances social interaction. These results suggest that reduced NMDAR function contributes to the development of ASD-like phenotypes in Shank2 KO mice, and that both NMDAR and mGlu5 modulation of NMDARs are potential therapeutic strategies in ASD [280].
Loss of a functional copy of the SHANK3 gene leads to the neurobehavioral manifestations of 22q13 deletion syndrome and ASD [281,282]. Study of Shank3 haploinsufficient mice supports a role for this protein in glutamate receptor function and spine morphology [283]. In Shank3 heterozygous mice there is reduced basal hippocampal synaptic transmission due to reduced AMPAR transmission, reduced GluA1 immunoreactive puncta in CA1 stratum radiatum, reduced LTP, and no significant change in LTD. Male Shank3 heterozygotes display less social sniffing and fewer ultrasonic vocalizations during interactions with estrus female mice, as compared to wild-type littermate controls. This study suggests that modulation of glutamatergic neurotransmission to increase GluA1 AMPARs is a therapeutic option in this form of ASD.
Similar effects of Shank3 deletion on GluA1 receptors and synaptic plasticity in vivo were reported by another group [284]. The major isoforms of the gene were disrupted in mice by deleting exons 4-9. Shank3 (e4-9) homozygous KO mice display abnormal social behaviors, communication patterns, repetitive behaviors, and deficits in learning and memory. Male KO mice display more severe impairments than females in motor coordination. Shank3 KO mice have reduced levels of GluA1 and show attenuated activity-dependent redistribution of GluA1 receptors, morphological alterations in dendritic spines, normal CA1 hippocampal synaptic transmission, and deficient LTP [284]. In addition, the ability of Shank3 to interact with the cytoplasmic tail of NLGNs coordinates pre- and postsynaptic signaling through the NRXN-NLGN complex in hippocampal neurons [278]. Synaptic levels of Shank3 regulate AMPA and NMDAR-mediated synaptic transmission through NRXN-NLGN signaling, and ASD-associated mutations in Shank3 disrupt not only postsynaptic AMPA and NMDAR signaling but also interfere with the ability of Shank3 to signal across the synapse to alter presynaptic structure and function [278]. This is a mechanism through which Shank3 mutations can disrupt neuronal connectivity during development. Because Shank3 KO mice share similarities with human ASD, they may be useful for developing new drugs and evaluating their effects on AMPARs, NMDARs and LTP.
GRIP1 is a multi-PDZ domain protein that interacts through PDZ domains 4-6 with the C-terminus of GluA2/3 AMPARs and regulates their synaptic clustering and trafficking [65,285]. In addition, GRIP1 interacts with other postsynaptic molecules such as MAP1B to regulate AMPAR trafficking [103], and regulates the trafficking of GABA receptors [286]. GRIP1 maps to the chromosomal 12q14.3 region, which is associated with autism [287]. Screening of autistic families identifies several GRIP1 single-nucleotide polymorphism variants, which have been further studied in vitro in cultured hippocampal neurons using a pH-sensitive pHluorin-GluA2 fusion protein (pH-GluA2) [131]. Some of the GRIP1 mutations associated with autism correlate with more severe social impairment, result in gain-of-function and increased interaction of GRIP1 with GluA2/3 receptors, higher rates of recycling of GluA2/3 and higher surface levels of GluA2 [131]. These findings correlate with behavioral studies of Grip1/2 double KO mice, which exhibit increased sociability and impaired PPI and object recognition, suggesting a role for GRIP proteins in the modulation of social behavior and possibly cognitive or memory function. GRIP1 may also regulate the excitation/inhibition balance in the brain due to its ability to regulate the trafficking of both GluA2/3 and GABA receptors and is therefore an attractive molecule to investigate further in subjects with autism.
CONCLUSIONS. DIRECTIONS FOR FUTURE NEUROPHARMACOLOGICAL RESEARCH
It is evident that ASD and FXS are disorders affecting the synapse and synaptic plasticity. Human and animal model studies indicate that iGluR expression, trafficking and functions are altered, resulting in altered synapse development and plasticity in a brain region- and disorder-specific manner. The studies point to complex molecular and cellular neurobiological changes involving iGluRs and their associated proteins in ASD and FXS, and indicate that several pharmacological drugs may be used therapeutically. It is notable that brain changes in iGluRs may be reversed using drugs acting at other receptors such as mGlu5, D1, M1 muscarinic acetylcholine, and GABA B receptors. Further clinical, animal and molecular/cellular studies are necessary to understand in greater detail the changes of iGluRs in various ASD subtypes. For example, NLGN3 genetic changes in various mouse models can cause increases or decreases in AMPAR-mediated transmission in the hippocampus, or enhanced GluA2 S880 phosphorylation. Similarly, deletion of Shank2 in mice is associated with decreased or enhanced function of NMDARs. Animal model studies suggest that there are reduced levels of AMPARs in several brain regions in FXS and Shank3 deletion syndrome, while in GRIP1/2 KO mice there are increased levels of AMPARs. It is necessary to understand well the iGluR-dependent alterations and glutamatergic hypo- vs. hyper-function in the ASD brain in order to select appropriate pharmacologic treatments. Importantly, the iGluRs and associated proteins and neuronal signaling pathways which are altered may provide translational biomarkers for ASD and FXS. In the future, it will be necessary to better characterize the timing of the changes affecting iGluRs during development, which will be useful for selecting the optimal therapeutic window.
Overall, it can be concluded that animal models and in vitro cellular models are very useful for understanding neurobiological pathways affecting iGluRs in ASD and FXS, with the caveat that they cannot reproduce all characteristics of the human disorders. This may be due to species-specific differences in neuroreceptor and neurotransmitter systems in brain and complex behaviors. It is important to note that the majority of the existing animal models of ASD are models of monogenic forms of ASD such as FXS, TSC, Rett and Angelman syndromes, and single-gene knockout or knock-in animal models. In humans, ASD may be the result of defects in many genes, which interact with environmental factors, although the nature of the genetic changes is not well understood. Known environmental factors that may affect iGluRs in ASD include stress, which exacerbates ASD and FXS and affects synaptic plasticity and AMPA receptor trafficking, and cytokine production due to infection. Since iGluRs are exquisitely regulated by experience, many environmental factors may modulate their functions, including nutrition, environmental toxins and infections. Future studies should address the significance of other factors which regulate the expression and functions of iGluRs. Ultimately, the studies on ASD neurobiology involving iGluRs and the development of new pharmacologic treatments have to be validated in humans.
"year": 2014,
"sha1": "672759d31b0ed6fde666f8a8fdbb5a157dfc4511",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc3915351?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "672759d31b0ed6fde666f8a8fdbb5a157dfc4511",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Magnetosphere–Ionosphere Drivers of Transient‐Large‐Amplitude Geomagnetic Disturbances: Statistical Analysis and Event Study
We present a comprehensive statistical analysis of high‐frequency transient‐large‐amplitude (TLA) magnetic perturbation events that occurred at 12 high‐latitude ground magnetometer stations throughout Solar Cycle 24 from 2009 to 2019. TLA signatures are defined as one or more second‐timescale dB/dt intervals with magnitude ≥6 nT/s within an hour event window. This study characterizes high‐frequency TLA events based on their spatial and temporal behavior, relation to ring current activity, auroral substorms, and nighttime geomagnetic disturbance (GMD) events. We show that TLA events occur primarily at night, solely in the high‐latitude region above 60° geomagnetic latitude, and commonly within 30 min of substorm onsets. The largest TLA events occurred more often in the declining phase of the solar cycle when ring current activity was lower and solar wind velocity was higher, suggesting an association with high‐speed streams caused by coronal holes and subsequent corotating interaction regions reaching Earth. TLA perturbations often occurred preceding or within the most extreme nighttime GMD events that have 5–10 min timescales, but the TLA intervals were often even more localized than the ∼300 km effective scale size of GMDs. We provide evidence that TLA‐related GMD events are associated with dipolarization fronts in the magnetotail and fast flows toward Earth and are closely temporally associated with poleward boundary intensifications (PBIs) and auroral streamers. The highly localized behavior and connection to the most extreme GMD events suggest that TLA intervals are a ground manifestation of features within rapid and complex ionospheric structures that can drive geomagnetically induced currents.
The largest and longest space weather events are generally considered to pose the greatest threat to technological infrastructure. These events typically cause geomagnetic field disturbances that last from tens of minutes to several hours and have peak derivative amplitudes exceeding 8 nT/s (Kappenman, 2006). However, recent studies have shown that more rapid and localized processes are also capable of generating geomagnetically induced currents (GICs) (Engebretson et al., 2019a, 2021; Ngwira et al., 2015; A. Pulkkinen et al., 2015). Case studies of some of these processes were presented by Belakhovsky et al. (2019) and include sudden commencements (SCs), dayside traveling convection vortices (TCVs), nightside geomagnetic disturbance (GMD) events, and irregular Pi 3 pulsations. All of these space weather processes have timescales of 1-10 min and a frequency range of 1-22 mHz.
Higher-frequency Pi 1 and Pi 2 magnetic pulsations with irregular waveforms and periods of 1-40 and 40-150 s, respectively (Jacobs et al., 1964), have long been studied for their role in substorm dynamics. Pi 2 waves are commonly associated with the development of the substorm current wedge (SCW; Atkinson, 1967; McPherron et al., 1973); the polarizations of Pi 2 magnetic pulsations on the ground have been used to identify the location of the SCW (Lester et al., 1983). Pi 1 pulsations have also been observed in association with substorm onsets (Lessard et al., 2006) and have been shown to be caused by local ionospheric enhancements and particle precipitation (Arnoldy et al., 1987; Engebretson et al., 1983).
While Pi 1-2 type magnetic pulsations are clearly associated with substorm processes, disturbances at these frequencies are not generally associated with GIC activity. Magnetic perturbations in the Pi 1-2 frequency range with second timescales are less studied in the context of GICs, as they are incapable of directly driving large currents through conductors on the surface of Earth. However, it has been shown recently that magnetic field perturbations in this frequency range are an important aspect of larger space weather events that can cause GICs. McCuen et al. (2021) found that high-frequency transient-large-amplitude (TLA) dB/dt intervals (17-1,000 mHz; 1-60 s periods) with derivative amplitudes greater than 6 nT/s often occur prior to or within many of the most intense nighttime GMD events that could drive GICs. Any number of TLA dB/dt intervals that occur within a one-hour window at a single station location are referred to collectively as a TLA event. Nighttime GMDs are large, isolated geomagnetic perturbations with overall amplitudes of hundreds of nanotesla and 5-10 min periods (Engebretson et al., 2019a). These events are often associated with substorm onsets but do not require substorm activity to occur (Engebretson et al., 2021).
It is shown in McCuen et al. (2021) that TLA dB/dt intervals are often related to nighttime GMDs and auroral substorms; however, this relationship is complex. TLA dB/dt with Pi 1-2 pulsation periods are sometimes involved in substorm processes but do not always occur in close temporal proximity to substorm onsets or geomagnetic storms. While SCs have previously been thought to be a primary driver of the most rapid and large-amplitude magnetic field perturbations (Kataoka & Ngwira, 2016), there was only one SC-related TLA event despite five other SC events that occurred in 2015 while the stations were located on the dayside. Rather than SCs and large geomagnetic storms, the largest TLA events were most often associated with smaller-scale processes like GMDs and substorms, suggesting that small-scale ionospheric currents are involved in driving these large-amplitude, high-frequency signatures.
These high-frequency TLA magnetic field intervals show a clear relation to other GIC-causing space weather events; however, the exact role these variations play within and in association with larger events is not yet known. The goals of this study are to (a) more broadly understand the behavior of TLA events throughout the solar cycle, (b) more clearly define how high-frequency perturbations behave within larger space weather events, especially nighttime GMDs, and (c) determine the small-scale ionospheric currents and space weather phenomena that give rise to these disturbances. We analyze TLA dB/dt events in magnetometer data from multiple arrays that span the high-latitude region of North America throughout Solar Cycle 24. We discuss these events in the context of other space weather phenomena and suggest possible physical mechanisms for their generation based on the evidence presented.
Data
The data used in this study are from multiple magnetometer arrays. Table 1 gives the geographic and corrected geomagnetic (CGM) coordinates for the stations as well as the array each station is a part of; the map shown in Figure 1 shows the locations of these stations in CGM coordinates. The details of each array and instrumentation are outlined below. It is important to note that we also surveyed 10 stations located in the midlatitude and equatorial regions and no TLA events were identified below 61° geomagnetic latitude.
1. The Magnetometer Array for Cusp and Cleft Studies (MACCS) is a system of magnetometers located in north-east Nunavut, Canada from about 65° to 80° geomagnetic latitude (Engebretson et al., 1995). MACCS is operated by Augsburg University and the University of Michigan and is funded by the National Science Foundation (NSF). The MACCS stations contain fluxgate magnetometers with axes aligned with the Earth's […]
4. The AUTUMNX array (… et al., 2016) is located in the eastern region of Canada. The AUTUMNX instruments are fluxgate magnetometers provided by UCLA that measure the magnetic field with 0.01 nT resolution at two samples/s and in local geomagnetic coordinates.
5. THEMIS Ground-Based Observatory (GBO) systems (Russell et al., 2008) are a part of the larger collaboration of stations that contribute magnetic data to the THEMIS Ground Magnetometer (GMAG) cooperative. THEMIS GBO stations are operated by UCLA, contain UCLA instruments as in (4), and thus have the same resolution, measurement frequency, and coordinate system as mentioned above.
Methodology
A high-frequency TLA event is defined as a 1-hr period at a single station in which there is at least one dB/dt interval with a timescale from 2 to 60 s and magnitude greater than 6 nT/s (subsequent ΔB > 60 nT). The lower derivative amplitude threshold of 6 nT/s was chosen as it is comparable to the 8 nT/s disturbances observed during the geomagnetic storm of March 1989 that caused significant power system damage (Kappenman, 2006). Note that GMDs during storms persist for timescales much longer than 60 s, but the 6 nT/s threshold serves as a baseline for what is considered to be large dB/dt.
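The threshold logic above can be expressed as a small predicate. This is a minimal sketch, not the authors' code; the function name and the nT/second units convention are our assumptions.

```python
def is_tla_interval(delta_b_nt: float, duration_s: float) -> bool:
    """Check whether a monotonic field change qualifies as a TLA dB/dt
    interval under the definition above: a 2-60 s timescale with a
    rate-of-change of magnitude >= 6 nT/s. Inputs are in nT and seconds."""
    if not (2.0 <= duration_s <= 60.0):
        return False
    return abs(delta_b_nt) / duration_s >= 6.0

# An 80 nT swing over 10 s is 8 nT/s and qualifies; the same swing
# spread over 30 s (~2.7 nT/s) does not.
print(is_tla_interval(80.0, 10.0))  # True
print(is_tla_interval(80.0, 30.0))  # False
```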
TLA events often present as clusters of these dB/dt intervals in multiple components of the magnetic field data at a given station. An example of a TLA event is shown in the bottom panel of Figure 2, showing magnetic field data for ∼1 hr at the RANK station on 17 December 2017. The specific TLA dB/dt intervals are marked near 06:10 UT with the open circles (start of the interval) and closed circles (end of the interval) in each component. This TLA event occurred within an overall nighttime GMD event that began around 06:05 UT and peaked at about 06:15 UT.
The other station data are shown in the panels above RANK in Figure 2 because they show nighttime GMD events from about 06:10 to 06:20 UT as well as 06:30 to 06:40 UT (Engebretson, 2023). The GMDs at both RBY and CDR have maximum derivative amplitudes exceeding 10 nT/s. There is also a nighttime GMD that occurred at RANK beginning at about 06:05 UT. The RANK station measured TLA dB/dt that also exceeded 10 nT/s, and the overall magnetic field change during the entire interval was largest at RANK, with a ΔBz of nearly 800 nT from 06:09 to 06:15 UT.
In order to identify instances of TLA events in ground magnetic field data, an automated dB/dt search procedure was designed. The automated procedure is necessary because the characteristics of TLA dB/dt intervals are very similar to those of magnetometer "noise," referred to in this study as signals resulting from outside interference or instrumentation error that do not have geophysical sources. Noise-type signals in magnetometer data are often very short-timescale and large-amplitude dB/dt intervals, so identifying TLA signatures and distinguishing them from noise-type signals is an imperative aspect of this research.
The automated GMD classifier is described thoroughly in McCuen (2023) and discussed briefly here. The basic algorithm functions by partitioning every hour of data consecutively based on the number of data points and the measurement frequency (i.e., for the 2 Hz MACCS data, the first hour partition is the first 7,200 data points and the second hour partition is the following 7,200 data points). Then, instances where the sign of the slope of the magnetic field changes and remains the same for at least two measurement cycles (i.e., for 1 consecutive second for 2 Hz data or 2 s for 1 Hz data) are identified. The consistency of the sign change for two cycles is required to reduce single-point errors/spikes in the data or highly variable data due to noise interference.
After the slope sign changes are identified, the time difference between each slope sign change is calculated (i.e., the Δt between each change of slope direction) as well as the change in magnetic field strength (ΔB) and the rate-of-change of the interval (dB/dt). Finally, this first step of the process identifies all of the intervals between changes of the sign of the slope that last from 1 to 60 s and have a rate-of-change of at least |6| nT/s.
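The segmentation and screening steps just described can be sketched as follows. This is a simplified illustration under our own naming, not the published classifier: it omits the two-cycle persistence check used to suppress single-point spikes.

```python
import numpy as np

def monotonic_intervals(b: np.ndarray, fs: float):
    """Split one magnetic field component `b` (nT, sampled at `fs` Hz)
    into runs of constant slope sign, returning (duration_s, delta_b,
    db_dt) for each run. Simplified sketch, not the authors' code."""
    slope_sign = np.sign(np.diff(b))
    # sample indices where the slope direction changes
    change = np.nonzero(np.diff(slope_sign) != 0)[0] + 1
    bounds = np.concatenate(([0], change, [len(b) - 1]))
    out = []
    for i0, i1 in zip(bounds[:-1], bounds[1:]):
        dt = (i1 - i0) / fs
        db = b[i1] - b[i0]
        if dt > 0:
            out.append((dt, db, db / dt))
    return out

def tla_candidates(intervals, min_rate=6.0, t_min=1.0, t_max=60.0):
    """First-pass selection: 1-60 s runs with |dB/dt| >= 6 nT/s."""
    return [(dt, db, r) for dt, db, r in intervals
            if t_min <= dt <= t_max and abs(r) >= min_rate]

# Synthetic 2 Hz series: flat, then a 70 nT rise over 10 s, then flat.
b = np.concatenate([np.zeros(20), np.linspace(0, 70, 21)[1:], np.full(20, 70.0)])
print(tla_candidates(monotonic_intervals(b, 2.0)))  # one ~7 nT/s interval
```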
The next steps of the algorithm incorporate a filtering process with requirements derived from the statistical analysis of geophysical and noise-type events described in McCuen et al. (2023). The first condition is that at least one dB/dt interval identified from the first step in each hour window of data lasts 10 s or more. This condition is defined because all of the geophysical events identified in the MACCS data for 2015 met this criterion, whereas a large number of hour windows with only noise-type dB/dt exhibited only intervals that lasted less than 10 s. If any dB/dt identified in an hour window lasts more than 10 s and has a derivative amplitude of at least |6| nT/s, the ratio filter is performed.
This ratio filter finds the ratio of the number of second-timescale dB/dt > 6 nT/s intervals to the total number of dB/dt intervals within the hour (in which the magnetic field changes for at least two measurement cycles, for any timescale and magnitude). If this ratio is less than 5%, then the dB/dt intervals identified in the hour advance to the next step in the process. This condition is implemented because many noise-type events in magnetometer data consist of more than a 5% concentration of large, second-timescale dB/dt (hundreds, sometimes thousands of dB/dt intervals within an hour period), so this ratio filter excludes instances that are highly likely to be a result of noise interference rather than a geophysical source. The 5% ratio threshold is another requirement derived from the analysis of McCuen et al. (2023).
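The two screening conditions (the 10 s duration requirement and the 5% ratio filter) can be sketched as below, assuming each interval is represented as a (duration_s, delta_b, db_dt) tuple; this is our illustration, not the published code.

```python
def passes_hour_filters(intervals, min_rate=6.0):
    """Apply the two screening conditions described above to the full
    list of slope-sign runs found in an hour window (any timescale and
    magnitude). Sketch only, not the authors' implementation."""
    tla = [(dt, db, r) for dt, db, r in intervals
           if 1.0 <= dt <= 60.0 and abs(r) >= min_rate]
    if not intervals or not tla:
        return False
    # condition 1: at least one qualifying interval lasting >= 10 s
    if not any(dt >= 10.0 for dt, _, _ in tla):
        return False
    # condition 2: noise-type hours are dominated by large
    # second-timescale swings, so require < 5% concentration
    return len(tla) / len(intervals) < 0.05

# Two large intervals among 100 runs (2% concentration) pass;
# an hour made entirely of large swings (100%) is rejected as noise.
good = [(5.0, 1.0, 0.2)] * 98 + [(12.0, 84.0, 7.0), (3.0, 21.0, 7.0)]
print(passes_hour_filters(good))                     # True
print(passes_hour_filters([(12.0, 84.0, 7.0)] * 10)) # False
```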
Finally, if the first two qualifiers are met (i.e., if there are second-timescale dB/dt > 6 nT/s intervals and at least one interval with a 10-60 s timescale, and the 5% ratio filter is passed), then a support vector machine (SVM) classification is performed on the dB/dt intervals to classify them as either geophysical TLA or noise type.
Solar Cycle Dependence of TLA Events
In this section, a subset of stations was selected to examine the solar cycle dependence of TLA events. This subset excludes TLA event data from the MACCS stations as well as KJPK and SALU. These stations were selected because there is more uniform data availability throughout the solar cycle (see Table S1 in Supporting Information S1 for yearly data availability). The analysis in this section and the figures shown use this subset of events.
To explore the occurrence of TLA events in comparison to both sunspots and substorms throughout Solar Cycle 24, we reference Figure 3: the number of substorms per month from late 2009 to 2020 (shown in blue) and the number of TLA events per month (shown in black). The number of substorm onsets is from the SuperMAG substorm event list (Newell & Gjerloev, 2011); this method defines substorm onset as the initial minute in which the SML index drops sharply by 45 nT in the next 3 min and has a sustained negative bay of at least 100 nT over the following half-hour. The SML index is the lower envelope of N-component magnetic field measurements at stations between 40° and 80° magnetic north and reflects the maximum strength of the westward auroral electrojet. This index makes up half of the overall SuperMAG electrojet index, SME, derived by subtracting the SML values from the upper envelope of N-component values (SMU) from the same set of stations. In Figure 3, the vertical red dashed lines show the times of Solar Minimum and Maximum for Solar Cycle 24 (note that the following Solar Minimum was in April 2020, just beyond the range shown in Figure 3). Figure 3 shows that while the number of TLA events throughout a 1-month period is much smaller than the number of substorm onsets, TLA events often occurred in higher numbers when substorm onsets also occurred more often in a given month. Further, this figure also shows that both TLA events and substorm onsets increased during the declining phase of the solar cycle from mid-2014 to 2019.
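A simplified reading of the quoted onset definition can be sketched as follows. This is our illustration of the two stated conditions only, not the published Newell & Gjerloev (2011) implementation, which includes additional criteria.

```python
import numpy as np

def onset_candidates(sml: np.ndarray):
    """Flag minute indices t where SML (1-min values, nT) drops by at
    least 45 nT over the next 3 min and then stays, on average, at
    least 100 nT below SML(t) over the following half hour. Simplified
    sketch of the criterion quoted in the text."""
    onsets = []
    for t in range(len(sml) - 33):
        sharp_drop = sml[t + 3] - sml[t] <= -45.0
        sustained_bay = np.mean(sml[t + 4:t + 34]) - sml[t] <= -100.0
        if sharp_drop and sustained_bay:
            onsets.append(t)
    return onsets

# Synthetic SML: quiet for 10 min, then a -200 nT bay for 50 min.
sml = np.concatenate([np.zeros(10), np.full(50, -200.0)])
print(onset_candidates(sml))  # candidates just before the drop
```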
Next, the association of TLA event occurrences with ring current activity and solar wind speed throughout the solar cycle is examined. Figure 4a displays the probability density of all SuperMAG ring (SMR) current index (Newell & Gjerloev, 2012) values for 2009-2019 (shown in red) compared with the SMR values during the minute of the maximum dB/dt interval of each TLA event from 2009 to 2019. Figure 4a shows that SMR values have a narrow distribution that peaks near zero with an average value of −6 nT and a standard deviation (σ) of ∼15 nT, while the distribution for TLA events is shifted to more negative values, peaking from 0 to −50 nT (mean value of −54 nT) with a larger σ of 43 nT and a long tail that extends to −175 nT. While a majority of TLA events occur for only slightly elevated SMR values, this distribution shows that some TLA events are related to very active geomagnetic storms.
Figure 4b shows the probability distribution of all solar wind flow speed values, Vsw, for every minute throughout the solar cycle (red) compared with the solar wind flow speed during the minute of the maximum dB/dt interval of each TLA event (blue). The Vsw values are from the OMNI database (time shifted to the Earth's bow shock nose; King & Papitashvili, 2020). These distributions show that Vsw peaks between 300 and 400 km/s, with a mean value of 412 km/s, and that Vsw during TLA events is much higher on average (mean of 578 km/s), with a majority of values from 450 to 700 km/s.
Figure 4c shows the distribution of the SML values for the minute of substorm onset for all substorms that occurred from 2009 to 2019 (pink) and the SML values of the substorm onsets that occurred within 60 min of a TLA event (blue). The distribution of all substorm onset SML values has a mean of −234 nT and a σ of 189 nT, while the distribution of SML for substorm onsets that occurred within 1 hr of TLA events peaks at more negative SML, with a mean of −465 nT and a wider spread (σ of 316 nT). While Figure 3 shows that there was often a larger number of TLA events during days when there was also a larger number of substorm onsets, Figure 4c shows that TLA events often occurred within 1 hr of substorm onsets with more negative SML values, that is, more disturbed conditions of the westward electrojet (WEJ).
Figure 3 shows that TLA events occur more often during the declining phase of the solar cycle, when substorm activity is increased and large geomagnetic storms driven by coronal mass ejections (CMEs) occur less often. This observation, together with Figures 4a and 4b showing that TLA events are more common during slightly elevated ring current activity and fast solar wind speeds, may indicate that TLA events are related to weak geomagnetic storms caused by coronal holes and subsequent corotating interaction regions (CIRs), which are most frequent in the descending phase (Hajra & Sunny, 2022) and give rise to fast flow speeds that can cause mild ring current activations.
Latitude and Local Time Dependence
In this section, we examine the latitude and magnetic local time (MLT) dependence of TLA events. For these purposes, a subset of the full database of TLA events was created so that there are an equal number of stations used from each magnetic latitude range. This subset consists of TLA events identified in all 12 stations for the years 2015-2019 only (excluding the two AUTUMNX stations used in the event analysis of Section 7; see Table S1 in Supporting Information S1). There are three stations in each magnetic latitude range: 61°-64°, 65°-69°, 71°-74°, and 75°-78°. It is also important to note that we surveyed seven magnetometer stations in the midlatitude region from 30° to 60° MLAT and three stations in the equatorial region below 30° MLAT for all years of the solar cycle, and we found no geophysical TLA signatures at any magnetic latitudes lower than 60°.
Figure 5 shows two distributions of the number of TLA events based on the magnetic latitude at which they occurred (a) and the MLT at which they occurred (b). The bars in Figure 5a each represent the number of events at each station, except for the bar showing events from 61° to 62°, which includes both the ATHA and MEA stations with very close separation distance. The degrees of magnetic latitude that show zero events do not necessarily indicate that zero TLA events occurred at those latitudes. Rather, these empty spaces signify the highly localized nature of TLA events and the need for a dense magnetometer network in order to detect them over all latitudes of the high-magnetic-latitude region. Figure 5a shows that a majority of TLA events occurred in the 65°-69° range, with a slightly smaller population of events in the 71°-74° range. The equatorward boundary of the auroral oval is nominally around 65°; during the expansion phase of substorms the auroral oval can extend to 62°-64° and 68°-70° in the midnight sector (Akasofu, 1964).
Figure 5b shows the local time distribution of TLA events for each hour of MLT. This plot shows that TLA events are primarily nighttime events, with two distinct local time populations. The majority of events occurred from 17 to 01 MLT, and a much smaller number of TLA events occurred from 01 to 08 MLT. About 3% of the total TLA events from 2015 to 2019 occurred during the daytime (referred to here as 08 to 17 MLT, outside of the two nighttime populations). Daytime events were most commonly associated with geomagnetic storms (10 events were related to SCs, 5 occurred during the main phase, and 2 during the first day of recovery); however, 3 daytime events were unrelated events. Unrelated events, which occurred more than 60 min from a substorm onset and in the absence of a CME-driven geomagnetic storm, comprised just over 8% of the TLA events in the 2015-2019 subset.
Connection to Substorms and GMD Events
McCuen et al. (2021) analyzed TLA events solely from five MACCS stations for the year 2015 and showed that they were strongly associated with substorms and GMD events. Nighttime GMDs are magnetic perturbations with amplitudes of hundreds of nT and periods of 5-10 min (Engebretson et al., 2019a). These events are generally localized to a ∼275 km radius, and they occur in two distinct magnetic local time populations in the pre- and post-midnight regions (Engebretson et al., 2019b). GMDs are often associated with substorm onsets, but substorms are not necessary to cause them (Engebretson et al., 2021); the pre- and post-midnight populations show different temporal relations to onsets, indicating that there may be distinct M-I drivers for GMDs depending on MLT.
Nighttime GMDs have been observed to coincide with dipolarizations in the magnetotail and subsequent auroral streamers (Engebretson et al., 2019b) as well as omega bands (Engebretson et al., 2020). The spherical elementary current system (SECS) analysis of nighttime GMDs by Weygand et al. (2021) found that a majority of GMDs occurred underneath the WEJ; many of the pre-midnight events occurred within the Harang current system, while the remaining pre-midnight as well as many of the post-midnight events occurred underneath the downward region 1 or upward region 2 field-aligned current (FAC) systems.
In the present analysis, TLA events (from the same subset of TLA events used in Section 5 from 2015 to 2019, excluding KJPK and SALU; see Table S1 in Supporting Information S1) are analyzed in comparison with a data set of GMD events (Engebretson, 2023) that consists of nighttime GMDs that occurred at the RBY, CDR, and PGG stations from 2015 to 2019. In this subset of GMD events, there are 843 hour windows in which GMDs occurred, and 236 of them exhibited associated TLA dB/dt intervals. The most extreme GMD events, with derivative amplitudes exceeding 12 nT/s, occurred within 154 hour windows, and a large majority of these (124 windows, 81%) included TLA dB/dt intervals within the hour window.
For GMDs with derivative amplitudes over 20 nT/s, this percentage is even higher: from 2015 to 2019, 28 hour windows included GMDs > 20 nT/s and 26 of these windows (93%) included TLA intervals as well.
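The quoted percentages follow directly from the counts given above and can be checked explicitly:

```python
# Counts quoted in the text above.
extreme_windows = 154      # hour windows with GMDs > 12 nT/s
extreme_with_tla = 124     # of those, windows also containing TLA dB/dt
print(round(100 * extreme_with_tla / extreme_windows))  # -> 81 (%)

over20_windows = 28        # hour windows with GMDs > 20 nT/s
over20_with_tla = 26
print(round(100 * over20_with_tla / over20_windows))    # -> 93 (%)
```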
Of those 124 hour windows with extreme GMDs and associated TLA dB/dt, 91 hour windows contain GMDs observed at multiple stations; 58 of these windows have the largest TLA dB/dt at the station location of the largest GMD. There are 78 hour windows in which extreme GMDs occur at multiple stations and TLA dB/dt intervals occur at fewer station locations than the GMDs. In other words, TLA events were often even more localized than the spatial extent of the GMDs; further, when the nighttime GMDs occurred at more than one station, the largest TLA dB/dt commonly occurred at the specific location of the largest GMD.
To examine the relationship between substorms, nighttime GMDs, and TLA dB/dt events, Figure 6 shows the number of TLA and GMD events that occurred from 2015 to 2019 based on their temporal proximity to the nearest substorm onset (a) and the longitudinal difference of the TLA and GMD events from the location of the substorm onset (b). These substorm onset times and locations are from the SuperMAG substorm event list defined by Newell and Gjerloev (2011). Because TLA events often consist of multiple dB/dt signatures, the time and location of each event is marked by the maximum dB/dt interval of that TLA event. The blue bars in Figure 6 show the number of all TLA events, the orange bars show TLA events that were related to GMDs within the same hour window, and the red bars show the number of TLA events related to extreme GMD events (>12 nT/s).
Figure 6a shows that TLA events most commonly occur within 20 min of substorm onset, with an average onset delay of 5.5 min (σ = 55.4 min). TLA events also occurred in the 30 min prior to onset, but much less frequently than in the 30 min after onset. The distribution of hour windows containing GMD-related TLA events is wider than that of the total TLA event population, with an average delay of 7.9 min and σ of 72.6 min. Figure 6a also shows that extreme GMD events with associated TLA intervals comprise about half of all GMD-related TLA events, meaning that when GMD events occur with associated TLA dB/dt, they are very likely to be the most extreme GMD events. This distribution also shows that hour windows containing extreme GMD events with associated TLA intervals can occur well beyond 20 min from the most recent substorm onset, and a sizable population of TLA intervals occurs more than 2 hr from onset.
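Delay statistics of this kind can be reproduced with simple nearest-onset matching. A sketch with hypothetical event and onset times (the function and numbers are illustrative, not the study's actual pipeline):

```python
import statistics

def onset_delays(event_times, onset_times):
    """Signed delay (minutes) from each event to its nearest substorm
    onset; positive values mean the event followed the onset."""
    return [t - min(onset_times, key=lambda o: abs(t - o))
            for t in event_times]

# Hypothetical times in minutes since the start of a day
onsets = [60, 300, 1020]
events = [65, 62, 310, 1015, 1140]
delays = onset_delays(events, onsets)
print(delays)                   # [5, 2, 10, -5, 120]
print(statistics.mean(delays))  # 26.4
```

Note how a single late event (120 min after onset) pulls the mean well above the typical delay, which mirrors the large σ values relative to the mean delays reported above.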
Figure 6b shows the number of hour windows that occurred for a given difference in longitude from the location of substorm onset identified by Newell and Gjerloev (2011) to the location of the maximum dB/dt of TLA events.
Here, negative degrees of longitude difference signify that the TLA and/or GMD events within the hour window occurred to the west of the location of the substorm onset. The conversion of longitudinal degrees to kilometers ranges from 65 km per 1° of longitude at 54° geographic latitude (the latitude of the lowest station) to 38 km per 1° at 70° geographic latitude (the latitude of the highest station), for an average of about 52 km per 1° of longitude over this latitude range.
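The quoted conversion factors follow from the spherical-geometry relation of roughly 111 km × cos(latitude) per degree of longitude. A minimal check (assuming a spherical Earth of radius 6371 km):

```python
import math

EARTH_RADIUS_KM = 6371.0

def km_per_degree_longitude(lat_deg):
    """Ground distance (km) spanned by 1 degree of longitude at a given
    geographic latitude, assuming a spherical Earth."""
    return math.radians(1.0) * EARTH_RADIUS_KM * math.cos(math.radians(lat_deg))

print(round(km_per_degree_longitude(54.0)))  # 65  (lowest station)
print(round(km_per_degree_longitude(70.0)))  # 38  (highest station)
```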
The distribution in Figure 6b shows that most TLA and TLA-related GMD events occurred within ±20° longitude of the substorm onset, and more often to the west of the onset location than to the east. Further, Figure 6 shows that many hour windows containing extreme GMD events with associated TLA intervals occurred very far from the location of the substorm onset, in many cases more than 80° of longitude west of the onset location. Taken together, these two panels show that a majority of TLA events are closely related to substorm activity, but also that many TLA events, especially those related to extreme GMDs, can be well separated in both time and space from substorm onsets.
Analysis of 30 September 2016 GMD/TLA Events
Figure 7 shows GMD events that occurred at six stations on 30 September 2016. The data for each station are plotted from top to bottom in order of decreasing magnetic latitude. Within these GMDs, TLA intervals occurred. This event took place during moderate geomagnetic activity, during what appears to be recovery from a CIR-driven storm (the SMR index reached −59 nT at 09:40 UT on 29 September, then recovered and fluctuated between ∼−30 nT and −10 nT throughout 30 September). A substorm onset occurred at 01:10 according to the method of Newell and Gjerloev (2011). Substorm auroral onsets are determined by major auroral intensifications (Akasofu, 1964; Nishimura et al., 2010); by this definition, a small substorm auroral onset occurred at 01:05 UT and a larger onset occurred at 01:20 UT. The SML index increased ∼160 nT from 01:00 to 01:10 UT. In order to analyze the ionospheric behavior during this interval, the SECS method developed by Amm and Viljanen (1999), and applied to magnetometers in North America and Greenland by Weygand et al. (2011), was used to analyze the horizontal equivalent ionospheric currents and the vertical current amplitudes. Figure 8 displays SECS maps for pertinent minutes throughout this event, with the stations in Figure 7 marked as colored circles. The red shaded regions indicate locations of upward currents perpendicular to the ionosphere and the blue shaded regions indicate downward currents, with the degree of shading signifying the strength of the current. Further, Movie S1 shows a mosaic composition of images taken every 15 s from THEMIS All-Sky Imagers (ASIs) at four stations in this region for the hour interval in which this event occurs.
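For readers unfamiliar with SECS, the method expands the ionospheric current into elementary basis systems; the divergence-free elementary system of Amm and Viljanen (1999) carries a purely azimuthal sheet current around its pole. A sketch of that basis function (the scaling current I0 and the ionospheric shell radius are illustrative assumptions, not values from this study):

```python
import math

IONO_RADIUS_M = 6371e3 + 110e3  # assumed ionospheric shell at ~110 km altitude

def secs_df_current(theta_deg, I0=1.0, R=IONO_RADIUS_M):
    """Azimuthal sheet-current density (A/m) of a divergence-free spherical
    elementary current system at angular distance theta from its pole:
    J(theta) = I0 / (4*pi*R) * cot(theta/2)  (Amm & Viljanen, 1999)."""
    theta = math.radians(theta_deg)
    return I0 / (4.0 * math.pi * R) / math.tan(theta / 2.0)

# The current is strongest near the pole of the elementary system and falls
# off with angular distance, which is how SECS maps localize current structures:
for theta in (1.0, 10.0, 90.0):
    print(theta, secs_df_current(theta, I0=1e4))
```

Fitting amplitudes of many such systems to the ground magnetometer data yields the equivalent-current and vertical-current maps shown in Figure 8.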
The two southern-most stations, FCC and KJPK, measured slight disturbances near the time of the substorm onset at ∼01:10 UT. The SECS map at 01:10-01:11 UT (not shown) indicates a slight and localized intensification of the upward current and mild south-eastward horizontal currents above the FCC and GILL stations. At 01:22 (Figure 8a, marked as a dashed line in the KJPK and FCC panels of Figure 7), an up-down current pair appeared, spanning from east to west over Hudson Bay. At this time, the RANK and SALU stations (mauve and orange, respectively) are both underneath the downward R1 currents, and RANK and FCC both lie on the east side of the Harang current system. A moderate westward horizontal current is shown in the shear region between the up-down current pair. Magnetic disturbances were observed at FCC and KJPK near this time, most notably at FCC, but not at the stations north of FCC.
At 01:26 (Figure 8b, marked in the RANK and SALU panels of Figure 7), the SECS maps show that the current pair begins to extend northward. GMDs were seen at RANK and SALU, with TLA intervals in the By- and Bz-components at RANK. At this time, SALU is still underneath the downward R1 currents, while RANK is located in the boundary region between downward and upward R1 currents. From about 01:26 to 01:28, strong WEJ currents are observed in the SECS maps extending over SALU, turning slightly northward to the north of RANK and southward to the south of RANK.
At 01:32-01:33 (Figure 8c, marked in the CDR and RBY panels of Figure 7), the upward portion of the current pair in red (to the south of the downward portion in blue) separates into two localized upward vertical current systems on either side of the north edge of Hudson Bay. At this time, WNW horizontal currents are enhanced overhead of the upward current lobes. GMDs were recorded at the two northern-most stations, CDR and RBY, with peaks near 01:32-01:33 and TLA intervals in the x-component at CDR and in the x- and z-components at RBY. While there are large positive excursions in the z-components at both CDR and RBY around 01:33 UT, the SECS maps show that these disturbances appear to be caused by separate, localized upward current systems overhead of each station on either side of northern Hudson Bay. Over the next 10 min, from 01:32 to 01:42, the upward current on the east side weakens, while the upward current to the west moves slightly northwest and intensifies at 01:37 (Figure 8d), when TLA dB/dt are measured at RBY (Figure 7, top panel), before weakening at 01:39, when the GMDs at all four northern stations begin to subside.
The ASI images in Movie S1 are consistent with the magnetic field data and SECS maps. A relatively stationary east-west auroral arc appeared just north of KJPK soon after the time of the first substorm auroral onset and extended across the bay over FCC by 01:18. The second substorm auroral onset started at 01:20 and initiated a series of brightenings. At 01:22, a poleward boundary intensification (PBI) began and the arc extended northward, with auroral streamers that emerged from the arc at 01:26:30 in two distinct areas: (a) one to the west of SALU over RANK (where TLA dB/dt occur) and (b) another to the south of SALU. Note that there are three stationary streaks of light in all of the ASI images at the SNKQ station (to the southeast of CDR, to the southeast of SALU, and directly south of PGG); these are stationary throughout the ASI movie and are artifacts rather than streamers.
The arc portion (a) over RANK continued moving poleward, while a streamer emerged at 01:26:30 UT and moved equatorward. This streamer can be seen in Figure 9 to the south of RANK. Then, at 01:29:30, the arc broke up into a smaller part to the north and a longitudinally extending streamer to the south. The streamer north of RANK continued moving poleward and began to fade at 01:32, while the southern streamer moved equatorward and dissipated by 01:31.
The arc portion (b) south of SALU at 01:26:30 had two parts within it, one to the south extending slightly west of SALU and a stronger part SE of SALU. By 01:27:30, these two features were more distinct, extending in the NW-SE direction. The part south of SALU reached SALU at 01:27:45, while the eastern part moved equatorward. Both portions then retreated equatorward and faded away by 01:29. At the same time, a new streamer appeared NE of SALU, intensified as it moved equatorward, and dissipated by 01:31. From 01:31:30, a small streamer appeared over RANK, moved equatorward while extending longitudinally at 01:32:30, and then faded. During this time, another intensification occurred NE of SALU and streamers moved equatorward, then faded by 01:35:45.
Figure 10 shows magnetic field data measured by the GOES-13 spacecraft during this event. The field-line footprint of GOES-13 at this time is shown in Figure 1. Here, Bz (plotted in red) is parallel to the Earth's rotation axis, positive northward; Bx is in the Earthward direction perpendicular to Bz; and By is in the eastward direction perpendicular to Bx and Bz. The sharp increases in Bz (highlighted as gray panels) signify dipolarization fronts (DFs) in the magnetotail at geosynchronous orbit (Ohtani et al., 2020), the timing of which coincides with the timing of ionospheric current enhancements and subsequent GMDs measured on the ground.
The first substorm auroral onset occurred at 01:05 UT and a second onset occurred at 01:20 UT, both with DFs that occurred ∼2-3 min after. Perturbations were measured at the two southern-most stations, KJPK and FCC. The auroral poleward expansion shown in the ASI data corresponds well with the poleward progression of GMDs and the timing of the largest ionospheric currents. DFs are the leading edge of dipolarizing flux bundles (DFBs; Nakamura et al., 2002), defined as transient (∼40 s), localized (<∼3 R_E in X_GSM and Y_GSM) flux tubes carrying strong northward magnetic field (Liu et al., 2018). DFBs typically propagate at high speeds from the near-Earth reconnection region, efficiently transporting magnetic flux in short flow bursts referred to as bursty bulk flows (BBFs). Auroral streamers emerging from PBIs are considered to be the auroral signatures of BBFs (Henderson et al., 1998; Sergeev et al., 1999). Although no plasma flow velocity data are available in the magnetotail during this time, the DFs at GOES-13 paired with the PBI and streamers in the ionosphere appear to be evidence of a BBF event. Further, the double-onset nature of this case may indicate that the first auroral onset was a pseudo-breakup: a small, localized, substorm-like activation of auroral brightening that often precedes a full-scale substorm. Pseudo-breakups can be associated with localized dipolarization in the tail that does not cause a global reconfiguration of the magnetotail (Akasofu, 1964) but can generate a localized current wedge (T. I. Pulkkinen, 1996; T. I. Pulkkinen et al., 1998).
We suggest that the cause of the poleward progression in this event is the tailward retreat of the magnetotail reconnection region due to successive DFBs. As the magnetic field dipolarizes, the reconnection region shifts downtail, and this corresponds to the poleward shift of the larger magnetic footprint. Further, the Earthward propagation of fast flows in the plasma sheet causes the PBIs to extend equatorward and align in the north-south (NS) direction, forming auroral streamers (Ieda et al., 2016; Nakamura et al., 2011).
The overall structure of the GMDs is observed progressing northward; the peaks are measured closely in time at stations with similar magnetic latitudes, but the TLA intervals present are more longitudinally localized. For instance, RANK and SALU are about a degree apart in latitude and see relatively simultaneous peaks in all three components as the vertical current pair and auroral arc move northward; however, TLA dB/dt are measured only at RANK. Similarly, CDR and RBY show peaks at similar times, but many more TLA intervals are observed at RBY than at CDR. The stations on the western side of Hudson Bay (RANK, RBY), where the localized upward current structure was stronger, exhibited the majority of TLA intervals in this event. It appears that while the GMDs at each station are a response to the larger-scale (roughly 1,000 km) ionospheric currents, the TLA dB/dt are smaller-scale features of more localized FACs and auroral intensifications. Further, the SML index in this case does not reflect the timing of the TLA dB/dt, which align fairly closely with the auroral enhancements and features of the SECS maps. This discrepancy is likely due to the localization and rapidity of the ionospheric variations causing the TLA dB/dt; the SME index uses 1-min cadence magnetic field data with a sliding 30-min buffer (Newell & Gjerloev, 2011) that may not capture the small-scale ionospheric enhancements.
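The sampling argument can be made concrete: an oscillation in the Pi 1 band is essentially invisible to 1-min cadence data. A toy illustration with a hypothetical sinusoidal perturbation (the function, amplitude, and period are ours, not SuperMAG's processing):

```python
import math

def max_dbdt(period_s, cadence_s, amplitude_nT=100.0, duration_s=600):
    """Maximum finite-difference |dB/dt| (nT/s) of a sinusoidal magnetic
    perturbation when sampled at the given cadence."""
    n = int(duration_s // cadence_s)
    b = [amplitude_nT * math.sin(2.0 * math.pi * cadence_s * i / period_s)
         for i in range(n + 1)]
    return max(abs(y - x) for x, y in zip(b, b[1:])) / cadence_s

# A 30-s-period (Pi 1 band) oscillation of 100 nT amplitude:
print(max_dbdt(30, 1))   # ~20.8 nT/s resolved at 1-s cadence
print(max_dbdt(30, 60))  # ~0 nT/s at 1-min cadence: the signal is aliased away
```

This is why high-cadence data are needed to identify TLA dB/dt, and why 1-min-derived indices such as SML can miss them entirely.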
Discussion
Engebretson et al. (2019b) analyzed three separate GMD events that occurred during 2015. All three of these events occurred within 1 hr of a substorm onset as well as a dipolarization at GOES-13, indicating that they were generally related to fast flows from the magnetotail that penetrated the near-Earth plasma sheet. These events all exhibited a northward and westward spatial progression, and SECS maps during these events showed coinciding regions of localized horizontal current enhancements with ∼275 km radius. Two of these three events had TLA dB/dt intervals within the GMD: Event 1 on 11 November 2015 and Event 3 on 9 October 2015. In Event 1, TLA events occurred at two of the four stations that measured GMDs, and in Event 3, TLA dB/dt occurred at one of the four stations. In both Events 1 and 3, the locations of the GMDs that had associated TLA dB/dt are where the largest GMDs (>12 nT/s) occurred over the spatial extent of the disturbance; Event 2 exhibited no extreme GMDs and had no associated TLA signatures.
The TLA-related GMD events of the present study show some consistency with the spatial progression of those in Engebretson et al. (2019b): many TLA intervals occurred prior to the GMD at a more southern station (as shown in Figures 2 and 7), indicating northward progression of a fast ionospheric event. However, in comparison to substorm onsets, many TLA-related GMD events occurred to the west of the onset location defined by Newell and Gjerloev (2011), indicating an eastward progression in some cases. Rather than an eastward progression, this could be due to the overall northward progression of a larger ionospheric disturbance (i.e., poleward expansion of the WEJ [Olson & Rostoker, 1975]), but with more longitudinally localized variations (i.e., auroral streamers) causing the rapid TLA signatures at only some of the stations, as in the event analyzed in Section 7.

Wei et al. (2021) presented an analysis of intense dB/dt events on the ground that occurred on 7 January 2015. The perturbations occurred from 08:40 to 09:20 at 24 stations in midlatitude to high-latitude North America. During this event, large GMDs with TLA intervals within them occurred at SALU and KJPK. The study by Wei et al. (2021) included seven other stations that were also analyzed in the present study, and four of these stations exhibited TLA signatures (ATHA, MEA, GILL, and RANK, temporally in that order). The larger perturbations and the TLA events showed a northward progression. This event occurred just after a substorm onset and in close temporal response to a BBF event carrying multiple DFs that were detected by the Cluster spacecraft in the inner magnetosphere (∼−4 R_E). Wei et al. (2021) suggest that during this event, the large-scale SCW system is composed of multiple localized R1-sense FAC structures driven by multiple BBFs, as has been previously proposed (Birn & Hesse, 2014; Liu et al., 2013) and demonstrated (Liu et al., 2015, 2018).

Weygand et al. (2021) analyzed GMDs during 2015 and 2017 at CDR and KJPK and found that a majority of the events occurred within the WEJ and that the pre-midnight events often occurred beneath the Harang current system. The pre-midnight event on 30 September 2016 analyzed in Section 7 is consistent with these observations: the SECS maps at 01:22 and 01:26 (Figures 8a and 8b) show that FCC and KJPK were within the upward Harang current, with the WEJ flowing between this region and the downward Region 1 currents to the north, overhead of RANK and SALU.
Then, at 01:32 UT, localized and transient upward vertical currents appear separately above RANK to the west and CDR to the east; the upward current lobe on the west strengthened near RBY when TLA dB/dt were measured at 01:37 UT. These localized vertical currents were likely FACs caused by DFBs in the inner plasma sheet, as evidenced by the dipolarization observed at GOES-13. The TLA dB/dt at RANK (01:26 UT) appeared when the station was located in a boundary region between upward and downward currents, as was the case for the TLA dB/dt that occurred at CDR near 01:32 UT and at RBY near 01:32 and 01:37 UT.
Summary and Conclusions
In this study, we have shown that TLA dB/dt events occurred primarily at night, preferentially in the pre-midnight sector. These high-frequency perturbations occurred only in the high-magnetic-latitude region above 60° MLAT, with a majority in the 65°-74° MLAT band where substorm onset and expansion occur. TLA events most often occurred within 60 min of substorm onsets, but there is also a subset, referred to as unrelated events, that occurred more than 60 min after substorm onset and in the absence of a CME-driven geomagnetic storm. The large number of TLA events that occurred soon after substorm onset, in the pre-midnight sector of the high-latitude region, suggests that these events are closely related to the upward portion of the SCW.
TLA dB/dt events occurred most often during the declining phase of the solar cycle, when the yearly mean sunspot number decreases but the number of substorm onsets per year increases from solar maximum. TLA events are most common during intervals of mild ring current (SMR) activity and fast solar wind flow speeds. This may indicate a relationship between TLA and GMD events and weak CIR-driven storms caused by fast solar wind streams emanating from coronal holes. Future work includes an investigation of this potential association, especially for the so-called unrelated TLA events.
We have shown in this study that many TLA events during 2015-2019 were associated with GMDs, often preceding the event or occurring within the overall disturbance. Not only were TLA-related GMD events common, but as GMD amplitudes increased, the likelihood that TLA intervals were associated with the GMD vastly increased: 81% of hour windows with extreme >12 nT/s GMDs had associated TLA intervals, and 93% of windows with even larger GMDs >20 nT/s included TLA intervals. Engebretson et al. (2019a, 2019b, 2021) and Weygand et al. (2021) all show that GMDs have an effective radius of ∼300 km. The results presented here show that high-frequency intervals of the magnetic field can be even more localized: TLA dB/dt intervals often occurred at fewer stations than the full extent over which the GMDs were measured, as shown in Figure 2 (17 December 2017), the event on 30 September 2016, Events 1 and 3 of Engebretson et al. (2019b), and the 7 January 2015 event of Wei et al. (2021). While TLA dB/dt are commonly more localized than GMD events, the locations of the largest TLA most often signify the locations of the largest GMDs.
This study has presented multiple cases of TLA dB/dt correlated with dipolarizations in the inner magnetosphere, as well as with localized FAC structures, PBIs, and auroral streamers; one case also included measurements of bursty Earth-directed plasma flows in the magnetotail (Wei et al., 2021). We show that TLA events are closely associated with substorm activity, but they also occur many tens of minutes and even hours apart from substorm onsets, as well as in locations very far from the onsets. The spatial and temporal separation of some TLA events from substorms indicates that TLA events are driven by M-I processes that are often, but not always, related to substorms. Additionally, TLA events may be the ground manifestations of highly localized pseudo-breakups and/or localized substorm current wedgelets driven by individual closed-field-line DFBs, as in Liu et al. (2015, 2018). Because of their potential M-I source mechanisms and their subsequent relationship to larger, longer GMDs on the ground, TLA events are relevant to GIC-driving processes, but they are not always directly reflected in large-scale geomagnetic activity indices like SML that are derived from 1-min magnetic field data.
Strong magnetic perturbations and Pi 2 pulsations are closely correlated with auroral intensifications followed by streamers driven by DFs and fast flow bursts in the magnetotail (Kepko & Kivelson, 1999; Lyons et al., 2012; Nishimura et al., 2012). This study has shown that disturbances in the Pi 1 frequency range are often present in these situations as well. TLA magnetic perturbations appear to have complex M-I drivers, but they are likely the result of small-scale ionospheric current phenomena coupled to the magnetotail that often, but do not always, occur during substorms. Because these high-frequency signatures are very often associated with the most extreme nighttime GMDs that can drive GICs on Earth (even though magnetic variations with Pi 1 and short Pi 2 periods do not drive GICs directly), TLA dB/dt and the associated M-I phenomena such as BBFs are important to take into account when investigating the complex dynamics that can give rise to GICs. Future work includes a broader investigation of the ionospheric currents, magnetotail dynamics (especially fast plasma flows in the tail), and solar wind drivers of TLA-related GMD events. Identification and analysis of TLA dB/dt in association with nighttime GMDs will continue to provide insight into their M-I drivers and their behavior from the ionosphere to the ground, where GMDs with associated TLA intervals pose the greatest threat of hazardous GICs.
Figure 1. Locations of the magnetometer stations used in this study. The symbols for each station represent the array to which they belong: squares signify MACCS stations, triangles signify CARISMA stations, circles are for CANMOS stations, the diamond is for the THEMIS GBO, and the asterisks represent AUTUMNX stations. The X marks the magnetic footprint of the GOES-13 spacecraft (determined using tools from SSCWEB, https://sscweb.gsfc.nasa.gov/) during the event discussed in Section 7. Lines of latitude and longitude are shown in corrected geomagnetic coordinates for epoch 2014.
Figure 2. Magnetic field data from three stations on 17 December 2017. The Bx-component is displayed in black, By in blue, and Bz in red. The transient-large-amplitude (TLA) intervals are signified by hollow circles denoting the start of the interval and filled circles denoting the end of the interval. The mean B value in each component has been subtracted.
Figure 3. Number of substorm events (blue) and transient-large-amplitude (TLA) events (black) per month from late 2009 to early 2020. Values are plotted with separate y-axes: the left y-axis shows the number of substorm onsets and the right y-axis gives the number of TLA events.
Figure 4. (a) Probability density of all SuperMAG ring (SMR) index values throughout Solar Cycle 24 (pink) and probability density of SMR values during the minute of maximum dB/dt of each transient-large-amplitude (TLA) event throughout the solar cycle (blue). (b) Probability density of all Vsw values throughout Solar Cycle 24 (pink) and probability density of Vsw during the minute of maximum dB/dt of each TLA event throughout the solar cycle (blue). (c) Probability density of SML values for all substorm onsets throughout Solar Cycle 24 (pink) and probability density of SML values for substorm onsets that occurred within 1 hr of a TLA event (blue).
Figure 5. Histograms of (a) the number of transient-large-amplitude (TLA) events per magnetic latitude range and (b) the number of TLA events that occurred for every hour of magnetic local time.
Figure 6. Number of 1-hr event windows from 2015 to 2019 that contain transient-large-amplitude (TLA) events (blue), TLA events related to geomagnetic disturbances (GMDs) (orange), and TLA events related to extreme GMDs (red) as a function of (a) the time delay from substorm onset and (b) the longitude difference (in geographic coordinates) from where the TLA event occurred to where the substorm onset occurred.
Figure 7. Magnetic field data from six stations on 30 September 2016. The Bx-component is displayed in black, By in blue, and Bz in red. The transient-large-amplitude (TLA) intervals that occurred within some of the events are signified by hollow circles denoting the start of the interval and filled circles denoting the end of the interval. The dashed vertical lines signify the times that correspond to the spherical elementary current system (SECS) maps in Figure 8. The mean B value in each component for the interval shown has been subtracted.
Figure 8. Four spherical elementary current system (SECS) maps during the 30 September 2016 event in geographic coordinates (dotted black lines) and geomagnetic coordinates (dotted pink lines). Each panel shows the combined field-aligned-like current densities and equivalent currents. The dots indicate the points at which the equivalent current was determined, and the vectors give the magnitude and direction. The stars mark the stations with usable data on that day. The key for the equivalent current is given in the lower right corner, and the color bar indicates the current density values. The colored circles mark CDR (yellow), RBY (gray), SALU (orange), RANK (mauve), FCC (green), and KJPK (pink).
Perturbations at KJPK and FCC corresponded closely with these DFs. An east-west auroral arc appeared around 01:12 north of KJPK and south of SALU. The arc brightened around 01:22 after the second substorm onset and DF; the SECS maps show an E-W, up-down current pair extending across southern Hudson Bay, westward horizontal currents increased in this region, and a GMD occurred at FCC. At about 01:26, a third DF occurred and the auroral arc began to move poleward and split into two separate parts over RANK and SALU, with auroral streamers that emerged from the arc and moved equatorward. The SECS maps show that the up-down vertical current pair moved northward as well and that the horizontal currents increased in the northwest direction; GMDs were observed at RANK and SALU, with TLA intervals within the GMD at RANK but not at SALU. Then, around 01:33, another DF occurred at geosynchronous orbit as auroral patches were observed over RANK and SALU, developing into longitudinally localized streamers that moved equatorward; two distinct regions of localized upward vertical currents intensified on either side of northern Hudson Bay and strong WNW horizontal currents extended over the region. During this time, GMDs occurred at CDR and RBY, with TLA intervals within the GMDs at both stations.
Figure 10. Magnetic field data measured by the GOES-13 spacecraft during the geomagnetic disturbance (GMD)/transient-large-amplitude (TLA) event. The average B value for the interval shown has been subtracted from each component.
Figure 9. All-Sky Imager (ASI) image data from four stations at 01:26:30 UT showing separate streamers that have emerged from the auroral arc south of RANK and south of SALU.
Table 1
Location Coordinates of Stations Used in This Study
Politicians' Self-Reported Social Media Activities and Perceptions: Results From Four Surveys Among German Parliamentarians
The growing importance of social media in the political arena seems to be in line with the mediatization of politics thesis, which states that mediated communication is becoming more important in politics and increasingly influences political processes. However, how politicians’ social media activities and politicians’ perceptions concerning social media have developed over time has rarely been examined. Moreover, it is unclear how the politicians’ activities and perceptions are related to each other. Referring to theoretical approaches, such as the influence of presumed influence approach, four surveys were conducted among German parliamentarians (MPs) between 2012 and 2016 (n = 194/149/170/118). The results indicate that the MPs’ self-reported social media activities and perceptions have remained remarkably constant since 2012. Regression analyses indicate that MPs’ self-reported social media activities and perceptions are hardly related to each other. This raises the question whether mediatization processes are indeed driven by politicians’ perceptions about media influences.
On one hand, this is convincing, as the news media still dominate Western media environments (e.g., Klinger & Svensson, 2015). In addition, mediatization focuses on long-term processes (Kepplinger, 2002; Mazzoleni & Schulz, 1999), and social media are a relatively new phenomenon. On the other hand, media environments have changed dramatically since the emergence of social media (Chadwick, 2013), and within a short time period, political actors, organizations, journalists, and citizens have integrated social media into their media repertoires (Broersma & Graham, 2013; Gulati & Williams, 2013; Newman, Fletcher, Kalogeropoulos, Levy, & Nielsen, 2017; Nitschke et al., 2016). Thus, even short-time longitudinal studies are necessary to identify changes and stable patterns in the importance of social media in politics. Since it has become widely accepted that mediatization is fostered less by media and more by individuals (Schulz, 2014), there is a shift from a "media-centric" to an "actor-centric" perspective in mediatization research (Esser & Strömbäck, 2014, p. 227). In particular, politicians' perceptions and politicians' media activities, as well as the relationship between both factors, often served as indicators of the extent to which politics is mediatized (e.g., Cohen, Tsfati, & Sheafer, 2008; Kepplinger, 2002; Strömbäck, 2011). These indicators, with a focus on social media, were also used in this study. The study is based on data from four surveys conducted between 2012 and 2016 among members of Germany's national parliament (MPs) and asks: how have the German MPs' self-reported social media activities and perceptions changed over time, and how have the German MPs' social media activities been affected by their perceptions?
Thus, the study supplements the few longitudinal studies that have investigated the process of the mediatization of politics (Elmelund-Praestekaer et al., 2011;Kepplinger, 2002;Negrine, 1999;Pontzen, 2013;Zeh & Hopmann, 2013). Furthermore, to the authors' knowledge, this is the first study which analyzes the influence of perceptions about media influences on one's own self-reported media activities over time (cf. "influence of presumed influence," Gunther & Storey, 2003). This allows investigating whether mediatization processes are indeed "driven to a large extent by politicians' perceptions that media have a powerful influence on politics" (Tsfati, 2014, p. 572).
Mediatization of Politics
Mediatization of politics can be defined as "a long-term process through which the importance of media and their spillover effects on political processes, institutions, organizations and actors have increased" (Strömbäck & Esser, 2014, p. 6). Following Strömbäck (2008), this process can be differentiated into four highly interrelated phases. The phases refer (1) to which degree the media constitute the dominant source of political information, (2) to which degree the media have become independent from political institutions, (3) to which degree the media coverage is mainly governed by the media logic instead of the political logic, and (4) to which degree political actors are governed by the media logic instead of the political logic. While the first two phases are largely completed in most Western European democracies (Maurer & Pfetsch, 2014, p. 340), it is unclear to what extent the third and fourth phases are developed. In this study, the fourth phase of mediatization is analyzed, which "deals with the very essence of the mediatization of politics" (Strömbäck & Esser, 2014, p. 6).
Fourth Phase of Mediatization: The Role of News Media
Studies focusing on the fourth phase of mediatization examine to what degree political actors are guided by the media logic instead of the political logic (for the theoretical discussion of the political and media logic, see, for example, Altheide & Snow, 1979; Esser, 2013; Lundby, 2009). Most of these studies have analyzed the impact of the news media logic, and thus, the impact of "media-specific rules of selecting, interpreting, and constructing political news messages" (Esser, 2013, p. 160). If politicians create, for example, pseudo-events or change their language style in favor of the news media's needs, they follow the (news) media logic rather than the political logic (Esser, 2013). This form of self-mediatization (e.g., Esser, 2013) is only rational if political actors perceive that the content distributed via news media influences relevant target groups (Donges & Jarren, 2014; Strömbäck, 2011). Thus, "the first aspect of mediatization is perception" (Donges & Jarren, 2014, p. 189).
Empirical studies on the mediatization processes among politicians have analyzed the politicians' media activities, perceptions, and the relationship between both factors. Kepplinger (2002), for example, shows that the quantity of German MPs' information-related activities has increased over time, while their decision-making activities remained rather constant. According to Pontzen (2013), the number of press releases, interviews, and media trainings of German MPs significantly increased from 2005 to 2011.
Most politicians perceive that the mass media have some or great influence over politics (Strömbäck, 2011), can "make and break politicians" (Strömbäck, 2011; van Aelst et al., 2008), are important at diverse stages of the political process (Fawzi, 2018), and set the political agenda more than politics itself does (van Aelst & Walgrave, 2011). Moreover, a majority of Dutch politicians perceived that other politicians would do anything to get media coverage (Brants, de Vreese, Möller, & van Praag, 2010), and German MPs stated that the media have more influence on candidates' recruitment than in the past (Pontzen, 2013). According to the "influence of presumed influence" approach (Gunther & Storey, 2003), these perceptions have consequences: individuals who perceive strong media influences on others react to these perceptions and change their attitudes or behaviors. This assumption was, for example, confirmed in a study among Israeli MPs (Cohen et al., 2008): their media-related activities were influenced by their perception that the media have a strong political influence on the public.
Taken together, politicians perceive that the media have a strong influence on politics and react by investing more time in media-related activities. However, most studies are based on cross-sectional data (but see, e.g., Kepplinger, 2002) and analyze either politicians' perceptions or activities (but see, e.g., Cohen et al., 2008). Moreover, these results cannot simply be transferred to social media, as these platforms have a different media logic.
Fourth Phase of Mediatization: The Role of Social Media
Citizens across countries increasingly use social media as a source to receive news, and many social media users in countries such as the United States (54%), the United Kingdom (42%), and Germany (25%) followed at least one political party or politician (Newman et al., 2017, pp. 12, 17). Politicians, in turn, have adopted social media within short time periods. For example, Facebook adoption among the major party candidates for the United States Congress increased remarkably from 2006 to 2012 (Gulati & Williams, 2013). Finally, many journalists conduct a daily social media monitoring and increasingly include social media content from politicians and other actors in their news reports (e.g., Broersma & Graham, 2013).
These examples show that political communication processes have rapidly changed. However, as the news media still dominate political information and communication (e.g., Newman et al., 2017), Chadwick (2013) argued that the contemporary media system has evolved into a hybrid media system. "The hybrid media system is based upon conflict and competition between older and newer media logics but it also features important pockets of interdependence among these logics" (Chadwick, 2013, p. 207). Politicians need to adopt both media logics to succeed in this media environment. Klinger and Svensson (2015, 2016) have carved out the differences between the ideal types of news media logic and network media logic (for an alternative conceptualization, see van Dijck & Poell, 2013). They focused on the differences regarding the production of content, the distribution of information, and media usage. Contrary to the news media logic, the network media logic is much more based on amateurs who produce content based on their own interests and on the anticipated interests of fragmented publics. The distribution of information is based on virality, which means that users share popular content with like-minded others, who possibly share this content within their networks. Finally, contrary to the news media audience, the social media audience is more fragmented, interactive, and bound to networks of peers or interests, which enables a high level of selective exposure.
Politicians can address these fragmented social media audiences directly with their messages. Popularity cues, such as the number of likes, shares, or retweets, give politicians hints as to who and how many individuals of a specific audience receive these messages and "what content and presentation techniques 'work' online" (D'heer, 2018, p. 177). However, as it is "nearly impossible to determine the actual audience" (Litt, 2012, p. 312), politicians communicate to an "imagined audience" (e.g., Litt, 2012). This imagined audience can be differentiated into an abstract audience and specific target audiences (Litt & Hargittai, 2016) with whom politicians should communicate for several reasons.
Assuming that politicians want to be (re-)elected, they need to convince as many citizens as possible to vote for them in the next election. Thus, politicians should address the rather abstract audience of "the general public." However, this general public more frequently receives political information from news media than from social media (Newman et al., 2017). Because journalists, as gatekeepers of the news media, also monitor social media platforms and include social media content in their coverage (e.g., Broersma & Graham, 2013), they are an important target audience of politicians' social media activities. Another important target audience are the politicians' voters. First, politicians need to communicate with their voters, because former voters have to be persuaded to vote again for the politician. Second, politicians' voters or followers may interact with the politicians' messages (Kalsnes, Larsson, & Enli, 2017), which increases the virality of these messages. Finally, other politicians are an important target audience. For example, in party-centered political systems such as Germany, political parties and their members primarily determine the chances of politicians getting a parliamentary seat by deciding about a politician's position on the party list. German MPs are aware of this and accordingly perceive that relationships within one's own party are a very important factor for political success (Pontzen, 2013).
Hypotheses
Empirical studies show that the majority of politicians have adopted social media within short time periods (e.g., Gibson & McAllister, 2015;Gulati & Williams, 2013). However, many politicians use social media only occasionally and less in an interactive way (e.g., Enli & Skogerbø, 2013;Nuernbergk & Conrad, 2016;Pontzen, 2013), with usage decreasing shortly after election campaigns (e.g., Enli & Skogerbø, 2013;Nuernbergk & Conrad, 2016). Furthermore, politicians adapt and use Facebook and Twitter in different ways (e.g., Enli & Skogerbø, 2013;Larsson & Kalsnes, 2014;Quinlan, Gummer, Roßmann, & Wolf, 2018). Since most of these studies did not have a longitudinal design, it is unclear if and how fast social mediatization processes take place. Moreover, these studies did not argue from a mediatization perspective and did not consider the specifics of the network media logic. However, based on these results and according to the mediatization of politics thesis, we assume that German MPs report that the amount of their social media activities changed over time: H1. According to the German MPs' self-reports, their Facebook and Twitter activities increased from 2012 to 2016.
Although politicians or campaign managers perceive that traditional news media (e.g., television and newspapers) and traditional campaign tools (e.g., press releases) are more important or influential than social media, social media are attributed some or even great political influence (e.g., Karlsen & Enjolras, 2016; Lilleker, Tenscher, & Štětka, 2015; Magin, Podschuweit, Haßler, & Russmann, 2017; Pontzen, 2013; Quinlan et al., 2018). In particular, "Facebook is a must have" for political parties (Magin et al., 2017, p. 1707), while Twitter is perceived as less influential (e.g., Karlsen & Enjolras, 2016; Lilleker et al., 2015; Quinlan et al., 2018). Moreover, the politicians' perception that Facebook is important significantly increased during the last years (e.g., Karlsen & Enjolras, 2016). It is likely that the perceived influence of Twitter is also increasing over time, particularly since politicians like Donald Trump discovered Twitter as a campaign tool. Thus, considering that social media have fragmented audiences and in line with the mediatization of politics thesis, we assume: H2. According to the German MPs' perceptions, the political influence of Facebook and Twitter on (1) the general public, (2) journalists, (3) other politicians, and (4) their own voters increased from 2012 to 2016.
Finally, some studies have analyzed the influence of politicians' perceptions on their social media activities. Politicians who strongly perceive that their voters, colleagues, and party expect that politicians should use social media are more likely to adopt social media (Hoffmann, Suphan, & Meckel, 2016). Politicians who perceive that social media are important in electoral campaigning increase their social media activities (Karlsen & Enjolras, 2016). Focusing on the influence of presumed social media influence, studies indicate that the perceived influence of social media or the Internet on other politicians, citizens, and journalists partially affected politicians' social media activities (e.g., Bernhard & Dohle, 2015;Metag & Marcinkowski, 2012 for contrary results: e.g., Marcinkowski & Metag, 2014). A longitudinal and largely context-independent analysis should make these previous findings more robust and could show to what extent politicians' self-mediatization in social media is affected by their perceptions regarding their imagined audiences. Thus, in line with the mediatization of politics thesis, it is hypothesized: H3. The stronger German MPs perceived the political influence of Facebook and Twitter to be on (1) the general public, (2) journalists, (3) other politicians, and (4) their own voters between 2012 and 2016, the more extensively, according to their self-reports, they used Facebook and Twitter between 2012 and 2016.
Procedure and Sample
To test the hypotheses, two standardized surveys were conducted among the members of the 17th (spring 2012 and 2013) and 18th German Bundestag (spring 2015 and 2016). At the time of the surveys, no national elections or other specific events occurred that could have distorted the responses.
To increase the number of participants, all MPs were invited to participate in the surveys. In addition, the respondents were guaranteed absolute anonymity. 1 The invitations were sent by letter. The questionnaire and a stamped return envelope were enclosed. The MPs were also able to complete the survey online. At 2 and 4 weeks after the invitation, reminder emails were sent.
Although the response rates varied over time, the samples were not biased with respect to sex and age (Table 1).
Measures
The study was part of a larger research project. Therefore, not all characteristics of mediatization could be addressed in the surveys. Moreover, the concept of the network media logic had not yet been developed when the first surveys were conducted. Thus, if we were to design the surveys today, some questions would be added and some question wordings would be adjusted. However, to maintain the longitudinal character of the study, most questions were not adjusted between the surveys.
German MPs' Self-Reported Social Media Activities. In each survey, the MPs were asked how often they used Facebook and Twitter (1) to get political information and (2) to broadcast information about their political work. In the 2015 and 2016 surveys, they were additionally asked how often they used Facebook and Twitter (3) to broadcast information about their everyday lives. All items were measured on five-level scales. To make the scales more comparable, all scales were slightly adjusted in 2015 and 2016 (2012/2013: 1 = never to 5 = daily [get information] and 1 = not at all to 5 = very intensive [broadcast information about political work]; 2015/2016: 1 = never to 5 = very frequently).
German MPs' Perceptions About the Political Influence of Social
Media. MPs were asked in each survey how strongly they believed the political influence of (1) Facebook and (2) Twitter to be on (a) the general public, (b) journalists, and (c) other politicians. In 2015 and 2016, MPs were also asked to assess the political influence of Facebook and Twitter on (d) their own voters (all items: five-level scales: 1 = no influence to 5 = very strong influence).
Covariates. In addition to sex, age, and party affiliation, the MPs' perceptions about the reach and suitability of social media were requested, because these perceptions also seem to influence political actors' social media activities (e.g., Kelm, Dohle, & Bernhard, 2017). Thus, the MPs were asked to estimate how many people in Germany (in each survey), journalists, politicians, and their own voters (in 2015 and 2016) used Facebook and Twitter to receive political information. The items were measured on five-level scales (2012/2013: 1 = very few people to 5 = very many people; 2015/2016: 1 = almost no one to 5 = almost all). Moreover, the MPs were asked how suitable they considered Facebook and Twitter to be for getting political information (in each survey) and for broadcasting information about their own political work (in 2015 and 2016; all items: five-level scale; 1 = not suitable at all to 5 = very suitable).
German MPs' Self-Reported Social Media Activities
It was hypothesized that German MPs' self-reported Facebook and Twitter activities increased from 2012 to 2016 (H1). However, according to the MPs' answers, their social media activities were rather constant (Table 2).
Using Facebook and Twitter to receive political information did not increase over time. Instead, German MPs stated that their Facebook usage for this purpose significantly decreased between 2013 and 2015, but reached the former level in 2016. The MPs' self-reported Facebook usage for broadcasting information about their own political work significantly increased between 2012 and 2015 and remained at a high level in 2016. In contrast, using Twitter to broadcast information remained at a consistently lower level. In 2015 and 2016, the MPs were also asked how often they broadcasted information about their everyday lives via Facebook and Twitter, with the results indicating that they rarely used Facebook and Twitter for this purpose. For all purposes, German MPs stated that they used Facebook more frequently than Twitter.
Taken together, according to the MPs, only broadcasting information about their own political work via Facebook noticeably increased in the observed period. However, MPs' self-reported intensity of other social media activities hardly changed between 2012 and 2016. Thus, H1 has to be rejected.
German MPs' Perceptions About the Political Influence of Social Media
It was hypothesized that the German MPs' perceptions about the political influence of Facebook and Twitter on the (1) general public, (2) journalists, (3) other politicians, and (4) their own voters increased from 2012 to 2016 (H2). However, in most instances, the perceived influence of Facebook and Twitter remained more or less constant (Table 3). Regarding the perceived influence of Twitter on journalists, for example, the MPs' perceptions did not change in the observed period. In contrast, German MPs' perceptions about the political influence of Twitter on their own voters significantly increased from 2015 to 2016. Taken together, the German MPs' perceptions about the political influence of social media hardly changed between 2012 and 2016. Thus, H2 has to be rejected.
However, looking at those respondents who observed a strong social media influence (for a similar proceeding, see, for example, Fawzi, 2018;Strömbäck, 2011;van Aelst et al., 2008), the picture is different. The proportion of German MPs who perceived a (very) strong influence of Twitter on the general public (+83.0%), journalists (+63.1%), politicians (+93.3%), and their own voters (+109.0%) strongly increased from 2012 to 2016. This trend is not visible on average, because the proportion of those MPs who perceived Twitter as not or slightly influential decreased by only 13.3% on average in the observed period. However, the proportion of those who perceived a strong influence of Facebook on the general public (-39.4%), journalists (-1.8%), politicians (+11.4%), and their own voters (+3.2%) decreased or did not change notably from 2012 to 2016.
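The percentage changes reported above are relative changes in the proportion of respondents who perceived a (very) strong influence. A minimal sketch of this arithmetic; the proportions used here are illustrative assumptions chosen to reproduce the reported +83.0% figure, not the study's raw survey data:

```python
def relative_change(p_start: float, p_end: float) -> float:
    """Relative change (in percent) of a proportion between two survey waves."""
    return (p_end - p_start) / p_start * 100.0

# Hypothetical proportions of MPs perceiving a (very) strong Twitter
# influence on the general public in 2012 and 2016 (illustrative values)
p_2012, p_2016 = 0.20, 0.366
change = relative_change(p_2012, p_2016)
print(round(change, 1))  # → 83.0
```

Note that a large relative change can coexist with a modest absolute change when the starting proportion is small, which is consistent with the finding that the average Likert-scale scores barely moved.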
Influence of German MPs' Social Media Perceptions on Their Self-Reported Social Media Activities
To test to what extent the self-reported Facebook and Twitter activities of German MPs were affected by their perceptions about the influence of Facebook and Twitter on the (1) general public, (2) journalists, (3) other politicians, and (4) their own voters (H3), hierarchical linear regression analyses were calculated with the data from all surveys. The German MPs' perceptions about the influence of Facebook and Twitter on the general public, journalists, politicians, and their own voters served as the independent variables. The frequency of the MPs' self-reported Facebook and Twitter usage to broadcast information about their own political work served as dependent variables. Sex, age, education, party affiliation, the perceived reach, and the perceived suitability of Facebook and Twitter served as covariates. 2 Table 4 indicates to what extent the German MPs' self-reported Facebook activities were influenced by their perceptions about the political influence of Facebook on their target groups. In almost all cases, there was no relationship between the German MPs' perceived political influence and their self-reported Facebook activities. The only significant result is observed in 2015, where the German MPs' self-reported Facebook activities were positively influenced by their perception of Facebook's political influence on their own voters (β = .23, p < .01).
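The regression setup described here (perceived-influence predictors plus covariates, self-reported posting frequency as the outcome) can be sketched with ordinary least squares on simulated data. All variable names, sample size, and coefficient values below are illustrative assumptions, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150  # roughly the size of one survey wave

# Simulated standardized predictors: perceived influence on four target
# groups, plus two covariates (e.g., age, perceived suitability).
X = rng.normal(size=(n, 6))
# Hypothetical population slopes: only "own voters" and the covariates matter.
true_beta = np.array([0.0, 0.0, 0.0, 0.23, -0.10, 0.40])
y = X @ true_beta + rng.normal(scale=0.5, size=n)

# OLS with intercept via least squares
X1 = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)

print(np.round(beta_hat[1:], 2))  # estimated slopes, close to true_beta
```

With mostly zero true slopes, most estimated coefficients hover near zero, mirroring the pattern of largely null findings in Tables 4 and 5.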
The picture changes little when focusing on Twitter (Table 5). In almost all cases, the German MPs' self-reported Twitter activities were not affected by their perceptions about the political influence of Twitter on their target groups. The only remarkable results are observed in 2013, where the perceived political influence of Twitter on other politicians positively affected German MPs' Twitter activities (β = .23, p < .05), and in 2016, where the perceived influence of Twitter on journalists had a positive impact (β = .26, p < .10). 3 Taken together, the German MPs' self-reported social media activities between 2012 and 2016 were hardly affected by their perceptions about the political influence of social media on their target groups. Thus, H3 has to be rejected.
The effects of some control variables partly confirm the results of other studies. The younger the MPs, the more they stated that they used Facebook and Twitter (see, e.g., Metag & Marcinkowski, 2012). MPs of the Greens stated that they used Twitter more frequently than MPs of other parties (see, e.g., Quinlan et al., 2018). The most consistent effect on the German MPs' social media activities is exercised by the perceived suitability of social media (see, e.g., Bernhard & Dohle, 2015;Kelm et al., 2017).
Discussion
Contemporary media systems and individuals' media behavior have changed rapidly since the emergence of social media. Within mediatization research, these developments have been addressed in a theoretically meaningful way (e.g., Klinger & Svensson, 2015), but hardly considered empirically (but see, e.g., Olsson & Eriksson, 2016). In particular, it is unclear how these developments have changed the media behavior and the perceptions of important political decision makers. Moreover, the few studies that have examined how politicians' perceptions and social media activities are related have led to mixed results (e.g., Bernhard & Dohle, 2015; Metag & Marcinkowski, 2012). Thus, it is unclear to what extent perceptions are indeed a major driver of mediatization processes (e.g., Cohen et al., 2008; Tsfati, 2014). This study addressed these research gaps by examining German MPs' self-reported social media activities, their perceptions about the political influence of social media, and the relationship between both factors over time.

It is striking that German MPs' Facebook and Twitter activities remained largely constant from 2012 to 2016. Only the German MPs' self-reported Facebook activities for broadcasting information about their own political work increased significantly in the observed period. On one hand, this consistent pattern was unexpected because the (political) online world has changed significantly in recent years (e.g., Vowe & Henn, 2016). For example, social media play an increasing role in political information (Newman et al., 2017) as well as in other fields of public communication (e.g., Broersma & Graham, 2013). Moreover, a new kind of political actor has emerged with the help of social media (e.g., Groshek & Koc-Michalska, 2017). On the other hand, the results could be explained by the fact that Facebook and Twitter had already been the most prominent social media platforms used by German MPs at least since 2012 (results not presented). Those who have used Facebook and Twitter since 2012 may have established consistent routines that do not vary from one year to another.
Moreover, as citizens have become accustomed to the MPs' social media communication, MPs did not feel obliged to change their social media habits (Tromble, 2018).
Likewise, the German MPs' perceptions regarding the political influence of Facebook and Twitter on their target groups, and thus on the general public, journalists, politicians, and their own voters, hardly changed from 2012 to 2016. There are some significant changes, but in most instances, these short-time peaks disappear in the following years. Again, a possible explanation for this consistency could be that Facebook and Twitter were already well established in 2012, and MPs have not changed their perceptions regarding social media since that time. Moreover, since the mediatization of politics is not a linear process (e.g., Strömbäck, 2011, p. 426), short-time upswings and downswings are to be expected.
Nevertheless, the picture becomes less clear when looking at those respondents who perceived a (very) strong political influence of social media on their target groups. While the proportion of those respondents who perceived a (very) strong political influence of Facebook on their target groups has also not changed or even decreased over time, the proportion of those respondents who perceived a (very) strong political influence of Twitter on their target groups has increased by 87.1% on average. This indicates a digital divide in German MPs' Twitter perceptions. While, on one hand, more and more German MPs perceived Twitter as (very) influential, on the other hand, the proportion of those who perceived Twitter as not or slightly influential decreased only slowly. In other words, although more and more German MPs "believe in Twitter," there is a relatively constant proportion of parliamentarians who are rather skeptical about Twitter's political influence. This digital divide of politicians' Twitter perceptions is not yet mirrored in politicians' Twitter activities. But if perceptions are indeed "the first aspect of mediatization" (Donges & Jarren, 2014, p. 189), Twitter will become more important for German MPs in the future, while the relevance of Facebook is likely to remain similar or even to decrease.

The third hypothesis assumed, in line with the influence of presumed influence approach (Gunther & Storey, 2003), that German MPs' self-reported social media activities are influenced by their perceptions about the social media influence on the mentioned target groups. The findings indicate that politicians' perceptions about the social media influence are largely independent from their social media activities. The few significant findings should not be overstated, especially since they are not based on one specific independent variable (e.g., the perceived influence on journalists). Thus, the results raise two questions.
The first question is whether mediatization processes are indeed driven by politicians' perceptions about media influences. While perceptions about media influences seem to be a relevant driver in the offline world (e.g., Cohen et al., 2008), these perceptions seem to have little or no impact in the online world (e.g., Marcinkowski & Metag, 2014; Metag & Marcinkowski, 2012). One reason could be that the news media logic and the network media logic imply very different affordances. For decades, politicians have internalized the affordances of the news media logic and therefore act strategically in the offline world. However, politicians are still trying to understand the network media logic and use social media more as a playground than as a strategic communication platform. Since the network media logic is in flux and will develop new affordances as technology changes, politicians are likely to continue to lag behind these developments in the future. The second question is what motives politicians have to use social media. Maybe perceptions and activities are still somehow related to each other, but politicians do not change their social media activities automatically when they perceive a growing influence of a social media platform. Instead, they might observe the developments and perhaps adapt their social media activities at later times or occasions (e.g., election campaigns). Another explanation could be that politicians use social media just because they want to present themselves "as being modern, open minded, and up-to-date" (Marcinkowski & Metag, 2014, p. 161). Moreover, politicians may have intrinsic motives to use social media. Some studies, for example, indicate that politicians also use social media simply for having fun or passing time (e.g., Hoffmann et al., 2016).
However, the strongest and most consistent effect on the German MPs' self-reported social media activities was the perceived suitability to get and broadcast political information via Facebook and Twitter. This indicates a rather strategic use. Obviously, further studies are needed that focus on the motives of politicians' social media activities.
This study has limitations. The results were based on self-reports by the MPs, which is problematic for several reasons. First, MPs' staff often co-curate politicians' social media communication. Thus, it is unclear to what extent politicians can correctly assess the nature and intensity of their social media activities. Although German politicians state that they have control over their social media communication (Meckel, Hoffmann, Suphan, & Poëll, 2013, p. 4), this perception could also be biased. Second, the respondents may have embellished the frequency of their social media activities (social desirability), although absolute anonymity was guaranteed. In addition, politicians may have changed their attitudes toward a socially acceptable intensity of social media usage over the years. As a consequence, politicians may have overstated their social media activities in some years and understated their activities in other years. For these reasons, further research should try to avoid measuring social media activities by self-reports, especially if the activities leave "digital trace data" (Jungherr, 2015). Instead, further research should use scraping tools, which have been developed in recent years (e.g., Keyling & Jünger, 2016).
Another limitation was that not all items were queried in all surveys. In addition, the constructs were partly measured in slightly different ways, which might have distorted the comparisons over the years.
Moreover, the element of change, which is inherent to mediatization, could not be addressed in the items directly. In addition, some activities and perceptions that address the specificity of the network media logic (Klinger & Svensson, 2015) or social media logic (van Dijck & Poell, 2013) could not be taken into account, because these logics had not been developed in 2012. Future research should consider these logics and their specific characteristics more clearly when developing their study designs (e.g., Olsson & Eriksson, 2016). In particular, other researchers are encouraged to develop survey questions that could measure social mediatization in cross-national and longitudinal study designs. How often, for example, have MPs considered information in their decision processes that they have obtained via social media? To what extent are politicians trying to create messages that "go viral?" And do politicians use the features of social media to address different target groups?
Finally, future research should consider that in the hybrid media system, the mass media logic and the network media logic overlap (Chadwick, 2013) and therefore compare politicians' perceptions and activities in traditional and social media.
Despite these limitations, the data provide a valid overview of how a group central to German politics, members of the German Bundestag, report using and perceiving social media platforms. These aspects were measured four times, and changes were traced over a period of 5 years. Thus, the study contributes to the few empirical studies that considered the process character of mediatization (e.g., Kepplinger, 2002). Moreover, the study enriches the literature on MPs' motivations for online activities by revealing the extent to which social media activities are influenced by subjective perceptions. Other researchers are encouraged to analyze MPs' social media communication in different national contexts to make changes in political communication in the online world more visible.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The study was supported by the German Research Association (research group "Political Communication in the Online World," subproject 3, grant no. 1381).
Notes
1. While this sampling strategy can be considered as a suitable way to increase the response rate, it also has negative implications. Particularly, due to the anonymity, the respondents' answers could not be linked to their actual activities, which could have been additionally assessed by content analysis.
2. The perceptions about the influence, reach, and suitability are related to each other. However, bivariate analyses using the data from each year indicated only moderate relationships between these perceptions. Moreover, as all tolerance values are above .30 and all variance inflation factors (VIF) are below 3.50, there should be no problem with multicollinearity.
3. Further analyses were carried out in which the control variables were kept constant. However, the effects of the independent variables did not change.
Two different phase-change origins with chemical- and structural-phase-changes in C doped (1.5 wt.%) In3Sb1Te2
We fabricated C-doped (1.5 wt.%) In3Sb1Te2 (CIST) thin films with amorphous phase (a-CIST) using a sputter method. Two electrical-phase-changes at 250 and 275 °C were observed in the sheet resistance measurement. In order to understand the origin of these electrical-phase-changes, all samples were characterized by XRD, TEM, and HRXPS with synchrotron radiation. In a-CIST, only weak Sb-C bonding was observed. In the first electrical-phase-change at 250 °C, strong Sb-C bonding occurred without an accompanying structural/phase change (still amorphous). On the other hand, the second electrical-phase-change at 275 °C was due to the structural/phase change from amorphous to crystalline without a chemical state change.
again at 275 °C (B in Fig. 1a). This confirms that the effect of carbon doping in IST is an increase of the phase-change temperature. This result is similar to that of nitrogen doping in GeTe and GST 5,12,13 .
To investigate the physical origin of these sheet resistance changes in more detail, the structures of all samples annealed at elevated temperatures were characterized by XRD and TEM (Fig. 1b). When the sample was annealed to around 250 °C, the broad peaks typical of an amorphous phase (a-CIST) were observed in the XRD patterns, except for two very low-intensity peaks at 29° and 39° 2θ that could be indexed as the (202) and (400) crystallographic planes of the InTe phase 14 . This crystalline phase was also confirmed by the lattice fringes observed in the TEM image (Fig. 1b). From the diffraction peaks in the XRD patterns at annealing temperatures of 350 °C and 450 °C, we confirmed that most of the a-CIST phase transformed into crystalline phases corresponding to cubic InSb, cubic In 3 SbTe 2 (c-IST), and tetragonal InTe, as shown in Fig. 1b. As the annealing temperature increased from 350 °C to 450 °C, the c-IST (200), (220), and (222) XRD peaks shifted toward higher 2θ angles, indicating a shorter interplanar spacing. However, it was reported that the IST phase is metastable below 420 °C in the IST phase diagram [15][16][17] . Moreover, Eun Tae Kim et al. reported that the IST peak appears at 400 °C and that its intensity reaches a maximum at 450 °C 9 . According to PDF #17-0849 (Fm-3m, a = 6.1263 Å), the d value of the (200) diffraction peak of IST is 3.05 Å. In the present work, the interplanar spacing (d) of the (200) diffraction peak of c-IST at 350 °C and 450 °C is 3.131 Å and 3.124 Å, respectively. Therefore, the peak shifts in the XRD patterns are attributed to the transformation of c-IST from a thermodynamically metastable state to a stable state.
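As a quick consistency check on these numbers, the cubic d-spacing and the corresponding Bragg angle can be computed directly from the lattice parameter and the synchrotron wavelength quoted in this paper (a minimal sketch; the helper names are ours, not from the paper):

```python
import math

def d_cubic(a, h, k, l):
    """Interplanar spacing of the (hkl) plane in a cubic lattice (Å)."""
    return a / math.sqrt(h * h + k * k + l * l)

def two_theta(d, wavelength):
    """Bragg angle 2θ in degrees for spacing d and wavelength λ (both in Å)."""
    return 2.0 * math.degrees(math.asin(wavelength / (2.0 * d)))

# Ideal c-IST (200) spacing from the PDF #17-0849 lattice parameter
d200 = d_cubic(6.1263, 2, 0, 0)
# Expected 2θ position at the wavelength used here (1.3932 Å)
print(round(d200, 3), round(two_theta(d200, 1.3932), 2))  # ≈ 3.063 Å at 2θ ≈ 26.3°
```

The measured d values (3.131 Å and 3.124 Å) are larger than the ideal 3.06 Å, and a smaller d indeed maps to a larger 2θ, consistent with the peak shift toward higher angle on annealing.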
From the sheet resistance measurement, we assumed that CIST underwent a structural phase change at 250 °C. However, from the XRD results it is apparent that this was an electrical phase-change without an associated structural phase-change. This suggests that the first electrical phase-change may have resulted from a chemical phase-change (Fig. 1a) 18 . XRD results confirmed that the second change of sheet resistance (Fig. 1a) was due to a structural phase-change.
To understand these independent electrical and structural phase-changes in CIST, we measured HRXPS with synchrotron radiation. To remove the surface oxide, samples were mildly sputtered under Ne + 18-21 . This resulted in a clear improvement of the peaks in the C 1s, Te 4d, Sb 4d, and In 4d core-levels (Fig. 2). The O 1s core-level peak, which originated from the surface oxide, completely disappeared. After sputtering, the peak binding energies of the Te, Sb, and In 4d core-levels were 40.0, 31.9, and 17.3 eV, respectively. These binding energies differ from those of pure IST 10 . The chemical shift of the Sb 4d core-level between pure and C-doped IST was 0.3 eV. In the case of the C 1s core-level, we observed a binding energy of 284.5 eV before sputtering. However, this peak completely disappeared and a new peak with a binding energy around 282.8 eV was observed after sputtering. This means that the binding-energy peak at 284.5 eV originated from the surface oxide of C-doped IST. Normally, C-C bonding is observed at 284.5~284.8 eV 22,23 . If the carbon is cationic, its binding energy is shifted higher (e.g., C-O, ~286 eV and CF 2 , ~292 eV) than that of C-C bonding 22 . However, a chemical state at a lower binding energy than C-C indicates a metal carbide species 22 . On the other hand, the binding energy of the Sb 4d core-level shifted to a higher value (31.9 eV) than that of Sb in pure IST, implying a cationic role. Thus, the doped carbon in a-CIST is bonded with Sb, creating an Sb-C metal carbide. This is completely different from C- or N-doped GST 12,20,24 , and also from Fe- or Mn-doped IST, in which the dopant metals bond only with In atoms in the amorphous phase 18,21 . Here, a-CIST has the chemical state of an Sb-C metal carbide.
In order to see the change of chemical states during both the electrical and structural phase-changes, we measured HRXPS from 250 to 450 °C (Fig. 3). Significantly, at 250 °C a new chemical state in the C 1s core-level appeared at 283.6 eV and maintained its peak intensity and shape despite increasing temperature (Fig. 3a). In the case of the Te 4d core-level (Fig. 3b), the binding energy of 40.0 eV did not change except in the 450 °C sample, where the peak was observed at 40.2 eV. A dramatic change was seen in the Sb 4d core-level spectra (Fig. 3c). At 250 °C, the peak intensity dropped by half and then continued to decrease with increasing temperature, meaning that Sb is depleted from the surface. In a device this is very important, because Sb will diffuse to the top electrode in the isolated cell space of the device. Sb diffusion will create a type of thin film in the cell and will change the electrical properties of the device. If C-doped IST-based phase-change random access memory is fabricated, the Sb stoichiometry should be chosen so as to avoid this diffusion. The peak position also changed from 31.9 to 32.1 eV, meaning that the Sb-C bond developed more metallic character. We believe that this new, stronger Sb-C bond created at 250 °C is induced during the first electrical phase-change, without a structural phase-change. The In 4d core-level spectra are shown in Fig. 3d. In order to analyze the spectra of the C 1s and Sb 4d core-levels in more detail, we performed curve-fitting (Fig. 4a,b) using Doniach-Šunjić curves convoluted with a Gaussian distribution function to account for instrumental broadening 26 . Background noise due to inelastic scattering was subtracted by the Shirley (integral) method 27 . In the curve-fitting of the C 1s core-level, we found three chemical states, C1, C2, and C3, with binding energies of 283.8, 282.8, and 283.6 eV, respectively.
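The Shirley background mentioned here assigns, at each energy, a background proportional to the integrated background-subtracted peak area to one side of that point, solved iteratively. A minimal sketch of that iteration (our own illustration, not the authors' code; names are ours) on a uniformly sampled spectrum:

```python
import numpy as np

def shirley_background(y, tol=1e-8, max_iter=100):
    """Iterative Shirley background for a spectrum y on a uniform energy grid.
    The background rises from y[0] to y[-1] in proportion to the cumulative
    background-subtracted peak area; endpoints are matched exactly."""
    y = np.asarray(y, dtype=float)
    y0, y1 = y[0], y[-1]
    b = np.zeros_like(y)  # background offset above y0
    for _ in range(max_iter):
        # cumulative area of the signal with the current background removed
        area = np.cumsum(y - y0 - b)
        b_new = (y1 - y0) * area / area[-1]
        if np.max(np.abs(b_new - b)) < tol:
            b = b_new
            break
        b = b_new
    return y0 + b
```

By construction the background matches the spectrum at both endpoints; the Doniach-Šunjić line shapes are then fitted to the background-subtracted data.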
The C2 chemical state (weak Sb-C bonding) is observed in all samples with different peak intensities. As the temperature increased to 250 °C, the C1 chemical state (a kind of C-C bonding) disappeared completely and the C3 chemical state (strong Sb-C bonding) appeared. In the Sb 4d curve fitting, on the other hand, the Sb1 chemical state was observed only in a-CIST; after the temperature was increased, we observed only the Sb2 chemical state. The binding energies of Sb1 and Sb2 were 31.9 and 32.1 eV, respectively. Sb1 (Sb-C bonding) and Sb2 (Sb-C bonding relatively stronger than Sb1) are bonded with C2 and C3, respectively. The Sb2 chemical state did not change during the second phase-change. If a new C3 chemical state arises after the first and second phase-changes, one would normally expect a corresponding new Sb3 chemical state (for a new cationic environment); however, we did not observe one. This means that only the chemical state of C changed, without a new chemical state of Sb: at the second phase-change the immediate chemical environment of Sb is unchanged, while the next-nearest neighbor (the C element) changes. In this case we cannot specify the stoichiometry, which is why we use the relative labels "strong Sb-C bonding" and "weak Sb-C bonding". In addition, the peak intensity of the Sb 4d core-level decreased by more than 60% as the temperature increased (Fig. 4c).
Interestingly, the chemical states of CIST at 250 °C had already changed without a structural phase-change. This means that the first electrical phase-change of CIST is due to the new, strong Sb-C chemical bonding. Structural ordering then followed at 350 °C, at which point the second electrical phase-change occurred without a chemical state change. Finally, we found that the first and second electrical phase-changes of CIST originated from a chemical phase-change (with strong Sb-C bonding) and a structural phase-change (from amorphous to crystalline), respectively.
Conclusions
We fabricated a-CIST and performed post-annealing experiments to understand the phase-change mechanisms in CIST. All prepared samples with different annealing temperatures were characterized in terms of sheet resistance, XRD, TEM, and HRXPS. The formation of Sb-C bonding, a kind of metal carbide, in a-CIST is very different from C- or N-doped GeTe or GST. We found that CIST undergoes two electrical phase-changes, and that the origins of the first and second electrical phase-changes in CIST are a chemical phase-change (strong Sb-C bonding) and a structural phase-change (from amorphous to crystalline), respectively.
Methods
Sample preparation. C-doped a-IST thin films (100 nm) were deposited onto Si(001) substrates in an Ar atmosphere using reactive sputtering with a single CIST target at room temperature. To remove the oxide layers formed on the film surface by air exposure, the C-doped a-IST was etched by Ne + (99.999%) ion sputtering for 1 h with an ion beam energy of 1 kV under a pressure of 1.0 × 10 −6 Torr 10,21 . We used the resistive heating method and a K-type thermocouple on a Si substrate to apply heating and to measure the temperature, respectively 10,28 .
Characterization using TEM, HRXRD and HRXPS with synchrotron radiation. For TEM measurements, we used an ARM200F (JEOL), an atomic-resolution TEM with an S-TEM Cs corrector. The surfaces of all samples were scraped to obtain shavings, which were mounted on TEM grids. To confirm the structural phases of the samples, we performed XRD at the 9C beamline of the Pohang Light Source II (PLS-II) in South Korea. An X-ray energy of 8.9 keV (λ = 1.3932 Å) was selected by a double-crystal Si(111) monochromator, and XRD data were obtained from 15°-55° with a standard theta-two-theta scan. HRXPS spectra were obtained using synchrotron radiation at the 10D beamline of the Pohang Light Source II. The photon energy was varied from 360 eV (for the C 1s, Te 4d, Sb 4d, and In 4d core-levels) to 660 eV (for the O 1s and Sb 3d core-levels) to obtain high-quality XPS spectra. Photoelectron signals were recorded with a PHOIBOS 150 electron energy analyzer equipped with a two-dimensional charge-coupled detector (2D CCD) (Specs GmbH), collecting photoelectrons normal to the surface. The binding-energy scale was calibrated with the Au 4f core-level peak at 84.0 eV 29 . The base pressure of the main chamber was maintained below 1.2 × 10 −10 Torr.
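The quoted wavelength follows from the photon-energy relation λ[Å] = hc/E ≈ 12.398/E[keV]; a quick numerical check (our own sketch, not from the paper):

```python
def photon_wavelength_angstrom(energy_kev):
    """X-ray wavelength in Å from photon energy in keV (λ = hc/E)."""
    HC_KEV_ANGSTROM = 12.3984  # h*c in keV·Å
    return HC_KEV_ANGSTROM / energy_kev

# 8.9 keV, as selected at the 9C beamline
print(round(photon_wavelength_angstrom(8.9), 4))  # ≈ 1.3931
```

This reproduces the stated λ = 1.3932 Å to within rounding.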
Forkhead Homologue in Rhabdomyosarcoma Functions as a Bifunctional Nuclear Receptor-interacting Protein with Both Coactivator and Corepressor Functions*
In a search for novel transcriptional intermediary factors for the estrogen receptor (ER), we used the ligand-binding domain and hinge region of ER as bait in a yeast two-hybrid screen of a cDNA library derived from tamoxifen-resistant MCF-7 human breast tumors from an in vivo athymic nude mouse model. Here we report the isolation and characterization of the forkhead homologue in rhabdomyosarcoma (FKHR), a recently described member of the hepatocyte nuclear factor 3/forkhead homeotic gene family, as a nuclear hormone receptor (NR) intermediary protein. FKHR interacts with both steroid and nonsteroid NRs, although the effect of ligand on this interaction varies by receptor type. The interaction of FKHR with ER is enhanced by estrogen, whereas its interaction with thyroid hormone receptor and retinoic acid receptor is ligand-independent. In addition, FKHR differentially regulates the transactivation mediated by different NRs.
The nuclear hormone receptors (NRs) 1 play an important role in a variety of physiological functions such as cell growth, development, differentiation, and homeostasis (1,2). The NR superfamily is often divided into steroid and nonsteroid receptor subfamilies, which show different features in DNA binding and dimerization and a different effect on the basal transcriptional activity of the target (2,3). The estrogen receptor (ER), a member of the steroid receptor family, is critical for the development and progression of breast cancer, and it is a useful diagnostic and therapeutic target (4 -8). Like other NRs, ER contains two distinct transactivation function domains (AFs): the ligand-independent (AF-1) and ligand-dependent (AF-2) activation domains (4,8). A large number of ER-interacting proteins have been identified that modify ER activity. Several coactivators have been characterized recently including SRC-1, Grip1/TIF2, RIP140, Trip1, CBP/P300, SPA/L7, and AIB1/ ACTR/RAC3/p/CIP (9 -12). In addition, several corepressors have also been identified including N-CoR and SMRT (13). The relative expression and/or activity of coactivators and corepressors in a particular environment may modulate the agonistic/ antagonistic activities of the partial ER antagonist, tamoxifen (Tam) (14 -17). Most recently, two bifunctional NR intermediary proteins, TIF1 and NSD1, have been described that can regulate transcription either positively or negatively, depending on both the promoter context and the cell type (18,19).
To identify novel transcriptional intermediary factors for ER that might contribute to estrogen-dependent cell proliferation, we used the AF-2 and hinge region of ER as bait in a yeast two-hybrid screen of a cDNA library derived from Tam-resistant breast tumor tissues from an MCF-7 athymic nude mouse model (20,21). Here we report the isolation and characterization of FKHR (forkhead homologue in rhabdomyosarcoma), a previously described member of the hepatocyte nuclear factor 3/forkhead homeotic gene family (HNF3/FKH), as a novel bifunctional NR-interacting protein that displays corepressor activity on steroid receptors and coactivator activity on nonsteroid receptors (22,23). Consistent with these observations, overexpression of FKHR in MCF-7 cells, an estrogen-dependent human breast cancer cell line, dramatically inhibits their proliferation. FKHR has been been shown recently to be an important player in several signal transduction pathways regulated by the AKT protein kinase (24 -32).
EXPERIMENTAL PROCEDURES
Yeast Two-hybrid Screen—The plasmid pAS2-1/ER (DEF), coding for a fusion protein containing the GAL4 DBD and the AF-2 and hinge regions of hERα, was constructed from the vector pAS2-1 (CLONTECH) and used as bait. The cDNA library was prepared using Tam-resistant breast tumor tissues from an MCF-7 athymic nude mouse model constructed in pGAD10 by CLONTECH (20,21). The screen was performed in the presence of estrogen (E 2 ). The positive clones were tested further for ligand-dependent interactions with ER by β-gal filter lift assay after cotransformation of DBD-ER (DEF) and AD-FKHR (amino acids 402-629) in the absence and presence of ligand.
GST Pull-down Assay—The constructs GST-TR and GST-RAR were kindly provided by Michael G. Rosenfeld (33). GST-ER (DEF) was generated by inserting the hinge and AF-2 regions of hERα into the BamHI/EcoRI sites of the pGEX-2kt vector (Amersham Pharmacia Biotech). The plasmid pcDNA3.1/AIB1 was kindly provided by Paul S. Meltzer (9), and pcDNA3/FKHR was constructed by subcloning the full-length FKHR into pcDNA3 at the Klenow-filled BamHI and XhoI sites. The GST pull-down assay was performed as described previously (34).
Transfections, Luciferase, and Growth Inhibition Assays—The reporter genes 2xERE-Tk-Luc (35), PRE/GRE-TATA-Luc (36), pC3-Tk-Luc (37), TRE-Tk-Luc, and RARE-Tk-Luc (38) were described previously. Monkey kidney-derived COS-1 (ATCC) and human hepatocyte carcinoma HepG 2 cells (ATCC) maintained in improved Eagle's medium supplemented with 10% fetal bovine serum (Life Technologies, Inc.) were seeded in phenol red-free improved Eagle's medium (Life Technologies, Inc.) containing minimal essential components and transfected 24 h later with the DNAs indicated in the figure legends using Fugene 6 (Roche Molecular Biochemicals). The total amount of DNA was kept constant in all transfections by the addition of empty vector DNA as carrier. Twelve hours later, the cells were treated with different ligands for 24 h and harvested for β-gal and luciferase activities. The luciferase activities were normalized to the β-gal activities. The data are presented as the average ± S.E. of triplicates and are representative of at least three independent experiments. The single cell proliferation assay (39) and colony reduction assay (40) were performed as described previously.
RESULTS
Using ER as bait, the yeast two-hybrid screen of the cDNA library derived from Tam-resistant breast tumors was utilized to identify novel ER-interacting proteins. A 0.8-kilobase open reading frame DNA fragment obtained from this screen showed 100% identity with the C-terminal one-third of FKHR (amino acids 402-629), a member of the HNF3/FKH transcription factor family (22,23). The sequence of FKHR protein exhibits certain interesting features (22). As shown in Fig. 1A, the protein contains at its N terminus a forkhead domain that is highly homologous among the HNF3/FKH family and is necessary for DNA binding, and at its C terminus a proline-rich and acidic serine/threonine-rich transactivation domain (AD). Other motifs in FKHR include an SH3 binding site and an alanine-rich region that has been associated with transcriptional repression in other proteins. FKHR and two other closely related members, FKHRL1 and AFX (41,42), are relatively divergent from other HNF3/FKH family members both within and outside the forkhead region. As shown in Fig. 1B, their forkhead domain lacks the N-terminal KPPY motif common to most HNF3/FKH family genes and contains a novel 5-amino acid insert (DKGDS) instead. Interestingly, at their C termini an NR-interacting domain or LXXLL motif (Fig. 1A, NID) (43-45) is highly conserved among FKHR, FKHRL1, and AFX but not other HNF3/FKH members (Fig. 1C).
The interaction between FKHR and ER was defined further using the yeast two-hybrid and GST pull-down assays. In the yeast two-hybrid assay (Fig. 2A), in the presence of E 2 , coexpression of two chimeric proteins (GAL4 AD-FKHR (amino acids 402-629) and GAL4 DBD-hERα) resulted in good cell growth on selection medium, and the colonies turned blue within 1 h in the β-gal filter lift assay (Fig. 2A, E 2 ), whereas minimal growth was seen either in the absence of ligand (Fig. 2A, No ligand) or in the presence of anti-estrogen (Fig. 2A, Tam). Surviving colonies in the absence of ligand or with Tam did not turn blue within 16 h in a β-gal filter lift assay. In the GST pull-down assay, FKHR only weakly interacted with ER both in the absence of hormone (Fig. 2B, lane 2) and in the presence of Tam (Fig. 2B, lane 4). E 2 treatment enhanced the interaction by 2-3-fold (Fig. 2B, lane 3). GST alone did not interact with FKHR even in the presence of E 2 (Fig. 2B, lane 5), indicating a specific interaction between ER and FKHR. As reported previously (9 -11), the known ER-interacting protein AIB1 interacted with ER only in the presence of E 2 (Fig. 2B, lane 8). Taken together, these results suggest that FKHR is an ER-interacting protein that preferentially associates with E 2 -bound ER.
The ligand-dependent interaction between FKHR and ER suggests a potential role for FKHR as an ER coregulator. Therefore, we tested whether FKHR can directly affect ER transactivation by cotransfection of FKHR and ER along with reporters driven by either an artificial ERE promoter (Fig. 2C) or a natural ERE promoter (Fig. 2D) into mammalian cells. In both cases, co-expression of ER and FKHR in HepG 2 cells repressed 2-3-fold the transcriptional activity of the reporter in the presence of E 2 in a dose-dependent manner. A similar result was obtained in COS-1 cells (data not shown). A slight repression of reporter activity was also observed in the absence of ligand and in the presence of Tam, which could be caused either by residual E 2 in the medium or by the weak interaction of ER and FKHR under those conditions. Alternatively, FKHR could act as a general transcriptional repressor. However, the transcriptional activity of the CMV-β-gal (used as an internal control) was not significantly affected (data not shown), and the repression was observed only when the ER was cotransfected (Fig. 2C), indicating that the repression is receptor-dependent. To exclude the possibility that FKHR may down-regulate ER, we measured ER protein levels in cells transfected with various amounts of FKHR. No difference in ER protein levels was found (data not shown), indicating that the repressive effect of FKHR on ER-mediated transactivation is not caused by ER down-regulation.
To test whether FKHR can counteract the activity of other known coactivators, HepG 2 cells were cotransfected with a constant amount of the AIB1 expression plasmid along with increasing amounts of FKHR (Fig. 3A) or vice versa (Fig. 3B). As reported previously (9), transient transfection of AIB1 resulted in a 2-fold enhancement of ER-mediated transactivation, confirming its ER coactivator activity. Cotransfection of increasing amounts of FKHR along with AIB1 resulted in a dose-dependent repression of AIB1-enhanced ER-mediated transactivation (Fig. 3A). Conversely, cotransfection of increasing amounts of AIB1 along with a constant amount of FKHR gradually reversed the repressive effect of FKHR on ER-mediated transactivation (Fig. 3B). These data demonstrate that FKHR and AIB1 antagonize each other's effects on ER-mediated transactivation.
To investigate whether the repressor activity of FKHR is limited to ER, we examined the effect of FKHR on other NRs. Similar to ER, cotransfection of FKHR resulted in a gradual repression of PR- and GR-mediated transactivation (Fig. 4, A and B), and this effect was observed solely in cells cotransfected with the receptor and treated with their cognate ligands. In addition, FKHR interacted with two nonsteroid NRs: GST-RAR and GST-TR (Fig. 4C). However, unlike ER, FKHR bound to TR and RAR both in the absence and in the presence of their cognate ligands, indicating that the interaction is ligand-independent. In contrast to the steroid receptors, cotransfection of FKHR resulted in a 2-3-fold stimulation, rather than repression, of RAR- and TR-mediated transactivation both in the absence and presence of their cognate ligands (Fig. 4, D and E).
To determine the physiologic relevance of the interaction of FKHR with ER, we tested the effect of FKHR on the growth of an estrogen-dependent human breast cancer cell line, MCF-7. Using a single colony reduction assay (Fig. 5A), we observed that empty vector-transfected cells had a significantly different distribution of the cell number per colony or cluster than FKHR-transfected cells (p < 0.0001). Clusters containing single cells or doublets were much more common in cells transfected with FKHR, whereas clusters containing more than 10 cells were common in control dishes (Fig. 5A). This difference in colony size is more apparent when comparing the median cell number per colony for pcDNA-transfected cells (seven cells/colony) with that for FKHR-transfected cells (two cells/colony). In the colony reduction assay (Fig. 5B), in comparison with vector alone, overexpression of FKHR greatly reduced colony formation (50% reduction). Similarly, overexpression of p21, a known negative regulator of cell growth, resulted in a 70% reduction of colony formation (40).
Finally, Western blot analysis revealed a differential expression pattern of FKHR in human tissues (Fig. 6). FKHR is expressed in most of the tissues tested, with higher levels in ovaries and testes and intermediate levels in brain, heart, kidneys, liver, and skeletal muscle. There is very low FKHR expression in lungs and no detectable expression in placenta and spleen. Interestingly, there is a doublet band in muscle (heart and skeletal muscle) but not in the other tissues tested, although we do not currently know the nature of this doublet. The tissue differences in FKHR protein levels may play a role in the tissue specificity of NR-mediated responses to various hormones or antihormones.
DISCUSSION
We have shown that FKHR interacts with several members of the NR superfamily including ER, RAR, and TR, although the effect of ligand on this interaction varies by receptor type. Its interaction with ER is enhanced by estrogen, whereas its interaction with TR and RAR is ligand-independent. The characteristic features of the interaction of FKHR with ER compared with TR and RAR are similar to those of the previously described NR intermediary protein NSD1 (19). The two distinct NR-interacting domains identified in NSD1 could also be present in FKHR. Alternatively, the different binding features of FKHR could also reside within the structure of the NRs themselves. As mentioned, steroid and nonsteroid receptors show distinct features in their DNA binding and dimerization and in their effects on the basal transcriptional activity of target genes. First, steroid receptors form homodimers in their active state, whereas nonsteroid receptors heterodimerize with RXR upon the addition of ligand. Conceivably, FKHR could interact differently with homodimer versus heterodimer partners.
Second, unliganded steroid receptors are complexed with chaperone proteins and remain in an inactive state, whereas unliganded nonsteroid receptors are bound to DNA and are complexed with corepressors, resulting in the repression of basal transcription of target genes. FKHR could bind with different affinities and/or mechanisms to DNA-bound versus free NRs. Because there is no DNA or promoter present in our in vitro assay, the interaction seen in vitro may not necessarily reflect the in vivo situation.
In addition to its different binding properties with steroid and nonsteroid NRs, FKHR differentially regulates the transactivation mediated by different NRs. Co-expression of FKHR in mammalian cells (HepG 2 and COS-1) dramatically represses transactivation mediated by ER, PR, and GR. In contrast, FKHR stimulates rather than represses transactivation mediated by RAR and TR. The differential effects of FKHR on the transactivation of different NRs might be explained by the presence of both coactivation and corepression domains in the FKHR molecule, as described for NSD1 (19). Sequence analysis of the FKHR gene has shown a transactivation domain at its C terminus as well as a repression region at its N terminus (22). Binding to different receptor types could result in conformational changes, exposing either the transactivation or repression domains of FKHR. In addition, homodimers and heterodimers of NRs might recruit different sets of transcriptional components that might then be differentially regulated by FKHR. Finally, because its regulatory function depends on the nature of the receptor, it is possible that FKHR activates transcription functions of TR and RAR by either sequestering TRand RAR-specific corepressors or blocking the histone deacetylase activity associated with these corepressors. Further experiments are required to address these possibilities.
The HNF3/FKH transcription factor family has been implicated in diverse biological functions varying from embryonic development to adult tissue-specific gene expression (46 -48). In addition, variants of several genes of this family (especially the FKHR subfamily) have shown oncogenic potential (46,49,50). FKHR was originally cloned from a rhabdomyosarcoma because of its aberrant fusion with another transcription factor, PAX3, resulting from a unique chromosomal translocation t(2;13) (22,23). The resulting fusion protein PAX3-FKHR is a hallmark of these tumors (51)(52)(53)(54) and is thought to play a crucial role in muscle cell transformation and evolution to rhabdomyosarcoma. Little is known about the underlying mechanism of this transformation process (55)(56)(57). Characterization of FKHR as an NR transcriptional intermediary protein should provide clues about the biological function of FKHR and possibly the oncogenic mechanism of PAX3-FKHR. It is possible that the chromosomal translocation in rhabdomyosarcoma results in not only the activation of PAX3 but also disruption of functional FKHR, which may be essential for RAR-dependent muscle cell differentiation. This loss of a differentiation function of FKHR, rather than a gain of function by the PAX3-FKHR fusion, could conceivably contribute to the development of rhabdomyosarcoma. In addition, FKHR has been shown recently to play a role in several signal transduction pathways (24 -32). Further studies on the mechanism of the regulatory function of FKHR and its biological relevance, especially its effect on hormone-dependent cell proliferation and differentiation, may help reveal the role of FKHR in the development and progression of cancers such as breast cancer, leukemia, and rhabdomyosarcoma.
FIG. 5. FKHR inhibits the growth of MCF-7 breast cancer cells.
A, single cell proliferation assay. MCF-7 cells maintained in improved Eagle's medium plus 10% fetal bovine serum were cotransfected with 5 μg of pcDNA3 (n = 100) or pcDNA3/FKHR (n = 102) plus 0.5 μg of pCMV-β-gal. After 3-4 doublings, the cells were fixed and stained for β-gal in situ. Colonies containing blue cells were scored for the number of blue cells per colony and subjected to biostatistical analysis. B, colony formation assay. MCF-7 cells maintained in improved Eagle's medium plus 10% fetal bovine serum were transfected with 1 μg of pcDNA3, pcDNA3/FKHR, pcDNA/p21, or mock-transfected and subjected to G418 selection 48 h after transfection. The surviving colonies were stained and counted after 14 days of selection. The number of colonies on each plate was counted and graphed as averages ± S.E. from triplicates.

Computationally efficient familywise error rate control in genome-wide association studies using score tests for generalized linear models
In genetic association studies, detecting phenotype–genotype association is a primary goal. We assume that the relationship between the data—phenotype, genetic markers and environmental covariates—can be modeled by a generalized linear model. The number of markers is allowed to be far greater than the number of individuals of the study. A multivariate score statistic is used to test each marker for association with a phenotype. We assume that the test statistics asymptotically follow a multivariate normal distribution under the complete null hypothesis of no phenotype–genotype association. We present the familywise error rate order k approximation method to find a local significance level (alternatively, an adjusted p‐value) for each test such that the familywise error rate is controlled. The special case k=1 gives the Šidák method. As a by‐product, an effective number of independent tests can be defined. Furthermore, if environmental covariates and genetic markers are uncorrelated, or no environmental covariates are present, we show that covariances between score statistics depend on genetic markers alone. This not only leads to more efficient calculations but also to a local significance level that is determined only by the collection of markers used, independent of the phenotypes and environmental covariates of the experiment at hand.
Introduction
In genome-wide association (GWA) studies the aim is to test for association between genetic markers and a phenotype. A large number of markers are tested, and it is important to control the overall Type I error rate. Our focus is on controlling the familywise error rate (FWER). Multiple testing correction methods may achieve this goal by estimating a local significance level for the individual tests. In this work we present a new method, the order k FWER-approximation method, for finding a local significance level in multiple hypothesis testing for correlated common variants, as is often observed in GWA studies.
Assume that we have collected independent individual observations in a case-control, cohort or cross-sectional study. The phenotype of interest can be continuous or discrete. We consider biallelic genetic markers, giving three possible genotypes. For each genetic marker we specify a hypothesis situation, where the null hypothesis is of the type "no association between the phenotype and genetic marker" and we have a two-sided alternative. We will model the data using a generalized linear regression model (GLM) with phenotype as response (outcome), genotype as the independent variable of interest (exposure), and possibly non-genetic independent covariates (not of interest), referred to as environmental covariates, in the model. In epidemiological studies, a confounder is a common factor which is associated with both the exposure and outcome. In GWA studies, population substructure may be associated with both the exposure (genotype) and outcome (phenotype) and therefore may be a confounding factor that needs to be adjusted for in the analysis. Population stratification can be adjusted for by including principal components of the genotype covariance matrix of the individuals as covariates in the model (Price et al., 2006). As test statistics for the multiple hypothesis problem we use the score test statistics to evaluate the genotype contribution to the model for each genetic marker separately. It is known that the vector of separate score test statistics asymptotically follows a multivariate normal distribution with a covariance matrix that can be estimated using key features of the fitted GLM model and the genetic markers (Schaid et al., 2002; Seaman and Müller-Myhsok, 2005). This has also been a key ingredient in the work of Conneely and Boehnke (2007).
Further, we show that for the special case when no environmental covariates are present or when environmental and genetic covariates are observed to be independent, the estimated correlation matrix between score test statistics can be approximated by the estimated correlation matrix between the genetic markers.
In a multiple testing situation with m tests the familywise error rate can be controlled at level α by specifying a local p-value cut-off, α_loc, to be used for all the m hypothesis tests. Inspired by the work of Moskvina and Schmidt (2008) and Dickhaus and Stange (2013), we will use an approximation to the m-dimensional asymptotic simultaneous multivariate normal distribution of the score test statistics vector to estimate α_loc. The α_loc estimate can be used to define an effective number of independent tests, and our FWER approximation can be used to compute FWER-adjusted p-values.
The order k FWER-approximation method is more powerful than the Šidák method (which assumes that the score test statistics are independent across markers) and the Bonferroni method (which is valid for all dependence structures between the score test statistics). Further, it is more efficient and more widely applicable than the method of Conneely and Boehnke (2007). In Section 5 we will see that the method of Conneely and Boehnke (2007) is built on numerical integration in m dimensions and is computationally intensive.
The Westfall-Young permutation procedure is known to have asymptotically optimal power for a broad class of problems, including block-dependent and sparse dependence structures (Meinshausen et al., 2011). However, this method is computer intensive and, to have a valid permutation test, the assumption of exchangeability needs to be satisfied (Commenges, 2003). This assumption is in general not satisfied when environmental covariates are present in the model.
We will use two genetic data sets presented by Athanasiu et al. (2010), Djurovic et al. (2010), Aspenes et al. (2011) and Loe et al. (2013) to illustrate our method applied to real data.
The paper is organized as follows. In Section 2 we present statistical background on the score test, and derive expressions for the score test covariance matrix, which is of importance for the subsequent work. Our proposed method is outlined and presented in detail in Section 3, together with characteristics of our method. In Section 4 real data and an artificial correlation structure are used to evaluate our proposed model and compare to other methods. Finally, we discuss and conclude in Sections 5 and 6.
Statistical background
In this section, we present notation and details on the score test in generalized linear models.
Notation and data
We assume that data (phenotype, m genetic covariates and d environmental covariates) from n independent individuals are available in a case-control, cohort or cross-sectional study. Let Y be an n-dimensional vector having the phenotype Y_i of individual i as its ith entry, i = 1, ..., n. Let X_e be an n × d matrix having the environmental covariates (the first one being 1, to allow for an intercept in the model presented below) of individual i as its ith row, and let X_g be an n × m matrix having the genetic covariates, or genotypes, of individual i as its ith row, each column corresponding to a genetic marker.
We assume that the genetic data are from common variant biallelic genetic markers with alleles a and A, where A is the minor allele. We will use the additive coding 0, 1, 2 for the genotypes aa, aA, and AA, respectively, in the genetic covariate matrix X_g, but other coding schemes are also possible. We denote the total design matrix X = (X_e X_g), which has the total covariate vector of individual i as its ith row.
Testing statistical hypotheses with the score test
We assume that the relationship between the phenotype Y and covariates X can be modelled by a generalized linear model (GLM) (McCullagh and Nelder, 1989) with an n-dimensional vector η = X_e β_e + X_g β_g = Xβ of linear predictors, where β = (β_e^T β_g^T)^T is a (d + m)-dimensional parameter vector. Let η_i be the ith entry of η, and let μ be the n-dimensional vector having μ_i = E Y_i as its ith entry. We assume that the link function g of the GLM, defined by η_i = g(μ_i), is canonical, which implies that the log likelihood for individual i is

l_i = (Y_i η_i − b(η_i))/φ_i + c(Y_i, φ_i),

where b and c are functions defining the exponential family of the phenotypes and φ_i is the dispersion parameter. In our context φ_i = φ will be equal for all observations. In general, σ_i² = Var Y_i = φ b''(η_i). The full (d + m)-dimensional score vector Σ_{i=1}^n ∇_β l_i can then be calculated to be

U = X^T (Y − μ)/φ,

which is asymptotically normal with mean 0 and covariance matrix

V = X^T ΛX/φ²,

where Λ is the diagonal matrix having σ_i² as its ith entry. Partition U into its environmental and genetic components, U^T = (U_e^T U_g^T). Since the β_e are unknown nuisance parameters, they are estimated by their maximum likelihood estimates under the null hypothesis β_g = 0. In effect, μ is replaced by μ̂_e, the fitted values in a model with only the environmental covariates X_e present, giving the statistic

U_{g|e} = X_g^T (Y − μ̂_e)/φ.    (1)

Then U_{g|e} has the conditional distribution of U_g given U_e = 0, which is asymptotically normal with mean 0 and covariance matrix

V_{g|e} = V_{gg} − V_{ge} V_{ee}^{−1} V_{eg},    (2)

where V_{ee}, V_{eg}, V_{ge} and V_{gg} are the upper left d × d, upper right d × m, lower left m × d and lower right m × m submatrices of V, respectively (see Smyth, 2003).
The score test statistic U_{g|e}^T V_{g|e}^{−1} U_{g|e} with β_g = 0 is asymptotically χ² distributed with m degrees of freedom when the complete null hypothesis β_g = 0 is true (see Smyth, 2003). However, our interest lies not in the complete null hypothesis, but in the m individual hypotheses β_gj = 0 for each component β_gj of β_g, 1 ≤ j ≤ m, against two-sided alternatives. We consider the standardized components of U_{g|e},

T_j = U_{g|e,j} / sqrt(V_{g|e,jj}),    (3)

where U_{g|e,j} denotes the jth entry of U_{g|e} and V_{g|e,jk} the jk entry of V_{g|e}. Under the null hypothesis H_j: β_gj = 0, T_j is asymptotically standard normally distributed, and H_j will be rejected for large values of |T_j|. Under the complete null hypothesis β_g = 0, the vector T = (T_1, T_2, ..., T_m) is asymptotically multivariate standard normally distributed with covariance matrix R, having

Cov(T_j, T_k) = V_{g|e,jk} / sqrt(V_{g|e,jj} V_{g|e,kk})    (4)

as its jk entry, all evaluated at β_g = 0. Note that the dispersion parameter φ is cancelled from T and the covariances. However, the σ_i² of Λ will have to be estimated.
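As a minimal numerical sketch (simulated data and all variable names are our own, not from the paper), the statistics T_j of (3) and the correlation matrix R of (4) can be computed for a logistic model whose null fit uses environmental covariates only:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 500, 2, 5
Xe = np.column_stack([np.ones(n), rng.normal(size=(n, d - 1))])  # env. covariates
Xg = rng.integers(0, 3, size=(n, m)).astype(float)               # genotypes 0/1/2
y = rng.integers(0, 2, size=n).astype(float)                     # binary phenotype

# Fit the null model (environmental covariates only) by IRLS.
beta = np.zeros(d)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-Xe @ beta))
    w = mu * (1.0 - mu)                      # sigma_i^2 for Bernoulli, phi = 1
    beta += np.linalg.solve(Xe.T @ (w[:, None] * Xe), Xe.T @ (y - mu))
mu = 1.0 / (1.0 + np.exp(-Xe @ beta))
lam = mu * (1.0 - mu)                        # diagonal of Lambda

# Score vector U_{g|e} = Xg^T (y - mu_e) and V_{g|e} = V_gg - V_ge V_ee^{-1} V_eg.
U = Xg.T @ (y - mu)
LXe = lam[:, None] * Xe
V = Xg.T @ (lam[:, None] * Xg) - (Xg.T @ LXe) @ np.linalg.solve(Xe.T @ LXe, LXe.T @ Xg)

T = U / np.sqrt(np.diag(V))                          # standardized scores (3)
R = V / np.sqrt(np.outer(np.diag(V), np.diag(V)))    # correlation matrix (4)
```

Only the m × m matrix V_{g|e} and the m-vector U_{g|e} need to be formed; the full (d + m)-dimensional fit is never required.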
Special cases
We will now look at U_{g|e} and V_{g|e} for some special cases.
No environmental covariates
If no environmental covariates except the intercept are present in the GLM, then X_e = 1, the n-dimensional vector having all entries equal to 1, and Λ = σ²I under the null hypothesis, where I is the n × n identity matrix. Then

U_{g|e} = X_g^T (Y − Ȳ1)/φ and V_{g|e,jk} = σ²(x_j^T x_k − n x̄_j x̄_k)/φ²,

where x_j is the jth column of X_g and x̄_j its mean, 1 ≤ j ≤ m, 1 ≤ k ≤ m. So

T_j = x_j^T (Y − Ȳ1) / sqrt(σ²(x_j^T x_j − n x̄_j²)),    (5)

the score test statistic for testing β_gj = 0, is √n times the Pearson correlation between x_j and Y when σ² = Var Y_i is replaced by the estimate Y^T(I − (1/n)11^T)Y/n, and Cov(T_j, T_k) is the sample correlation between x_j and x_k. Thus, for a GLM without adjustment for environmental covariates, the correlation between the score test statistics can be estimated by estimating the genotype correlation. The genotype correlation estimates twice the composite linkage disequilibrium if the genotypes are coded 0, 1, 2 (Weir, 2008).
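The claim that T_j is √n times the Pearson correlation between x_j and Y can be checked numerically; the simulated data below are our own illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 3
Xg = rng.integers(0, 3, size=(n, m)).astype(float)   # genotypes coded 0/1/2
y = rng.normal(size=n)                                # continuous phenotype

yc = y - y.mean()
sigma2 = yc @ yc / n                 # plug-in estimate of Var(Y_i)
Xc = Xg - Xg.mean(axis=0)

# Score statistics (5): T_j = x_j^T (y - ybar 1) / sqrt(sigma2 * ||xc_j||^2)
T = Xc.T @ yc / np.sqrt(sigma2 * (Xc ** 2).sum(axis=0))

# The same numbers, as sqrt(n) times the Pearson correlations
r = np.array([np.corrcoef(Xg[:, j], y)[0, 1] for j in range(m)])
```

With these definitions `np.allclose(T, np.sqrt(n) * r)` holds up to floating point, because the 1/n factor in the variance estimate exactly supplies the √n.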
Uncorrelated environmental and genetic covariates
Two n-dimensional vectors X_1 and X_2 of observations have zero Pearson correlation if their centered observations are orthogonal, that is, if X_1^T X_2 = (1/n) X_1^T 1 1^T X_2. If X_1 and X_2 are two matrices, then near zero Pearson correlation of each combination of a column of X_1 and a column of X_2 can be written compactly as

X_1^T X_2 ≈ (1/n) X_1^T 1 1^T X_2.    (6)

If we consider genetic and environmental covariates to be random variables, and all pairs of an environmental and a genetic covariate to be independent, we would expect (6) to hold for all X_1 having columns that are functions of genetic covariates and X_2 having columns that are functions of environmental covariates. In particular, we consider X_1 = X_g and X_2 = ΛX_e. Since Λ is a function of environmental covariates only under the null hypothesis, so is X_2. By (6),

X_g^T ΛX_e ≈ (1/n) X_g^T 1 1^T ΛX_e.

Then, from (2),

φ² V_{g|e} ≈ X_g^T ΛX_g − (tr Λ/n²) X_g^T 1 1^T X_g.

We now turn to the term X_g^T ΛX_g. Its (j, k) entry is X_1^T Λ1, where X_1 is the vector consisting of the entry-wise products of the jth and the kth column of X_g. Letting X_2 = Λ1, by (6), independence of environmental and genetic covariates yields

X_1^T Λ1 ≈ (1/n) X_1^T 1 1^T Λ1 = (tr Λ/n) x_j^T x_k,

and we have

φ² V_{g|e,jk} ≈ (tr Λ/n)(x_j^T x_k − n x̄_j x̄_k),

which is the same expression as in the case of no environmental covariates, with the exception that the common variance σ² of the responses is replaced by their average variance tr Λ/n = (1/n) Σ_{i=1}^n σ_i², where the σ_i² are defined by the environmental covariates. The conclusion is that, if environmental and genetic covariates are uncorrelated, correlations of the score vector under the null hypothesis can be estimated more easily by estimating only correlations between genetic covariates instead of the full expression (2).
The normal model
For Y_i normally distributed, φ = σ² and Λ = σ²I, where I is the n × n identity matrix. The score vector can then be written

U_{g|e} = X_g^T (I − H)Y/σ²,

and (2) reduces to

V_{g|e} = X_g^T (I − H)X_g/σ²,

where H = X_e(X_e^T X_e)^{−1} X_e^T is the idempotent matrix projecting onto the column space of X_e. Then I − H is the idempotent matrix projecting onto the orthogonal complement of the column space of X_e, and (I − H)Y are the residuals when fitting the multiple linear model with only the environmental covariates present. Note that σ² enters into the test statistics T_j (3), and needs to be replaced by an estimate; we have used the residual sum of squares of a fitted model with only environmental covariates present (the null hypothesis), divided by n − d.
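A corresponding sketch for the normal model (again with made-up data and our own names), computing the score statistics through the projection matrix H:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 300, 3, 4
Xe = np.column_stack([np.ones(n), rng.normal(size=(n, d - 1))])  # env. covariates
Xg = rng.integers(0, 3, size=(n, m)).astype(float)               # genotypes 0/1/2
y = rng.normal(size=n)                                           # phenotype

H = Xe @ np.linalg.solve(Xe.T @ Xe, Xe.T)    # projection onto col(X_e)
resid = y - H @ y                            # (I - H)y: null-model residuals
sigma2 = resid @ resid / (n - d)             # residual-sum-of-squares estimate

U = Xg.T @ resid                             # proportional to U_{g|e}
V = sigma2 * (Xg.T @ Xg - (Xg.T @ H) @ Xg)   # sigma^2 * X_g^T (I - H) X_g
T = U / np.sqrt(np.diag(V))                  # standardized score statistics
```

The unknown σ² scales U and sqrt(V) identically except for the single remaining factor, which is why the residual-based estimate has to be plugged in before T is formed.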
The logistic model
For Y_i Bernoulli distributed, φ = 1 and the σ_i² of Λ are estimated by μ̂_ei(1 − μ̂_ei), where the μ̂_ei are the fitted values under the null hypothesis with only environmental covariates. Inference about β_g is valid also if data are collected in a case-control study, since the canonical (logit) link is used (Agresti, 2002, pp. 170-171).
In the special case of no environmental covariates, that is, X_e = 1, each score test statistic T_j (5) is equal to the Cochran-Armitage trend test (Armitage, 1955; Cochran, 1954)

Z = (n_1 Σ_i s_i x_i − n_2 Σ_i s_i y_i) / sqrt(n_1 n_2 (Σ_i s_i² m_i − (Σ_i s_i m_i)²/n)),

where the s_i are the possible values of the genetic covariates, n_1 and n_2 the number of 0 and 1 phenotypes Y_i, respectively, x_i the number of observations having phenotype 1 and genotype s_i at marker k, y_i the number of observations having phenotype 0 and genotype s_i, and m_i = x_i + y_i. The Cochran-Armitage test is used in disease-genotype association testing with scores (s_0, s_1, s_2) = (0, s, 1) (Sasieni, 1997; Slager and Schaid, 2001), for example with s = 1/2 for an additive genetic model.
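The equivalence between the individual-level score statistic and the count-based trend statistic can be verified on simulated data (all data and names below are illustrative, with scores s = (0, 1, 2) matching the additive coding used above):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
g = rng.integers(0, 3, size=n).astype(float)   # genotypes, scores s = 0, 1, 2
y = rng.integers(0, 2, size=n).astype(float)   # binary phenotype

# Individual-level score statistic (5) for the logistic model with X_e = 1
p = y.mean()
gc = g - g.mean()
T = (g * (y - p)).sum() / np.sqrt(p * (1.0 - p) * (gc ** 2).sum())

# Count-based Cochran-Armitage form: n1 phenotype-0 and n2 phenotype-1 totals
s = np.array([0.0, 1.0, 2.0])
x = np.array([((g == i) & (y == 1)).sum() for i in range(3)])  # phenotype 1
z = np.array([((g == i) & (y == 0)).sum() for i in range(3)])  # phenotype 0
mc = x + z
n1, n2 = z.sum(), x.sum()
num = n1 * (s * x).sum() - n2 * (s * z).sum()
den = np.sqrt(n1 * n2 * ((s ** 2 * mc).sum() - (s * mc).sum() ** 2 / n))
Z = num / den
```

T and Z agree up to floating point, since both numerator and variance differ only by the common factor 1/n.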
Familywise error rate control and approximations
We now turn to the topic of how to control the familywise error rate (FWER) by intersection approximations, and then apply this to our situation.
Multiple hypothesis familywise error rate control
We have a collection of m null hypotheses, H_k: β_gk = 0 (no association between phenotype and genotype at marker k), 1 ≤ k ≤ m, against two-sided alternatives. We will present a method for multiple testing correction that controls the FWER, the probability of making at least one type I error. We adopt the notation of Moskvina and Schmidt (2008), and denote by O_k the event that the null hypothesis H_k is not rejected, and by Ō_k its complement, 1 ≤ k ≤ m. Then, if all m null hypotheses are true,

FWER = P(Ō_1 ∪ ... ∪ Ō_m) = 1 − P(O_1 ∩ ... ∩ O_m).    (7)

In our case, O_k is an event of the form |T_k| < c, where T_k is the test statistic of (3). We will consider single-step multiple testing methods, and choose the same cut-off c for each k. We denote by α_loc = 2Φ(−c) = P(Ō_k) the asymptotic probability of false rejection of H_k, where Φ is the univariate standard normal cumulative distribution function. When the joint distribution of the test statistics is known under the complete null hypothesis, or can be estimated, FWER control at the α significance level can be achieved by solving the inequality FWER ≤ α for α_loc, based on either the union or intersection formulation of (7). When m is large, this involves evaluating high-dimensional integrals over the acceptance or rejection regions, which is suggested by Conneely and Boehnke (2007).
To avoid evalulating these costly integrals, we may instead control FWER by considering bounds based on (7).For example, the Bonferroni method is based on the Boole inequality applied to the union formulation of (7), from which it is seen that a local significance level of α loc = α/m guarantees FWER ≤ α.
When the FWER is calculated under the complete null hypothesis, so-called weak FWER control is achieved. However, in our situation, subset pivotality is satisfied, meaning that the distribution of any subvector (T_k)_{k∈K} is identical under ∩_{k∈K} H_k and under the complete null hypothesis ∩_{k=1}^m H_k, for all subsets K ⊆ {1, 2, ..., m}. In particular, a subvector of U_{g|e} (1) and a submatrix of V_{g|e} (2) corresponding to K only involve genetic covariates corresponding to K. Then strong FWER control is achieved, meaning that FWER ≤ α regardless of which null hypotheses are true (Westfall and Young, 1993; Westfall and Troendle, 2008).
The focus in this work will be on the intersection formulation of (7). Background theory will be given next, and the new application in Section 3.3.
Intersection approximations
Following Glaz and Johnson (1984), we define kth order product-type approximations to γ_m = P(O_1 ∩ ... ∩ O_m),

γ_k = P(O_1 ∩ ... ∩ O_k) ∏_{j=k+1}^m [P(O_{j−k+1} ∩ ... ∩ O_j) / P(O_{j−k+1} ∩ ... ∩ O_{j−1})], 2 ≤ k ≤ m,    (8)

where probabilities are evaluated under the complete null hypothesis. This is similar to the usual multiplicative rule for the probability of an intersection of events applied to γ_m = P(O_1 ∩ ... ∩ O_m), but with the dimension of the distributions limited to k. The idea is that the γ_k should constitute increasingly better approximations of γ_m as k increases, and that calculation of γ_k is less costly than calculation of γ_m when k < m, since only k-variate distributions are involved in γ_k.
Note that the approximations depend on the order of the components of T = (T_1, ..., T_m). We have used the order in which the m markers are positioned along the genome, assuming that the largest correlations occur between close markers.
In our case, P(O_j) = 1 − α_loc for all j, so that γ_1 = (1 − α_loc)^m. Since T is asymptotically multivariate normally distributed with mean 0 under the complete null hypothesis, γ_1 ≤ γ_m asymptotically (Šidák, 1967), and solving 1 − γ_1 = α gives the local significance level α_loc = 1 − (1 − α)^{1/m}. It is well known that the α_loc found by this method, the Šidák method, is slightly larger than the α_loc found by the Bonferroni method; thus the Šidák method will give slightly higher power.
We have seen that in our case, γ_1 ≤ γ_m, meaning that the Šidák method can safely be used. If γ_k ≤ γ_m, then FWER = 1 − γ_m ≤ 1 − γ_k = α can be used to control FWER by solving the last equation for α_loc (choosing the greatest solution if not unique; we have, however, never observed a γ_k that is not monotonically decreasing in α_loc). If γ_k ≤ γ_l, then continuity of γ_k and of γ_l as functions of α_loc implies that the α_loc making 1 − γ_l = α is no less than the α_loc making 1 − γ_k = α, so that the power obtained by the lth approximation is no less than the power obtained by the kth approximation.
The ideal property γ_1 ≤ γ_2 ≤ ... ≤ γ_m would hold if |T| satisfied the MSM_{m−1} (sub-Markovian of order m − 1) condition of Block et al. (1992). Unfortunately, our |T| is not MSM_{m−1}. It is possible to construct a trivariate normal distribution with mean 0 such that γ_1 < γ_3 < γ_2 for some α_loc. However, the violations of MSM_{m−1} we have observed have been very small, only for restricted ranges of α_loc, and only for carefully constructed covariance matrices. We have not observed violations for covariance matrices estimated from real data, and will therefore proceed to apply γ_2 and γ_3 as better approximations to γ_m than γ_1 (the latter giving Šidák cut-offs). A summary of concepts of positive dependence, like MSM, was given by Dickhaus (2014, pp. 58-61).
Controlling FWER using kth order approximation for score tests
As we have seen, the vector T of score test statistics is, under the complete null hypothesis, asymptotically standard multivariate normal with covariance matrix R (4). We denote by O_j the event |T_j| < c of non-rejection of H_j, which has probability P(O_j) = 1 − α_loc under the null hypothesis, with α_loc = 2Φ(−c). We will detail how to find the α_loc given by the second order approximation,

γ_2 = P(O_1) ∏_{j=2}^m P(O_{j−1} ∩ O_j)/P(O_{j−1}).

Denote by r_j the (j − 1, j) entry of R. Then

γ_2 = ∏_{j=2}^m P(|T_{j−1}| < c, |T_j| < c; r_j) / (1 − α_loc)^{m−2},    (9)

where P(|T_{j−1}| < c, |T_j| < c; r_j) is a bivariate standard normal probability with correlation r_j. For a desired upper bound α on FWER, the equation 1 − γ_2 = α is solved with respect to α_loc, which can be done numerically using for example a bisection algorithm. Note that α_loc enters into c = −Φ^{−1}(α_loc/2). We can control FWER by higher-order approximations by solving the equation 1 − γ_k = α for α_loc in a similar way, which we will henceforth refer to as order k FWER approximation. By (8), γ_k can be written as a ratio of products of k-dimensional and of (k − 1)-dimensional multivariate normal integrals. Good numerical methods for calculating multivariate normal integrals exist for small dimensions (Genz and Bretz, 2009). We will illustrate using k = 2 and k = 3 for real data in Section 4.
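A hedged sketch of the order 2 procedure, with SciPy's bivariate normal CDF and a plain bisection; the function names are ours, and the rectangle probability is obtained by inclusion-exclusion on the CDF:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def rect_prob(c, rho):
    """P(|X| < c, |Y| < c) for a standard bivariate normal with correlation rho."""
    F = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]]).cdf
    return F([c, c]) - F([c, -c]) - F([-c, c]) + F([-c, -c])

def gamma2(alpha_loc, r):
    """Order 2 product-type approximation; r[j] = Corr(T_{j+1}, T_{j+2})."""
    c = norm.isf(alpha_loc / 2.0)            # c = -Phi^{-1}(alpha_loc / 2)
    m = len(r) + 1
    pairs = np.prod([rect_prob(c, rho) for rho in r])
    return pairs / (1.0 - alpha_loc) ** (m - 2)

def alpha_loc_order2(alpha, r, tol=1e-8):
    """Solve 1 - gamma2(alpha_loc) = alpha by bisection (gamma2 is decreasing)."""
    lo, hi = alpha / (len(r) + 1), alpha     # Bonferroni and single-test brackets
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 1.0 - gamma2(mid, r) > alpha:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For r identically zero this reproduces the Šidák level 1 − (1 − α)^{1/m}, while positive correlations raise α_loc above it.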
The procedure to find α_loc does not depend on the exact form of the test statistic, only that the vector (T_1, ..., T_m) of test statistics is asymptotically standard multivariate normal under the complete null hypothesis and that |T_j| ≥ c leads to rejection. In particular, (9) is identical to what was found by Moskvina and Schmidt (2008) for an allelic test and correlations given by linkage disequilibria.
In practice, instead of calculating α_loc, it may be preferable to calculate FWER-adjusted p-values: replace α_loc with p, the unadjusted p-value for an individual test, in the calculation of γ_k. Then 1 − γ_k is an FWER-adjusted p-value for the test, in the sense that if 1 − γ_k ≤ α (rejection based on the adjusted p-value), then p ≤ α_loc (rejection based on the local significance level).
FWER control with independent blocks
Genetic markers are distributed along the chromosomes and a common assumption is independence of genetic markers from different chromosomes.
As we have seen in Section 2.3, if the genetic markers are independent and no environmental covariates that are correlated with the genetic markers are included, the score test statistics for these markers would also be independent. Within a chromosome, genetic markers can belong to different haplotype blocks, being highly correlated within a block and independent or nearly independent between blocks (Griffiths et al., 2002).
Assume that the m markers to be tested, and thus {O_1, ..., O_m}, can be partitioned into b mutually independent blocks ordered along the genome. Let γ_k^(l) be the kth order approximation given by (8) for the intersection of the events belonging to the lth block, 1 ≤ l ≤ b, and let γ_k be the overall kth order approximation. Then it is easy to verify that γ_k = ∏_{l=1}^b γ_k^(l).
The effective number of independent tests
The concept of an effective number of independent tests, M_eff, in multiple testing problems has been described and discussed by many authors, including Nyholt (2004), Gao et al. (2008), Moskvina and Schmidt (2008), Li and Ji (2005), Galwey (2009) and Chen and Liu (2011). All except Moskvina and Schmidt (2008) first estimate M_eff, and then use M_eff in place of m in the Šidák formula to calculate α_loc = 1 − (1 − α)^{1/M_eff}. An alternative formulation using the Bonferroni formula also exists. None of these methods use the concept of FWER in the derivation of M_eff, and there is no mathematical justification that FWER is controlled. All methods start with the linkage disequilibrium or composite linkage disequilibrium matrix, and there is no mention of the dependence of the M_eff estimate on the test statistics used for the hypothesis tests.
The method of Moskvina and Schmidt (2008) is based on an allelic test and controls the FWER using second order intersection approximations. As for our method, the main output of their method is an estimate of α_loc. The above Šidák formula can then be used to define M_eff = ln(1 − α)/ln(1 − α_loc). Note that M_eff depends on both α_loc and the FWER threshold α. We will not consider M_eff further in this article.

Real data

The TOP data set is a case-control GWA data set, in which case status is schizophrenia or bipolar disorder. The data set was collected with the aim to detect single-nucleotide polymorphisms (SNPs) associated with schizophrenia or bipolar disorder (Athanasiu et al., 2010; Djurovic et al., 2010). The preprocessed TOP GWA data contain genetic information on 672972 SNPs (Affymetrix Genome-Wide Human SNP Array 6.0) for 1148 cases and 420 controls. Our dataset included individuals sampled until March 2013, and therefore the sample size is larger than in the cited papers. Preprocessing of the data was done as described in Athanasiu et al. (2010) and Djurovic et al. (2010).
Genotype-phenotype association was assessed by fitting a logistic regression without any environmental covariates, so that score test correlations equal genotype correlations (Section 2.3.1).
The VO2-max data set comes from a cross-sectional GWA study (Aspenes et al., 2011; Loe et al., 2013), in which the aim was to find SNPs associated with maximum oxygen uptake. The preprocessed VO2-max GWA data consist of 123497 SNPs (Illumina Cardio-MetaboChip; Moore et al., 2012) for 2802 individuals. The VO2-max data were analysed using a normal linear regression model, including age, sex and physical activity score as covariates. For both datasets, some genotype data were missing. In the TOP data, mean imputation was done for 0.04% of the genotypes, and in the VO2-max data for 0.7% of the genotypes.
For the VO2-max data, the local significance level controlling the FWER at level 0.05 was lowest for the Bonferroni method, slightly higher for the order 1 approximation (the Šidák method), and further increasing through the order 2 and 3 FWER approximations (Table 1).
For the TOP data, since no environmental covariates are included, permutation of the binary response vector is feasible (the exchangeability assumption is satisfied), and the maxT method can be used to estimate the local significance level controlling FWER at level 0.05. Permutation of the responses, followed by calculation of the maximal score test statistics over the whole genome, is a time consuming task, and we will only present results on two of the smallest chromosomes (chromosomes 21 and 22). The local significance level controlling the FWER at level 0.05 was, as for the VO2-max data, lowest for the Bonferroni method and increasing through the order 1-3 FWER approximations (Table 2). The highest level was obtained for the maxT method, and also the lower bound of the 95% confidence interval for the α_loc of maxT was greater than the order 3 FWER approximation. On a 4 × 6-core Xeon 2.67 GHz computer (Intel CPU) running Linux (Ubuntu 14.0) using one thread, the analyses on chromosome 22 took 85 hours for maxT, 20 minutes for order 3 FWER and 10 seconds for order 2 FWER approximation.

(Figure 1 caption: smoothed frequency distributions of estimated correlations between neighbouring SNPs for the TOP data (left) and VO2-max data (right). The plots are logspline density estimates (Stone et al., 1997), implemented by the R logspline package (Kooperberg, 2016).)
Smoothed frequency distributions of the estimated correlations between neighbouring SNPs along chromosomes are very similar across chromosomes (Figure 1), and therefore, we would expect that the trends for chromosome 21 and 22 can be extended to the other chromosomes and to the whole genome.
Correlation structure and local significance level
Consider 100 markers and a multivariate normal test statistic T having an AR1 correlation structure; that is, all entries on the main diagonal of the 100 × 100 correlation matrix are equal to 1, the sub- and superdiagonal entries are ρ, the next diagonals ρ², and so on. We investigated the effect of positive ρ on the local significance level α_loc found by order 1-4 FWER approximations to control FWER at the 0.05 level. Also, the "true" α_loc was calculated without approximation (that is, based on γ_100; see Section 3.2), using the pmvnorm function of the R (R Core Team, 2015) package mvtnorm (Genz et al., 2016) with the Genz-Bretz algorithm (Genz, 1992, 1993; Genz and Bretz, 2002). The pmvnorm function can calculate multivariate normal probabilities with some accuracy for dimensions up to 1000.
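A simulation sketch (our own choices of m, ρ and sample size, with Monte Carlo in place of pmvnorm) illustrating why positive AR1 dependence makes the Šidák cut-off conservative:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
m, rho, alpha = 10, 0.8, 0.05
idx = np.arange(m)
R = rho ** np.abs(np.subtract.outer(idx, idx))   # AR1 correlation matrix

# Sidak cut-off, exact if the T_j were independent
alpha_sidak = 1.0 - (1.0 - alpha) ** (1.0 / m)
c = norm.isf(alpha_sidak / 2.0)

# Monte Carlo estimate of gamma_m = P(|T_1| < c, ..., |T_m| < c)
L = np.linalg.cholesky(R)
T = rng.standard_normal((200_000, m)) @ L.T      # draws with correlation R
gamma_m = np.mean(np.abs(T).max(axis=1) < c)
```

Under independence gamma_m would be 1 − α = 0.95; the positive dependence pushes it above that, so the Šidák α_loc can be raised while still keeping FWER ≤ α, which is exactly what the order k approximations exploit.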
The inverse of an AR1 correlation matrix contains only negative off-diagonal entries, which ensures a property called MTP_2 (Karlin and Rinott, 1981), which in turn implies that the product-type approximations γ_k of Section 3.2 are non-decreasing in k (Glaz and Johnson, 1984), making the α_loc of the order k FWER approximations non-decreasing in k.
The effect of ρ on α_loc was small for ρ < 0.4 (Figure 2), so for the 100 markers considered, there would be no gain in using FWER approximation, or even the true joint distribution of T, instead of Šidák in this case. For larger ρ, the order 2 FWER approximation provides an improvement compared to Šidák. The increase in α_loc from Šidák to the order 2 FWER approximation was greater than the differences between higher orders.
To assess the order k FWER approximation method for a more realistic correlation structure, we considered the empirical correlation matrix for the first 1000 markers on chromosome 22 of the TOP data. The order 1-4 approximations to control FWER at the 0.05 level gave an α_loc of 5.1 (Šidák), 5.8, 6.2 and 6.4 × 10^{-5}, respectively, whereas the α_loc calculated without approximation using the Genz-Bretz algorithm was 7.3 × 10^{-5}.
Discussion
We have presented the order k FWER approximation method for estimating the local significance level α_loc used to control FWER in a GWA study. Our method takes the estimated correlation structure between the test statistics into account, and is applicable when environmental covariates are present. The relation between the phenotype response and the genetic and environmental covariates can be modelled by any generalized linear model (using the canonical link); in particular, both models with discrete and models with continuous phenotypes are allowed. We have applied the method to common genetic variants, but it can also be used for rare variants. However, since rare variants are less correlated than common variants, we expect the increase in α_loc from the Šidák method to be smaller than when analyzing common variants. The order k FWER approximation is based on conditioning on the previous k − 1 neighbouring markers along the chromosome. A sufficient condition to have non-decreasing local significance levels when the order of the FWER approximation increases from 1 to k, and for the order k approximation to give valid FWER control, is that the test statistic has the MSM_k property. Even MSM_2 and MSM_3 are difficult to verify for our test statistic with GWA data, but it is reasonable to assume that they are satisfied (Section 3.2).
Population substructure can be associated with both the genotype and phenotype and is therefore a possible confounding factor in GWA studies. Population substructure can be adjusted for in the analysis by using principal components of the covariance matrix of the individuals (Price et al., 2006) as covariates. In both the TOP data and the VO2-max data, related individuals were removed in the preprocessing of the data, and no adjustment for population structure was done in our analysis.
The AR1 correlation structure (Section 4.2) might not be a realistic model for genotype correlations, but the calculations nevertheless show potential for a significant improvement over the Šidák method by applying the fast order 2 FWER approximation. Also, there is potential to get quite close to the local significance level given by the full joint distribution of the test statistic vector by using order 3 or 4 approximation. Calculations using the more realistic empirical correlation matrix of part of chromosome 22 of the TOP study confirm this impression.
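The AR1 calculation can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the function names, the Simpson-rule integrator and the bisection solver are our own. For m markers with a stationary AR1 neighbour correlation ρ, the order 2 approximation replaces the full joint probability by P(A_1) times a product of conditional probabilities P(A_i | A_{i-1}), each obtained from a bivariate normal rectangle probability; the threshold c is then found by bisection so that the approximate FWER equals 0.05.

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def joint_prob(c, rho, n=400):
    """P(|X| <= c, |Y| <= c) for a bivariate normal with correlation rho,
    computed by composite Simpson integration over x of
    phi(x) * P(-c <= Y <= c | X = x)."""
    if abs(rho) < 1e-12:
        p = Phi(c) - Phi(-c)
        return p * p
    s = math.sqrt(1.0 - rho * rho)
    h = 2.0 * c / n  # n must be even
    total = 0.0
    for i in range(n + 1):
        x = -c + i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 == 1 else 2.0)
        cond = Phi((c - rho * x) / s) - Phi((-c - rho * x) / s)
        total += w * phi(x) * cond
    return total * h / 3.0

def fwer_order2(c, rho, m):
    """Order 2 approximation: FWER = 1 - P(A_1) * prod P(A_i | A_{i-1})."""
    p1 = Phi(c) - Phi(-c)
    p2 = joint_prob(c, rho)
    return 1.0 - p1 * (p2 / p1) ** (m - 1)

def alpha_loc_order2(m, rho, fwer=0.05):
    """Bisection on the threshold c; returns the local significance level."""
    lo, hi = 1.0, 10.0  # fwer_order2 is decreasing in c on this bracket
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if fwer_order2(mid, rho, m) > fwer:
            lo = mid
        else:
            hi = mid
    c = 0.5 * (lo + hi)
    return 2.0 * (1.0 - Phi(c))

sidak = 1.0 - 0.95 ** (1.0 / 100)          # order 1 for m = 100, ~5.13e-4
alpha2 = alpha_loc_order2(100, 0.8)        # order 2, rho = 0.8: larger than Sidak
```

At ρ = 0 the conditional probabilities collapse to the marginals and the routine reproduces the Šidák level exactly, which serves as a sanity check; for ρ > 0 it returns a strictly larger α loc, matching the qualitative behaviour of the curves in Figure 2.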
The maxT method (Section 3.6) of Westfall and Young (1993) may give higher power than FWER approximation (Table 2). However, there is no general way of including environmental covariates using that method (Section 3.6). Also, the computing time is much larger than for lower-order FWER approximation (Section 4.1), and the α loc estimate would likely differ if a new set of permutations were made (see confidence limits of Table 2).
Another alternative is parametric bootstrap methods (Seaman and Müller-Myhsok, 2005), which could be used to estimate the local significance level when the exchangeability assumption is not satisfied. It would be an efficient method, but to our knowledge it has not been proven that parametric bootstrap will control the overall error rate, since nuisance parameters need to be estimated. Conneely and Boehnke (2007) introduced a method for multiple testing correction for GLMs for multiple responses (traits) based on the estimated correlation matrix of the score vector. The focus of the method is to calculate FWER-adjusted p-values based on the multivariate integral arising from (7). Currently, this integral can be computed numerically with some accuracy for dimensions smaller than or equal to 1000 using the pmvnorm function of the R package mvtnorm (see Section 4.2 for details and references). Thus, the method of Conneely and Boehnke is not applicable for larger problems, e.g. more than m = 1000 hypothesis tests.
For our order k FWER approximation method we have used standard R functions to compute the second order approximation given by (9). For orders 3, 4 and 5 we have used the above-mentioned function pmvnorm, specifying the Miwa algorithm (Miwa et al., 2003) instead of the default Genz-Bretz algorithm. The Miwa algorithm can be used for small dimensions, and is deterministic, whereas the Genz-Bretz algorithm includes simulations that lead to inaccuracies, which accumulate to an intolerable level when used for the large number of factors in (8). The research into better and faster integration of multivariate normal densities is ongoing, and Botev (2016) provides an interesting new approach, applicable for dimensions smaller than or equal to 100. This will enable our order k FWER approximation method to be applied with larger values of k than what has been presented here.
Conclusions
We have presented a new method for controlling the FWER for GWA data. The order k FWER approximation method can be used for generalized linear models and includes adjustment for environmental covariates, possibly confounding, like population substructure or sex. We have applied the FWER approximation method to GWA data, and shown that our method is a powerful alternative to the Bonferroni and Šidák methods, especially in situations where permutation methods cannot be used (exchangeability assumption not satisfied).
The method provides a local significance level, α loc, for the individual tests, meaning that the null hypothesis of no association between phenotype and genetic marker should be rejected if the (unadjusted) p-value of a test is less than α loc. We found a substantial increase in α loc already at the order 2 approximation, compared to the α loc produced by the well-known Bonferroni and Šidák methods, methods that do not take the correlation structure between the test statistics of the markers into account (Šidák assumes independence, but that could be considered worst-case for GWA data).
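For concreteness, the two reference methods mentioned here can be computed directly. This is a trivial sketch with our own function names; the SNP count is taken from the Table 2 caption:

```python
def bonferroni_alpha(m, fwer=0.05):
    # Bonferroni: split the FWER evenly across the m tests.
    return fwer / m

def sidak_alpha(m, fwer=0.05):
    # Sidak: exact under independence, solving 1 - (1 - a)^m = FWER.
    return 1.0 - (1.0 - fwer) ** (1.0 / m)

m = 9802  # SNPs on chromosome 21 of the TOP data (Table 2)
a_bonf = bonferroni_alpha(m)   # ~5.1e-6
a_sidak = sidak_alpha(m)
assert a_sidak > a_bonf        # Sidak is always slightly less conservative
```

Both levels depend only on m, which is why any method that exploits the correlation structure (order k ≥ 2) can only raise α loc relative to these baselines.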
Figure 1: Smoothed frequency distributions of the absolute value of the estimated genotype correlations between neighbouring SNPs on each chromosome, one line per chromosome. TOP data (left) and VO 2 -max data (right). The plots are logspline density estimates (Stone et al., 1997), implemented by the R logspline package (Kooperberg, 2016).
Figure 2: Local significance level α loc for order 1-4 FWER approximations and α loc based on the true joint distribution of the test statistic, as a function of the parameter ρ of an AR1 correlation matrix for 100 markers. The horizontal line corresponds to the Šidák correction (order 1 FWER approximation); α loc then increases with the order of the approximation (orders 2-4; the three curves in the middle). The uppermost curve shows α loc based on the true joint distribution.
Table 1: Local significance level α loc calculated by the Bonferroni method and by order 1-3 FWER approximations for the TOP and VO 2 -max data, controlling the FWER at level 0.05, and the ratio of α loc to the Bonferroni α loc. The local significance level depends mainly on the correlations between the score test statistics under study, not on the sample size.
Table 2: Local significance level α loc for the TOP data calculated by the Bonferroni method and by order 1-3 FWER approximations, and estimated by the maxT method, controlling FWER at level 0.05, and the ratio of α loc to the Bonferroni α loc. Chromosome 21 contained 9802 SNPs and chromosome 22 contained 8970 SNPs. The number of permutations for the maxT method was 500000. The lower and upper values for maxT are bounds of a 95% confidence interval for α loc (see Section 3.6).
How to use the atmospheric environmental self-cleaning capability effectively in China
Based on an analysis of the effectiveness and disadvantages of artificial intervention in the treatment of air pollution, this paper briefly describes the importance of applying the atmospheric environment's self-cleaning capability to the comprehensive prevention and control of environmental pollution. Starting from the concept and connotation of atmospheric environmental self-cleaning capability, and with reference to the main ways this capability is used abroad, the paper puts forward suggestions on how to use it effectively in China: strengthening domestic research on atmospheric environmental self-cleaning capability; making full use of the dilution and purification effect of wind; using the purification effect of dry and wet deposition effectively; reasonably exploiting purification through human-oriented design; speeding up the adjustment of the industrial and energy structure in line with regional self-cleaning capacity; and establishing a comprehensive index system of self-purification capability with key indicators. It is hoped that this will provide a reference for China in achieving its air pollution control goals.
*Corresponding author's e-mail: zhoujingkun@heuet.edu.cn
1. Introduction
For a long time, influenced by the concept of natural rights and the belief that man can conquer nature, western countries developed a practice of human intervention over nature in social governance. On the one hand, western society hopes that the government can delegate power to maximize the role of the market economy; on the other hand, it hopes that the government will take a strong posture on public issues such as air pollution control. However, the profit-oriented market economy drives economic actors to blindly pursue the maximization of economic benefits, which leads to a large number of environmental pollution problems in the course of economic activity and seriously damages public health and social development. Air pollution has increasingly become a public issue of close concern all over the world. Although western governments have achieved a great deal by adopting active artificial intervention to control air pollution, countries have at the same time had to face difficulties such as the high cost of treatment and the tendency to treat the symptoms rather than the root causes. As these governance dilemmas became more prominent, western countries increasingly realized that human society and nature are interdependent systems, and that it is wrong and one-sided to separate air pollution governance from natural laws. Nature has a purification function, through processes such as atmospheric circulation and plant absorption, which can purify air pollution to a certain extent. Bringing the self-cleaning capability of the atmosphere into the air pollution prevention and control system is not only a matter of respecting and effectively using natural laws, but also a beneficial supplement to traditional air pollution intervention measures, reducing pollution control costs and addressing both the symptoms and the root causes.
2. Main practices of atmospheric environmental self-cleaning capability in foreign countries
The atmospheric environmental self-cleaning capability mainly refers to the ability of atmospheric environmental elements to reduce the concentration, toxicity or disappearance of pollutants entering the atmosphere through complex physical, chemical and biochemical processes. At present, the application of atmospheric environmental self-cleaning capability in foreign countries is usually based on the whole region, and its goal is to ensure that the total amount of atmospheric pollutants in a specific region within a certain period of time does not exceed the maximum capacity of natural purification capacity of atmospheric environment. Foreign related research and practice show that the concentration of air pollutants can be reduced to a harmless level through the self-cleaning capability of atmospheric environment, so as to achieve the solution of air pollution. Therefore, in practice and exploration, countries have gradually concluded that the fundamental core idea of air pollution prevention and control should be to re-balance the atmospheric environment system, that is, air pollutants match the self-cleaning capability of atmospheric environment, and then atmospheric environmental self-cleaning capability gradually draws the attention of all countries. In terms of the dilution effect of wind on air pollutants, the air pollution control policies stipulated by arbitration courts of the Canadian and American governments take the measured values of atmospheric turbulence, wind speed and direction as the basis [1] and incorporate them into the influencing factors of government policy making. Biofiltration is widely used in European countries to achieve biodegradation of air pollutants through biological processes such as vegetation growth absorption and electrostatic properties [2]. 
Regarding the application of atmospheric dry deposition, the governments of some countries, supported by the development of Internet information technology, have taken the lead in setting up specialized departments and institutions for monitoring atmospheric dry deposition. The National Dry Deposition Network (NDDN) was established in 1986 to document the magnitude, spatial variability, and trends of dry deposition across the United States. Currently, the network operates as an integral part of the Clean Air Status and Trends Network (CASTNET). In addition to actively using existing atmospheric environmental conditions, foreign countries have also carried out a wealth of practices to promote the development of atmospheric environmental self-cleaning capability, such as building industrial symbiosis systems, bringing atmospheric environmental self-cleaning capability into closed industrial production cycles to realize circular industrial production (Sterr & Ott, 2004) [3], and integrating ecological and urban design into air pollution prevention and control. Representative eco-city index systems [4] have been formed, including the Hammarby eco-city index system, the Masdar eco-city index system, the European green city index system, the Asian green city index system, the environmental sustainable development index system of Yale University and Columbia University, and the sustainable development index system of Scotland. Generally speaking, the main methods applied abroad include the dilution effect of wind, the wet deposition of rain and snow, dry deposition on plants and buildings, the purification effects of temperature and humidity, the purification effects of natural geomorphic factors, the regulation of the atmospheric water cycle by artificial wetlands, the boost given by eco-city design, and the scientific planning or regulation of pollution-intensive industries.
3. How to use the atmospheric environmental self-cleaning capability effectively in China
The application of atmospheric environmental self-cleaning capability has become relatively mature in foreign countries, and has played an important role in total environmental pollution control and in urban planning and development. To solve the air pollution problem in China, we should explore a road with Chinese characteristics on the basis of learning from the successful experience of foreign countries. Specifically, the application of atmospheric environmental self-cleaning capability in China can be carried out in the following aspects:
3.1. Strengthen atmospheric environmental self-cleaning capability research
In the last century, the western industrial countries followed a path of "pollution first, treatment later", which brought serious air pollution consequences. Serious public air pollution incidents aroused public anger; in Britain, the government was even accused in court of failing to control air pollution. To deal with the public crisis, western governments advocated active human intervention. Furthermore, these artificial intervention means were consistent with the value concept of "man can conquer nature" in western society, so they were highly praised. Facts have proved that a series of human intervention measures did indeed play a very important role in dealing with sudden air pollution disasters and reducing social losses. At the same time, however, traditional human intervention also produces side effects such as high management costs and the difficulty of achieving a radical cure. The problem of air pollution in all countries is no longer simply one of control, but a combination of long-term prevention and control, and effective use of atmospheric environmental self-cleaning capability is the fundamental method to prevent and control air pollution. At present, China faces air pollution problems similar to, or even more complex than, those experienced by western countries, and China has made many achievements in actively absorbing and learning from beneficial foreign experience. However, in order to fundamentally solve and prevent air pollution, it is necessary to strengthen comprehensive research on and application of atmospheric environmental self-cleaning capability, to further incorporate it into the comprehensive system of air pollution control, and to let the development of atmospheric environmental self-cleaning capability and traditional artificial means reinforce each other.
3.2. Make full use of the dilution and purification effect of wind
At present, the application of wind in China focuses more on the prevention of typhoons and other meteorological disasters, and there are relatively few studies and practices on how to apply the regular function of wind to air pollution purification. One reason is that our country was influenced by the early experience of western industrial countries, which half a century ago emphasized purely human intervention in air pollution control. In addition, the situation of air pollution in our country has been more severe in recent years, and society urgently expects the government to fight air pollution effectively; as a result, "waiting for the wind" was once used as a satirical name for the government's inability to control air pollution effectively. At present, on the basis of increasing publicity and popularization of the role of wind in purifying air pollution, we should speed up related research in the field of meteorology, especially on surface winds. Meteorological factors can greatly change the original concentration of pollutants discharged from pollution sources once they enter the atmospheric environment, and this effect is direct and obvious. Therefore, on the one hand, wind-related meteorological factors such as wind and turbulent motion, atmospheric stability, and mixing layer height should be the focus of research on developing and using wind to purify air pollution in China. The dilution or accumulation of air pollutants under windy conditions can be further studied according to wind speed, wind direction and wind frequency; for example, studies of the surface-layer wind can examine the seasonal variation of wind speed with height and the monthly patterns of high wind speeds near and within the boundary layer.
We should also strengthen cross-disciplinary studies and applications spanning meteorology, environmental science and architecture, to provide a scientific basis for regional air pollution control and environmental planning. On the other hand, it is pointed out in the report of the 19th National Congress of the Communist Party of China that "the urban pattern of coordinated development of large, medium and small cities and small towns should be built with urban agglomerations as the main body to promote coordinated regional development". This indicates that China's strategic planning unit has moved from single cities and provinces to world-class urban agglomerations. In the layout design of large urban agglomerations, the development and utilization of atmospheric environmental self-cleaning capability should be closely integrated, for example by strengthening the design and layout of ventilation corridors within cities, between cities and between large areas. At the same time, the construction of shelterbelts in desert areas should be strengthened, so as to make effective use of the dilution and purification effect of wind.
3.3. Effective use of dry and wet deposition purification
For a long time, more attention has been paid to soil pollution caused by atmospheric deposition, while the cleaning effect of atmospheric dry and wet deposition on dust in the environment has received little attention. Yet compared with heavy metals and other pollutants in the soil, pollutants adsorbed in the atmosphere are more likely to be taken in by the human body through hand-to-mouth contact and direct inhalation, and so are more harmful to human health and the environment. Bringing dry and wet atmospheric deposition into the comprehensive air pollution control system can serve to treat both symptoms and root causes. The purification of atmospheric dust by deposition includes two types, dry deposition and wet deposition; however, in China there are problems such as insufficient understanding of the dry deposition of atmospheric dust, and research on the dry deposition process is obviously weaker than that on wet deposition. China is a vast country with great differences in climatic conditions and landforms, so it is necessary to actively explore dry and wet deposition models suitable for different regions. On the one hand, we should increase research on and application of dry dust removal, covering both natural dust and dust storms: the retention and dilution of atmospheric dust by plants and buildings should be highlighted, the number and area of vegetation such as urban forest parks, street green spaces and attached green spaces increased, and efforts to return farmland to forest and grassland intensified. On the other hand, we should also increase research on and application of wet dust removal, including rain, hail and other forms, highlighting the cleaning effect of precipitation in the form of rainfall and snow.
For example, the pollutant-cleaning effect of different rainfall types, such as topographic rain, frontal rain, convective rain and typhoon rain, can be calculated. According to the situation of water sources, such as river wet seasons, rainwater collection, sewage treatment and the South-to-North Water Diversion, urban reservoirs, lakes, swamps, rivers and other constructed wetlands can be added in areas with lower atmospheric environmental self-cleaning capability, thereby increasing rainfall by changing the regional climate. At the same time, we can learn from the experience of the US federal government in establishing a national dry and wet deposition monitoring center. Given the strong regional mobility of air pollutants, a four-level linked dry and wet deposition monitoring system of "central - provincial - municipal - county" should be explored, applying the air-purifying effect of dry and wet deposition on the basis of real-time data monitoring.
3.4. Reasonable use of human-oriented design for purification
The human design factor refers to maximizing the self-cleaning effect of the atmosphere by means of eco-city construction, the layout of water conservancy measures, and industrial planning and allocation, on the basis of full respect for atmospheric environmental self-cleaning capability. The purification capacity of nature is not independent of human society; it is also influenced by human factors. On the one hand, the purification ability of nature has its inherent laws, which will not be changed by human will; on the other hand, we can give full play to this purification ability through reasonable human design, so as to provide greater benefits for the development of human society. First of all, we should improve research on and application of pollution source distribution and emission measurement. The statistical evaluation of the source distribution of air pollutants should be strengthened and, according to the emission mode of pollutants, the emissions of each pollution source in a specific period of time should be monitored and summarized along three lines: point sources, line sources and non-point sources. Point-source monitoring should focus on high-pollution industries; line sources are fuel-burning motor vehicles and ships; non-point sources include man-made sources such as waste incineration, civil fuel combustion, dust, industrial production and agricultural ammonia emissions, as well as natural sources including volcanic eruptions, forest fires, natural dust, forest plant releases, sea spray and so on. At the same time, the cross-accumulation effect between different pollution sources should be evaluated. At present, China is facing air pollution problems similar to, but more serious than, those of the western developed countries in the last century, manifested in the cross-accumulation of traditional and emerging pollution sources.
Therefore, more factors need to be considered when applying atmospheric environmental self-cleaning capability: it is necessary not only to strengthen research on the different effects of various pollution sources, but also to improve research on the cumulative effects of pollution sources and the corresponding countermeasures. Secondly, we should actively promote eco-city design. Some explorations have been made in Chinese cities, but the overall effect is still not obvious, and the problem of formalism remains prominent. In view of such problems, we can actively construct an evaluation system for eco-city design, and strengthen the guiding role of evaluation indicators such as forest city, low-carbon city, green space and green building in urban construction planning. At the same time, we should speed up research on and application of urban ventilation corridors, and actively move from overall urban climate models to small-scale architectural climate models, so as to closely integrate urban ventilation corridors with urban planning and the topography of each region. For example, according to the seasonal patterns of pollutant formation, the total atmospheric capacity of different months of the year can be predicted, so as to develop more accurate policies for pollutant emission permits. Another example is to simulate the transport and diffusion of pollutants in a region according to the local patterns of wind direction and speed, so as to formulate industrial layout planning and related management policies. Through the organic combination of atmospheric environmental self-cleaning capability and traditional governance means, the goal of combining prevention with control and treating both symptoms and root causes can be better achieved.
3.5. Speed up the adjustment of industrial and energy structure according to the characteristics of regional self-cleaning capability

Regional self-cleaning capability has certain capacity limitations in specific space and time, so we should not only fully develop and utilize atmospheric environmental self-cleaning capability, but also ensure that it does not exceed the regional purification capacity, so as to ensure the sustainability of atmospheric environmental self-cleaning capability.
Therefore, in the future, on the one hand, we should speed up the study of the overall capacity of atmospheric environmental self-cleaning capability, especially in combination with regional industrial distribution and energy structure. On the other hand, under the guidance of the concept of industrial symbiosis, we should accelerate the adjustment of China's industrial structure and energy structure. Firstly, the gradient transfer of pollution-intensive industries should be carried out to ensure that industrial waste gas does not exceed the atmospheric environmental self-cleaning capability of a specific region. Secondly, emerging environmental protection industries should be cultivated in the region to build a symbiotic industrial system that supports the sustainable development of atmospheric environmental self-cleaning capability. At present, a considerable number of eco-industrial parks have been built in China, and some applicable experience of industrial ecological development has been accumulated. On the basis of summarizing this experience, we should further deepen research on and application of symbiotic industrial systems in combination with regional air purification capacity, industrial circulation and the rational development and utilization of that capacity. For example, China's eco-industrial parks are currently mainly government-led; in the future, we can actively explore building service-oriented eco-industrial parks, which not only conforms to China's concept of building a service-oriented government, but can also promote the optimal allocation of industrial resources under the guidance of the market mechanism.
3.6. Constructing a comprehensive and key index system of atmospheric environmental self-cleaning capability
Evaluation research on atmospheric environmental self-cleaning capability in various countries involves a wide range of disciplines, including meteorology, geography, architecture and ecology. In order to accurately evaluate a region's natural capacity to purify air pollution, in addition to estimating the region's total air capacity, meteorological factors, geographical factors, pollution source distribution and emission factors, as well as the interactions among them, should be included in the evaluation system; the evaluation indices should therefore be considered comprehensively. At the same time, within the evaluation system the various factors play different roles, and the important influencing factors should be highlighted through the weight distribution of the system. Factors such as the dilution of air pollutants by wind, the wet deposition of pollutants by rainfall, the retention and absorption of pollutants by plant cover, and the reasonable guidance of atmospheric pollutant dispersal through urban planning should be set as key evaluation indices.
4. Conclusion
At present, China has entered a new stage of development, and its reform has entered the deep-water zone and critical areas. Improving the atmospheric environment is not only an important driving force for promoting sustainable economic development, but also a prerequisite for ensuring the normal operation of the economy. High-quality economic development must be based on the carrying capacity of resources and the environment. Given the diversity of causes, the complexity of processes and the spillover impact of air pollution, a comprehensive prevention and control approach, composed of the effective use of atmospheric environmental self-cleaning capability together with manual intervention, can to a greater extent treat both the symptoms and the root causes of air pollution. China should speed up research on and application of atmospheric environmental self-cleaning capability, so as to better achieve air pollution control and promote the high-quality development of the economy and society.
Effects of carbohydrate combined with caffeine on repeated sprint cycling and agility performance in female athletes
Background: Caffeine (CAF) has been shown to improve performance during the early phase of repeated sprint exercise; however, some studies show that CAF also increases the magnitude of physical stress, represented by augmented blood lactate, glucose, and cortisol concentrations, during the latter phase of repeated sprint exercise. No studies have investigated the efficacy of combined carbohydrate (CHO) and CAF consumption during repeated sprint exercise (RSE) in female athletes. Thus, the purpose of this study was to investigate the effects of CAF with CHO supplementation on RSE and agility.

Methods: Eleven female athletes completed four experimental trials performed 7 d apart in a double-blind, randomized, and counter-balanced crossover design. Treatments included CAF + PLA (placebo), CAF + CHO, PLA + CHO, and PLA + PLA. Participants ingested capsules containing 6 mg · kg−1 of CAF or PLA 60 min prior to RSE, and 0.8 g · kg−1 of CHO solution or PLA immediately before the RSE, which consisted of ten sets of 5 × 4-s sprints on the cycle ergometer with 20-s active recovery. The agility T-test (AT-test) was performed before and after the RSE. Blood samples were acquired to assess glucose, lactate, testosterone, and cortisol.

Results: During Set 6 of RSE, peak power and mean power were significantly higher in PLA + CHO than in CAF + PLA and PLA + PLA, respectively (p < .05). Total work was significantly increased, by 4.8% and 5.9%, with PLA + CHO compared with CAF + CHO and CAF + PLA during Set 3. PLA + CHO also increased total work more than CAF + PLA and PLA + PLA did during Set 6 (p < .05). No significant differences in AT-test performance either before or after the RSE occurred among treatments (p > .05). Blood lactate and glucose concentrations were significantly higher under CAF + CHO, CAF + PLA, and PLA + CHO versus PLA + PLA (p < .05), but no differences in testosterone or cortisol levels were found (p > .05).
Conclusions Findings indicate that CAF + PLA or CAF + CHO ingestion did not improve repeated sprint performance with short rest intervals or agility. However, CHO ingested immediately prior to exercise provided a small but significant benefit on RSE performance in female athletes.
Background
Most team sports include moderate- to long-duration exercise interspersed with repeated bouts of high-intensity activity as well as periods of low-to-moderate active recovery or passive rest. The work:rest ratio of the team sport athlete is around 1:4.5 [1], and the average number of sprints completed during competition is approximately 20-60, with a typical sprint duration of 2-4 s [2]. Girard et al. [3] reported that intermittent sprint exercise (ISE) differs greatly from repeated sprint exercise (RSE): ISE is characterized by short-duration sprints (≤10-s) interspersed with long recovery periods (60-300-s), whereas RSE involves similar sprint durations (≤10-s) interspersed with insufficient recovery (≤60-s). Gaitanos et al. [4] indicated that the inadequate recovery inherent in RSE (6-s maximal sprints with 30-s rest intervals) may impair sprint performance because of limited adenosine triphosphate (ATP) supply from anaerobic metabolism (glycolysis and phosphocreatine (PCr) resynthesis) during the transient recovery between sprints, together with increased acidosis. Thus, nutritional ingestion strategies are needed to preserve repeated sprint performance in competitive athletes.
It is common practice for team sport athletes to consume carbohydrate (CHO) to improve intermittent exercise capacity [5,6] and endurance performance [7,8], which is thought to occur via central nervous system (CNS) activation and other potential mechanisms such as higher rates of CHO oxidation [9,10]. Another ergogenic aid routinely used by athletes is caffeine (CAF) [11]. Existing data show that CAF supplementation may benefit sprint performance [12,13] and reactive agility performance [14] via various mechanisms [15]. However, one study demonstrated that caffeine was ergolytic for mean power and fatigue index during a high-intensity sprint test when a 24 × 4-s cycling sprint protocol was completed with 20-s, versus 90-s, of active recovery between sprint bouts [16]. Numerous studies have also reported that CAF ingestion has a small or negligible effect on sprint performance [16][17][18] when repeated sprint tests (≤10-s) are interspersed with short rest periods (≤60-s), as well as no effect on reactive agility [19]. Although CAF significantly improved ISE [12,13,20], a number of studies have suggested that CAF doses of 2-6 mg · kg −1 are likely to improve ISE but not RSE performance; in other words, caffeine ingestion may negatively affect repeated sprint performance with short recovery intervals in the later stages of exercise [16,21]. If CAF coingestion could potentiate the benefits of CHO on substrate metabolism and CNS modulation, then CHO + CAF might enhance RSE performance. Some studies have examined changes in metabolism when CAF is coingested with CHO. For example, Yeo et al. [22] found that coingestion of CHO with CAF promoted intestinal glucose absorption, resulting in greater exogenous CHO oxidation than CHO ingestion alone. In addition, intestinal glucose absorption was significantly increased with a carbohydrate-electrolyte solution plus CAF compared with a carbohydrate-electrolyte solution alone [23]. 
Several studies show that combined intake of CHO and CAF may be ergogenic for intermittent sprint performance later in exercise [24][25][26][27] and may lower the rating of perceived exertion (RPE) and fatigue index [28]. However, other studies have reported that ingesting CHO with CAF does not affect time-trial performance [23,29,30]. Thus, further studies are needed to clarify the effects of CHO and CAF coingestion on RSE performance.
Team sports require many skills other than running in a straight line, including brief pauses, cutting actions, and rapid changes of direction and speed, all of which are important elements of agility. Findings from studies on agility performance after ingesting CAF and/or CHO remain controversial. Duvnjak-Zaknich et al. [14] showed that ingesting CAF may benefit reactive agility in trained male athletes, but Lorino et al. [19] indicated that CAF does not improve pro-agility shuttle run performance in young adult males. Roberts et al. [25] investigated the combined effects of CHO and CAF on a sustained high-intensity test of speed and agility in male rugby players, finding that agility performance was not significantly different between trials, although there was a likelihood of a 2% improvement for CHO + CAF over placebo. In female soccer players, Red Bull containing low doses of CAF (80 mg; ~1.3 mg · kg −1 ) and CHO (27 g; ~0.4 g · kg −1 ) did not provide ergogenic effects on repeated agility T-test performance [31]. However, there is limited evidence on the effects of moderate doses of CHO and/or CAF on agility performance in female athletes. It is unclear whether CAF or CHO + CAF supplementation by female athletes, especially in team sports, enhances agility in change-of-direction tasks (e.g., the agility T-test) and in a fatigued condition (e.g., after a long, rather than short, repeated sprint test). Thus, further studies should be conducted to clarify the effects of CAF and/or CHO supplementation on agility performance during various exercise stages.
Although no significant differences were found in salivary testosterone and cortisol concentrations after repeated bouts of supra-maximal exercise in female adolescents [32], ingestion of a moderate dose of CAF might elevate salivary cortisol concentrations [33], and the benefit of caffeine on performance might be counteracted by increases in cortisol and decreases in the testosterone:cortisol ratio [34]. Walker et al. [35] reported that ingesting a placebo or CAF increased cortisol concentration more than ingesting only CHO after a 2-h endurance cycling exercise. CHO could offer some protection against the fall in the testosterone:cortisol ratio during short-term intense exercise training [36]. It is likely that the effects of CHO on the regulation of cortisol release are larger than those of CAF, and may occur by activating the hypothalamic-pituitary-adrenal axis, providing a natural negative-feedback system through the coordination of cortisol, although no effect on hormonal and physiological responses after RSE has been observed [37]. However, it is unclear whether ingesting CAF and/or CHO causes RSE performance changes and hormonal reactions in women.
To date, no study has examined the effect of ingesting caffeine + placebo (CAF + PLA), caffeine + carbohydrate (CAF + CHO), carbohydrate + placebo (PLA + CHO), or placebo + placebo (PLA + PLA) on prolonged repeated sprint ability and agility performance in women playing team sports. Therefore, the primary purpose of this study was to examine the effects of ingesting CAF + PLA, CAF + CHO, PLA + CHO, or PLA + PLA on repeated sprint performance tasks simulating team sports in female athletes. It was hypothesized that (1) CAF + CHO would improve repeated sprint performance and agility more than CAF + PLA and PLA + PLA, and (2) CAF + PLA or CAF + CHO would affect blood metabolism throughout repeated sprint exercise (RSE).
Participants
Eleven trained female athletes (age = 21.3 ± 1.2 yr, height = 164.2 ± 5.7 cm, and body mass = 58.6 ± 7.3 kg), members of Division I collegiate team-sport teams, volunteered to take part in this study. They reported habitual caffeine intakes of 50 to 100 mg · d −1 . All participants were regularly involved in team-sport competition such as basketball or volleyball and engaged in training 12.6 ± 1.2 hours/week. Participants were informed of the experimental procedures and potential risks before providing written informed consent. Prior to a familiarization session replicating the experimental procedure, all participants were screened for medical history and use of legal ergogenic aids; none had taken any medicines (including prescription and over-the-counter medications) or ergogenic aids that may influence multiple sprint performance (e.g., creatine) for at least 3 months prior to the experiment. A comprehensive list of dietary food products and medicines containing caffeine was provided to participants prior to the first familiarization trial. Participants abstained from all foods and liquids containing caffeine for 48 h before the experimental trials, as well as from alcohol and intense exercise for at least 24 h prior to all sessions. In addition, participants completed a questionnaire inquiring whether they experienced nausea, vomiting, muscle cramps, flatulence, diarrhea, anxiety, quivering, headaches, or other symptoms in order to evaluate any side effects experienced prior to exercise testing. The investigation was approved by the University Institutional Review Board.
Experimental design
Each participant visited the laboratory on five separate occasions. The first visit included preliminary testing to familiarize participants with the procedures and to minimize any learning effects. Once familiar with the protocol, each participant undertook four experimental trials separated by at least 7 d. Treatment order was randomly assigned and counterbalanced using a Latin square design and was administered in a double-blind fashion; both participants and researchers were blind to treatment assignment. After ingestion and a dynamic warm-up, participants completed the agility T-test (AT-test) and the RSE. The AT-test used in this study was similar to that of a previous study, which showed the test to have high reliability and validity [38]. During exercise, heart rate (HR) was regularly assessed with a Polar heart rate monitor (Polar S810i™, Polar Electro Inc, Finland) and RPE was measured using the Borg 6-20 RPE scale [39]. Participants were familiarized with the RPE scale during the preliminary test. Blood samples were obtained throughout exercise (Figure 1).
Treatment ingestion
Participants completed four experimental trials: CAF + PLA, CAF + CHO, PLA + CHO, and PLA + PLA. Participants arrived at the laboratory at their scheduled times; within subjects, the time of day remained consistent across trials to avoid any influence of circadian variance. On arrival at the laboratory, participants were provided with a prepacked meal with an energy content of 492.75 kcal, composed of 64% carbohydrate, 23% fat, and 13% protein. At 7:00 AM, after consuming their prepacked breakfast, participants ingested opaque gelatin capsules containing either 6 mg · kg −1 of CAF (Sigma-Aldrich, Sydney, Australia) or an equal dosage of placebo (cellulose, Holy Food, Taoyuan, Taiwan), along with 200 ml of water [16]. Participants then rested in a quiet room for 50 min prior to ingesting the carbohydrate solution or placebo. Before commencing the agility and repeated sprint exercise, participants were asked to describe any onset of symptoms or side effects from caffeine ingestion; thereafter, participants consumed either a CHO solution containing 0.8 g · kg −1 body mass dextrose (Roquette, France) in 500 ml of orange-flavored water or a placebo consisting of low-calorie artificial sweetener (Prinsen BV, Helmond, The Netherlands) in 500 ml of flavored water, and then consumed 300-500 ml of water throughout the testing. The appearance and taste of the solutions were similar among treatments.
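The per-participant dosing above (6 mg · kg−1 caffeine, 0.8 g · kg−1 dextrose) is simple arithmetic, sketched below for illustration. The function name and the example body mass (the study's reported mean of 58.6 kg) are illustrative choices, not part of the published protocol.

```python
def doses(body_mass_kg: float) -> dict:
    """Illustrative helper: per-participant doses for this protocol."""
    return {
        "caffeine_mg": 6.0 * body_mass_kg,  # 6 mg/kg capsule, 60 min pre-exercise
        "dextrose_g": 0.8 * body_mass_kg,   # 0.8 g/kg solution, immediately pre-RSE
    }

# Example for the mean body mass reported in this study (58.6 kg):
d = doses(58.6)
print(d)  # caffeine ≈ 351.6 mg, dextrose ≈ 46.9 g
```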
Agility T-test (AT-test)
The AT-test, adapted from a previous study [38], was performed before and after the RSE. This protocol has been used to assess the agility of athletes participating in team-sport exercise [40,41]. It is a highly reliable measure of leg speed, leg power, and agility [38]. The agility test requires participants to run forward, laterally, and backward as quickly as possible, over a total distance of 40 yards (36.56 m). Each trial was timed from start to completion using an electronic timing system (Smart-Speed, Fusion Sport, Australia). Speed decrement of the AT-test was calculated based on a previous study [42]. The intra-class correlation coefficient (ICC, 0.87-0.98) and the coefficient of variation (CV, 4.3%-4.6%), calculated from the data of the familiarization trial and the first bout of the AT-test in the PLA + PLA trial, indicated good reliability for the AT-test.
Repeated sprint test
Participants were weighed to determine the accurate load for the RSE, which was performed on a cycle ergometer (Avantronic Cyclus II, h/p Cosmos®, Germany). The predetermined resistance was calculated from body mass by the ergometer's internal software using the following equation: 0.7 × body mass in kg / 0.173. Participants then performed a standardized warm-up followed by the first AT-test. A brief unloaded sprint allowed participants to prepare for the subsequent RSE. Participants were required to stay seated on the cycle ergometer for the entire duration of the RSE to limit the recruitment of other muscle groups. During each sprint, participants were encouraged to cycle maximally for each 4-s bout and pedal as fast as possible against the given load. The protocol for the RSE consisted of ten sets of repeated sprints with 2-min recovery at 50 watts at a self-selected speed ( Figure 1). Each set was composed of 5 × 4-s sprints with a 20-s active recovery (60-70 rpm, 50 watts) performed between sprints. This test was used in a previous study [16] and is designed to activate glycolysis and maximize PCr degradation [2,4]. Participants were informed of the end of the recovery phase at least 5 s prior to the beginning of the next sprint. Participants were given consistent verbal encouragement during each sprint, but no performance information was provided. Power output data were recorded during each sprint using the cycle ergometer software. After completing the protocol, all data were transferred to a personal computer to calculate peak power, mean power, total work, and sprint decrement (equation 1) as used in previous studies [3,42]. The ICC and CV for peak power during the RSE were 0.86-0.99 and 5.6%-6.4%, respectively.
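A minimal sketch of how the outcome measures above (peak power, mean power, total work, sprint decrement) can be computed from per-sprint mean power values. The decrement formula used here, (1 − total work / (best work × number of sprints)) × 100, is the commonly cited percentage-decrement score and is an assumption on our part (the paper's own equation 1 is not reproduced in this text); the power values below are invented for illustration.

```python
def sprint_metrics(powers_w, sprint_dur_s=4.0):
    """Summarize one set of sprints from per-sprint mean power (W).

    Assumes the commonly used decrement score:
    Sdec% = (1 - total_work / (best_work * n)) * 100.
    """
    works = [p * sprint_dur_s for p in powers_w]  # work per sprint (J)
    total_work = sum(works)
    n = len(works)
    sdec = (1.0 - total_work / (max(works) * n)) * 100.0
    return {
        "peak_power_w": max(powers_w),
        "mean_power_w": sum(powers_w) / n,
        "total_work_j": total_work,
        "sprint_decrement_pct": sdec,
    }

# One illustrative set of 5 x 4-s sprints (invented values, not study data):
m = sprint_metrics([820, 800, 770, 740, 700])
```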
Blood analysis
Blood samples (5 mL) were drawn with an indwelling venous cannula following treatment ingestion and immediately after exercise testing. Each sample was placed in a tube and centrifuged at 3000 rpm for 15 min. The resultant serum was stored at −80°C for subsequent analysis of cortisol and testosterone concentrations using radioimmunoassay (Wizard 2 Automatic Gamma Counter, PerkinElmer Corp, USA), with a CV of less than 5% according to the LEZEN reference laboratory (Taipei, Taiwan). In addition, a 20 μl blood sample for analyzing blood glucose and lactate concentrations was collected from the earlobe immediately before the RSE (i.e., pre-test, the time point 10 min after drinking the CHO/PLA beverage) and after Sets 1, 5, 8, and 10 of the RSE. To assess changes in blood glucose, a 10 μl earlobe blood sample was analyzed with a Bayer analyzer (Ascensia Breeze, Bayer HealthCare LLC, USA), and the remaining blood sample was used to obtain blood lactate concentration using methods described previously [16].
Statistical analyses
Data are reported as mean ± standard deviation and were analyzed with SPSS for Windows (version 17.0, SPSS, Inc., Chicago IL, USA). Dependent variables (peak power, mean power, total work, and RPE) were analyzed using a 10 (set number) × 4 (treatment: CAF + PLA, CAF + CHO, PLA + CHO, and PLA + PLA) two-way repeated-measures analysis of variance (ANOVA).
Changes in concentrations of lactate, glucose, cortisol, and testosterone as well as agility performance between treatments and over time were also analyzed with two-way repeated-measures ANOVA. One-way ANOVA was performed to study differences in performance decrement of the AT-test and RSE between treatments. To minimize violation of the assumption of homogeneity of variance, the Greenhouse-Geisser correction was used when sphericity was violated. When differences were identified by ANOVA, the Bonferroni adjustment was used to ascertain where the differences lay. Statistical significance was set at p ≤ .05 for all analyses. The ICC and CV were computed from the data of the familiarization and PLA + PLA trials to determine the test-retest reliability of the RSE and AT-test.
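The 10 × 4 two-way repeated-measures ANOVA described above can be reproduced with standard tools. The sketch below runs it on synthetic, invented data (not the study's data) using statsmodels' AnovaRM, which handles fully within-subjects designs such as this treatment × set layout.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic balanced data: 11 subjects x 4 treatments x 10 sets (invented values)
rng = np.random.default_rng(0)
rows = []
for subj in range(11):
    for trt in ["CAF+PLA", "CAF+CHO", "PLA+CHO", "PLA+PLA"]:
        for s in range(1, 11):
            # Power declines across sets, plus within-subject noise
            rows.append({"subject": subj, "treatment": trt, "set": s,
                         "peak_power": 800 - 5 * s + rng.normal(0, 20)})
df = pd.DataFrame(rows)

# Two-way (treatment x set) repeated-measures ANOVA, as in the paper
res = AnovaRM(df, depvar="peak_power", subject="subject",
              within=["treatment", "set"]).fit()
print(res.anova_table)  # F, degrees of freedom, and p-value per effect
```

In practice one would follow a significant omnibus effect with Bonferroni-adjusted pairwise comparisons, as the paper describes.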
Repeated sprint ability
Peak power
There was a significant interaction for peak power (F = 1.89, η² = 0.16, p < .01). Figure 2A shows a significant difference in peak power output between PLA + CHO and CAF + PLA (p < .05). Additionally, peak power differed significantly across bouts in all treatments, declining over successive bouts. A main treatment effect was observed in Set 6 (F = 5.02, η² = 0.33, p < .01); post hoc analyses revealed a trend toward greater peak power in PLA + CHO than in PLA + PLA (+3.8%, p = .08) and in CAF + CHO than in CAF + PLA (+5.3%, p = .08), but these differences did not reach significance.
Mean power
Figure 2B summarizes changes in mean power during the RSE for each treatment. There was a significant treatment × time interaction for mean power (F = 1.64, η² = 0.14, p < .05). In PLA + CHO, mean power differed from PLA + PLA at Set 6 of the RSE (p < .05), but no differences were observed among CAF + PLA, CAF + CHO, PLA + CHO, and PLA + PLA across all other sets (p > .05). Mean power was higher in Set 1 than in subsequent sets across all treatments (p < .05).
Total work
There was a significant treatment × time interaction for total work (F = 1.64, η² = 0.03, p < .05). Compared with the PLA + PLA condition, total work in Set 6 of PLA + CHO was significantly increased by 5.2% (F = 3.20, η² = 0.24, p < .05), and it was 4.1% greater (F = 3.26, η² = 0.25, p < .05) than in CAF + PLA during the RSE; however, total work with CAF + CHO did not differ from CAF + PLA or PLA + PLA in any of the other sets (p > .05) (Figure 2C). Total work declined across sets in all treatments (p < .01). Individual responses in total work are shown in Figure 2D. Most participants showed minimal changes in work, although subject 3 showed lower performance after CAF + CHO supplementation (Figure 4).
Blood lactate and glucose concentrations
There were main effects for time and treatment (p < .01) as well as an interaction for blood lactate concentration during exercise (F = 2.57, η² = 0.20, p < .01). Post hoc analyses showed that blood lactate concentrations in the CAF + PLA and CAF + CHO conditions were significantly higher than those in the PLA + CHO and PLA + PLA conditions for Sets 5, 8, and 10 of the RSE (p < .05; Figure 5A). Blood lactate concentration increased from Set 1 to the last set and was significantly higher than pre-test (p < .01) in all conditions. There was an interaction for blood glucose concentration (F = 7.53, η² = 0.43, p < .01) as well as main effects for treatment and time during exercise. Post hoc analysis for treatment showed that blood glucose was significantly higher in PLA + CHO compared with the other treatments at pre-test and Set 1 of the RSE, but caffeine ingestion combined with carbohydrate or placebo significantly increased glucose levels during the subsequent RSE (Figure 5B). In addition, post hoc analyses showed that blood glucose concentration was significantly higher at Set 1 compared to pre-test in CAF + CHO (p < .01), and higher at Set 1 versus Set 5 in PLA + CHO (p < .05). Finally, blood glucose concentration remained stable throughout the RSE with CAF + PLA and PLA + PLA ingestion (p > .05).
AT-test performance
There was no significant interaction for agility performance (F = 2.14, η² = 0.18, p > .05), nor were there significant main effects for time or treatment (Figure 7). However, agility performance in the PLA + CHO condition was relatively well preserved compared to the other treatments.
Side effects
All participants filled out the side effect questionnaire to assess possible adverse reactions 60 min after ingesting the caffeine or placebo capsule. After ingestion of caffeine, one participant experienced anxiety and slight tremor, another experienced diarrhea, and a third experienced headache and flatulence. Carbohydrate alone or placebo supplementation did not cause any discomfort for participants.
Discussion
To our knowledge, the present study is the first to examine the effects of caffeine (6 mg · kg −1 ) combined with carbohydrate (0.8 g · kg −1 ) administration on repeated sprint performance (10 sets of 5 × 4-s sprints with 20-s rest between sprints) and agility in female athletes. The main findings indicate a significant increase in peak power, mean power, and total work with carbohydrate ingestion alone prior to commencing a repeated sprint exercise protocol. However, the sprint decrement and agility performance for the CAF + PLA, CAF + CHO, PLA + CHO, and PLA + PLA conditions were not statistically different. Data also demonstrated that either coingestion of CAF and CHO or CAF alone significantly increased heart rate and blood lactate and glucose concentrations during the later stages of the RSE, but did not alter testosterone or cortisol levels. It has been documented that CAF's influence on anaerobic exercise capacity and agility may depend on the rest:work ratio [11]. Similar to the results of previous studies by Lee et al. [16], Paton et al. [17], and Stuart et al. [21], CAF alone did not improve repeated sprint ability. Thus, while further applied research certainly needs to be done, these results suggest that CAF provides negligible benefit to repeated sprint exercise with insufficient rest intervals (work:rest ratio = 1:5).
Although a meta-analysis indicated that CAF + CHO ingestion improved endurance performance when compared with CHO alone [44], the present study observed that CAF + CHO ingestion does not benefit repeated sprint performance versus CAF + PLA, PLA + CHO, or PLA + PLA. By contrast, total work in the PLA + CHO condition increased significantly at Set 3 compared to the CAF + CHO and CAF + PLA conditions. Therefore, it is tempting to speculate that combining CAF with CHO supplementation has no additive effect on prolonged repeated sprint exercise composed of 10 sets of 5 × 4-s sprints with a 20-s rest interval between sprints. Furthermore, a performance-enhancing effect of CHO seemed to be negated by CAF when recreational male athletes performed a 20-kilometer time trial [29]. This apparent discrepancy may be attributed to the type (prolonged repeated sprint exercise) and intensity (high-intensity with short recovery intervals) of exercise performed in the present study, because a previous study indicated that anaerobic glycolysis supplies approximately 40% of the total energy during a single 6-s sprint, with progressive inhibition of glycolysis and decreased ATP production with subsequent sprints [4]. Data also show that blood lactate concentration was not significantly different at pre-test and Set 1 among treatments, but was significantly higher after CAF + PLA ingestion than after PLA + CHO and PLA + PLA during the later stages of the RSE. Lee et al. [16] demonstrated a significant increase in blood lactate concentrations and decreased fatigue resistance during the late stage of the RSE after CAF ingestion. By contrast, this study and others show that ingesting CHO does not affect the blood lactate response to sprint exercise [45,46]. This may reflect a rapid increase in anaerobic glycolysis, and hence lactate production, with CAF ingestion [47]. 
CAF may impair performance in this type of exercise because of increased accumulation of by-products of anaerobic metabolism [48], a deficiency in the phosphagen system [4], blockade of CNS adenosine receptors [49], or activation of Na + /K + ATPase [15]. Nevertheless, future studies should address the exact mechanisms by which caffeine affects energy substrates and the nervous system.
The present study showed that repeated sprint performance was improved following CHO ingestion alone, rather than CAF + CHO or CAF ingestion. CHO supplementation before team sport exercise has been demonstrated to significantly improve high-intensity intermittent sprint performance [6] in non-glycogen-depleted subjects, which may be attributed to improved cerebral glucose uptake [9], greater CNS function [50], and motor control [45]. Despite the intensity of RSE being higher than that of ISE [3], CHO ingestion affects the metabolic response to team sport exercise, with a significant increase in glucose concentration found throughout exercise [5,51]. The mechanisms driving this increased blood glucose concentration are largely unknown. Blood glucose concentration initially increases after ingesting CAF + CHO or PLA + CHO, and endogenous glucose production may be suppressed [52]. Blood glucose levels gradually decreased in the PLA + CHO trial during the RSE, suggesting that intense sprint exercise increases fuel requirements in working muscles and directs more blood glucose to muscle cells during the RSE. By contrast, the CAF + CHO condition exhibited higher blood glucose levels during the RSE, partly because caffeine helps maintain blood glucose concentration by enhancing glycolytic turnover [11].
Although the exact mechanisms of carbohydrate ingestion on exercise performance, especially for exercise duration less than 1 hour, are not well understood, two major explanations are commonly used to interpret the possible ergogenic effects of carbohydrate. Firstly, the general metabolic response to prolonged intermittent exercise with CHO administration is an increase in plasma glucose concentration and higher rates of glucose oxidation during the later exercise stage [9]. Secondly, the presence of carbohydrate in the mouth has been shown to stimulate the receptors in the oral cavity, thus activating specific areas of the brain associated with reward and the regulation of motor activity [27].
CHO ingestion may increase blood glucose concentrations; however, it should be noted that the improved performance in previous studies [45] might be attributed to a glycogen-depleted state prior to the intermittent sprint exercise. In this study, we asked participants to consume a standardized meal 2 hours before the exercise test in each trial to mimic the real-life situation of fed athletes before competition. The results indicate that ingestion of PLA + CHO provided a small but significant benefit on RSE performance in female athletes. Nevertheless, Colombani et al. [53] reported that CHO administration might not induce performance improvements in male athletes during exercise lasting less than 70 min in the postprandial state.
The increases in blood glucose levels and repeated sprint performance induced by CHO ingestion may also involve the central governor. The gastric emptying rate of a CHO drink can be slowed by hypertonicity of the drink [54] and by high-intensity intermittent sprinting [55]. Jeukendrup et al. [56] reported that CHO ingestion has no effect on exogenous glucose uptake or total CHO oxidation during short-term (~1 hour) high-intensity cycling exercise. Although the mechanism responsible for the improvement in short-term (<1 hour) high-intensity exercise performance with CHO ingestion is not well known, some studies suggest that CHO mouth rinsing stimulates receptors in the mouth, which modulate central pathways associated with motivation and improve perceptions of effort [8,27]. Total RPE scores in CAF + CHO and PLA + CHO were slightly but non-significantly lower than those in the other treatments (CAF + PLA vs. CAF + CHO vs. PLA + CHO vs. PLA + PLA: 157 ± 18 vs. 152 ± 16 vs. 154 ± 13 vs. 156 ± 17, p > .05). More than half of the participants in CAF + CHO (7/11, 64%) and PLA + CHO (6/11, 55%) had lower total RPE scores than in the PLA + PLA condition. Therefore, our study may provide some support for an attenuation of perceived effort resulting from CHO supplementation. In addition, our RSE performance results are partially in agreement with Beaven et al. [27], who found that a CAF and/or CHO mouth rinse can rapidly enhance initial cycle sprint power production; however, a recent study [57] reported that a CHO mouth rinse did not improve performance during simulated team-sport exercise (i.e., the Loughborough Intermittent Shuttle Test). Therefore, further studies are needed to clarify the existence of CHO receptors in the oral cavity and their effect on RSE performance.
Testosterone and cortisol concentrations have been reported to increase in response to high-intensity activity in humans [58], and with CAF [33] or CHO ingestion [36], respectively. Data from this study show that ingesting CAF or CHO does not alter the circulating levels of testosterone or cortisol, although these levels increased distinctly after the AT-test in all four conditions (Figure 6). One study examined alterations in salivary testosterone and cortisol in nine male cyclists completing a repeated sprint test (4 sets of 5 × 30-s sprints, interspersed with 30-s recovery intervals) following caffeinated chewing gum ingestion [18]. Cortisol increased by 12% and testosterone decreased by 21% compared to the placebo condition, although testosterone and cortisol levels were not significantly different between caffeine and placebo trials (p > .05). Testosterone concentration is related to exercise intensity and increases with greater force production, and the testosterone:cortisol ratio is associated with the anabolic or catabolic status of skeletal muscle during exercise [58]. Cortisol exhibits catabolic functions and increases with repetitive high-intensity exercise, and rest interval length also affects the acute cortisol response [58]. However, Beaven et al. [34] indicated that the anabolic effect of increased testosterone concentrations after CAF ingestion may be counteracted by the opposing catabolic effects of increased cortisol concentrations. Walker et al. [35] reported that ingesting CHO produced lower plasma cortisol concentrations than CAF and PLA after cycling for 2 h at 65% VO 2max , but the type of exercise differed from that used in this study. In addition, the plasma cortisol concentrations (approximately 145-193 ng · dL −1 ) induced by prolonged submaximal exercise in the study of Walker et al. [35] are markedly lower than those in our study. 
Salivary cortisol concentrations did not differ significantly between pre- and post-intermittent exercise after CHO beverage ingestion [59]. According to the results of the current investigation, adding CHO to a solution and ingesting a CAF capsule did not affect hormone variables, probably because the intensity of the RSE exerts a strong influence on these hormones even without ergogenic aids. Changes in these hormones during RSE after ingesting CAF and CHO require further investigation.
201042049 | pes2o/s2orc | v3-fos-license | Formal Medical Knowledge Representation Supports Deep Learning Algorithms, Bioinformatics Pipelines, Genomics Data Analysis, and Big Data Processes
Summary Objective: To select, present, and summarize the best papers published in 2018 in the field of Knowledge Representation and Management (KRM). Methods: A comprehensive and standardized review of the medical informatics literature was performed to select the most interesting papers published in 2018 in KRM, based on PubMed and ISI Web of Knowledge queries. Results: Four best papers were selected among the 962 publications retrieved following the Yearbook review process. The research areas in 2018 were mainly related to ontology-based data integration for phenotype-genotype association mining, the design of ontologies and their applications, and the semantic annotation of clinical texts. Conclusion: In the KRM selection for 2018, research on semantic representations demonstrated its added value for enhanced deep learning approaches in text mining and for designing novel bioinformatics pipelines based on graph databases. In addition, the ontology structure can enrich the analyses of whole genome expression data. Finally, semantic representations demonstrated promising results to process phenotypic big data.
Introduction
The year 2018 has produced a large amount of publications related to Knowledge Representation and Management (KRM) in Medicine. KRM focuses on the development of techniques to be used and leveraged in other medical informatics domains. In this review, we present a selection of the best papers published in 2018 in the KRM domain, based either on their impact or on the novelty of the approach they proposed in the medical knowledge representation and management field.
Paper Selection Method
We conducted the selection of KRM papers based on a new set of queries. In comparison with previous editions of the IMIA Yearbook, both PubMed/MEDLINE and Web of Knowledge were used to search for KRM articles published in 2018 [1]. We followed the generic method commonly used in all sections of the Yearbook to select the best papers. As in the last four years, the search was performed on MEDLINE by querying PubMed. This year, we also performed an additional query on the ISI Web of Knowledge database (WoL).
Our query included Medical Subject Headings (MeSH) descriptors related to KRM in the context of medical informatics, with a restriction to international peer-reviewed journals, including conference proceedings indexed in PubMed. Only original research articles published in 2018 (from 01/01/2018 to 12/31/2018) were considered; we excluded the following publication types: reviews, editorials, comments, and letters to the editors.
The selection of the best papers was performed among the results of the query process in three steps. In the first step, the section editors reviewed all titles, abstracts, and publication types in order to establish a short list of 15 candidate best papers. In the second step, five expert reviewers (including the section editors) reviewed the candidate best papers using the IMIA Yearbook quality criteria scoring method. More specifically, the following aspects of the papers were evaluated: significance, quality of scientific content, originality and innovativeness, coverage of related literature, organization, and quality of the presentation. The final step of best paper selection was achieved during a meeting gathering the whole editorial board, based on the reviews and the report of the two section editors.
Results
For 2018, the KRM query retrieved 928 citations from PubMed and 34 additional citations from WoL. This new optimized query set accounts for a 52% decrease in retrieved papers in comparison with the results of the query used in 2017, with an overall improved precision for KRM-relevant papers. The section editors made a first selection of 100 papers based on title and abstract. After a second review of this set of papers, including full-text reviews, a selection of 15 candidate best papers was established [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16]. Five reviewers evaluated these papers, and four were finally selected as the best papers [2][3][4][5].
In direct line with the research presented last year [1], the 2018 four best papers demonstrated even further the added-value of ontology-based integration approaches for phenotype-genotype association mining.
Best Papers Selection for 2018
The paper authored by Arguello Casteleiro et al., selected as a best paper, aims at automatically identifying term variants or acceptable alternative free-text terms for gene and protein names in PubMed biomedical publications [2]. The use of a domain knowledge ontology, the Cardiovascular Disease Ontology (CVDO), was associated with the best results. This study led to performance improvements for both Continuous Bag of Words (CBOW) and Skip-gram on a gene/protein synonym detection task by adding knowledge formalized in the CVDO, without modifying the word embeddings created. Hence, the CVDO supplies context that is effective in inducing term variability for both CBOW and Skip-gram while reducing ambiguity. In another best paper, Le et al. present Spfy, a platform that exploits semantic technologies with a graph database, allowing rapid phenotype identification through a novel bioinformatics pipeline, as well as efficient storage and downstream comparative analysis of thousands of genome sequences [3]. In their paper, Osumi-Sutherland et al. use OWL-based (Web Ontology Language) reasoning on the Gene Ontology (GO) to generate novel, biologically relevant groupings of GO terms to support mapping with a controlled vocabulary [4]. The GO term groupings generated by this approach can be used in over-representation analysis to detect cell and tissue type signatures in whole genome expression datasets. Also among the best papers for 2018, the work of Yu et al. [5] presents a phenotyping algorithm (PheNorm) that does not require expert-labeled samples at the training step. This completely annotation-free two-step classification method for phenotyping involves an initial normalization step of highly predictive features of the target phenotype, followed by a denoising step to leverage additional information contained in the remaining candidate features. 
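The embedding-based synonym detection at the core of [2] can be illustrated with a toy nearest-neighbor lookup over word vectors. The four-dimensional vectors and the vocabulary below are invented for illustration; the actual study used CBOW/Skip-gram embeddings trained on PubMed text, which are not reproduced here:

```python
import math

# Toy 4-dimensional "word embeddings" for gene/protein names and variants.
# These vectors are invented for illustration, not trained on PubMed.
EMBEDDINGS = {
    "TP53":    [0.90, 0.10, 0.00, 0.20],
    "p53":     [0.85, 0.15, 0.05, 0.25],
    "BRCA1":   [0.10, 0.90, 0.30, 0.00],
    "insulin": [0.00, 0.20, 0.90, 0.40],
}

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def term_variants(query, k=2):
    """Rank the other vocabulary terms by cosine similarity to the query."""
    q = EMBEDDINGS[query]
    scored = sorted(
        ((term, cosine(q, vec)) for term, vec in EMBEDDINGS.items() if term != query),
        key=lambda pair: -pair[1],
    )
    return [term for term, _ in scored[:k]]
```

In the study, candidate lists of this kind were then refined with knowledge formalized in the CVDO; that ontology step is not reproduced in this sketch.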
This work introduces a method especially relevant for big data processing, which is the case for EHR-driven (Electronic Health Record) phenotyping. The four best papers are listed in table 1 and detailed in the appendix.
Main Trends in KRM in 2018
Among the 11 other candidate best papers from the 2018 short list, we observed four research directions: (i) ontology-based data integration for phenotype-genotype association mining; (ii) the design of ontologies and their applications, a long-standing direction of our field; (iii) work on the semantic annotation of texts; and (iv) a report on the deployment of OMOP-CDM (Observational Medical Outcomes Partnership Common Data Model) in Germany.
Semantics for Genomic Data Management
In a long article, Al Kawam et al., [15] tackled fundamental bioinformatics challenges involving semantic representations: genomic data generation, storage, representation, and utilization in conjunction with clinical data. For each aspect, they provided a detailed discussion on the current research directions, outstanding challenges, and possible resolutions. This paper seeks to help narrow the gap between genomic applications, which are being predominantly utilized in research settings, and the clinical adoption of these applications.
In differential diagnoses and disease gene prioritization, the Human Phenotype Ontology (HPO) is often used to compare a phenotype profile against gold-standard phenotype profiles of diseases or genes. In his article [7], Köhler investigated how this comparison can be improved by exploiting the structure and information existing in annotation datasets or full-text disease descriptions. He tested a study-wise annotation model for diseases annotated with HPO classes and for genes annotated with GO classes. This paper adds weight to the need to enhance simple flat-list representations of disease or gene annotations.
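A minimal sketch of the kind of profile comparison HPO enables: expand each phenotype term to its ancestors, then score the overlap of the closed profiles. The toy hierarchy and term names below are hypothetical stand-ins, not real HPO identifiers, and the Jaccard score is just one simple choice of similarity measure:

```python
# Toy HPO-like hierarchy: term -> direct parents (names are hypothetical).
PARENTS = {
    "seizure": {"neuro_abnormality"},
    "ataxia": {"neuro_abnormality"},
    "neuro_abnormality": {"phenotypic_abnormality"},
    "anemia": {"blood_abnormality"},
    "blood_abnormality": {"phenotypic_abnormality"},
    "phenotypic_abnormality": set(),
}

def ancestors(term):
    """Return the term together with all of its ancestors."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(PARENTS.get(t, ()))
    return seen

def profile_similarity(profile_a, profile_b):
    """Jaccard similarity of two ancestor-closed phenotype profiles."""
    close_a = set().union(*(ancestors(t) for t in profile_a))
    close_b = set().union(*(ancestors(t) for t in profile_b))
    return len(close_a & close_b) / len(close_a | close_b)
```

Two neurological terms share more ancestors than a neurological and a hematological term, so their profiles score higher, which is the intuition behind ontology-aware profile matching.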
Cheng et al. [8] addressed the question of finding similarities between terms from different ontologies (e.g., HPO, Disease Ontology (DO), etc.). They took advantage of the gene functional interaction network (GFIN) to explore such inter-ontology term similarities. They proposed InfAcrOnt, which infers similarities between terms across ontologies by modeling the information flow within the network with a random walk. Comparisons of InfAcrOnt results with prior knowledge on pair-wise DO-HPO terms and pair-wise DO-GO terms showed high correlations.
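The random-walk idea behind InfAcrOnt can be sketched as a random walk with restart on a tiny term-gene network, where the walker mass arriving at a target term serves as a similarity score. The graph, node names, and restart probability below are illustrative assumptions, not the published method's data or parameters:

```python
# Tiny undirected term-gene network: node -> neighbors.
# "DO:term" and "HP:term" stand in for terms from two ontologies
# bridged by shared genes; all names are invented.
NEIGHBORS = {
    "DO:term": ["geneA", "geneB"],
    "HP:term": ["geneB", "geneC"],
    "geneA": ["DO:term", "geneB"],
    "geneB": ["DO:term", "HP:term", "geneA"],
    "geneC": ["HP:term"],
}

def random_walk_with_restart(seed, restart=0.3, iterations=200):
    """Iterate p <- (1-r) * W p + r * e_seed toward the stationary distribution."""
    prob = {node: 0.0 for node in NEIGHBORS}
    prob[seed] = 1.0
    for _ in range(iterations):
        nxt = {node: 0.0 for node in NEIGHBORS}
        for node, mass in prob.items():
            share = (1.0 - restart) * mass / len(NEIGHBORS[node])
            for neighbor in NEIGHBORS[node]:
                nxt[neighbor] += share
        nxt[seed] += restart
        prob = nxt
    return prob

# Similarity of HP:term to DO:term = stationary walker mass at HP:term.
similarity = random_walk_with_restart("DO:term")["HP:term"]
```

The restart term keeps the walker biased toward the seed, so nodes reachable through many short gene paths accumulate more mass than distant ones.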
Ontology Design and Documentation
Five of the candidate best papers present research in the ontology design domain [6,[10][11][12]16]. This theme is common in the KRM section; while it had been declining in recent years, it was more prominent in 2018 than before. In each case, the articles described the motivation for building the ontology and the application that uses it for validation.
Traverso et al. [12] developed a Radiation Oncology Ontology (ROO). This ontology builds on several standard ontologies, such as the Foundational Model of Anatomy (FMA) and the National Cancer Institute thesaurus (NCIt), among other terminologies. The authors demonstrated the possible conversion of clinical data following the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) by using a combination of ontologies and Semantic Web (SW) technologies. This work proposes, using SW technologies based on existing ontologies, to efficiently and easily query data from different sources (relational databases) without a priori knowledge of their structures.
Bibault et al. [10] developed a Radiation Oncology Structures (ROS) ontology. This ontology also relies on several standard ontologies, such as FMA and RadLex. The authors annotated EHR radiation oncology data with ROS concepts and integrated them into their clinical data warehouse. Finally, they showed the utility of the ontology for integrating dosimetric data.
Jing et al. [6] described the motivation for and the building of OntoKBCF, an ontology for the cystic fibrosis domain. They illustrated the lack of sufficient clinically actionable knowledge related to molecular genetic information. The cystic fibrosis ontology (OntoKBCF) is just a use case example, but given its structure, it should be relatively straightforward to extend the prototype to cover different genetic conditions. The principles underpinning its development could efficiently serve the design of knowledge bases for other human monogenic diseases.
Facing the significant time cost to build ontologies, Zhao et al., proposed a data-driven sublanguage pattern mining method that can be used to create a knowledge model [16]. They combined standard Natural Language Processing (NLP) and semantic network analysis in their model generation pipeline. The results suggest that their pipeline is able to produce a comprehensive content-based knowledge model to represent context from various sources in the same domain.
In line with the publication of ontologies, Matentzoglu et al. [11] proposed the Minimum Information for Reporting an Ontology (MIRO) guidelines as a means to facilitate a higher degree of completeness and consistency in ontology documentation, including published papers, and ultimately a higher standard of report quality. These guidelines result from a survey of the KRM community that is detailed in the article. An illustrative review of 15 recently published ontology description reports from three important journals in the Semantic Web and biomedical domains analyzed their compliance with the MIRO guidelines: only 41.38% of MIRO items were covered by these reports.
Semantics and Clinical Notes
Two candidate best papers presented research involving semantic formalization associated with NLP approaches to process clinical texts. The first paper, by Catling et al. [13], explored methods for representing clinical text using hierarchical clinical coding ontologies. This study demonstrates that hierarchically-structured medical knowledge can be incorporated into statistical models to produce improved performance for automated clinical coding. However, the authors reported that the data processing was difficult: they used a supervised learning approach with manually assigned clinical codes for the training dataset. Consequently, learning good representations of rare diseases in clinical coding ontologies from data alone remains challenging.
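One simple way to inject a coding hierarchy into a statistical model, in the spirit of [13], is to expand each assigned code into the code plus all of its ancestors, so rare leaf codes share features with common parents. The ICD-like hierarchy below is a minimal illustration, not the paper's actual model:

```python
# Hypothetical ICD-10-CM-like hierarchy: code -> parent (illustrative only).
PARENT = {
    "J45.901": "J45",    # asthma, unspecified, with exacerbation
    "J45": "J40-J47",    # asthma
    "J40-J47": "resp",   # chronic lower respiratory diseases
    "J20.9": "J40-J47",  # acute bronchitis, unspecified
}

def expand(code):
    """Return the code plus all of its ancestors, for use as model features."""
    features = [code]
    while code in PARENT:
        code = PARENT[code]
        features.append(code)
    return features
```

With this expansion, a rare leaf code and a more common sibling share ancestor features, which is one way hierarchical knowledge can improve a statistical coding model's handling of rare diseases.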
Viani et al. [9] proposed an ontology-driven approach to identify events (and their attributes) from episodes of care in medical reports written in Italian, a language for which shared resources for clinical information extraction are not easily accessible. The authors developed an ontology that can be easily enriched and translated. The proposed approach performed well on the considered Italian medical corpus, with a percentage of correct annotations above 90% for most of the considered clinical events.
Interoperability and Data Integration
The paper by Maier et al. [14] reports on the experience of implementing an OMOP/OHDSI-based pilot within a consortium of eight German university hospitals. The authors evaluated its applicability to support data harmonization and sharing among university hospitals, and they identified potential enhancement requirements. In order to facilitate the work of the hospital centers, they provided a virtual machine preconfigured with the OMOP database and the OHDSI tools, as well as the jobs to import the data and conduct the analysis. This work is encouraging, even though support for vocabularies important in Germany remains to be added. Such a paper shows the difficulties of moving from a model to a real implementation.
Conclusions
In the KRM selection for 2018, research on semantic representations demonstrated its added value for enhanced deep learning approaches in text mining and for designing novel bioinformatics pipelines based on graph databases. In addition, the ontology structure can enrich the analyses of whole genome expression data. Finally, semantic representations demonstrated promising results to process phenotypic big data.

This work combines semantic modelling (Cardiovascular Disease Ontology, CVDO) and learning algorithms (word embeddings). The authors aim at automatically identifying term variants or acceptable alternative free-text terms for gene and protein names from PubMed biomedical publications. Ontologies, such as the CVDO, capture domain knowledge in a computational form and can provide context for gene/protein names as written in the literature. This study investigates: i) whether word embeddings from deep learning algorithms can provide a list of term variants for a given gene/protein of interest; and ii) whether biological knowledge from the CVDO can improve such a list without modifying the word embeddings created. The results show significant performance improvements for deep learning algorithms on a gene/protein synonym detection task when knowledge formalized in the CVDO (leveraging the formal relations between genes and proteins) is added. Hence, the CVDO supplies context that is effective in inducing term variability for the algorithms while reducing ambiguity. As a result, the CVDO can be enriched with newly discovered synonyms (skos:altLabel). This work relies on a generic approach that can be reused with other medical ontologies.

Spfy is a platform that rapidly performs the common reference laboratory tests owing to its database of diverse pre-computed results and its ability to incorporate user data. This platform handles all analysis tasks by dividing them into subtasks, which are subsequently distributed across a built-in task management process. 
All results are converted into individual graphs and stored within a large graph database according to previously created ontologies: the Genomic Epidemiology Ontology (GenEpiO), the Feature Annotation Location Description Ontology (FALDO), and the Microbial Typing Ontology (TypOn). These ontologies provide the relevant metadata for genotypes, location, biomarkers, host, and source. In its presented version, Spfy contains 10,243 Escherichia coli genomes, for which in-silico serotypes and Shiga-toxin subtypes, as well as the presence of known virulence factors and antimicrobial resistance determinants, have been computed. Spfy includes analysis modules that are also self-contained and can be used in existing platforms. This work demonstrates that Spfy, by leveraging semantic technologies with a graph database, facilitates rapid phenotype identification, as well as efficient storage and downstream comparative analysis of thousands of genome sequences.
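Spfy's storage model, in which per-genome results are expressed as graphs against ontologies such as GenEpiO, can be pictured with a minimal in-memory triple store supporting wildcard pattern matching. The subjects and predicates below are illustrative stand-ins for ontology terms, not the platform's real IRIs:

```python
# Minimal in-memory triple store; predicates are illustrative stand-ins
# for ontology terms (e.g., serotype or AMR annotations), not real IRIs.
triples = set()

def add(subject, predicate, obj):
    """Record one (subject, predicate, object) statement."""
    triples.add((subject, predicate, obj))

def match(subject=None, predicate=None, obj=None):
    """Return triples matching the pattern; None acts as a wildcard."""
    return {
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    }

add("genome:1", "hasSerotype", "O157:H7")
add("genome:1", "hasAMRGene", "blaTEM-1")
add("genome:2", "hasSerotype", "O26:H11")
```

A real graph database generalizes this with indexes and a query language (e.g., SPARQL), but the same pattern-matching idea underlies comparative queries across thousands of genomes.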
Osumi-Sutherland DJ, Ponta E, Courtot M, Parkinson H, Badi L. Using OWL reasoning to support the generation of novel gene sets for enrichment analysis. J Biomed Semantics 2018;9(1):10.

The Gene Ontology (GO) consists of over 40,000 terms for biological processes, cell components, and gene product activities, linked into a graph structure by over 90,000 relationships. It has been used to annotate the functions and the cellular locations of gene products. The graph structure is used by a variety of tools to group annotated genes into sets whose products share function or location. These gene sets are widely used to interpret the results of genomics experiments by assessing which sets are significantly over- or under-represented in results lists. F. Hoffmann-La Roche Ltd. has developed a manually maintained controlled vocabulary (RCV) for use in over-representation analysis. The formal structure of GO and logical queries in OWL allow RCV terms to be mapped to sets of GO terms. Finally, gene sets derived from the resulting GO term sets can be used to detect the signatures of cell and tissue types in whole genome expression data. This article is very interesting and demonstrates the added value of ontological representation along three axes: (i) it shows a practical use case of ontology-based reasoning and how the authors can solve problems with widely available standards and tools (OWL2 EL, ELK); (ii) in mapping from the RCV to the GO, the authors found and resolved over 200 omissions in the axiomatization; and (iii) the approach automates the mapping between RCV and GO, replacing an unsustainable manual mapping process.

This paper addresses the difficulty of obtaining a gold standard to train machine learning processes. The authors introduced a silver standard approach without human solicitation. They present PheNorm, a phenotyping algorithm that does not require expert-labeled samples at the training step. 
The input for the PheNorm algorithm consists of unlabeled data on a set of potentially informative features, either automatically curated or designed by experts. Online articles about the target phenotype from publicly available knowledge sources, such as Wikipedia and Medscape, are scanned with Natural Language Processing (NLP) software to extract medical concepts recorded in the Unified Medical Language System. These concepts are potentially related to the target phenotype. Then, narrative notes from the Electronic Health Record (EHR) database are processed with NLP software, which identifies mentions of the above medical concepts. With this material, the most predictive features, such as the number of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes or mentions of the target phenotype, are normalized to resemble a normal mixture distribution with a high area under the receiver operating characteristic curve (AUC) for prediction. The transformed features are then denoised and combined into a score for accurate disease classification. The authors validated the accuracy of PheNorm on four phenotypes: coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis. The results suggest that PheNorm can potentially shorten the machine learning algorithm development process and demonstrate the capacity for EHR-driven annotations to scale to the next level: phenotypic big data. | 2019-08-18T13:04:42.814Z | 2019-08-01T00:00:00.000 | {
"year": 2019,
"sha1": "2e05c1eea42c034acb77762a5e05634c3d27c0c4",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0039-1677933.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b283d6b67aed40a08e722ccdaafa8aeb47f8026a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
235006512 | pes2o/s2orc | v3-fos-license | Proteomic Analysis of Banana Vascular Sap Provides Insight Into Resistance Mechanisms to Fusarium Oxysporum F. Sp. Cubense Tropical Race 4
Background: Fusarium oxysporum f. sp. cubense tropical race 4 (Foc TR4) is the causal agent of Fusarium wilt and is the most destructive soil-borne and vascular invasive fungus of banana. The sap circulating in vascular cells transports proteins, including those that might be involved in disease-resistance processes. However, to date there has been no research analyzing changes in banana vascular sap proteins in response to TR4. Results: To gain an integrated understanding of differential protein abundance in banana vascular sap during TR4 infection, we performed a comparative proteomic analysis of the vascular sap of the resistant 'Pahang' and the susceptible 'Brazilian' bananas inoculated with TR4. We identified 129 differentially expressed proteins (DEPs) between resistant and susceptible tested combinations. Of these DEPs, hypersensitive-induced response protein 1 (HIR1) and E3 ubiquitin ligase (E3) decreased in abundance in Pahang with no change in Brazilian under TR4 infection; chalcone isomerase (CHI) and glycine-rich RNA-binding protein (GRP) increased in abundance in Pahang with no significant changes in Brazilian under TR4 infection; carboxylesterase (CXE) and GDSL lipase (GLIP) were specifically in higher abundance in Pahang in response to TR4 compared to Brazilian. This suggests that these proteins play important roles in banana defense against TR4. Conclusions: Our study identified 129 DEPs in vascular sap between resistant and susceptible tested combinations. Of these, HIR1, E3, CHI, GRP, CXE, and GLIP played important roles in the banana response to TR4. To our knowledge, this is the first report analyzing changes in banana vascular sap proteins in response to TR4, which helps us to explore the molecular mechanisms of banana defense against Fusarium wilt. 
Abbreviations: TCEP: Tris(2-carboxyethyl)phosphine; IAM: Iodoacetamide; TEAB: Tetraethylammonium bromide; RPLC: Reversed-phase liquid chromatography; LC-MS/MS: Liquid chromatography-tandem mass spectrometry; HCD: High energy collisional dissociation; FDR: False discovery rate; COG: Clusters of Orthologous Groups; GO: Gene Ontology; KEGG: Kyoto Encyclopedia of Genes and Genomes; SAR: Systemic acquired resistance; PTI: PAMP-triggered immunity; CaMCML: Calcium-binding protein CML; 4CL: 4-coumarate-CoA ligase; CCR: Cinnamoyl-CoA reductase; CAD: Cinnamyl alcohol dehydrogenase; CCoAOMT: Caffeoyl-CoA O-methyltransferase; CHS: Chalcone synthase; F3'5'H: Flavonoid 3',5'-hydroxylase; DFR: Dihydroflavonol-4-reductase; LACS: Long chain acyl-CoA synthetases; PLDα1: Phospholipase D α1; RRM: RNA recognition motif; TMV: Tobacco mosaic virus.
Genetic modification of a susceptible commercial banana is a promising alternative for banana improvement [14][15][16]. However, the defense mechanisms of banana against TR4 are not well understood.
TR4 is the most destructive soil-borne and vascular invasive fungus. It invades root vascular bundles and extends upward to the aerial parts. We previously investigated corm transcriptomics to identify the pathways involved in resistance [17]. Proteomics, complementary to transcriptomics, can provide insights into complex biological processes in banana [18][19][20]. Large-scale proteomic studies have previously focused on dissecting interactions between bananas and Foc [21][22][23]. Proteins related to the PR response, cell wall strengthening, and antifungal compound synthesis were involved in banana defense against TR4 [22]. β-1,3-glucanase and chitinase were reported to function in banana against TR4 at the early defense stage [21]. The expression patterns of proteins related to the cell cytoskeleton, natural killer cell mediated cytotoxicity, and lipid signaling differed in banana during Foc1 and Foc4 infection, suggesting that these proteins participate in mediating different resistance to Foc1 and Foc4 in the banana cultivar 'Brazilian' [23]. These studies help us to understand the defense mechanism of banana against TR4.
Plants transport signal molecules, as well as water and minerals, over long distances via the vascular bundles [24]. These signal molecules are vital for plant adaptation to abiotic and biotic stress [25,26]. Vascular sap proteomics has been applied to characterize the processes associated with plant defense against Fusarium wilt [27][28][29][30][31]. However, to our knowledge, no study to date has analyzed changes in banana vascular sap proteins in response to TR4. In this study, we performed a comparative proteomic analysis of vascular sap in resistant and susceptible bananas inoculated with TR4. We identified 129 DEPs between resistant and susceptible tested combinations, among which HIR1, E3, CHI, GRP, CXE, and GLIP were involved in banana defense against TR4. This study provides integrated insight into the resistance mechanism of banana against Fusarium wilt.
Results
Vascular saps of Pahang (resistant) and Brazilian (susceptible) inoculated with TR4 or mock at 14 dpi were collected for comparative proteomic analysis. A total of 261,038 spectra were acquired through iTRAQ quantitative proteomic analysis, among which 31,450 spectra were matched to 6,503 peptides and 1,036 proteins.
The vascular proteome of banana

All 1,036 identified proteins were subjected to functional analysis, of which 938, 923, and 779 proteins were annotated with the COG, GO, and KEGG databases, respectively.
In terms of COG (Fig. 2), 938 proteins were assigned to 23 functional categories, of which 'posttranslational modification, protein turnover, chaperones', 'energy production and conversion', and 'carbohydrate transport and metabolism' were the three largest. Categories related to plant defense were 'lipid transport and metabolism', 'cell wall/membrane/envelope biogenesis', 'secondary metabolites biosynthesis, transport and catabolism', 'signal transduction mechanisms', and 'defense mechanisms'.
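The category breakdown described above amounts to tallying proteins per assigned COG class. A minimal sketch follows; the category labels come from the text, but the protein identifiers and assignments are invented:

```python
from collections import Counter

# Hypothetical protein -> COG category assignments (protein IDs invented;
# category labels taken from the analysis described above).
COG_ASSIGNMENTS = {
    "prot1": "Posttranslational modification, protein turnover, chaperones",
    "prot2": "Energy production and conversion",
    "prot3": "Posttranslational modification, protein turnover, chaperones",
    "prot4": "Carbohydrate transport and metabolism",
    "prot5": "Posttranslational modification, protein turnover, chaperones",
    "prot6": "Energy production and conversion",
}

def top_categories(assignments, k=3):
    """Rank COG functional categories by the number of assigned proteins."""
    return Counter(assignments.values()).most_common(k)
```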
In terms of GO (Fig. 3), 923 proteins were assigned to 43 GO terms, divided into 3 groups: biological process, cellular component, and molecular function. 'Response to stimulus', 'signaling', 'detoxification', 'immune system process', and 'antioxidant activity' are generally regarded as related to disease resistance among the functional groups.
Differentially expressed protein analysis
A total of 129 unique DEPs were identified in the 4 pairwise comparisons between mock and inoculated samples in the resistant and susceptible genotypes (Fig. 5a and 5b), for which we performed expression and functional annotation analyses (Supplementary Table 2). 
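A DEP call in this kind of iTRAQ comparison is typically a joint threshold on the abundance ratio and the p-value. The cutoffs below (1.5-fold, p < 0.05) and the example proteins and values are illustrative assumptions; the paper's exact thresholds are not restated here:

```python
def call_deps(results, fold=1.5, alpha=0.05):
    """Return proteins passing both the fold-change and p-value cutoffs.

    results maps protein -> (abundance ratio infected/mock, p-value)
    for one pairwise comparison.
    """
    low, high = 1.0 / fold, fold
    return {
        protein for protein, (ratio, p_value) in results.items()
        if (ratio >= high or ratio <= low) and p_value < alpha
    }

# Invented example values for one comparison (infected vs. mock).
comparison = {
    "HIR1": (0.55, 0.010),  # decreased after infection
    "CHI": (2.10, 0.003),   # increased after infection
    "ACT1": (1.05, 0.800),  # unchanged control
}
```

Running the filter over each of the four pairwise comparisons and taking the union of the calls yields the set of unique DEPs.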
Discussion
TR4 is a vascular-invading fungus that colonizes the vascular system of banana and completes its life cycle there [32]. Vascular sap contains macromolecules, such as proteins, involved in disease-resistance processes [27,28,31]. To gain an integrated understanding of the changes in banana vascular sap proteins during TR4 infection, we performed a comparative proteomic analysis of vascular sap in the resistant diploid 'Pahang' and the susceptible triploid 'Brazilian' inoculated with TR4 at 14 dpi. The amount of fungal biomass and the degree of necrosis in Pahang tissues were significantly lower than their levels in Brazilian at 14 dpi [17,33].
Signal transduction
Signal transduction pathways, such as mitogen-activated protein kinase (MAPK) cascades and plant hormone signals [36], are responsible for the induction of plant defense against pathogens [34,35]. Once a plant perceives an invading pathogen, the activation of MAPKs is one of the earliest signaling events [37]. In the present study, we found 10 proteins related to the MAPK signaling pathway-plant (Supplementary Table 1), such as nucleoside diphosphate kinases, known to induce MPK3/6 expression through phosphorylation, leading to hypersensitive response (HR) cell death in the plant response to pathogen attack [38].
Salicylic acid (SA) and jasmonic acid (JA) are essential hormone signals of plant immunity [39], and they antagonize each other [40]. It is generally considered that SA enhances resistance to biotrophs, while JA is effective against necrotrophs and insects [41,42]. Fusarium oxysporum has been classified as a hemibiotroph [43]. Activation of SA metabolism and signal transduction, and JA-induced defense responses, improved banana resistance to TR4 [44,45]. In this study, we identified 2 proteins associated with SA and JA signaling (Supplementary Table 1): pathogenesis-related protein 1 (PR1), a marker for systemic acquired resistance (SAR) from the SA signaling pathway [46], and coronatine-insensitive protein homolog (COI1), a key regulator of JA-dependent induced systemic resistance (ISR) [41,47]. Further research is needed to determine whether SA-dependent SAR and JA-dependent ISR are simultaneously activated in banana.
Environmental adaptation
In their natural habitats, plants are threatened by various abiotic and biotic stresses. Over the evolutionary course of plant-pathogen interactions, plants have developed a multi-layered innate immune system to defend against pathogens. The first layer of immunity is the perception of pathogen-associated molecular patterns (PAMPs) by pattern recognition receptors (PRRs), which induces a series of physiological changes leading to PAMP-triggered immunity (PTI) [48]. These physiological changes include reactive oxygen species (ROS) bursts and changes in calcium (Ca2+) concentrations [49,50]. Ca2+ acts as an important second messenger whose concentration is sensed by Ca2+-binding proteins, such as the calcium-dependent protein kinase (CDPK) and calcium-binding protein CML7 (CaMCML) that we detected in the banana vascular sap (Supplementary Table 1). It initiates downstream signaling processes [51], such as the hypersensitive response and cell wall reinforcement.
Biosynthesis of secondary metabolites
Plant secondary metabolites contribute to all aspects of plant-pathogen interactions [52]. Among the biosynthetic routes for secondary metabolites, phenylpropanoid and flavonoid biosynthesis have been shown to underpin a wide range of constitutive and inducible immunity through lignin and phytoalexin synthesis [53]. In our experiment, we found 21 proteins involved in phenylpropanoid biosynthesis (Supplementary Table 1). We detected synthetic enzymes of lignin that lead to strengthened cell walls [17], including phenylalanine ammonia-lyase (PAL), 4-coumarate-CoA ligase (4CL), 2 cinnamoyl-CoA reductases (CCR), cinnamyl alcohol dehydrogenase (CAD), and 3 peroxidases (POD) (Supplementary Table 1). In addition, caffeoyl-CoA O-methyltransferase (CCoAOMT), associated with lignin production resulting in quantitative resistance to multiple pathogens [54], was found. 13 proteins were assigned to flavonoid biosynthesis, such as chalcone synthase (CHS), the gatekeeper of flavonoid biosynthesis, which can help the plant produce more flavonoids and isoflavonoid-type phytoalexins [55], three P450 enzymes (flavonoid 3',5'-hydroxylases), and dihydroflavonol-4-reductase (DFR), precursors for the production of catechins and pro-anthocyanidins involved in plant resistance [56].
Lipid metabolism
Lipids and fatty acids involved in lipid metabolism are considered signal transduction mediators of plant disease resistance [57,58]. We found two long chain acyl-CoA synthetases (LACS) involved in fatty acid metabolism, known to act in the synthesis of cutin to confer plant resistance to fungal pathogens [59,60]. Additionally, phospholipase D α1 (PLDα1), involved in lipid metabolism and promoting phosphatidic acid and ROS [61], was detected in the vascular sap (Supplementary Table 1). The data (Supplementary Table 2) suggest that TR4 did not induce highly dramatic changes in the overall vascular sap proteome. This result is similar to the proteomic analysis of phloem sap in melon defense against viral infection [62]. Nevertheless, the DEPs present in vascular sap might play important roles in the banana response to TR4.
Among those, we detected four proteins of interest. First, hypersensitive-induced response protein 1 (HIR1) decreased in abundance in Pahang, with no change in Brazilian, under TR4 infection. HIR1 is known to act as a regulator of plant immunity by triggering hypersensitive cell death [63,64]. This would suggest that Pahang decreases HIR1 expression to suppress cell death as a resistance mechanism to TR4, since Foc is a hemibiotroph or necrotroph [43,65].
Ubiquitins involved in the ubiquitination system are key for plant immunity [66]. Ubiquitination is mediated by a three-step enzymatic cascade involving activating (E1), conjugating (E2) and ligating (E3) enzymes [67]. The E3 ubiquitin ligase RING1 gene, CaRING1, played a positive role in the pepper (Capsicum annuum) response to microbial pathogens [68], whereas a homologous triplet of U-box type E3 ubiquitin ligases acted as negative regulators of PTI in Arabidopsis [69]. In this study, an E3 with a RING zinc-finger domain decreased in abundance only in the Pahang response to TR4. However, further studies are needed to determine whether this protein plays a negative role in the banana response to TR4.
CHI is an important enzyme of the flavonoid pathway involved in the production of phytoalexins [70]. We found a chalcone-flavonone isomerase (CHI) that increased in abundance in Pahang under TR4 infection, as well as in the Pahang control compared with the Brazilian control.
Finally, a GRP containing an RNA recognition motif (RRM) domain increased in abundance in the Pahang response to TR4, with no significant changes in the other pairwise comparisons. GRPs act as regulators in diverse cellular processes, including the response to stress in plants [71,72]. Overexpressing TaRZ1, a wheat (Triticum aestivum) zinc finger-containing GRP, in Arabidopsis thaliana increased resistance against the necrotrophic bacterium Pseudomonas syringae [73]. Although functional validation would be needed to confirm these actions, our results unraveled a list of promising candidate genes to explore.
Pahang-specific reaction during TR4 infection
We previously observed, from gene expression on the same genotypes, that Pahang exhibits constitutive defense responses before TR4 infection [17]. The proteomic results indicated that the highest number of DEPs (84) is between Pahang and Brazilian without infection. We also identified 26 DEPs by comparing TR4-inoculated Pahang with TR4-inoculated Brazilian, but could find only one protein with increased abundance in both infected bananas. These results show that our susceptible and resistant banana genotypes respond differently to TR4 infection. Among the 26 DEPs, 7 proteins have an unknown function, but two proteins were highly associated with resistance to pathogens in plants, annotated as carboxylesterase and GDSL esterase/lipase [74]. Constitutive expression of PepEST, a fungus-inducible carboxylesterase in pepper (Capsicum annuum), increased resistance against the hemibiotrophic anthracnose fungus (Colletotrichum gloeosporioides) [75]. In the present study, a CXE protein (Ma06_t34160) was specifically in higher abundance in TR4-inoculated Pahang compared to Brazilian.
Overexpressing GLIP1 in Arabidopsis improved resistance against hemibiotrophic and necrotrophic pathogens [77,78]. In this study, a GLIP protein (Ma11_t05100) showed increased abundance in TR4-inoculated Pahang compared with that of Brazilian. This is consistent with our previous transcriptomic study, in which this GLIP gene was also activated by TR4 attack [17].
Conclusions
To gain an integrated understanding of the changes in banana vascular sap proteins during TR4 infection, we performed a comparative proteomic analysis of vascular sap in the resistant diploid 'Pahang' and the susceptible triploid 'Brazilian' inoculated with TR4 at 14 dpi. A total of 1036 proteins were detected in vascular sap; some of these proteins are involved in 'biosynthesis of secondary metabolites', 'environmental adaptation', 'lipid metabolism' and 'signal transduction', which are commonly considered disease-resistance pathways. Since the vascular sap contained defense-related proteins, the constitutive presence of certain proteins could contribute toward resistance. Nineteen proteins were significantly more abundant after TR4 inoculation, and 26 proteins were only upregulated in the resistant genotype. Of these DEPs, hypersensitive-induced response protein 1 (HIR1) and an E3 ubiquitin ligase (E3) decreased in abundance in Pahang, with no change in Brazilian, under TR4 infection; chalcone isomerase (CHI) and a glycine-rich RNA-binding protein (GRP) increased in abundance in Pahang, with no significant changes in Brazilian, under TR4 infection; carboxylesterase (CXE) and GDSL lipase (GLIP) were specifically in higher abundance in the Pahang response to TR4 compared to that of Brazilian. This suggests that these proteins play important roles in the banana defense against TR4. To our knowledge, this is the first report to analyze changes in banana vascular sap proteins in response to TR4, which provides insight into the resistance mechanisms of banana defense against Fusarium wilt. In the next steps, the function of these proteins will be further validated.
Plant inoculation and vascular sap collection
We selected Musa acuminata 'Pahang' (AA, ITC0609), obtained with an official SMTA-2015 from the International Musa Germplasm Transit Centre (ITC), and Musa Cavendish 'Brazilian' (AAA, a commercial cultivar in China). Pahang is resistant to TR4, while Brazilian is susceptible to TR4 [33,79,80]. The banana inoculation was performed as described previously [33] with minor modifications. The roots of banana plants with 6-8 leaves were cut to 5 cm and immersed into a TR4 conidia suspension at a concentration of 10^6 conidia/mL for 30 min. Mock plants were soaked in sterile water. All plants were transplanted to pots filled with sterile vermiculite and placed in an artificial climate chamber at 30℃, 80% humidity, and 8 h light/16 h dark (Fig. 1a, b). At 14 days post inoculation (dpi), the pseudostems were transected at 0.5 cm above the corms with a sterile blade (Fig. 1c). After removing the exudate from the cut cells, the vascular sap that exuded spontaneously from the remaining pseudostems was collected with a pipette (Fig. 1d-1f). Vascular sap isolated from at least 30 plants was pooled into one independent biological replicate, and three independent biological replicates were conducted. The vascular sap was frozen in liquid nitrogen until protein extraction.
Protein extraction
An equal volume of Tris-saturated phenol was added, and the mixture was vortexed at 4℃ for 10 min, then centrifuged at 12,000 g at 4℃ for 20 min to take the phenol phase. An equal volume of BPP solution was added, the mixture was vortexed for 10 min at 4℃, and then centrifuged at 12,000 g at 4℃ for 20 min to collect the phenol phase. The proteins were precipitated overnight at -20℃ from the phenol phase with pre-cooled ammonium acetate methanol at a ratio of 1:5. The next day, the samples were centrifuged at 12,000 g at 4°C for 20 min and the supernatant was discarded. The pellet was washed twice with 90% pre-cooled acetone and dissolved in 8 M urea and 1% sodium dodecyl sulfate (SDS) with a protease inhibitor. The solution was centrifuged at 12,000 g at 4℃ for 20 min to collect the protein supernatant.
Protein Digestion and iTRAQ Labeling
Protein concentrations were determined using the BCA Protein Assay Kit (Pierce, Thermo, USA). Protein digestion was carried out according to the standard procedure. Briefly, 100 μg of protein from each sample was taken. Tris(2-carboxyethyl)phosphine (TCEP) was added to a final concentration of 10 mM and incubated at 37°C for 60 min. Iodoacetamide (IAM) was added to a final concentration of 40 mM and allowed to react at room temperature in the dark for 40 min. Pre-cooled acetone (acetone:sample volume ratio = 6:1) was added to each tube and incubated at -20°C for 4 h. The samples were centrifuged at 10,000 g for 20 min, the acetone was discarded, and the precipitate was retained. The precipitated protein was resuspended in 100 µL of 100 mM tetraethylammonium bromide (TEAB) buffer. Trypsin solution (1:50) was added to each tube and incubated at 37°C overnight. The resulting peptide mixture was labeled using the 8-plex iTRAQ reagent (Applied Biosystems, 4390812) according to the manufacturer's instructions [81].
Functional annotation of peptides
MS/MS spectra were searched using Proteome Discoverer (Thermo Scientific, Version 2.2) against the Musa acuminata database from UniProtKB-SwissProt [82] (http://www.uniprot.org/proteomes/UP000012960) and the decoy database with the following parameters. The highest score for a given peptide mass (best match to that predicted in the database) was used to identify parent proteins. The parameters for protein searching were based on the following criteria: tryptic digestion with up to two missed cleavages, carbamidomethylation of cysteines as a fixed modification, and oxidation of methionine and protein N-terminal acetylation as variable modifications. The false discovery rate (FDR) of peptide identification was set as FDR ≤ 0.01. To support protein identification, a minimum of one unique peptide identification was required.
For the 26 genes (Fig. 5e), we added the RefSeq (NCBI) annotation locus codes and the V2 annotation names (columns "C" and "D", respectively) retrieved from the Banana Genome Hub [83]. Moreover, where a more informative or alternative functional annotation was available, we added it in the "E" column. Finally, we checked whether one or more paralogs or similar genes were present in the Musa acuminata genome V2 [84] (columns "F" and "G", respectively) (Supplementary Table 2).
iTRAQ quantitative proteomics analysis
The basic information analysis for iTRAQ quantitative proteomics was performed using the free online Majorbio Cloud Platform (www.majorbio.com). First, the raw mass spectra generated by the mass spectrometer were subjected to peak identification. Second, the UniProtKB-SwissProt reference proteome database of banana (http://www.uniprot.org/proteomes/UP000012960) was used to identify peptides and proteins. All identified proteins were functionally annotated using Clusters of Orthologous Groups of proteins (COG, http://eggnogdb.embl.de/#/app/home), Gene Ontology (GO, http://www.geneontology.org/) and the Kyoto Encyclopedia of Genes and Genomes (KEGG, http://www.genome.jp/kegg/) with E-value ≤ 1×10^-5 and identity ≥ 0.98. The DEPs were identified with fold change > 1.2 (upregulation) or fold change < 0.83 (downregulation) and P value < 0.05 [23], and were further analyzed by DEP Venn diagram and expression pattern analysis.
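The DEP calling rule above is a simple threshold filter; a minimal sketch (with a hypothetical function name and input layout, not the Majorbio pipeline itself) that applies the stated cutoffs to per-protein fold changes and P values could look like:

```python
def call_deps(rows):
    """Flag DEPs with the thresholds stated in the text:
    fold change > 1.2 (up) or < 0.83 (down), with P value < 0.05.
    `rows` maps protein id -> (fold_change, p_value)."""
    up, down = [], []
    for pid, (fc, p) in rows.items():
        if p >= 0.05:
            continue  # not significant, skip regardless of fold change
        if fc > 1.2:
            up.append(pid)
        elif fc < 0.83:
            down.append(pid)
    return up, down
```

Note that proteins with 0.83 ≤ fold change ≤ 1.2 are treated as unchanged regardless of their P value.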
Declarations
Ethics approval and consent to participate Not applicable.
Consent for publication
The authors confirm that the work described has not been published before. All authors read and approved the final manuscript.
Availability of data and materials
Musa acuminata 'Pahang' (AA, ITC0609) was imported with an official SMTA-2015 from the International Musa Germplasm Transit Centre (ITC). The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org) via the iProX partner repository [85] with the dataset identifier PXD018261.
Competing interests
The CGIAR Research Program (CRP) on Roots, Tubers and Bananas (RTB) provided project research funds and supported the analysis and interpretation of data and the writing of the manuscript.
Authors' contributions
LZ and SJZ: conceptualization. LZ: performed the experiments, analyzed the data and wrote the paper. LL, SL, TB, SX, HF, KY, and PH: analyzed the data. MR, AC and SC: provided feedback on data analyses and reviewed the manuscript. YW and WT: proofread the writing. SJZ: conceived the study, acquired funding, designed the experiments and proofread the writing. All authors have read and approved the manuscript.
"year": 2020,
"sha1": "676dc876359fa270e9c29eceec63ae0904e0f13a",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-307891/v1.pdf?c=1631893479000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "425ba23e24966c6395ff37a12fcb00268f262f5e",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Triangulations of grassmannians and flag manifolds
MacPherson (in: Topological methods in modern mathematics: a symposium in honor of John Milnor's sixtieth birthday, Stony Brook, NY, 1991, Perish, Houston, 1993) conjectured that the Grassmannian Gr(2, R^n) has the same homeomorphism type as the combinatorial Grassmannian MacP(2, n), while Babson (A combinatorial flag space, MIT, 1993) proved that the spaces Gr(2, R^n) and Gr(1, 2, R^n) are homotopy equivalent to their combinatorial analogs, the simplicial complexes ‖MacP(2, n)‖ and ‖MacP(1, 2, n)‖ respectively. We will prove that Gr(2, R^n) and Gr(1, 2, R^n) are homeomorphic to ‖MacP(2, n)‖ and ‖MacP(1, 2, n)‖ respectively.
Introduction
An oriented matroid can be thought of as a combinatorial abstraction of a vector space, of a point configuration, or of a real hyperplane arrangement. The theory of oriented matroids comes with notions analogous to linear independence, convexity, general position, and subspaces.
Mnëv and Ziegler in [3] introduced the poset G(k, M) of rank k strong map images of a rank n oriented matroid M, called the oriented matroid Grassmannian. The poset was introduced to serve as a combinatorial model for Gr(k, R^n), the space of k-dimensional subspaces of R^n. A special case is when M is the unique rank n oriented matroid on n elements; the resulting poset, MacP(k, n), is called the MacPhersonian. Similarly, the poset of flags (N_1, N_2) of oriented matroids, where N_1 is a rank p strong map image of N_2 and N_2 is
a rank k strong map image of M, is denoted by G(p, k, M). This poset arose in the work of Babson [2] and in the work of Gelfand and MacPherson [4].
Mnëv and Ziegler conjectured that ‖G(k, M)‖, the geometric realization of the poset G(k, M), has the homotopy type of Gr(k, R^n). For k = 2, Babson proved in [2] that ‖G(2, M)‖ has the same homotopy type as Gr(2, R^n). It was also proven in [2] that ‖G(1, 2, M)‖ has the same homotopy type as Gr(1, 2, R^n).
We will show that the complexes ‖MacP(2, n)‖ and ‖MacP(1, 2, n)‖ are homeomorphic to Gr(2, R^n) and Gr(1, 2, R^n) respectively. It follows from Babson's work in [2] that the complex ‖MacP(2, n)‖ has the same homotopy type as Gr(2, R^n), and that the complex ‖MacP(1, 2, n)‖ has the same homotopy type as Gr(1, 2, R^n). Also, it can easily be shown that for k = 1, ‖MacP(1, n)‖ is homeomorphic to RP^(n−1).
In Section 2, we will give basic background on oriented matroids, posets and regular cell complexes that is sufficient for this part of the project. We will also introduce the map µ : Gr(k, R^n) → MacP(k, n), mapping a k-dimensional subspace to the rank k oriented matroid it determines, and the map ν : Gr(p, k, R^n) → MacP(p, k, n), mapping a flag of subspaces to a flag of oriented matroids. To establish our main assertion about the topology of ‖MacP(2, n)‖ and ‖MacP(1, 2, n)‖, we will prove in Theorem 32 that the stratification {µ⁻¹(M) : M ∈ MacP(2, n)} is a regular cell decomposition of Gr(2, R^n). Similarly, we will prove in Theorem 46 that the stratification {ν⁻¹(N, M) : (N, M) ∈ MacP(1, 2, n)} is a regular cell decomposition of Gr(1, 2, R^n).
In Proposition 34, we will show that µ⁻¹(M) is homeomorphic to an open ball. Similarly, ν⁻¹(N, M) will be shown in Proposition 47 to be homeomorphic to an open ball. In Proposition 38, the boundary ∂µ⁻¹(M) of µ⁻¹(M) will be shown to be the union ∪_{N<M} µ⁻¹(N) of lower dimensional cells. We have a similar result in Proposition 50 for the boundary of ν⁻¹(N, M).
In Section 6 and Section 9, we will prove that the closures of µ⁻¹(M) and ν⁻¹(N, M) are topological manifolds whose boundaries are spheres.
In rank r ≥ 3, for M ∈ MacP(r, n), µ⁻¹(M) is not necessarily connected. Our argument will also make use of the fact that in rank 2, ∂µ⁻¹(M) = ∪_{N<M} µ⁻¹(N). This fact, called normality, is also not necessarily true in rank r for r ≥ 3. We will use throughout this part of the project the realizability of rank 2 oriented matroids; that is, every rank 2 oriented matroid can be obtained from an arrangement of vectors in R². This fact is in general not true for oriented matroids of rank at least 3. Detailed results on rank r oriented matroids for r ≥ 3 can be found in [5].
Oriented Matroids
Suppose X ∈ Gr(p, R n ).We will view elements of R n as 1 × n row vectors, so that X is the rowspace of a p × n matrix.
The sign vectors {(sign(x_1), sign(x_2), …, sign(x_n)) : (x_1, x_2, …, x_n) ∈ X} are called the covectors of an oriented matroid M. For a covector C, the set {i ∈ [n] : C(i) = 0} is the zero set of C. A formal definition of the covector set of an oriented matroid will be given later in this section. We can write the matrix in terms of its column vectors as (v_1 v_2 … v_n). Let {v_{α_i}}_{α_i} be the set of non-zero vectors in {v_1, v_2, …, v_n}. We consider the arrangement (v_{α_i}^⊥)_{α_i} of oriented linear hyperplanes. The arrangement determines a cellular decomposition of R^p. The intersection of this cellular decomposition with S^{p−1}, the unit sphere in R^p, gives a cellular decomposition of S^{p−1}. A cell in S^{p−1} corresponds to a non-zero covector of M, and a non-zero covector of M corresponds to a cell in the cellular decomposition of S^{p−1}. An oriented matroid M whose covector set is obtained this way is called a realizable oriented matroid. It should be noted that rank 1 and rank 2 oriented matroids are realizable, but oriented matroids of rank at least 3 are not necessarily realizable; details can be found in [5].
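For the rank 2 case used throughout this paper, the covector set of a realizable oriented matroid can be enumerated directly from a vector arrangement: the covectors are the sign vectors (sign(λ·v_1), …, sign(λ·v_n)) as λ ranges over R². A small sketch (hypothetical helper names, exact only up to floating-point tolerance) samples one λ per cell of the line arrangement:

```python
from math import atan2, cos, sin, pi

def _sign(x, eps=1e-9):
    return 0 if abs(x) < eps else (1 if x > 0 else -1)

def covectors_rank2(vectors):
    """Covectors of the rank-2 oriented matroid realized by the nonzero
    2D column vectors `vectors`: all sign vectors of (lam . v_i) as lam
    sweeps R^2.  We sample each critical direction (lam orthogonal to
    some v_i) and the midpoint of each open arc between them."""
    crit = sorted({(atan2(vy, vx) + pi / 2) % pi for vx, vy in vectors})
    samples = []
    for i, t in enumerate(crit):
        nxt = crit[i + 1] if i + 1 < len(crit) else crit[0] + pi
        samples += [t, (t + nxt) / 2]
    covs = {(0,) * len(vectors)}  # lam = 0 gives the zero covector
    for t in samples:
        for th in (t, t + pi):  # both orientations of each direction
            lam = (cos(th), sin(th))
            covs.add(tuple(_sign(lam[0] * vx + lam[1] * vy)
                           for vx, vy in vectors))
    return covs
```

For the standard basis arrangement (1, 0), (0, 1) this recovers all nine sign vectors of {0, +, −}², matching the cell decomposition of S¹ by two oriented lines.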
Let X = Rowspace(v_1 v_2 v_3 … v_n) as defined earlier and M the corresponding oriented matroid. We consider the function χ : [n]^p → {+, 0, −} given by χ(i_1, i_2, …, i_p) = sign det(v_{i_1}, v_{i_2}, …, v_{i_p}). The collection {±χ} is independent of the choice of basis vectors for X. We will write the resulting oriented matroid as M = (±χ).
In Figure 1(b), a rank 3 oriented matroid is obtained from an essential arrangement of equators in a 2-sphere S².
Notation: Let E be a finite set and X, Y ∈ {0, +, −}^E be sign vectors. The composition X ∘ Y is defined to be the element of {0, +, −}^E with (X ∘ Y)(e) = X(e) if X(e) ≠ 0 and (X ∘ Y)(e) = Y(e) otherwise.
Definition 2 (Covectors) Let E be a finite set and V* ⊆ {0, +, −}^E. V* is the covector set of an oriented matroid on elements E if it satisfies all of the following.
and e ∈ E such that X(e) = −Y(e) ≠ 0, then there is a Z ∈ V* such that Z(e) = 0 and, for each f ∈ E with {X(f), Y(f)} ≠ {+, −}, Z(f) = (X ∘ Y)(f).
If V* is the covector set of an oriented matroid, then the rank of the oriented matroid is the rank of V* as a subposet of {0, +, −}^E.
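The standard sign-vector composition used in the covector axioms is a one-line computation when sign vectors are encoded as tuples over {−1, 0, 1}; a minimal sketch (hypothetical helper name):

```python
def compose(x, y):
    """Sign-vector composition: (x o y)(e) = x(e) if x(e) != 0, else y(e).
    Covector sets of oriented matroids are closed under this operation."""
    return tuple(a if a != 0 else b for a, b in zip(x, y))
```

Note that composition is idempotent in its first argument but not commutative: compose(x, y) and compose(y, x) differ wherever x and y carry opposite signs.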
Definition 3 (Basis orientation) ([6]) A basis orientation of an oriented matroid M is a mapping χ of the set of ordered bases of M to {+1, −1} satisfying the following two properties:
• χ is alternating,
• for any ordered bases of M of the form (e, x_2, x_3, …, x_p) and (f, x_2, x_3, …, x_p), e ≠ f, we have χ(e, x_2, …, x_p) · χ(f, x_2, …, x_p) = D(e) · D(f), where D is one of the two opposite cocircuits complementary to the hyperplane spanned by {x_2, x_3, …, x_p} in M.
The following theorem establishes the cryptomorphism between the definition of an oriented matroid using covectors and its definition using a chirotope.
Theorem 4 ([7]) Let p ≥ 1 be an integer and E be a set. A mapping χ : E^p → {+1, 0, −1} is a basis orientation of an oriented matroid of rank p on E if and only if it is a chirotope.
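In the realizable rank 2 case, a chirotope is just the sign pattern of the 2×2 determinants of the column vectors; a minimal sketch (hypothetical helper name, working over ordered pairs of indices):

```python
import itertools
from math import isclose

def chirotope_rank2(vectors):
    """chi(i, j) = sign det(v_i, v_j) for the rank-2 oriented matroid
    realized by the 2D column vectors `vectors`; chi is alternating."""
    chi = {}
    for i, j in itertools.permutations(range(len(vectors)), 2):
        d = vectors[i][0] * vectors[j][1] - vectors[i][1] * vectors[j][0]
        chi[(i, j)] = 0 if isclose(d, 0.0, abs_tol=1e-12) else (1 if d > 0 else -1)
    return chi
```

The pair {chi, −chi} is what the text denotes (±χ): negating every column of the matrix, or permuting the basis of X, flips the global sign but not the oriented matroid.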
In general, an oriented matroid is obtained from an arrangement of pseudospheres. Figure 2 illustrates an arrangement of pseudospheres.
Theorem 5 ([7]) (The Topological Representation Theorem, Folkman-Lawrence 1978) The rank r oriented matroids are exactly the sets (E, V*) arising from essential arrangements of pseudospheres in S^(r−1).
Let {+, −, 0} be the poset with the partial order 0 < − and 0 < +. The partial order on {+, −, 0}^n is the component-wise partial order.
Definition 6 ([1]) Let M and N be two rank r oriented matroids, and V*(M) and V*(N) the covector sets of M and N respectively. We say that N ≤ M if and only if for every X ∈ V*(N) there exists a Y ∈ V*(M) such that X ≤ Y. The oriented matroid M is said to weak map to N.
Definition 7 ([1]) MacP(p, n) denotes the poset of all rank p oriented matroids on elements {1, 2, …, n}, with weak map as the partial order. This poset is called the MacPhersonian [1].
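The weak map order of Definition 6 can be checked mechanically on finite covector sets (sign vectors encoded as tuples over {−1, 0, 1}, with 0 below both signs componentwise); a sketch with hypothetical helper names:

```python
def below(x, y):
    """Componentwise order on sign vectors: 0 < - and 0 < +."""
    return all(a == 0 or a == b for a, b in zip(x, y))

def weak_map_leq(cov_n, cov_m):
    """N <= M (i.e. M weak maps to N): every covector of N lies
    below some covector of M."""
    return all(any(below(x, y) for y in cov_m) for x in cov_n)
```

For example, the covector set of a rank 1 arrangement with two parallel positive elements sits below the covector set of the standard rank 2 basis arrangement, but not conversely.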
Let M be a rank p oriented matroid on elements [n] and χ : [n]^p → {+, −, 0} its chirotope. We have the following abstractions of notions from vector spaces and convexity.
2. Basis: A set {i_1, i_2, …, i_p} of size p is said to be a basis of M if and only if χ(i_1, i_2, …, i_p) ≠ 0.
3. Independence: A set {i_1, i_2, …, i_k} is said to be independent if it is contained in a basis of M.
4. Parallel/Anti-parallel: A non-loop element i is said to be parallel to a non-loop element j if for every (p − 1)-tuple (i_1, i_2, …, i_{p−1}) we have χ(i, i_1, i_2, …, i_{p−1}) = χ(j, i_1, i_2, …, i_{p−1}). Similarly, i is said to be anti-parallel to j if for every (p − 1)-tuple (i_1, i_2, …, i_{p−1}) we have χ(i, i_1, i_2, …, i_{p−1}) = −χ(j, i_1, i_2, …, i_{p−1}).
5. Convex hull: Let S be a subset of [n]. The convex hull of S is the set
An oriented matroid also comes with an abstraction of the notion of subspaces of a vector space. Let V be a rank k subspace of R^n and W a rank p subspace of V. The collection of sign vectors {(sign(x_1), sign(x_2), …, sign(x_n)) : (x_1, x_2, …, x_n) ∈ W} is a subset of the collection {(sign(y_1), sign(y_2), …, sign(y_n)) : (y_1, y_2, …, y_n) ∈ V}. Let M be the rank k oriented matroid determined by V, and let N be the rank p oriented matroid determined by W. Then V*(N) ⊆ V*(M).
Definition 8 ([3]) Let M be a rank k oriented matroid, and N a rank p oriented matroid. N is said to be a rank p strong map image of M if and only if V*(N) ⊆ V*(M).
Definition 9 ([3]) Let M be an oriented matroid. The poset of all rank p oriented matroids that are strong map images of M is denoted by G(p, M).
In Figure 3, the oriented matroid N is a rank 2 strong map image of M. We now define the combinatorial analog of the flag manifold Gr(p, k, R^n). The poset came up in the work of Gelfand and MacPherson in [4] and in the work of Babson in [2].
Definition 10 ([2]) We define MacP(p, k, n) as the poset of pairs (N, M) of oriented matroids, where M is a rank k oriented matroid on n elements and N is a rank p strong map image of M. The pair (N, M) is called a combinatorial flag. We say that (N_1, M_1) ≤ (N_2, M_2) if and only if N_1 ≤ N_2 and M_1 ≤ M_2.
As in the case of the map µ : Gr(p, R^n) → MacP(p, n) discussed earlier, we also have the map ν : Gr(p, k, R^n) → MacP(p, k, n) defined by (W, V) → (N, M), where N and M are the oriented matroids determined by the subspaces W and V respectively.
We can visualize an element of the flag manifold Gr(1, 2, R^n) as in Figure 4(a). In Figure 4
Reorientation of an oriented matroid
We have described a realizable rank p oriented matroid as determined by some arrangement of vectors in R^p. We now discuss the reorientation of a rank p oriented matroid.
In the language of vector arrangements, reorientation of an element i is simply that if (w 1 , w 2 , w 3 , . . ., w i , . . ., w n ) is a vector arrangement for N , then (w 1 , w 2 , w 3 , . . ., −w i , . . ., w n ) is a vector arrangement for M.
It then follows that if M is obtained from N by a reorientation or a sequence of reorientations, then µ⁻¹(M) is homeomorphic to µ⁻¹(N). Another useful observation is that the homeomorphism type of µ⁻¹(M) is unchanged by relabelling the elements of M.
3 Posets, Regular Cell Complexes and Topological Balls
Posets and Recursive atom ordering
Associated to every poset P is a simplicial complex ‖P‖ whose n-simplices are the chains x_0 < x_1 < ⋯ < x_n of elements x_i of P. ‖P‖ is called the order complex of P. We will be studying the topology of the order complexes of the posets MacP(2, n) and MacP(1, 2, n).
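Concretely, the simplices of the order complex are exactly the chains of P; a small sketch (hypothetical helper, brute force over subsets, so only suitable for tiny posets):

```python
import itertools

def order_complex(elements, leq):
    """Simplices of the order complex of a finite poset: all nonempty
    subsets that are chains, i.e. pairwise comparable under `leq`."""
    simplices = []
    for r in range(1, len(elements) + 1):
        for c in itertools.combinations(elements, r):
            if all(leq(a, b) or leq(b, a)
                   for a, b in itertools.combinations(c, 2)):
                simplices.append(frozenset(c))
    return simplices
```

For the divisors of 6 ordered by divisibility, this yields 4 vertices, 5 edges and 2 triangles, the order complex of a 2-element Boolean lattice's proper part joined with its bounds.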
Definition 15 ([8]) A finite poset P is said to be semimodular if it is bounded, and whenever two distinct elements u, v both cover x ∈ P, there is a z ∈ P that covers both u and v. The poset P is defined to be totally semimodular if it is bounded and every interval in P is semimodular.
Definition 16 ([8]) Let P be a poset. P is said to be thin if every interval of length 2 in P has exactly four elements.
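Thinness of a graded poset can be verified directly from Definition 16 by a brute-force scan; a sketch (hypothetical helpers; `rank` is assumed to be a rank function making P graded, so length-2 intervals are those with rank difference 2):

```python
def is_thin(elements, leq, rank):
    """Check that every interval [x, y] of length 2 (rank difference 2
    in a graded poset) contains exactly four elements."""
    for x in elements:
        for y in elements:
            if leq(x, y) and rank(y) - rank(x) == 2:
                size = sum(1 for z in elements if leq(x, z) and leq(z, y))
                if size != 4:
                    return False
    return True
```

For example, the augmented face poset of the boundary of a square (a 1-sphere) is thin, consistent with Theorem 20 below, while the augmented face poset of a path of two edges is not.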
Definition 17 ([8]) A graded poset P is said to admit a recursive atom ordering if the length of P is 1, or if the length of P is greater than 1 and there is an ordering a_1, a_2, …, a_t of the atoms of P which satisfies:
1. For all j = 1, 2, …, t, [a_j, 1] admits a recursive atom ordering in which the atoms of [a_j, 1] that come first in the ordering are those that cover some a_i where i < j.
2. For all i < j, if a_i, a_j < y, then there is a k < j and an element z ≤ y such that z covers a_k and a_j.
The following theorem and proof appear in the work of Bjorner and Wachs ([8]). We give below their proof of the "only if" direction.
Theorem 18 ([8]) A graded poset P is totally semimodular if and only if for every interval [x, y] of P , every atom ordering of [x, y] is a recursive atom ordering.
Proof (of the "only if" direction) Let P be a totally semimodular poset with length greater than 1. If [x, y] ≠ P, then [x, y] is totally semimodular, and by induction every atom ordering of [x, y] is a recursive atom ordering.
Let a_1, a_2, …, a_n be any atom ordering in P. Since every atom ordering in [a_j, 1] is recursive, order the atoms of [a_j, 1] so that those that cover a_i for some i < j come first. Let y ≥ a_i, a_j. Since P is totally semimodular, there is a z ∈ P that covers both a_i and a_j.
Regular cell complexes
Let {e_α} be a cell decomposition of a space X. The decomposition is regular if for each open m-cell e_α there exists a continuous map f_α : D^m → X that is a homeomorphism onto the closed cell ē_α, maps the interior of D^m onto e_α, and maps the boundary ∂D^m into a finite union of open cells, each of dimension less than m.
Let {e_α} be a regular cell decomposition of X and F(X) the face poset of the decomposition, with e_α ≤ e_τ if and only if e_α ⊆ ∂e_τ. The complex ‖F(X)‖ is homeomorphic to X.
Theorem 20 ([8]) If a poset P containing 0 and 1 is thin and admits a recursive atom ordering, then P is the augmented face poset of a regular cell decomposition of a PL sphere.
Cell Collapse
Hersh in [9] introduced a general class of collapsing maps which may be performed sequentially on a polytope while preserving homeomorphism type. Each such map is defined by first covering a polytope face with a family of parallel lines, or more generally a family of parallel-like segments, across which the face is collapsed. An example [9] of such a collapsing map is given below:
Example 21 ([9]) Let ∆² be the convex hull of (0, 0), (1, 0), (0, 1/2) in R², and let ∆¹ be the convex hull of (0, 0) and (1, 0) in R². There is a continuous and surjective function g : R² → R² that acts homeomorphically on R² \ ∆², sending it onto R² \ ∆¹. The simplex ∆² is covered by vertical line segments with an end point in ∆¹. The map g has the property that it maps each vertical line segment to its end point in ∆¹.
Let R = {(x, y) : Let g act as the identity outside R.
Hersh gave a formal definition of a collapsing map:
Definition 23 ([9]) Given a finite regular CW complex K on a set X and an open cell L in K, define a face collapse or cell collapse of L onto τ, for τ an open cell contained in ∂L, to be an identification map g : X → X such that:
1. Each open cell of L is mapped surjectively onto an open cell of τ, with L mapped onto τ.
2. g restricts to a homeomorphism from K \ L to K \ τ and acts homeomorphically on τ.
3. The images under g of the cells of K form a regular CW complex, with new characteristic maps obtained by composing the original characteristic maps of K with g⁻¹ : X → X for those cells of K contained either in τ or in K \ L.
Definition 24 ([9]) Let K^0 be a convex polytope, and let C_i^0 be a family of parallel line segments covering a closed face L_i^0 in ∂K^0, with the elements of C_i^0 given by linear functions c : [0, 1] → L_i^0. Suppose that there is a pair of closed faces G_1, G_2 in ∂L_i^0 with c(0) ∈ G_1 and c(1) ∈ G_2 for each c ∈ C_i^0, and there is a composition i and some (t, t′) = (1, 1). Then t = t′, and for each t ∈ [0, 1] we have
Theorem 25 ([9]) Let K^0 be a convex polytope. Let g_1, …, g_i be collapsing maps with g_j : X_{K_{j−1}} → X_{K_j} for regular CW complexes K^0, K^1, …, K^i all having the underlying space X. Suppose that there is an open cell L_i^0 in ∂K^0 upon which
The sequence of cell collapses needed in the proof of Theorem 43 will be of the form illustrated by the example below.
The end point of the curve c_{(x,r)} lies in G_2^0. Let C^0 = {c_{(x,r)} : (x, r) ∈ G_1^0}. Then C^0 is a collection of parallel-like line segments covering the face L^0. By Theorem 25, there is an identification map g_1 : K^0 → K^0 specified by C^0. The map has the property that the curves in C^0 are mapped to their end points in G_2^0. Let ∼_1 be the relation on K defined by (x, r) ∼_1 (x′, r′) if and only if
G_1^1 and G_2^1 are faces of L^1. Let C_1^0 be a collection of parallel line segments covering L^1 and with endpoints in G_1^1 and
} is a collection of parallel-like segments covering g_1(L^1). By Theorem 25, there is an identification map g_2 :
In Theorem 43, we will apply homeomorphism-preserving cell collapses on the boundary of a closed ball of the form
The identification ∼ as in Example 26 is given here as in Figure 9.
Topological Balls
Theorem 27 ( [10]) Let S be an (n − 1) sphere in S n and H a component of S n − S.
If H is a topological manifold with boundary, then H is homeomorphic to an n-ball.
Corollary 28 Suppose int(D n ) is the interior of a closed unit ball centered at the origin.
sphere, then H is homeomorphic to a closed unit ball D n .
Definition 29 Let M be a topological manifold and S ⊆ M .The subset S is said to be collared in M if there is a neighborhood N (S) ⊆ M and a homeomorphism h : S × I → N (S) satisfying h(x, 1) = x.
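For concreteness (an illustrative example we add here, not part of the original), the boundary sphere of the closed unit ball is collared by the radial map:

```latex
% Collar of S^{n-1} = \partial D^n inside D^n:
h : S^{n-1} \times [0,1] \longrightarrow N(S^{n-1}) = \{\, x \in D^n : \tfrac12 \le \lVert x \rVert \le 1 \,\},
\qquad h(x,t) = \tfrac{1+t}{2}\, x ,
```

which is a homeomorphism onto the annulus and satisfies h(x, 1) = x, as Definition 29 requires.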
Theorem 30 ( [11]) The boundary of a topological manifold M with boundary is collared in M .
Proposition 31 Suppose G ⊂ X is homeomorphic to an open ball and G is a topological manifold whose boundary is a sphere. Then G is homeomorphic to a closed ball. Proof Let h 1 : G → int(D n ) be a homeomorphism to the open unit ball and S = ∂G the boundary of G. As G is a topological manifold whose boundary is S, by Theorem 30, there is a collared neighborhood N (S) of S in G and a homeomorphism ) is a topological manifold whose boundary is the sphere h 1 (A). By Corollary 28, A ′ is homeomorphic to a closed ball.
We will denote by h 3 : where D 1 2 is a closed ball of radius 1 2 centered at the origin. Define: h 4 gives a homeomorphism from N (S) ⊆ G to the annulus in D n , the unit ball. The map h : G → D n given by h = (h 3 • h 1 ) ∪ h 4 is a homeomorphism by the Gluing Lemma [7].

4 The stratification µ −1 (M )

Let M be a rank 2 oriented matroid. We will first determine the topology of µ −1 (M ) = {X ∈ Gr(2, R n ) : (±χ X ) = M }. We know from the background on oriented matroids in Section 2 that the topology of µ −1 (M ) is invariant under relabeling and reorientation of elements of the oriented matroid. We may thus assume that {1, 2} is a basis of M for the rest of the proofs in this chapter. We may also assume that for any X ∈ µ −1 (M ), X can be uniquely expressed as Arranging the arguments of the non-zero vectors in an increasing order as 0 we can thus uniquely represent X as ((θ l1 , θ l2 , . . ., θ ln 1 ), (θ j1 , θ j2 , . . ., θ jn 2 ), (r λ ) λ ). An example of such an identification is given in Figure 6. The above representation

Notation 33 Let M ∈ MacP(2, n) and i a non-loop in M . l M denotes the number of non-loop elements in M . p M denotes the number of distinct parallel/anti-parallel classes of M . P M (i) denotes the set of elements that are parallel/anti-parallel to i.
The following proposition then follows from the above identification of µ −1 (M ).
Proposition 35 Let (±χ N , N ), (±χ M , M ) be two rank 2 oriented matroids. Suppose that M covers N . Then exactly one of the following possibilities holds: CR1 There is exactly one i such that |P M (i)| ≥ 2 in M , i is a loop in N and χ N (j, k) = χ M (j, k) for j, k ≠ i. CR2 There are exactly two distinct parallel/anti-parallel classes P M (i) and P M (j) in M such that the following holds: Also, suppose that M and N satisfy CR2. Then there are adjacent parallel classes P M (i), P M (j) with χ N satisfying the definition in CR2. Suppose there is an Let N 0 < M , and let i, j be non-loops in N 0 . χ M (i, j) > 0 implies that χ N0 (i, j) ≥ 0. That is, for any subspace realization Rowspace(e implies that Arg(w i ) ≤ Arg(w j ) for any vector realization Rowspace(e 1 e 2 w 3 w 4 • • • wn) ∈ µ −1 (N 0 ). Also χ M (i, j) = 0 implies that χ N0 (i, j) = 0. In other words, Arg(v i ) = Arg(v j ) implies that Arg(w i ) = Arg(w j ) if i, j are non-loops in N 0 . Suppose l i is a non-loop in M such that all elements in P M (l i ) are loops in N 0 . Let a be a non-loop in N 0 satisfying the condition that for any element p with χ M (p, a) = +, we have χ M (p, l i ) = +. Similarly, let b be a non-loop in N 0 satisfying the condition that for any element p with χ M (b, p) = +, we have χ M (l i , p) = +.
The face of {(θ 1 , θ 2 , . . ., θn) Repeating the same argument for all such l i with all elements in P M (l i ) loops in N , we can assume that N ′ is a rank 2 oriented matroid such that N ′ < M is obtained from M by sequences of CR2 and each parallel class P N ′ (i) contains a non-loop of N .If a, b are non-loops in N such that χ N (a, b) = 0 and χ ′ N (a, b) = +, this corresponds to the face of {(θ 1 , θ 2 , . . ., θn 1 ) : From N ′ we can obtain by sequences of CR2 a rank 2 oriented N ′′ having the same number of distinct parallel class as N and the same number of non-loops as M .We can then obtain N from N ′′ by sequence(s) of CR1.In particular, if M covers N 0 , then the proposition follows.
Lemma 37 If N < M are rank 2 oriented matroids such that M covers N .Then Proof Case 1: Suppose the covering relation N < M is CR1.There is exactly one p in a parallel/anti-parallel class Case 2: Suppose the covering relation N < M is CR2.There are two distinct parallel/anti-parallel classes P M (s) and P M (j) in M such that the following holds: Let X 1 = Rowspace(e 1 e 2 • • • vn) be any vector arrangement in µ −1 (N ).Let vs = rse iθs , v j = r j e iθs , and suppose WLOG that s and j are parallel in for every sufficiently small value of ǫ.Hence, and N the rank 2 oriented matroid determined by X.Then N ≤ M .
Conversely, suppose N, M ∈ MacP(2, n) are such that N < M . We will show that

A useful observation from Proposition 35 and Proposition 38 is the following: Suppose N < M are rank 2 oriented matroids such that P M (l 1 ) and P M (l 2 ) are distinct parallel/anti-parallel classes in M , but P M (l 1 ) ∪ P M (l 2 ) is a parallel/anti-parallel class in N . Then we can obtain a rank 2 oriented matroid N ′ ≤ M such that N ′ covers N , and P M (l 1 ) and P M (l 2 ) lie in distinct parallel/anti-parallel classes in N ′ .
An example of such a construction is the following: Suppose Rowspace(v 1 v 2 . . .v n ) determines a vector realization for N and assume WLOG that P N (l 1 ) = P M (l 1 ) ∪ P M (l 2 ).We obtain a rank 2 oriented matroid N ′ as follows: let v l2 = r l2 e iθ l 2 and, we denote by P M (l 1 ) the subset of P M (l 1 ) that are anti-parallel to l 2 in N : The rank 2 subspace Rowspace(w 1 w 2 . . .w n ) determines a rank 2 oriented matroid N ′ ≤ M that covers N .
5 Shellability of the interval MacP(2, n) ≤M ∪ { 0}

We will show that the poset MacP(2, n) ≤M ∪ { 0} is the augmented face poset of a regular cell decomposition of an h(M ) − 1 dimensional sphere.
Lemma 39 Let W < T be rank 2 oriented matroids.Then the interval [W, T ] is totally semimodular.
Proof Let N, N 1 , N 2 ∈ [W, T ] such that N 2 and N 1 cover N .In Proposition 35, we proved that there are two possible scenarios when N 1 covers N and similarly for N 2 covering N .So there are the following three distinct cases: Case 1: If N < N 1 and N < N 2 are both case CR1 of Proposition 35.That is, there are i = j such that i, j are loops in N but i is a non-loop in N 1 with parallel/anti-parallel class We consider a dictionary order for ordered pairs for the atoms of M as X 1 , X 2 , . . ., X k .We now verify the conditions in Definition 40.By Lemma 39, the interval [X i , M ] is totally semimodular for each i, and so it has a recursive atom ordering by Theorem 18.In fact any ordering of the atoms in [X i , M ] gives a recursive atom ordering of the poset [X i , M ] by Theorem 18.
We now verify the second condition in Definition 40.Suppose A 1 = (i 1 , j 1 ) < A 2 = (i 2 , j 2 ) in the dictionary order and A 1 , A 2 < Y where Y ∈ MacP(2, n) ≤M ∪{ 0}.We will consider the following cases: Case 1: Suppose i 1 = i 2 .Then j 1 < j 2 .Let j be the maximum label such that j 1 ≤ j < j 2 and the element labeled by j is a non-loop in Y .We obtain Z as (i 1 , jj 2 ), where j and j 2 are parallel if and anti-parallel if ǫ 1 • ǫ 2 = −1.Such a Z covers both (i 1 , j) and (i 1 , j 2 ).We have Z ≤ Y and (i 1 , j) < (i 2 , j 2 ) in the dictionary order.
Case 2: Suppose i 1 < i 2 . Let i be the maximum label such that i 1 ≤ i < i 2 and the element labeled i is not a loop in Y . Then Z is obtained as (ii 2 , j 2 ). Similarly, such a Z is a rank 2 oriented matroid covering the atoms (i, j 2 ) and (i 2 , j 2 ). We have that Z ≤ Y and (i, j 2 ) < (i 2 , j 2 ) in the dictionary order.
For each j > 1, let Q j = {Y ∈ atom[X j , M ] : Y ≥ X i for some i < j}. In the recursive atom ordering of [X j , M ] for j > 1, we let the elements of Q j come first. This determines a recursive atom ordering for the poset [X i , M ] by Theorem 18, as the interval [X i , M ] is totally semimodular for each i by Lemma 39.
Lemma 41 Let N be a rank 2 oriented matroid.
we will consider the covering relations N 0 < N 1 and N 1 < N 2 according to Proposition 35. Case 1: If the coverings N 0 < N 1 and N 1 < N 2 are both CR1 of Proposition 35. That is, there are non-loops i, j so that the parallel/anti-parallel class P 1 (i) has size |P 1 (i)| ≥ 2 in N 1 and i is a loop in N 0 . Also, the parallel/anti-parallel class P 2 (j) has size |P 2 (j)| ≥ 2 in N 2 and j is a loop in N 1 . Now, suppose Rowspace(v 1 v 2 . . . vn) determines a vector realization for N 2 . Then Rowspace(v 1 v 2 . . . v i−1 , 0, v i+1 . . . vn) determines a vector realization for a rank 2 oriented matroid If the parallel/anti-parallel class P 2 (i) is of size |P 2 (i)| ≥ 2 in N 2 , then we obtain N ′ 1 as in the above case from N 2 . If |P 2 (i)| = 1 in N 2 , then there are distinct classes P 2 (j), P 2 (k) = P 2 (i) in N 2 so that P 2 (i) ∪ P 2 (j) is a parallel/anti-parallel class in N 1 ; and if Rowspace(v and k is a loop in N 1 . Let j ∈ P 2 (k) \ {k} and suppose X 0 = Rowspace(w 1 w 2 . . . wn) determines a vector realization for N 0 . Then w k = 0. We obtain the vector realization X ′ 1 that determines N ′ 1 by replacing w k = 0 with w ′ k = w j , so that N 0 < N ′ 1 < N 2 and N 1 = N ′ 1 . Case 4: If the coverings N 0 < N 1 and N 1 < N 2 are both CR2 of Proposition 35. That is, there are classes P 1 (l), P 1 (j) in N 1 so that P 1 (l) ∪ P 1 (j) is a parallel/anti-parallel class in N 0 and there are classes P 2 (k), P 2 (r) in N 2 so that P 2 (k) ∪ P 2 (r) is a parallel/anti-parallel class in N 1 . Let X 0 = Rowspace(v 1 v 2 . . . vn) determine a vector realization for N 0 , and v k = r k e iθ k .
We can similarly obtain a vector X ′ 1 = Rowspace(w 1 w 2 . . . wn) as in the observation after Proposition 38, so that N ′ 1 is the rank 2 oriented matroid determined by X ′ 1 .

Proposition 42 Let M be a rank 2 oriented matroid. Then the interval MacP(2, n) ≤M ∪ { 0} is the augmented face poset of a regular cell decomposition of a P L sphere.
Proof of Proposition 42
The proposition now follows from Proposition 40, Lemma 41 and Theorem 20.
To prove that µ −1 (M ) is homeomorphic to a closed ball, we will first prove that µ −1 (M ) is a topological manifold with boundary ∪ N <M µ −1 (N ), as suggested by the following theorems and propositions.
For every X ∈ ∂µ −1 (M ), we will obtain a closed neighborhood N X of X that is homeomorphic to a closed ball of dimension h(M ) and such that X is a point on the boundary of N X .
The map T is not necessarily one-to-one on the boundary of B M .Figure 9 illustrates two points on the boundary {r 3 = 0, r 6 = 0} of B M with the same image under T .
Let ∼ be such identifications on the boundary of B M given by x ∼ y if and only if T (x) = T (y).The identification is as discussed in Example 26.
The map The following theorem now follows from Proposition 38, Proposition 40, Lemma 41, Theorem 20 and Theorem 43. Theorem 44 Let M be a rank 2 oriented matroid. There is a homeomorphism from µ −1 (M ) to an h(M )-dimensional closed ball.
Conclusion
We have proven that the complexes MacP(2, n) and MacP(1, 2, n) associated to combinatorial Grassmannians have the same homeomorphism type as the Grassmannian manifolds Gr(2, R n ) and Gr(1, 2, R n ), respectively. Our argument relies mainly on the realizability of rank 2 oriented matroids, and on the fact that the strata µ −1 (M ) and ν −1 (±z, M ) are homeomorphic to open balls; the argument thus does not apply to oriented matroids of rank at least 3.
Fig. 1 A rank 3 oriented matroid from an arrangement of equators
Fig. 3 Strong map image
Fig. 4 A flag of subspaces, and the rank one oriented matroid
Lemma 14 Suppose N is obtained from M by reorientations of some elements of [n]. Then the posets ( 0, M ) and ( 0, N ) are isomorphic. Proof Suppose N is obtained from M by reorienting the elements in A ⊆ [n]. Then the required poset isomorphism R A : ( 0, M ) → ( 0, N ) is obtained by taking R A (Y ) to be the reorientation of the rank 2 oriented matroid Y by the elements in A.
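In chirotope terms, reorientation acts by a sign flip (a standard identity we add here for illustration; it is not spelled out in the original text). For a rank 2 chirotope and a reorientation set A ⊆ [n]:

```latex
\chi_{{-_A}M}(i,j) \;=\; (-1)^{\lvert A \cap \{i,j\}\rvert}\,\chi_{M}(i,j), \qquad i,j \in [n],
```

so the map R A in the proof of Lemma 14 simply applies this sign rule to each element Y of the interval, which clearly preserves the order relations.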
Definition 22 ([9]) Let g : X → Y be a continuous, surjective function, with the quotient topology on Y ; that is, the open sets in Y are exactly the sets whose inverse images under g are open in X. We call such a map g an identification map.
Exploring the Relationship Between Work-Family Conflict & Perceived Organizational Support on Project Commitment
This study aims to test the effects of work-family conflict and perceived organizational support on project commitment among project managers in the construction industry. The construction industry in Pakistan is booming, as many international market players have now entered the market. A cross-sectional study was conducted on 179 currently working project managers (middle managers and lead managers) within the twin cities of Pakistan. A non-probability convenience sampling technique was used, and data were collected through questionnaires. The study focuses on project commitment and the variables affecting it: work-family conflict shows a negative effect, reflecting the imbalance it creates, while perceived organizational support shows a positive relationship; such support helps employees work with enthusiasm. Role theory is used to support hypothesis H1 and organizational support theory to support hypothesis H2. Both hypotheses are supported significantly by the results, which further explain the relationship between these variables. It would also be constructive to examine the mediating and moderating impact of other variables, such as family and spouse support and burnout. Further, the outcome-related variables can also be tested on project managers in other industries.
Introduction
The Pakistani construction industry has always been economically and socially important to the country. Viewed from different perspectives, Pakistani construction holds a share of both the local and global markets, and the sector has come to cover a large portion of the market (Khan, 2015). The housing and construction sector in Pakistan plays an important role in developing the economy and reducing unemployment. It delivers significant employment opportunities, as it contributes through strong multiplier effects and a host of favorable forward and backward linkages with the economy (Hillebrandt, 1985).
The success or failure of every business around the world depends on the commitment and engagement of employees within the organization; work-family conflict and perceived organizational support pull this commitment in opposite directions (Greenhaus, 1985; Bowen & Govender, 2017). Work-family conflict is a two-way conflict between work interfering with family (WFC) and family interfering with work (FWC), which pushes one's commitment and handling of situations toward poorer performance (Xiaoyu Yu, 2020). The stress arising between family and work affects work performance and commitment, which further upsets life balance. Perceived organizational support (POS), on the other hand, reflects shared values between employer and employee and fosters loyalty and dedication. Employees who are emotionally committed to their work demonstrate higher commitment, optimal performance, fewer days off, and a lower likelihood of quitting their jobs (Kim, 2017). Generally, employees are concerned with their commitment to the organization, and being valued by the organization helps steer them toward benefits and rewards. The main focus of this research paper is project commitment (PC): how much effort employees put toward their work or project goals (Wang & Armstrong, 2004). Commitment includes the emotional attachment of employees to the organization, together with a willingness to identify with organizational aims regardless of the situation. Among all the elements of work, the most problematic is a lack of commitment; commitment plays a vital role in the success of a project (Gurbuz, 2013).
Theoretical Background and Literature Review
The literature review explains the relationships among work-family conflict, perceived organizational support, and project commitment. These are factors that affect project productivity and the overall performance of projects.
Work-family Conflict and Project Commitment
Work-family conflict has been defined as the role conflict that occurs when a person tries to fulfill dual responsibilities, such as working while being a mother or father (Dubrin, 1991). In the research literature, the term "work-family conflict" is seen as a form of inter-role conflict, in which pressure on the individual arises from maintaining the balance between work and home (Greenhaus, 1985). This situation is examined under two conditions: work-family conflict (WFC) and family-work conflict (FWC). The prime aspect of work-family conflict usually appears when it comes to maintaining the balance between these two domains, which is not easy. In the first case, job-related matters interfere with the performance of family-related duties (Nart & Batur, 2014). The second aspect is family-work conflict, which happens when family demands keep the individual from maintaining his or her work responsibilities. The two have different priority levels: individuals work harder and harder to maintain a better life at home, but this effort takes a portion of family time for work, which leads to work-family conflict (Hakanen, 2008). In simple words, conflict arises when an employee is not able to maintain a balance between work and family. The distinction between WFC and FWC usually becomes visible when different factors come into dispute with each other, and earlier studies have emphasized that there is a significant correlation between these roles (Nart & Batur, 2014).
Role theory, introduced in 1964, describes a form of inter-role conflict in which role pressures from the work and family domains are mutually incompatible. Such pressure means that participation in the work role is made harder by the family role, and vice versa (Aminah A, 2008).
Commitment refers to one's involvement: the willingness to agree with a decision and to try one's level best to carry out the associated responsibilities. By extension, project commitment is characterized by a person's willingness, acceptance of organizational goals, assurance toward the project, and intention to carry the relationship further (Hoegl, 2006). Goal commitment is aligned with project commitment, which depends on members' acceptance, involvement, and sustained relationship with the project team.
The relationship between these two variables tends toward a negative association, because work-family conflict creates imbalance between an individual's work life and home life, while project commitment demands full effort toward achieving the project goal (Kyle & Peter, 2013). H1: Work-family conflict has a negative relationship with project commitment.
Perceived Organizational Support and Project Commitment
As the modern organizational environment becomes more varied and complex, perceived organizational support (POS), which refers to employees' perceptions that the organization cares about their needs and welfare and acts accordingly, grows in importance (Eisenberger, 1986). This mutual understanding between employees and the organization is explained by social exchange theory and the norm of reciprocity. Organizational support theory holds that, given this mutual understanding, employees feel an inner obligation to return supportive and favorable behavior toward their organization's aims (Eisenberger, 2011). POS also covers the socio-emotional needs of employees (e.g., the need for appreciation), which leads to favorable conditions for them.
Accordingly, POS is positively related to many outcomes, such as job satisfaction, job performance, positive attitudes, and a good work environment. Some papers even state that POS decreases employees' turnover intentions, burnout, and absenteeism (Kurtesis, 2017). Previous studies state that POS helps in creating positive attitudes and decreasing conflict.
Much of the literature on commitment examines project commitment, defined as the bond between individuals and their project team. The most commonly studied form is affective commitment, which reflects the individual's relative effort and involvement toward the goal of the project (Caesens, 2016). Commitment can be sustained when the employees and the project share a goal, or when both parties gain a positive outcome (rewards or a completed project).
Building commitment to the project or the organization is one of the greatest challenges for the project manager, and it is important to a functional team. Successful project managers develop commitment through supportive and innovative behaviour. In construction, project commitment means focusing on what is important for the project and revising one's work accordingly, which leads to a successful project and then to rewards (Gaetane, 2019).
These two variables are positively associated with one another: POS creates balance in employees' work lives, which leads the project toward success, while project commitment channels full effort toward achieving the project goal. H2: Perceived organizational support has a positive relationship with project commitment.
Research Framework
The research model for this study is built with the help of role theory (1964) and organizational support theory. Under these theories, strain in the form of work-family conflict leads to extreme stress, which in turn creates problems in employees' lives; on the other side, supporting employees is a key to a successful project. The research aims to find out how the conflict between work and life affects the satisfaction of an employee both at work and in personal life. Based on this, the research model is shown in Figure 1.
Model
The model shows work-family conflict and perceived organizational support as the independent variables affecting project commitment, the dependent variable. According to our hypotheses, work-family conflict has a negative effect and perceived organizational support has a positive effect on project commitment (Figure 1).
Methodology
This research adopted a descriptive approach. Data were collected through non-probability convenience sampling, using questionnaires. Questionnaires were distributed among project managers and project team members in the construction industry. Based on geographical area, 200 respondents (project managers and individuals working on the same projects) were selected from the twin cities of Pakistan (Islamabad and Rawalpindi), especially project managers currently working on projects, because their demanding work schedules require them to balance time at home. Data were then collected from 179 project managers and middle managers, for a total response rate of 89.5%.
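The reported response rate can be checked directly (a trivial sketch; the counts are the ones stated in the text):

```python
# Response-rate arithmetic using the counts reported in the text.
distributed = 200  # questionnaires delivered
usable = 179       # completed questionnaires retained for analysis

response_rate = usable / distributed * 100
print(f"{response_rate:.1f}%")  # -> 89.5%
```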
To measure work-family conflict, the 10-item WFC scale of (Carlson, 2000) and (Stephens and Sommer, 1996) was adopted. Sample items include: ''my work keeps me from my family activities more than I would like'' and ''the time I must devote to my job keeps me from participating equally in household responsibilities and activities''.
Perceived organizational support was measured with the 7-item POS scale of (Eisenberger et al., 1997, 2001). Sample items include: ''help is available from my organization when I have problems supporting the elderly and children'' and ''my organization really cares about my well-being''. To measure project commitment, the dependent variable of the research, the 5-item scale of (Blau G, 1985) was adopted. Sample items include: ''I want the career I am doing now'' and ''if I could do it all over, I would still choose the same career''.
Responses to each statement were recorded on a 5-point Likert scale (Likert, 1932), where 1 indicates ''not significant'' and 5 indicates ''always significant''.
Demographic variables of the project managers, namely gender, age, monthly salary, marital status, number of children, and work experience, were measured during the analysis, since previous studies have reported significant results for these. IBM SPSS was used to compute frequency distributions, test the hypotheses, and perform reliability, regression, and correlation analyses.
Results
After running the tests, both hypotheses are accepted. The details follow.
Regarding qualifications, ranging from bachelor's degrees to other education: 99 respondents (55.3%) hold bachelor's degrees, 64 (35.8%) hold master's degrees, 12 (6.7%) hold PhDs, and 4 (2.2%) have other education. The next frequency distribution concerns experience in the construction industry: 80 respondents (44.7%) have 0-3 years, 23 (12.8%) have 3-5 years, 26 (14.6%) have 5-7 years, and 50 (27.9%) have 8 or more years. In total, 200 questionnaires were delivered to project managers; 179 were used for the tests and the remainder were incomplete and therefore excluded. Furthermore, reliability tests, correlation analysis, and regression analysis were performed. Reliability tests were performed in IBM SPSS to check the internal consistency of the questionnaires; it is important that values meet the Cronbach's alpha benchmark of 0.7. The results are: work-family conflict (10 items) has a Cronbach's alpha of .798, perceived organizational support .850, and project commitment .748. All of these values show that the measures are reliable and satisfactory.
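As an illustration of how Cronbach's alpha values like those above are computed (a pure-Python sketch on made-up item scores; this is not the study's raw data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one inner list of respondent scores per questionnaire item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# Toy 3-item, 5-respondent example (illustrative data only):
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 4, 2, 4, 3]]
print(round(cronbach_alpha(items), 3))  # -> 0.864
```

A value above the 0.7 benchmark, as with the three scales reported above, is conventionally taken as acceptable internal consistency.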
Correlation analysis was performed to examine the relationships between the variables, i.e., whether each independent variable has a significant relationship with the dependent variable. The correlation between work-family conflict and project commitment is negative, as our first hypothesis predicts, and, in line with the second hypothesis, perceived organizational support has a positive relationship with the dependent variable. Both hypotheses are therefore supported and significant.
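A minimal sketch of the Pearson correlation computation behind such an analysis (illustrative scores only; the study's raw data are not reproduced here):

```python
from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Hypothetical scores: higher work-family conflict tends to go with
# lower project commitment, giving a negative r.
wfc = [2, 3, 4, 4, 5]
pc  = [5, 4, 4, 3, 2]
print(round(pearson_r(wfc, pc), 2))  # -> -0.92
```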
Regression analysis is typically performed to determine whether a hypothesis is accepted or rejected; it was performed on all variables.
Work-family conflict has a negative effect on project commitment, with a beta value of .273, so a one-unit change in work-family conflict produces a .273 change in project commitment. Perceived organizational support has a positive effect on project commitment, with a beta value of .374. The tests show an R-square value of .170, so our independent variables explain 17% of the variation in the dependent variable, which is significant. The t-statistics are 3.149 for work-family conflict and 5.094 for perceived organizational support, respectively. Both values exceed the benchmark (1.96), and both hypotheses are accepted.
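The regression quantities reported above (a slope/beta and a t-statistic compared against the 1.96 benchmark) can be sketched for a single predictor as follows (hypothetical scores, not the study's data):

```python
from math import sqrt
from statistics import mean

def ols_slope_t(x, y):
    """Single-predictor OLS: returns (slope, t-statistic).
    |t| > 1.96 is the usual 5% significance benchmark cited in the text."""
    n, mx, my = len(x), mean(x), mean(y)
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    sse = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    se = sqrt(sse / (n - 2) / sxx)  # standard error of the slope
    return slope, slope / se

# Hypothetical POS vs. project-commitment scores:
pos = [2, 3, 3, 4, 5, 5, 6, 7]
pc  = [3, 3, 4, 4, 5, 6, 6, 7]
b, t = ols_slope_t(pos, pc)
print(round(b, 3), round(t, 2))  # -> 0.843 7.82
```

The study itself used a two-predictor model in SPSS; this one-predictor version only illustrates how a beta and its t-statistic arise.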
In sum, after running all the tests, both of our hypotheses are accepted. Data were collected through questionnaires sent to project managers and other team members (lead managers and middle managers). The results show that work-family conflict has a negative effect on project commitment and perceived organizational support has a positive effect on project commitment.
Discussion
Construction is a complex industry; it requires commitment, accuracy, and a great deal of hard work, and the industry is still growing rapidly. It has been noted that the construction industry has strong linkages with the economy. The results supported the hypotheses. H1 concerned the negative effect of work-family conflict on project commitment. Work-family conflict is a very topical subject nowadays because, whether one is a teacher, a doctor, or in any other profession, this conflict is commonly faced: maintaining the balance between work life and home life is a basic concern of any individual, while project commitment reflects how invested one is in the project and its responsibilities. When interviewed, one project manager at a construction site said that it is tough to maintain the balance: ''There is so much hustle; sometimes I need many of my people at different places, but I still try hard to manage. The project is in my hands, but every individual here works for his family.''
Further, H2 concerned the impact of perceived organizational support on project commitment; the two have a significantly positive relationship. Perceived organizational support reflects how much one's organization supports its people. Supportive, project-based organizations play a vital role in employees' lives: when an organization understands employees' needs and problems, employees turn more toward the organization, put more heart into the work, and thereby show more commitment and responsibility toward their project.
Limitations and Implications
This study adds to the existing knowledge on work-family conflict, perceived organizational support, and project commitment. The measures were drawn from previous research, with project commitment as the main variable: the individual has to perform his or her duty responsibly, and supportive organizations help employees work harder. Supporting employees and solving their problems helps them maintain balance at the workplace, so that they do not have to carry the conflict between work life and home life.
The limitations of the present study must certainly be acknowledged. First, this study used a cross-sectional survey: data on work-family conflict, perceived organizational support, and project commitment were collected at a single point in time, and the conclusions were drawn from that survey. Thus, future work could apply a longitudinal design to this model to capture more of the variation in the results.
While conducting the survey, many respondents said that they face emotional exhaustion and depression while managing both lives. In my view, future research should therefore add burnout as a mediator between work-family conflict and project commitment, to capture more variation and the other side of the picture.
Since work-family conflict plays out both at home and in the workplace, where support matters a great deal, family support or spouse support could be added as a moderator to this model to gain a clearer picture.
Conclusion
Construction is a wide and fast-growing industry; as it moves at a rapid pace toward successful projects, other problems arise along the way. The most widely studied variable here is work-family conflict: the imbalance between work life and home life. Perceived organizational support is support provided by the organization; whether it comes from the organization or the project team, such support comforts the employee and helps maintain better commitment to work. The survey was conducted in the twin cities of Pakistan; data were gathered in a cross-sectional study using questionnaires. Role theory underpins the first hypothesis, as inner role pressure creates chaos in people's lives, and organizational support theory underpins the second hypothesis. This study shows that employees face many problems in managing their lives, and that supportive organizations pave the way toward success. The model, with one positive and one negative hypothesis, concerns commitment across work life and home life.
"year": 2020,
"sha1": "a285dfdc0799fecb488a8900d05948feede00240",
"oa_license": "CCBY",
"oa_url": "https://iiste.org/Journals/index.php/EJBM/article/download/54379/56190",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "b6f7c022ef1956137db10ba40b80059e8ca92393",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
Models of Fermion Mass Matrices Based on a Flavor- and Generation-Dependent U(1) Gauge Symmetry
We study models of fermion mass matrices based on a flavor- and generation-dependent string-motivated U(1)$_A$ gauge symmetry and report two new classes of solutions to the requisite consistency conditions. In particular, we propose that the fundamental reason underlying the striking feature $m_b, \ m_\tau<<m_t$ is that all of the elements of the down-quark and charged lepton effective Yukawa matrices actually arise from higher-dimension operators, suppressed by inverse powers of the Planck mass. An explicit model embodying this idea is constructed.
The pattern of fermion masses and mixing remains one of the most important mysteries in particle physics. The successful standard model (SM) can accommodate but not explain this pattern. A satisfactory understanding would require that one have an experimentally confirmed theory explaining the related electroweak symmetry breaking (EWSB), and one does not have this at present. Nevertheless, one may proceed by exploring plausible models. The fermion mass spectrum has several striking features: (i) within each charge sector, the masses increase with generation by large factors: m u << m c << m t , m d << m s << m b , and m e << m µ << m τ ; (ii) if one assumes that all of these masses arise from conventional, dimension-4 Yukawa couplings, the associated Yukawa couplings for all of these fermions except the top quark are all much smaller than a typical small coupling like e = √ 4πα ≃ 0.3, without explanation; (iii) related to (ii), even if one restricts to the third generation, the masses are still quite different: m τ and m b are both << m t . A related feature is that (iv) the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix is near to the identity. If one assumes certain simple forms for Yukawa matrices with zeros in various entries, it is possible to explain property (iv) as a consequence of (i), because the quark mixing angles are functions of (square roots of) small mass ratios like m d /m s , m u /m c , etc. [1,2]. However, in such an approach, the fermion masses are used as inputs, and properties (i), (ii), and (iii) are not explained. 1 Indeed, although there has been recent progress in explaining (i) and (iv) via contributions of higher-dimension operators at a high mass scale near to the scale of quantum gravity [3], such efforts have not addressed what, to us, seems an equally remarkable feature, viz., (iii).
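The statement that mixing angles can emerge as square roots of mass ratios can be checked numerically for the classic two-generation texture with a zero in the (1,1) entry, of the kind considered in Refs. [1,2]; the numerical values below are purely illustrative and are not the charge assignments of this paper.

```python
import math

def texture_eigen(a, b):
    """Eigenvalues and mixing angle of the symmetric 2x2 texture
    M = [[0, a], [a, b]] with 0 < a << b (illustrative values only)."""
    disc = math.sqrt(b * b + 4 * a * a)
    lam_plus = 0.5 * (b + disc)    # heavy eigenvalue, ~ b
    lam_minus = 0.5 * (b - disc)   # light eigenvalue, ~ -a^2/b
    theta = 0.5 * math.atan2(2 * a, b)  # rotation angle diagonalizing M
    return lam_plus, lam_minus, theta

a, b = 0.05, 1.0
m2, lam_minus, theta = texture_eigen(a, b)
m1 = abs(lam_minus)
# For this texture the mixing obeys tan(theta) = sqrt(m1/m2) exactly:
# the mixing angle is the square root of the light/heavy mass ratio.
print(math.tan(theta), math.sqrt(m1 / m2))
```

The exactness follows from det M = -a^2, so a = sqrt(m1 m2) and tan(theta) = a/m2 = sqrt(m1/m2), which is the mechanism by which property (iv) follows from property (i) in zero-texture models.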
The attractive idea of radiative electroweak symmetry breaking in a supersymmetric generalization of the standard model [4] depends on the existence of at least one quark which has a mass comparable to the EWSB scale, but it cannot explain why this was the top quark instead of the bottom quark (or, indeed why both m t and m b are not ∼ the EWSB scale), and hence it cannot explain (iii) or the full extent of (ii).
In this paper, we shall explore an appealing class of models of fermion mass matrices which has the potential to explain all of the properties (i)-(iv). We shall present a particular model which, we believe, is the first to offer a possible fundamental explanation of property (iii). The explanation is that in the down-quark and charged lepton sectors, the masses of not just the first two generations, but of all generations arise from higher-dimension operators suppressed by powers of a small mass ratio, ǫ ∝ v/M P , where v is the breaking scale of a flavor- and generation-dependent, string-motivated U(1) A symmetry, and M P ≡ (8πG N ) −1/2 = 2.44 × 10 18 GeV is the (reduced) Planck mass.
To set our work in context, we note that since the original success of the standard model, there has been a growing appreciation that its renormalizability and, in particular, the absence of higher-dimension operators, may well be the consequence of a large logarithmic interval in energy between the electroweak scale and a higher scale where there is new physics (see, e.g., Ref. [5]). Indeed, there are specific reasons for expecting such operators at this high scale: the only known way to stabilize the hierarchy v EW <<M P (where v EW = 246 GeV is the EWSB scale) is via a supersymmetric generalization of the SM. In turn, global supersymmetry is naturally embedded in supergravity, which one also finds as the low-energy limit of the main candidate for quantum gravity, string theory. But d = 4 supergravity is nonrenormalizable. Indeed, explicit calculations of the pointlike limit of string theories for energies E << M str (where M str = 2(α ′ ) −1/2 = gM P ) yield supergravity as a lowenergy effective field theory with infinite towers of higher-dimension operators with coefficient functions proportional to inverse powers of M str and powers of the compactification scale (the latter may be non-explicit in four-dimension string formulations). One must therefore take account of higher-dimension operators when analyzing terms which contribute to fermion mass matrices. This was, indeed, already realized long ago [6], although detailed studies have only been carried out recently. At first, one might consider these higher-dimension operators to be an unfortunate, if inevitable, complication in the theory. However, they may well play a very important role in the area of fermion masses. 
Specifically, via vacuum expectation values (vev's) of the scalar components of certain chiral superfields, which we shall denote generically as v, these higher-dimension operators can yield contributions to effective dimension-4 Yukawa interactions which are suppressed by powers of the ratio ǫ ∼ v/M str . This idea has already been used for a possible explanation of properties (i) and (iv) [3]; here we shall extend this work with new solutions of the consistency conditions and take a step further, to use the small ratio ǫ to explain (ii) and (iii). A preliminary report of some of our results was given in Ref. [7]. Related results in a somewhat different direction (having m b and m τ arise from dimension-4 operators, as in Ref. [3]) were presented in Ref. [8].
We shall work within the context of a supergravity theory which reduces at low energies to the minimal supersymmetric standard model (MSSM). We consider a theory where there is a flavor and generational symmetry group G F which restricts the forms of the terms in the action. In particular, this symmetry forbids certain cubic superfield couplings which give rise to Yukawa interactions. Since this happens at a scale not too far from that of quantum gravity, and since, in general, global symmetries are broken by quantum gravity [9] even at the semi-classical level, one is motivated to make G F a gauge symmetry. One then faces two questions: (a) is there a natural origin for G F in the presumed underlying string theory? and (b) is there a natural way to explain why the scale of the breaking of G F is such that the ratio v/M str has the value that it must to fit the observed forms of fermion mass matrices?
A possible affirmative answer to both of these questions is provided by a gauged symmetry G F = U(1) A which is, at the field theory level, apparently anomalous, but whose anomaly is cancelled by a Green-Schwarz mechanism [10]. Such U(1) A gauge symmetries are known to arise in various string models and, moreover, they are broken at a calculable scale v given by v 2 ≃ M 2 str /(192π 2 ) [11], so that ǫ ∼ (8π √ 3) −1 = 0.023, a value which is in the right general range to explain fermion mass hierarchies [3]. Of course, such a U(1) A symmetry does not mix up-type and down-type quark superfields, or mix these with lepton superfields and is thus quite different from flavor and generational symmetries which comprise extensions of grand unified groups.
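The numerical value quoted for the DSW breaking scale can be verified directly; the comparison with λ² (λ = |V_us| ≃ 0.22, introduced later in the text) is included because the model identifies ǫ with a quantity of order λ².

```python
import math

# DSW scale: v^2 ~ M_str^2 / (192 pi^2), so eps = v/M_str = 1/sqrt(192 pi^2).
eps = 1.0 / math.sqrt(192.0 * math.pi ** 2)

# Equivalent closed form quoted in the text: eps = (8 pi sqrt(3))^-1,
# since (8 pi sqrt(3))^2 = 192 pi^2.
eps_alt = 1.0 / (8.0 * math.pi * math.sqrt(3.0))

# Cabibbo-sized expansion parameter lambda = |V_us| ~ 0.22, so lambda^2 ~ 0.048;
# eps ~ 0.023 is of the same order, as the model requires.
lam = 0.22
print(round(eps, 3), round(lam ** 2, 3))
```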
We shall denote the (left-handed) SM matter chiral superfields as Under the flavor-and generation-dependent U(1) A , these carry the charges q Q i , q u c i , q d c i , q L i , and q e c i . The Y = 1, −1 Higgs chiral superfields are denoted H 1 and H 2 and have U(1) A charges q H 1 and q H 2 . We shall also assume that the theory is invariant under the usual R parity. The cubic superfield terms in the globally supersymmetric superpotential are then given by We shall assume that the gauge symmetry at energies The theory will also contain certain chiral superfields which are SM-singlets but transform under the U(1) A . A discussion of constraints on these to avoid destabilization of the gauge hierarchy is given in Ref. [12].
To proceed, we recall how the cancellation of an apparent field-theoretic anomaly in a gauged U(1) symmetry works when this U(1) is, in fact, non-anomalous in the full string theory (Green-Schwarz mechanism) [10] and the related (Dine-Seiberg-Witten, DSW) mechanism whereby the U(1) is broken [11]. Here, we shall denote chiral superfields generically by Φ. Standard d = 4 supergravity is described by two functions. The first is the generalized Kähler potential G, which is a hermitian function of the Kähler potential K and the superpotential W , the latter being a holomorphic function of the chiral superfields. The second is the gauge kinetic normalization (matrix) function f ab (Φ), where a, b = 1, .., N G , the number of gauge bosons. As indicated, f ab is a holomorphic function of the chiral superfields. We shall use G, K, and W to denote both superfield quantities and also their scalar components; however we shall distinguish between chiral superfields and their scalar components by type case (e.g. φ = Φ|, m = M|). The bosonic part of the standard supergravity lagrangian is [14] 1 where the auxiliary field contributions to the potential are given by V = V̂ + D, where D a is the D-type auxiliary field associated with generators T a of the gauge group, normalized according to Tr[T a T b ] = (1/2)δ ab . Discussions of one-loop corrections are in Ref. [15] and references therein. The scalar fields and their conjugates are denoted as above. Finally, the derivatives D µ , which are gauge and general coordinate invariant, are normalized so that D µ = ∂ µ + Σ a iA a µ T a . The function f ab determines the gauge coupling constants of the theory and also plays an important role in the cancellation of the field theoretic anomaly due to the F F̃ term in the Lagrangian. In string models, at tree level, f ab is given by f ab = k a δ ab S, where the levels k a of the Kac-Moody algebras on the worldsheet depend on the gauge factor group G a [16]. 
We denote k i , i = 1, 2, 3 and k A respectively as the Kac-Moody levels corresponding to the factor groups U(1) Y , SU (2), and SU (3) of G SM and the flavor symmetry U(1) A . The dilaton vev determines the gauge couplings, according to 1/g 2 a = k a < Re(s) >. At one-and higher-loop levels, f ab acquires a dependence also on the moduli fields [17,18,19] from chirally anomalous triangle diagrams in the effective field theory as well as string threshold corrections generically needed to cancel these anomalous terms, as is required by the underlying string theory. However, there is a large class of models in which the field theoretic chiral anomalies can be cancelled by a "universal" Green-Schwarz mechanism involving the dilaton, similar to the gauge U(1) anomaly cancellation, in which the one-loop corrected f ab is such that Re(f ab ) (which is what controls the unification conditions for the gauge couplings) has the form Re(f ab ) = k a δ ab (Re(s) + f (T,T )), where T denotes the moduli [17,18,19]. We shall restrict our attention to this class of models. (There are also small logarithmic corrections from the running of the gauge couplings between M str and M X , the scale of gauge coupling unification; as in [3], we shall neglect these here since M X is not << M str .) If the effective theory contains no gauge anomalies, then (3) is gauge-invariant, with the dilaton being a gauge singlet. In the case of interest here, however, where G contains a U(1) A factor which has field theoretic anomalies, then one-loop corrections from light (<M P l ) fermions give an anomalous correction L anom to (3). Under a U(1) A transformation, L anom transforms by where and a runs over the gauge group factors which have no field theoretic anomalies (in the present case, the factors in G SM ). The ellipses in (5) indicate additional terms which we shall discuss below. 
The theory is invariant under constant rescalings A µ a → αA µ a , which amounts to a simultaneous redefinition of k a and T a : k a → α 2 k a and T a → αT a . (Our normalization of the generators of G SM is the standard one.) For the U(1) A group the above rescaling could be used to set k A to 1 at the expense of modifying the U(1) A charges. We shall instead use this rescaling to fix the U(1) A charge of one of the SM-singlet particles.
Due to the coupling of Im(s) to FF in (3), this variation of L anom can be cancelled by assigning a U(1) A gauge variation to Im(s) [10,11], but only if c A : c a :: k A : k a for all a.
Since s is no longer a gauge singlet, its kinetic terms must be made gauge invariant. At the superfield level, this means that the Kähler function K must include a coupling between the chiral superfield S and V A , the real superfield that contains the U(1) A gauge multiplet. Specifically, the tree level Kähler function for S is modified as [11] − ln(S + S̄) → − ln(S + S̄ + cV A ) where c is a constant determined by gauge invariance and is related to the coefficients (9). The second term in the last line of (11) produces a term in L B which is linear in the auxiliary D field of V A . Once Re(s) acquires a nonzero vev, this causes some of the fields which are charged under the U(1) A to acquire vev's, thereby spontaneously breaking U(1) A .
In a given string model, (10) is automatically satisfied, due to the absence of gauge anomalies in the underlying theory. The absence of anomalous terms that mix the different gauge group factors requires that This ensures that δL anom does not contain any FF terms that cannot be cancelled by a gauge variation of s. Furthermore, mixed gauge and gravitational anomalies must be absent. In addition to terms explicitly displayed in (11), the U(1) A gauge variation of L anom contains a term proportional to Tr(T A )R µνR µν . This term can be cancelled because the effective Lagrangian contains a nonstandard (higher derivative) coupling k G Im(s)R µνR µν (which defines k G ). The cancellation requires a relation between the coefficients c a , c A and Tr(T A ) because the same U(1) A transformation of s must cancel all anomalies. This last relation is difficult to use phenomenologically because (i) Tr(T A ) depends on SM singlets about which we have very little experimental information, and (ii) because it depends on k A and k G which are not measured at low energies.
We proceed to investigate such models to obtain phenomenologically acceptable fermion mass matrices. Within the above theoretical framework, we shall make the following specific assumptions: 1. The low energy theory near the electroweak scale is the MSSM with phenomenologically viable soft supersymmetry breaking and supersymmetric mass terms. This assumption precludes SM nonsinglets which carry U(1) A charge from getting large vev's due to the U(1) A D-term.
2. Supersymmetry is spontaneously broken in a hidden sector with m 2 3/2 = < e G > ∼ m 2 W and < V > ∼ 0; i.e., O(M 4 P ) and O(M 2 P m 2 3/2 ) contributions to < V > cancel. This ensures that the soft breaking terms are naturally of O(m W ). In addition, this assumption means that the dominant contributions to the effective Yukawa couplings are only from superpotential terms, as in the MSSM, and not from K, which can also potentially contribute to the effective Yukawa couplings in a supergravity model [20].
3. The SM gauge couplings unify in the canonical way, i.e., g −2 3 : g −2 2 : g −2 Y = 1 : 1 : 5/3, or equivalently, k 3 : k 2 : k 1 = 1 : 1 : 5/3. 4. SM singlet fields χ which get a vev with < D A > = 0 carry U(1) A charges with the same sign. In the class of models that we consider, < χ > ∼ 10 16 GeV, which is >> m W . This ensures that < W > (and hence < e G >) can be kept small without resorting to accidental near-cancellation between two large vev's in < W > or introducing extra symmetry. For example, if the U(1) A charge of χ is 1 and the U(1) A charge of another SM singlet χ ′ is −b, b = 2, 3, ..., then gauge invariance allows W to contain terms of If both χ and χ ′ get vev's ∼M P /100 then the contribution of this term to < W > is phenomenologically too large, unless b is sufficiently large.
5. To reduce the number of parameters, it is convenient to assume that the effective Yukawa matrices are symmetric at M X .
The hierarchical structure of the low-energy fermion masses can be produced by Yukawa couplings which are either hierarchical or democratic. In the latter case, all entries of a Yukawa matrix are the same to leading order, while subleading corrections to these differ so that the eigenvalues of the matrix have the desired hierarchical structure. A U(1) A symmetry can potentially explain hierarchical Yukawa matrices but not (by itself) democratic matrices. In contrast, a hierarchical structure can be produced by assigning U(1) A charges to the operators in (1) in a manner such that for different i, j, these couple to different requisite powers of certain SM singlet chiral superfields.
We next determine which U(1) A charge assignments for the fields satisfy the various constraints and lead to experimentally viable fermion Yukawa matrices. There are N q = 5N G + 2 = 17 U(1) A charges in the MSSM, where N G = 3 is the number of matter generations. In addition, there are N s U(1) A charges for SM-singlet chiral superfields. The conditions (10) and (12), together with the assumption of symmetric mass matrices, reduce these 17 parameters to 8. To show this, we first list the U(1) A charges of the SM matter fields as in Table 1, where q̄ f = (1/3) Σ 3 i=1 q f i is the generational average charge for the f chiral superfield. For the Higgs chiral superfields, we put q(H i ) = q H i , i = 1, 2. To ensure symmetric Yukawa matrices, we require that the U(1) A charges of the chiral superfield bilinears satisfy certain conditions. In all, there are six independent constraints in these equations. Because the anomalies c i in eq. (9) are linear in the U(1) A charges, it follows that, regarding the contributions from the matter fields, the c i only depend on the generational average U(1) A charges, where we show the general N G dependence but take N G = 3 here. The anomaly conditions (10) yield two linearly independent constraints on the 7 parameters q̄ Q , q̄ u c , q̄ d c , q̄ L , q̄ e c , q H 1 , and q H 2 , e.g., c 2 = c 3 and c 1 = (5/3)c 2 . These can be solved in terms of the 5 quantities x, y, z, v, and w according to eq. (20). Eq. (21) is quadratic in the U(1) A charges, and hence, in general, cannot be written just as a function of the generational averages of the matter field charges. However, given (13)-(15), this anomaly also depends only on these averages, and we require that it vanish. In terms of the 5 quantities in eq. (20), eq.
(21) becomes eq. (22). We find the following three families of solutions to (22), which are thus solutions to the total set of anomaly constraints (these solutions are independent of N s ): The first two correspond to v = 0 and describe two distinct 3-parameter families of solutions. The last exists for v ≠ 0 and describes a 4-parameter family of solutions with x solved for in (22). The first solution in (23) was already given in Ref. [3]; the other two were not mentioned there and are a new result in the present work. The four parameters α 1 , α 2 , a 1 , a 2 , together with the unknown parameters in (23) yield all allowed U(1) A SM charges consistent with our assumptions. We find that the constraints are very restrictive, as will be seen. In order to account for the important feature (ii) that m t is comparable to the EWSB scale v EW , one chooses the source of m t to be a renormalizable, dimension-4 Yukawa coupling, as in the SM. This requires the U(1) A charge of Q 3 u c 3 H 2 to be zero, i.e., q̄ Q + q̄ u c + q H 2 = 2(α 1 + α 2 ). Let us denote the U(1) A charge of (Q 3 d c 3 H 1 ) as θ. Then q̄ Q + q̄ d c + q H 1 = 2(α 1 + α 2 ) + θ.
Under this assumption, the U(1) A charges of (Q i u c j H 2 ) are given by the matrix If one took θ = 0, then the b quark mass would arise from a renormalizable cubic superfield operator, and one would not have any fundamental explanation of property (iii). The origin of the large mass ratio m t /m b , rather than being explained naturally, would have to be pushed into a similarly large value of tan β = v 2 /v 1 , viz., tan β ∼ m t /m b . Instead, we take θ ≠ 0, implementing our explanation of properties (ii) and (iii), since then m b arises from higher-dimension operators, and m b << m t follows naturally. We proceed to construct an explicit model for fermion mass matrices embodying these ideas. We assume that there are N s = 2 chiral superfields which are SM singlets (and U(1) A nonsinglets), χ and χ ′ with unequal U(1) A charges q χ and q χ ′ . For convenience, let us normalize q χ = 1 (this can be done by rescaling the U(1) A coupling) and define q χ ′ ≡ θ ′ . The DSW breaking of the U(1) A yields values for < χ > /M P ∼ < χ ′ > /M P (denoted ǫ above) which are ∼ O(λ 2 ), where λ = |V us | ≃ 0.22 is a measure of the hierarchical structure of the CKM matrix. First, consider the up- and down-quark masses. Motivated by string theory considerations, we shall consider only functions Y which do not contain fractional or negative powers of χ and χ ′ . In order to obtain a viable form of the quark mass matrices, we require Y u to take a restricted form. We consider three cases. We have studied all of these cases and find in case (2) an assignment of U(1) A charges which yields effective Yukawa matrices (i.e., the matrices which enter in effective dimension-4 Yukawa terms, all of which, except for Y u 33 , actually arise from higher-dimension operators) which give an acceptable pattern of fermion masses at the electroweak scale. 
At M X , these are close to the simple forms where the actual entries in the positions given by zeros in (29) need not be, and are not in general, exactly zero; indeed, one may have Y 13 ≲ O(λ 4 ), and so on (e.g. [3]). The solution we give below satisfies these bounds. In writing such forms, it is understood that the coefficients a ij multiplying a given power of λ may differ from unity, but not by as much as a positive or negative integer power of λ. The pattern (29) is known to be experimentally viable [21,22], and our U(1) A charge assignments constitute a new way of obtaining this pattern.
The last case allows Y u 12 ∼ χ ′2 , i.e. Y u 12 ∼ λ 4 which is too big. Therefore the only viable solution corresponds to the first set, eq. (34). For this solution, we find where, for example, the coefficients of Y u 33 , Y d 23 , Y d 32 and Y d 22 terms could be small enough to satisfy experimental bounds, and so forth for other entries.
These equations imply that the charge assignments for the lepton sector are as for the downquark sector, i.e. Y e ∼ Y d at M X . This allows m τ ≈ m b and m τ m µ m e ≈ m d m s m b at M X and, given the above-mentioned freedom in the coefficients of the powers of λ, this can produce a viable model for lepton masses. 2 Given the charge assignments (38), the three linear equations (24),(25) and (37) can be used to further restrict the solutions to the anomaly constraints (23). For example, for the first 3-parameter family of solutions we require x = −(1/6)(3z + 7) and y = (1/6)(9z − 13).
To pursue this line of research further, the next step is to investigate how the U(1) A charge assignments that we have made can be derived from a deeper theory (presumably the underlying string theory). Another topic for study, but one with much weaker constraints, is that of neutrino masses and mixing. Further details will be given in Ref. [20]. This research was partially supported by the NSF Grant PHY-93-09888.
Can artificial intelligence (AI) be used to accurately detect tuberculosis (TB) from chest X-rays? An evaluation of five AI products for TB screening and triaging in a high TB burden setting
Artificial intelligence (AI) products can be trained to recognize tuberculosis (TB)-related abnormalities on chest radiographs. Various AI products are available commercially, yet there is a lack of evidence on how their performance compares with each other and with radiologists. We evaluated five AI software products for screening and triaging TB using a large dataset that had not been used to train any commercial AI products. Individuals (≥15 years old) presenting to three TB screening centers in Dhaka, Bangladesh, were recruited consecutively. All CXR were read independently by a group of three Bangladeshi registered radiologists and five commercial AI products: CAD4TB (v7), InferReadDR (v2), Lunit INSIGHT CXR (v4.9.0), JF CXR-1 (v2), and qXR (v3). All five AI products significantly outperformed the Bangladeshi radiologists. The areas under the receiver operating characteristic curve are qXR: 90.81% (95% CI: 90.33-91.29%), CAD4TB: 90.34% (95% CI: 89.81-90.87%), Lunit INSIGHT CXR: 88.61% (95% CI: 88.03-89.20%), InferReadDR: 84.90% (95% CI: 84.27-85.54%) and JF CXR-1: 84.89% (95% CI: 84.26-85.53%). Only qXR met the TPP with 74.3% specificity at 90% sensitivity. All five AI algorithms can reduce the number of Xpert tests required by 50%, while maintaining a sensitivity above 90%. All AI algorithms performed worse among older people and people with a prior TB history. AI products can be highly accurate and useful screening and triage tools for TB detection in high-burden regions and outperform human readers.
Summary Background
Artificial intelligence (AI) products can be trained to recognize tuberculosis (TB)-related abnormalities on chest radiographs. Various AI products are available commercially, yet there is a lack of evidence on how their performance compares with each other and with radiologists.
Objective
We evaluated five AI software products for triaging TB using a large dataset that had not been used to train any commercial AI products.
Methods
Individuals (≥15 years old) presenting to three TB screening centers in Dhaka, Bangladesh, were recruited consecutively. Every participant was verbally screened for symptoms and received a chest x-ray (CXR) and an Xpert test. All CXR were read independently by a group of three Bangladeshi registered radiologists and five commercial AI products: CAD4TB (v7), InferRead®DR (v2), Lunit INSIGHT CXR (v4.9.0), JF CXR-1 (v2), and qXR (v3). We compared the performance of the AI products with each other, with the radiologists, and with the Target Product Profile (TPP) of triage tests. We used a new evaluation framework that simultaneously evaluates sensitivity, Xpert saving, and number needed to test to inform implementers' choice of vendor and threshold.
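As a rough sketch of that evaluation framework (the function name, scores, and labels here are illustrative, not the study's actual data or code), the key triage quantities can be computed from a list of AI abnormality scores and Xpert results at a chosen threshold: sensitivity among bacteriologically positive cases, the share of Xpert tests saved on people below the threshold, and the number of people needing an Xpert test per TB case detected.

```python
def triage_metrics(scores, xpert_positive, threshold):
    """Illustrative triage summary at one AI threshold.

    scores: AI abnormality scores (0-100); xpert_positive: reference
    standard per person. People at/above the threshold are referred
    for confirmatory Xpert testing.
    """
    referred = [s >= threshold for s in scores]
    n = len(scores)
    tp = sum(r and x for r, x in zip(referred, xpert_positive))
    positives = sum(xpert_positive)
    sensitivity = tp / positives
    xpert_saving = 1 - sum(referred) / n  # fraction of Xpert tests avoided
    nnt = sum(referred) / tp              # Xpert tests per TB case found
    return sensitivity, xpert_saving, nnt

# Toy cohort: 6 people, 2 Xpert-positive, AI scores out of 100.
scores = [10, 85, 40, 95, 20, 70]
xpert = [False, True, False, True, False, False]
print(triage_metrics(scores, xpert, threshold=60))
```

Raising the threshold increases Xpert saving but eventually costs sensitivity; the framework described in the Methods surveys this trade-off across thresholds for each vendor.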
Introduction
The use of artificial intelligence (AI) technologies for medical diagnostics has accelerated rapidly in the past decade and AI-powered deep learning neural networks are increasingly being used to analyze medical images, such as chest radiographs or x-rays (CXR). 2,3 CXR is recommended by the World Health Organization (WHO) as a screening and triage tool for tuberculosis (TB), 4 a disease which killed almost as many people worldwide in 2020 as COVID-19. 5 A triage test is used among people with TB symptoms and/or significant risk factors for TB. 7 The performance of CXR as a screening and triage tool has been limited by high inter- and intra-reader variability and moderate specificity, 4 as well as limited radiologist availability, especially in high TB-burden countries. Bangladesh is one such high-burden country, with TB prevalence estimated at 260 per 100,000 population and greater prevalence in urban areas. 8 AI technologies provide an opportunity to vastly increase image reading capacity in a variety of contexts. Such technology makes use of neural networks and deep learning to identify TB-related abnormalities from CXRs. 3 Inspired by the human nervous system, neural networks are interconnected functions, each comprised of a weight and a bias coefficient. 1 Through back-propagation, the networks "learn" by adjusting the weights and biases of the underlying functions based on the difference between predictions and ground truth in a training dataset. 1 Deep neural networks are structured in a number of layers, which increases the capacity of the machine to perform complex processes, such as parsing medical images. 6 Several commercial AI products have emerged in recent years promising to identify TB-related abnormalities from digital CXR images. 9 AI algorithms produce a continuous abnormality score (from 0 to 100 or from 0 to 1) which represents the probability of the presence of TB-associated abnormalities.
10 Although some software comes with preset threshold scores, all products also allow users to customize the threshold score at any level to dichotomize the output into a binary classification ("suggests confirmatory testing for TB" or not). 10 In March 2021, the WHO updated the TB Screening Guidelines to recommend computer-aided detection (CAD) software in place of human readers for analysis of digital CXR for TB screening and triage in individuals greater than 15 years old. 11 The WHO did not recommend specific products, leaving many gaps for implementers to consider before deciding whether to implement AI, and which product to use. Most available publications on AI feature earlier versions of one product and were conducted with the involvement of AI developers. Published evidence from impartial authors is therefore limited. [12][13][14][15][16] There is also a lack of sizeable external datasets to directly compare products. 15,17 Further, country programs and health professionals need performance measurements beyond accuracy, which is commonly reported as the area under the Receiver Operating Characteristic (ROC) 18 curve (AUC) 15 (Annex 1), and guidance on operating point selection for different patient sources. To help implementers assess accuracy, we evaluated five AI software products for triaging TB using a large dataset that had not been used to train any commercial AI products. We also present a new analytical framework for selecting AI vendors and threshold scores in different settings.
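The dichotomization described above can be sketched as follows; the scores and labels are made up for illustration, and the 90%/70% figures are the WHO triage-test targets cited later in the Methods.

```python
def sens_spec_at_threshold(scores, tb_positive, threshold):
    """Turn continuous abnormality scores into a binary read at one
    threshold and report (sensitivity, specificity). Illustrative only."""
    flagged = [s >= threshold for s in scores]
    tp = sum(f and t for f, t in zip(flagged, tb_positive))
    tn = sum((not f) and (not t) for f, t in zip(flagged, tb_positive))
    positives = sum(tb_positive)
    negatives = len(tb_positive) - positives
    return tp / positives, tn / negatives

# Made-up scores (0-100) and reference-standard labels for 8 people.
scores = [5, 95, 30, 80, 15, 60, 90, 25]
labels = [False, True, False, True, False, False, True, False]
sens, spec = sens_spec_at_threshold(scores, labels, threshold=50)
# WHO target product profile for a triage test: sens >= 0.90, spec >= 0.70.
print(sens, spec, sens >= 0.90 and spec >= 0.70)
```

Sweeping the threshold over its full range traces out the ROC curve used in the Data analysis section; each choice of threshold is one operating point on that curve.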
Setting and test population
This evaluation of AI software products to read CXR for TB followed the Standards for Reporting of Diagnostic Accuracy (STARD) Initiative on design and conduct of diagnostic accuracy evaluations. 19 In this retrospective study, we included all individuals (≥15 years old) who presented or were referred to three TB screening centers in Dhaka, Bangladesh (details, Annex 2) between May 15, 2014 and Oct 4, 2016. 20 Younger individuals were not included in the analysis as all evaluated AI products are only developed for those ≥15 years of age (Annex 3).
Reading and testing Process
After providing informed consent, each participant was verbally screened by healthcare workers for TB symptoms (procedure, Annex 2) using a standardized digital questionnaire and received a digital posterior-anterior (PA) CXR from a stationary Delft Easy DR X-ray System (machine specification and radiologist reading details in Annex 4). A few asymptomatic people whom the referring physicians suspected of having TB also received a CXR. Three Bangladeshi radiologists, registered with the Bangladesh Medical and Dental Council (BMDC) and with 10, 6, and 1 year of experience, respectively (each performing a minimum of 10,000 CXR reads a year), worked part time for this project and alternated CXR reading. Blinded to any information except age and gender, the radiologists read 15-20 CXRs per day. They graded each CXR as normal or abnormal according to the TB prevalence survey handbook, 21 and further classified abnormal CXRs into highly suggestive of TB, possibly TB, and abnormal but not TB, which could be analyzed separately.
The following AI companies agreed to participate in this independent study: CAD4TB (v7) by Delft Imaging Systems (Netherlands), InferRead®DR (v2) by Infervision (China), Lunit INSIGHT CXR for Chest Radiography (v4.9.0) by Lunit (South Korea), JF CXR-1 (v2) by JF Healthcare (China), and qXR (v3) by Qure.ai (India). 10 Detailed overviews of each AI product can be found in Annex 3. Each center was equipped with three 4-module GeneXpert systems and all individuals were asked to submit a fresh spot sputum sample for testing with the Xpert MTB/RIF (Xpert) assay. On average 12 Xpert tests were done per center daily. Xpert was repeated if the initial test failed (invalid, error, or no result). The final Xpert results were used as the bacteriological evidence and reference standard. All data collected was entered in a customized OpenMRS database and all CXR images were anonymized using a pydicom module 22 in python (script, Annex 5).
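The anonymization step above used a pydicom script (Annex 5), which is not reproduced here. As an illustrative sketch of the same tag-stripping logic, the snippet below models a DICOM header as a plain dict so it stays dependency-free; the tag list and function name are assumptions for demonstration, not the study's actual script.

```python
# Illustrative sketch of CXR anonymization (the study used pydicom, Annex 5).
# A DICOM header is modeled as a dict; PHI_TAGS is an assumed, not actual, list.

PHI_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "InstitutionName", "ReferringPhysicianName",
]

def anonymize_header(header):
    """Return a copy of a DICOM-like header with identifying tags blanked."""
    cleaned = dict(header)
    for tag in PHI_TAGS:
        if tag in cleaned:
            cleaned[tag] = ""  # blank rather than delete, keeping the field present
    return cleaned

header = {"PatientName": "DOE^JANE", "PatientID": "BD-00123",
          "PatientAge": "042Y", "PatientSex": "F", "Modality": "CR"}
clean = anonymize_header(header)
```

Blanking rather than deleting tags keeps downstream software that expects the fields to exist from failing, while demographic fields needed by the readers (age, sex) are left intact.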
The five AI algorithms scored the anonymized images retrospectively, independently, and blinded to all information. A small sample of the anonymized CXRs was checked by the AI developers for image quality. No prior validation was done at the study site. We used the CAD4TB cloud version to analyze the anonymized CXR files and installed the other four AI products on the Stop TB Partnership server to analyze them.
Data analysis
We first compared the performance of the group of three Bangladeshi radiologists with the five AI algorithms in detecting bacteriologically positive (Bac+) TB-suggestive abnormalities. By dichotomizing the categories used by the Bangladeshi radiologists, we created three binary human reading classifications (A-C), varying which radiologist categories were considered abnormal CXRs (Annex 6). To compare with the continuous output from AI, we calculated the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the radiologists' three binary classifications and, for each, the threshold score each AI product needed to match that sensitivity. 23 We then compared the difference in specificity, PPV, and NPV between the human readings and those of the five AI algorithms using the McNemar test for paired proportions. We also compared each product against WHO's Target Product Profile (TPP) for a triage tool (≥90% sensitivity and ≥70% specificity) by altering the threshold score to match each target value in turn and recording the performance. 6 The AUCs of the five algorithms were compared in R (pROC package, DeLong method for dependent AUCs). 24 Since ROC plots can mislead on the reliability of algorithm performance owing to an intuitive but wrong interpretation of specificity in imbalanced datasets (low disease prevalence), 25 we also calculated the area under the Precision-Recall curve (PRAUC) for each product (ROC and PRC methodology, Annex 7). Both ROCs and PRCs were generated for each product over a continuous range of threshold values.
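The metric and threshold-matching calculations above can be sketched as follows. All numbers are synthetic, and the study's actual analysis (McNemar test and pROC in R) is not reproduced here.

```python
# Sketch of sensitivity/specificity/PPV/NPV at a threshold, and of matching an
# AI threshold to a radiologist's sensitivity. Labels: 1 = Bac+ (Xpert positive);
# scores: CAD abnormality scores. All data below are synthetic.

def metrics(labels, scores, threshold):
    preds = [s >= threshold for s in scores]
    tp = sum(1 for y, p in zip(labels, preds) if y and p)
    fp = sum(1 for y, p in zip(labels, preds) if not y and p)
    fn = sum(1 for y, p in zip(labels, preds) if y and not p)
    tn = sum(1 for y, p in zip(labels, preds) if not y and not p)
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

def threshold_matching_sensitivity(labels, scores, target):
    """Highest threshold whose sensitivity still meets the target.

    Sensitivity is monotone non-decreasing as the threshold falls, so the
    first qualifying threshold in descending order is the highest one.
    """
    for t in sorted(set(scores), reverse=True):
        if metrics(labels, scores, t)["sensitivity"] >= target:
            return t
    raise ValueError("target sensitivity unreachable")

labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.4, 0.2, 0.1, 0.05, 0.02]
t = threshold_matching_sensitivity(labels, scores, 0.75)
m = metrics(labels, scores, t)
```

In this toy example the highest threshold achieving 75% sensitivity is 0.7, at which specificity is 100%, mirroring how each AI product's threshold was tuned to match a radiologist classification's sensitivity before comparing the remaining metrics.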
Additionally, we assessed the distribution of abnormality scores disaggregated by Xpert results and prior history of TB. Finally, since the same threshold scores might provide different results in different populations, we evaluated the performance of the AI algorithms disaggregated by age, gender, prior TB history and patient sources using AUC and PRAUC.
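The AUC computations referenced above can be illustrated by building the ROC point-by-point over all thresholds and integrating with the trapezoidal rule. The study used R's pROC package (DeLong method), so this stdlib version is illustrative only; PRC points (recall, precision) can be integrated the same way.

```python
# Sketch of ROC construction and AUC via the trapezoidal rule. Illustrative
# only (the study used R's pROC); labels/scores below are synthetic.

def roc_points(labels, scores):
    pos = sum(labels)
    neg = len(labels) - pos
    pts = [(0.0, 0.0)]  # (FPR, TPR), starting at the all-negative corner
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for y, s in zip(labels, scores) if y and s >= t)
        fp = sum(1 for y, s in zip(labels, scores) if not y and s >= t)
        pts.append((fp / neg, tp / pos))
    pts.append((1.0, 1.0))
    return pts

def auc(points):
    # Trapezoidal integration over consecutive (x, y) points.
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(points, points[1:]))

separable = auc(roc_points([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1]))  # perfect ranking
mixed = auc(roc_points([1, 0, 1, 0], [0.9, 0.8, 0.2, 0.1]))      # imperfect ranking
```

A perfectly ranked score list yields an AUC of 1.0; the interleaved example scores lower, showing how AUC summarizes ranking quality independent of any single threshold.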
Evaluation framework
We used an evaluation framework that analyzes performance beyond AUC, the standard approach of AI evaluations, to inform threshold selection by factoring in cost-effectiveness and the ability to triage. Products were evaluated in a hypothetical triage process whereby the AI score output would be used to triage all individuals in the study population for follow-on Xpert diagnosis based on a pre-defined threshold score. We calculated the proportion of subsequent Xpert assays saved (with 0% representing the Xpert testing-for-all scenario) as a proxy for a product's cost-effectiveness. Likewise, the number of people needed to test (NNT) to find one Bac+ individual was used as a proxy for a product's ability to triage. We plotted sensitivity against the proportion of Xpert tests saved to show the trade-off between finding as many Bac+ patients as possible and the cost savings of each AI algorithm. We produced visualizations of sensitivity, proportion of Xpert saved, and NNT over a continuous range of threshold scores in an evaluation framework to facilitate threshold selection.
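The two implementation-relevant indicators defined above can be sketched at a given threshold as follows; the data are synthetic.

```python
# Sketch of the evaluation-framework indicators at one CAD threshold:
# sensitivity, proportion of Xpert tests saved (0% = test everyone), and
# NNT (Xpert tests needed to find one Bac+ individual). Data are synthetic.

def triage_indicators(labels, scores, threshold):
    referred = [y for y, s in zip(labels, scores) if s >= threshold]
    found = sum(referred)                        # Bac+ captured by triage
    sensitivity = found / sum(labels)
    xpert_saved = 1 - len(referred) / len(labels)
    nnt = len(referred) / found                  # tests per Bac+ found
    return sensitivity, xpert_saved, nnt

labels = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1, 0.05, 0.6, 0.15]
sens, saved, nnt = triage_indicators(labels, scores, 0.5)
```

Sweeping the threshold over a grid and plotting these three quantities reproduces the kind of visualization described above: raising the threshold saves more Xpert tests and lowers NNT, at the cost of sensitivity.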
Ethics
All enrolled participants provided informed written consent (Annex 2). The study protocol was reviewed and approved by the Research Review Committee and the Ethical Review Committee at the International Centre for Diarrheal Disease Research, Bangladesh (icddr,b).
Role of the AI developers
The AI developers had no role in the study design, data collection, analysis plan, or writing of the study. The developers only had access to the CXR images and did not receive any information on the patients' demographic, symptom, medical, or testing data.
Role of the funding source
The funders of the study (Government of Canada) had no role in study design, data collection, data analysis, data interpretation, or writing of the report.
Results
Between May 15, 2014 and Oct 4, 2016, a total of 24,009 individuals aged 15 years and above visited the three TB centers and were enrolled in this study. Xpert tests needed to be repeated in 830 participants, of whom 15 remained invalid or showed an error after the second Xpert and 24 did not have enough specimen for the second Xpert. Sixteen individuals did not have a valid or clear x-ray. After excluding these 55 individuals, a total of 23,954 (98·1%) individuals were included in the analysis (Figure 1). The median age was 42·0 years [30·0, 57·0], 32·9% were female, and almost all (98·4%) reported at least one TB-related symptom. Reported symptoms included cough (89·9%), fever (79·6%), shortness of breath (54·4%), weight loss (62·8%), and hemoptysis (13·0%). The final sample included 3,586 (15·0%) participants with a history of prior TB treatment. More than three quarters (n=17,541, 75·8%) of the participants were referred by public or private providers; 12·8% were walk-ins; 10·7% were referred by DOTS after a negative smear; community screening and contact tracing together made up just 142 (1·4%) of participants. The prevalence of Bac+ TB confirmed by Xpert was 15·3% overall (n=3,675), and 4·9% (n=181) of these cases were resistant to rifampicin. Among Xpert-positive patients, 14·3% of those with a prior history of TB had RIF resistance, compared with 3·1% of those without. The Bac+ rate and the RIF+ rate are both higher than the national average because of the high proportion of referrals and the urban population in this study. 8 The radiologists graded 3,683 (15·4%) radiographs as Highly Suggestive of TB, 7,154 (29·9%) as Possibly TB, and 3,625 (15·1%) as Abnormal, not TB, while 9,492 (39·6%) were read as normal (Table 1).
The performance of a single AI algorithm across the evaluation framework can be used to inform threshold selection. For most threshold scores (0-0·9), the sensitivity of JF CXR-1 remained above 90% (Figure 2-d), the proportion of diagnostic Xpert tests saved remained between 30% and 60% (Figure 2-e), and the NNT was between 3 and 5 (Figure 2-f). The sensitivity of CAD4TB, Lunit INSIGHT CXR, and qXR remained above approximately 80% for most threshold scores (0-0·8) before quickly decreasing.
The results presented in Figure 2-d demonstrate that threshold selection depends on the algorithm in question and the context in which it is used. For example, the threshold required to achieve at least 90% sensitivity must be below 0·34 for InferReadDR, below 0·50 for CAD4TB, below 0·60 for qXR and Lunit INSIGHT CXR, and below 0·93 for JF CXR-1.
Density plot
The stacked density plot in Annex 10 shows the distributions of the abnormality scores of the five AI algorithms disaggregated by Xpert outcomes and by prior TB history. The distributions of the five AI algorithms vary considerably, indicating different underlying neural networks and illustrating the effect that changing the threshold score can have for different products. The density plots of Lunit INSIGHT CXR, CAD4TB, qXR, and InferReadDR demonstrated a good dichotomization pattern (between Bac+ and Bac-). Although almost all Bac+ participants received high abnormality scores (95-100) from JF CXR-1, so did many Bac- individuals. None of the distributions of the abnormality scores from the Bac- participants with prior TB history (the dark red bars) are left-skewed.
Subgroup Analysis
All five AI algorithms showed significant variation in performance with age, performing worse in the older age group (>60 years old) than in both the younger (p values from 1.8e-16 [qXR] to 9.6e-8 [Lunit INSIGHT CXR]) and middle age groups (p values from 6.7e-13 [qXR] to 6.6e-8 [Lunit INSIGHT CXR]). InferRead DR, CAD4TB, JF CXR-1, and qXR also performed significantly worse in the middle age group compared with the younger, whereas Lunit INSIGHT CXR showed no significant difference. All five AI algorithms performed significantly worse among people with prior TB history (p values from 1.6e-30 [InferRead DR] to 1.1e-12 [CAD4TB]). No significant differences were observed across genders, except for JF CXR-1 and Lunit INSIGHT CXR, which performed better in males than in females (p = 0.0045 and 0.020, respectively) (AUC in Figure 3 and p-values in Annex 11).
AI performance also varied with patient source. All CAD products performed significantly better among walk-ins than among referrals from public and private facilities and DOTS re-testing (p values from 7.5e-25 to 0.02), except Lunit INSIGHT CXR, which showed no difference from public-sector referrals. qXR, JF CXR-1, and InferRead DR performed better among individuals from DOTS re-testing than among private referrals (p = 2.1e-4, 3.0e-6, and 3.9e-3, respectively), and InferRead DR also performed significantly worse among individuals from community screening than among walk-ins (p = 0.0065).
Discussion
This is the largest independent study evaluating multiple AI algorithms as triage tests for TB with CXR, the first published evaluation of JF CXR-1 and InferReadDR for detecting TB-suggestive abnormalities, and the first of the latest version of CAD4TB (v7). Our study shows that the predictions made by the five algorithms significantly outperform experienced Bangladeshi human readers in detecting TB-related abnormalities. 14 The AUCs indicate that all AI algorithms performed well, with qXR and CAD4TB the two top performers. The AUCs of CAD4TB, Lunit INSIGHT CXR, and qXR in this study are slightly lower than those in previous independent evaluations. 17,26 Our sub-analysis showed that CAD performance varied with demographic and clinical factors as well as patient source, implying that AI performance varies across contexts and geographies. This cautions against generalization to other populations, particularly when selecting threshold scores for different populations. Training strategy and dataset (Annex 3) could explain these differences in CAD performance by affecting the ability of CAD to generalize learning to different populations. Further, the difference in performance of different CAD software is better visualized in PRC plots than in ROC curves; PRC plots should therefore be used in future analyses with imbalanced datasets to inform vendor selection.
It is important that implementers can make informed decisions when selecting the threshold score specific to their settings. To do so, we used a new evaluation framework with implementation-relevant indicators, such as confirmation tests saved and NNT, to measure cost-effectiveness and the ability to triage. We observed that automated reading of CXR by all five AI products can keep sensitivity above 90% and at least halve the number of follow-on diagnostic tests required. For large case-finding programs that may have limited on-site Xpert testing capacity, there is a trade-off between the number of cases identified and the proportion of Xpert tests that can be saved. A clear example: choosing to save 70-80% of Xpert tests by allowing sensitivity to fall to 70% misses 30% of TB cases. Additionally, the evaluation framework reveals differences between products that may appear very similar using ROC and PRC alone.
The importance of using a more nuanced analytical framework for evaluation can be demonstrated by imagining different hypothetical case-finding situations. For a program focused on capturing almost all people with TB and with access to many rapid diagnostic tests, to identify at least 95% of TB-positive individuals: qXR would save the most confirmatory Xpert tests (54%), CAD4TB would save 51%, followed by JF CXR-1, Lunit INSIGHT CXR, and InferReadDR, which would save 43%, 42%, and 41% of subsequent tests, respectively. In another hypothetical case of a large active case-finding program using CXR, but with a much more limited budget and the need to reduce the number of follow-on Xpert tests by 75% while accepting compromised sensitivity: qXR would have a sensitivity of 80·6% (79·2-81·8%) and CAD4TB 79·7% (78·3-81·0%), followed by Lunit INSIGHT CXR with 76·6% (75·1-77·9%), InferReadDR with 69·3% (67·7-70·8%), and JF CXR-1 with 68·5% (66·6-69·7%). We recommend that this evaluation framework be included in future AI evaluations instead of reporting on AUC alone.
The density plots show that the underlying neural networks of the five AI algorithms were constructed very differently and that no universal threshold score is applicable to all AI algorithms. Moreover, the density plots of the Bac- individuals with prior TB history indicate that the algorithms' poor ability to differentiate between old scarring and active lesions could lead to excessive recall in this group. We hypothesize that chest abnormalities due to age and prior TB history influenced the classification of active TB. The overall performance of the five AI algorithms differed between age groups, patient sources, and prior TB history, although they performed similarly across genders (Figure 3, Annex 11). Interestingly, accuracy (i.e. AUC) was lowest among those who presented themselves and those recruited through community-based case finding. This implies that threshold scores likely need to differ depending on the population tested and its sub-populations. We further recommend that manufacturers incorporate basic demographic and clinical data to improve their AI products in future software iterations.
Our results document the performance of five products at one point in time. New products are set to emerge in the near future, and updated software versions are launched almost annually. 10 Two products in this evaluation had not previously been evaluated in peer-reviewed journals. Unlike traditional diagnostic tests, which take years to produce and update, AI performance improves extremely fast. Future guidance from bodies like the WHO must prepare for this speed of change, and independent evaluation libraries are required to help implementers understand the latest performance.
Limitations
Due to logistic and budgetary constraints, we did not use culture as the reference standard, meaning that some people with Xpert-negative, culture-positive TB were incorrectly labelled as not having TB. We also did not have access to Xpert Ultra, which is more sensitive than Xpert, in Bangladesh during the study. Due to the limited number of asymptomatic individuals, we did not stratify by symptoms for subgroup analysis. We did not conduct HIV testing because Bangladesh has a low HIV prevalence. 5 However, performance among different sub-populations, especially people living with HIV, who often present with atypical radiological images, needs to be better documented. 27 Similarly, we excluded children from our study population, even though some (but not all) of the products included are licensed for use in younger age groups. 27 These decisions, not to evaluate performance in children and HIV+ individuals, limit the generalizability of our findings. Further evaluation of CAD in children is necessary. Each CXR was read by one Bangladeshi radiologist, not by a panel of radiologists. However, the intended use of these AI algorithms is in resource-constrained settings with few or no radiologists, and neither resources nor time permitted multiple readings of the large number of images. Further, human readers were blinded to clinical and demographic information except age and gender, though in the field CXR reading could be informed by such data. Additionally, the analysis includes only three human readers as a comparison point. We caution against extrapolating the study findings to rural areas, as our study was done in metropolitan Dhaka, where experienced human readers are more often available. CAD software may perform even better relative to radiologists in rural and low-resourced areas. Additionally, only one brand of x-ray machine was used in this study due to procurement constraints.
Lastly, we did not conduct this study prospectively and did not collect implementation data such as programmatic costs, setup, services, user experience, etc.
Conclusions
Our results demonstrate that all five AI algorithms outperformed experienced certified radiologists and could save follow-on Xpert testing and reduce NNT while maintaining high sensitivity. ROC and precision-recall curves are powerful tools for evaluation; however, additional metrics and analyses, including our new evaluation framework of sensitivity, confirmation tests saved, and NNT over varying threshold scores, will help implementers with threshold and software selection.
Author Contributions
The study was conceived by ZZQ, JC and SB. Data collection was led by SB, KP, SA, MS. Data cleaning and verification was done by ZZQ, SB, KP, SA, data analysis and interpretation conducted by ZZQ, TN and JC; ZZQ wrote the first draft of the manuscript. ZZQ, JC and RB revised the manuscript. All authors contributed to and approved the final manuscript.
Declaration of interests
None declared.
Data sharing
The datasets used in this study can be made available upon reasonable request to the corresponding author. CXR images will not be provided as these are withheld by the corresponding author's organization to reserve their use for product evaluations.

[Table footnote, truncated] ... radiologists, when matching sensitivity. For example, the difference in specificity = the specificity of an AI product − the specificity of the corresponding radiological binary classification. * Marks the closest available match to a specificity value of 70%.

Figure 1. The diagnostic process used by the 3 TB screening sites in the study. * DOTS (Directly Observed Treatment Short course) centers are specialized facilities for the diagnosis and treatment of TB patients. DOTS re-testing refers to individuals referred from a DOTS center after a negative smear.

Can artificial intelligence (AI) be used to accurately detect tuberculosis (TB) from chest X-rays? An evaluation of five AI products for TB triaging in a high TB burden setting
Annex 2 Description of TB Screening Centers and consent procedure
Participants were recruited from individuals presenting or referred to three centers in metropolitan Dhaka: icddr,b Mohakhali Tuberculosis Screening & Treatment Center (TBSTC), icddr,b Golapbagh TBSTC, and icddr,b Dhanmondi TBSTC. Everyone was screened for TB symptoms (any cough, fever, shortness of breath, weight loss, hemoptysis), and then received a chest x-ray and GeneXpert testing. How patients arrived at the TB screening center was documented (referral by public or private providers, walk-in, referral by a DOTS center, community screening, or contact tracing), and individuals were asked to self-report any history of TB. For literate adults (>17 years), consent was taken in the form of a signature. For children and adolescents (15-17 years), we obtained consent from both the parent/guardian and the child (assent). For illiterate persons, the consent was read aloud in the presence of a witness to confirm that they understood their engagement with the study, and consent was given by thumb print in front of the witness.
"year": 2020,
"sha1": "1c0e4b545c0f9c90dfe861894b3d860b70b169b7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1a9ee450980a0e26e0a89f7a99d9557555330b79",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science",
"Biology"
]
} |
ClearSee: a rapid optical clearing reagent for whole-plant fluorescence imaging
Imaging techniques for visualizing and analyzing precise morphology and gene expression patterns are essential for understanding biological processes during development in all organisms. With the aid of chemical screening, we developed a clearing method using chemical solutions, termed ClearSee, for deep imaging of morphology and gene expression in plant tissues. ClearSee rapidly diminishes chlorophyll autofluorescence while maintaining fluorescent protein stability. By adjusting the refractive index mismatch, whole-organ and whole-plant imaging can be performed by both confocal and two-photon excitation microscopy in ClearSee-treated samples. Moreover, ClearSee is applicable to multicolor imaging of fluorescent proteins to allow structural analysis of multiple gene expression. Given that ClearSee is compatible with staining by chemical dyes, the technique is useful for deep imaging in conjunction with genetic markers and for plant species not amenable to transgenic approaches. This method is useful for whole-mount imaging of intact morphology and will help to accelerate the discovery of new phenomena in plant biological research. Summary: The optical clearing reagent ClearSee improves the multicolor imaging of fluorescent proteins and dyes and allows the structural analysis of gene expression patterns in multiple plant tissues.
INTRODUCTION
To understand how cell patterning changes with gene expression, an important challenge in developmental biology is visualization of three-dimensional (3D) morphology with gene expression in intact tissues at the cellular level. Recent advances in fluorescence imaging using fluorescent proteins (FPs), such as green fluorescent protein (GFP), reveal gene expression at the subcellular level. However, it is difficult to observe such FPs in intact plant tissues because plant tissues contain a variety of autofluorescent compounds (Müller et al., 2013), which results in non-specific background fluorescence. In addition, plant tissues contain various components with different refractive indexes (e.g. air, 1.00; cell wall, 1.42; cytoplasm, 1.36) (Kumar and Silva, 1973;Vogelmann et al., 1996). These refractive index mismatches cause light scattering. In traditional observation methods, mechanical sectioning is required to obtain high-resolution images of deep plant tissues. However, it is difficult to reconstruct a 3D representation of gene expression patterns from mechanical sections because of the laboriousness of serial sectioning and the potential difficulty of obtaining sections for desired regions and orientations. Classically, a variety of chemical reagents have been used to improve the transparency of plant tissues. Of these reagents, acidified chloral hydrate is most commonly used to clear plant tissues (Lersten, 1967). Chloral hydrate (as Hoyer's solution) has been used for the preservation of specimens since the late 19th century (Hoyer, 1882), and has a high refractive index (1.428), which allows high penetration of light without scattering for a wide variety of plant tissues (Villani et al., 2013). However, to our knowledge, acidified chloral hydrate has not previously been used in conjunction with GFP.
The combination of staining with a chemical dye and clearing with chloral hydrate yields optical sections of high resolution at a subcellular level (Haseloff, 2003). Optical sectioning enables the generation of a series of z-stack images, thereby obtaining images in a desired plane after 3D reconstruction. Bougourd et al. (2000) demonstrated the utility of high-resolution (z-stacks were collected with 0.2 µm intervals) confocal imaging of mature Arabidopsis embryos by clearing with chloral hydrate after staining the cell contents with Aniline Blue. Truernit et al. (2008) performed high-resolution (z-stacks were collected with 0.1-0.2 µm intervals) confocal imaging of the cellular structure in various tissues of Arabidopsis thaliana by staining the cell membrane with propidium iodide. As alternative approaches for large-scale tissues, optical sections have been obtained by high-resolution X-ray computed tomography (Stuppy et al., 2003), optical projection tomography (Lee et al., 2006) and magnetic resonance imaging (Metzner et al., 2014). However, these techniques lack subcellular resolution. Some of these techniques can be combined with β-glucuronidase (GUS) staining to visualize gene expression at the cellular level (Lee et al., 2006;Truernit et al., 2008), whereas GUS staining cannot be detected at the subcellular level and prohibits the detection of multiple gene expression by multicolor imaging. Recently, array tomography has been developed for 3D imaging at high subcellular resolution, especially z-axis resolution, in animal tissues (Micheva and Smith, 2007). Array tomography incorporates automated ultrathin (50-200 nm) sectioning of resin-embedded samples that preserves the fluorescence of FPs, imaging of these sections, and 3D reconstruction. However, application of this method is limited to relatively small specimens.
Multi-photon excitation microscopy (MPEM) is valuable for deep imaging in intact tissues because the excitation wavelength of multi-photon excitation is in the infra-red region, which shows high penetration of biological samples (Centonze and White, 1998). In animal tissues, deep imaging has been achieved by MPEM at 1.4 mm depth for living mouse brain tissue (Kawakami et al., 2013;Horton et al., 2013). Two-photon excitation microscopy (2PEM) has also been used for deep imaging of plant tissues (Feijó and Moreno, 2004). The longer wavelength excitation (1000 nm) for 2PEM allows deep imaging with decreased autofluorescence. Moreover, 2PEM allows multicolor imaging by simultaneous excitation of multiple FPs with a single wavelength, because of the broad two-photon absorption spectra (Drobizhev et al., 2011;Mizuta et al., 2015). However, it is difficult to achieve whole-plant imaging, even by 2PEM, because the complex geometry of plant tissues leads to light scattering caused by refractive index mismatch.
Recently, various chemical mixtures have been used for clearing mammalian tissue to reduce refractive index mismatch and to remove the colored tissue components (Vogt, 2015;Miyawaki, 2015). Scale, a urea-based aqueous reagent, renders fixed mouse brain samples transparent while preserving the fluorescence of FPs, because urea promotes the hydration of biological samples (Hama et al., 2011). Scale allows deep imaging over 1.6 mm depth both by confocal imaging and 2PEM. SeeDB, a sugar-based aqueous reagent, clears fixed mouse embryos and brain samples by adjusting refractive index mismatch within the samples without detergents or denaturation reagents (Ke et al., 2013). Scale requires 2 weeks for clearing of fixed mouse brain samples, whereas SeeDB can shorten the clearing period to 3 days. Surprisingly, CUBIC, a Scale-based aqueous reagent, allows whole-body imaging as well as whole-brain imaging in mice (Susaki et al., 2014;Tainaka et al., 2014). By chemical screening, an aminoalcohol [N,N,N′,N′-tetrakis(2-hydroxypropyl)ethylenediamine] was found to decolorize body samples by solubilization. These reagents have a high refractive index (1.38-1.39 in Scale; 1.49 in SeeDB; 1.48-1.49 in CUBIC), thereby rendering high transparency to fixed mouse brain tissues (Hama et al., 2011;Ke et al., 2013;Susaki et al., 2014). As alternative approaches for clearing tissues while preserving the fluorescence of FPs, the CLARITY and PACT-PARS methods use active or passive extraction of lipids from the tissue-hydrogel hybrid (Chung et al., 2013;Tomer et al., 2014;Yang et al., 2014).
In this study, we developed an aqueous chemical reagent, termed ClearSee, that renders fixed plant tissues transparent to allow deep imaging by chemical screening. ClearSee rapidly diminishes chlorophyll autofluorescence while preserving the fluorescence of FPs. Multicolor imaging with ClearSee enables observation of the precise 3D structure and specific gene expression patterns. Moreover, ClearSee is applicable to whole-root and leaf imaging using 2PEM and confocal microscopy. We demonstrate the application of ClearSee treatment to whole-seedling imaging for visualization of phloem patterning.
Chemical screening of clearing reagents for plant tissues
The main source of interruption of fluorescent observation is autofluorescence (e.g. by chlorophyll) in plant tissues (Müller et al., 2013). In the Scale and CUBIC reagents, polyhydric alcohol/ detergent/urea mixtures are used for clearing of brain samples (Hama et al., 2011;Susaki et al., 2014). We first evaluated the clearing efficiency of these compounds for removal of chlorophyll autofluorescence using fixed leaves. We screened 24 compounds, including polyhydric alcohols, detergents, hydrophilic small molecules, and traditional molecules, for clearing (Table S1). We measured the chlorophyll fluorescence at 680 nm emission with 415 nm excitation in the chemical solution over 7 days of incubation using a microplate reader (Fig. 1A). A series of detergents (#07, #08, #09, #11, #12, #14, #15 and #16) showed high activity for chlorophyll extraction with 7 days of incubation. Chloral hydrate (#23) and lactic acid (#24) are among the most commonly used clearing solutions for plant tissues (Simpson, 1929). Because #23 and #24 also quenched chlorophyll fluorescence, these compounds exhibited low activity for chlorophyll extraction in the microplate reader assay (Fig. 1A).
Next, we evaluated the preservative effect of these compounds on recombinant Venus fluorescence (Fig. 1B). In the case of compounds that showed high clearing activity, #14, #23 and #24 strongly quenched Venus fluorescence. Given that some FPs are pH sensitive, we analyzed the FP stability by incubation in neutralized chloral hydrate-based clearing solution with pH adjustment ( pH 7.1). The fluorescence of Venus was quenched even with the neutralized chloral hydrate-based clearing solution (Fig. S1). By contrast, the fluorescence of Venus was stable with other compounds, including #07, #08, #09, #12 and #16.
To evaluate the clearing effect of polyhydric alcohols in addition to detergents, we next incubated fixed leaves in detergent/ polyhydric alcohol mixtures. For a second screening, #09 was rejected because of the cost. The mixtures of #03/#12, #04/#12 and #05/#12 were not fully mixed, as assessed by visual confirmation, and hence were also rejected because of the low uniformity of the mixtures. The results of the second screening using fixed leaves of UBQ10pro::H2B-mClover are summarized in Table S2. The fluorescence of recombinant Venus was stable in all mixtures (Fig. 1C). Although some #01 mixtures showed high transparency of fixed leaves, the mClover fluorescence was slightly reduced. The #07 and #08 mixtures tended to show high stability of mClover fluorescence and transparency (Table S2). The six combinations that showed high mClover fluorescence, reduced autofluorescence, and high transparency were applied to the third screening. For the third screening, we evaluated detergent/polyhydric alcohol/urea mixtures such as the Scale/CUBIC reagents (Hama et al., 2011;Susaki et al., 2014). The #01/#07/#19 mixture decreased the fluorescence of recombinant Venus (Fig. 1D). Among the other mixtures, #04/#07/#19 showed high mClover fluorescence, decreased autofluorescence, and high transparency (Table S3). We designated the #04/#07/#19 mixture [10% (w/v) #04, 15% (w/v) #07, 25% (w/v) #19] as ClearSee. Fig. 1E shows a seedling incubated in ClearSee for 2 weeks. Compared with PBS incubation, ClearSee rendered the whole seedling optically transparent.
ClearSee clears chlorophyll autofluorescence while preserving the fluorescence of FPs
Recently, a Scale-like solution [6 M urea (#19), 30% (v/v) glycerol (#03), 0.1% (v/v) Triton X-100 (#10)] was used to clear plant tissues (Warner et al., 2014). To evaluate the clearing efficiency of this solution, we incubated fixed leaves in PBS, ClearSee, Scale-like solution, and neutralized chloral hydrate-based clearing solution. After 4 days of treatment with the clearing solutions, Scale-like solution-treated leaves still showed green coloration, whereas ClearSee-treated leaves contained no green pigmentation and were transparent (Fig. 2A). The transparency of ClearSee-treated leaves was comparable to that of chloral hydrate-based solution-treated leaves, indicating that ClearSee rapidly clears leaf tissues.
To examine the stability of FPs with ClearSee treatment, UBQ10pro::H2B-mClover leaves were treated with clearing solutions for 4 days. Scale-like solution did not fully remove the green pigmentation of the leaf, whereas the transparency of ClearSee-treated leaves was comparable to that of chloral hydrate-based solution-treated leaves (Fig. 2B, bright-field). With chloral hydrate-based solution treatment, autofluorescence was dramatically decreased but the fluorescence of H2B-mClover was completely lost (Fig. 2B). By contrast, sufficient fluorescence of H2B-mClover was retained for detection after ClearSee treatment.
To compare transparency and FP stability for each solution, we performed 3D imaging of UBQ10pro::H2B-mClover leaves (Fig. S2). We obtained images from 100 z-stacks with 1.0 µm intervals by 2PEM with 950 nm excitation. Although the fluorescence of H2B-mClover was detected only to ∼40 µm depth in PBS, the nuclei were clearly observed even at 100 µm depth in ClearSee-treated leaves (Fig. S2A,B). In Scale-like solution-treated leaves, the signal intensity of H2B-mClover decreased with depth and was difficult to detect beyond 70 µm depth (Fig. S2C). The fluorescence of recombinant Venus was stable in Scale-like solution (Fig. S1), indicating that the residual autofluorescence of Scale-like solution-treated leaves prevented the detection of H2B-mClover fluorescence. Consistent with this conclusion, the fluorescence of H2B-mClover was more strongly detected in ClearSee than in PBS or Scale-like solution (Fig. S2A-C). These results indicated that ClearSee renders the leaf tissue transparent while maintaining the fluorescence of mClover.
To examine the causes of autofluorescence, we measured the autofluorescence spectrum by spectral imaging with 2PEM. Autofluorescence was observed in mesophyll cells of clearing solution-treated leaves (Fig. 2C). Two independent emission spectra were detected with 950 nm excitation in mesophyll cells of clearing solution-treated leaves (Fig. 2D). The autofluorescence intensity, especially at >610 nm, was dramatically decreased in mesophyll cells of ClearSee-treated leaves (Fig. 2D). This spectrum corresponded to chlorophyll autofluorescence (Langhans and Meckel, 2014), indicating that ClearSee diminished chlorophyll autofluorescence while maintaining mClover fluorescence (Fig. 2B,C). The emission peaks in the 500-600 nm range were presumably caused by autofluorescence from the cell wall and other cellular components (Müller et al., 2013; Mizuta et al., 2015). Such autofluorescence was still partly detected in ClearSee-treated leaves (Fig. 2C,D).
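The band arithmetic behind such spectral (lambda-stack) measurements is straightforward. The sketch below is illustrative only (the intensity values are hypothetical, not data from this study): it computes band-centre wavelengths for an acquisition like the one described here (19 sequential bands spanning 460-648 nm) and integrates the signal above 610 nm as a crude proxy for residual chlorophyll autofluorescence.

```python
import numpy as np

def band_centres(start_nm=460.0, stop_nm=648.0, n_bands=19):
    """Centre wavelength of each sequential detection band,
    assuming the bands tile the range contiguously."""
    width = (stop_nm - start_nm) / n_bands
    return start_nm + (np.arange(n_bands) + 0.5) * width

def chlorophyll_fraction(spectrum, centres, cutoff_nm=610.0):
    """Fraction of total emission detected above cutoff_nm,
    used here as a rough proxy for chlorophyll autofluorescence."""
    spectrum = np.asarray(spectrum, dtype=float)
    mask = centres > cutoff_nm
    return spectrum[mask].sum() / spectrum.sum()

centres = band_centres()
# Hypothetical per-band intensities for one mesophyll-cell spectrum
spectrum = np.exp(-((centres - 680.0) / 30.0) ** 2) + 0.1
print(f"chlorophyll-like fraction: {chlorophyll_fraction(spectrum, centres):.2f}")
```

Comparing this fraction between treatments would mirror the qualitative comparison shown in Fig. 2D.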
Confocal and two-photon imaging of ClearSee-treated tissues
Recently, we showed that 2PEM is valuable for in vivo deep imaging while avoiding autofluorescence in plant tissues (Mizuta et al., 2015). However, 2PEM is not accessible to all researchers because of the equipment cost. To evaluate imaging penetration in ClearSee-treated tissues, we undertook confocal laser scanning microscopy (CLSM) observation of ClearSee-treated roots. Samples were imaged using a 25× water-immersion objective lens [numerical aperture (NA), 1.10; working distance (WD), 2.0 mm]. We obtained images from 150 z-stacks with 1.0 µm intervals. Fig. S3 shows optical xy and xz sections of root tips in RPS5Apro::tdTomato-LTI6b lines, in which the plasma membrane is labeled (Mizuta et al., 2015). Although the 2PEM images showed higher contrast than those from CLSM (Fig. S4), both methods were capable of whole-root imaging to almost 100 µm depth (Fig. S3). Fig. S3 also shows a comparison of fixed and ClearSee-treated root tips with the same optical settings. Without ClearSee treatment, the signal intensity was decreased on the side of the epidermis opposite the objective lens, even in 2PEM images (Fig. S3, fixed). Therefore, ClearSee-treated plant tissues were sufficiently transparent to be penetrated by a single-photon excitation laser (visible laser) and a two-photon excitation laser (Fig. S3).
To determine whether ClearSee allows multicolor imaging and monitoring of hormonal signals, we performed 3D imaging of ClearSee-treated DR5rev::3xVenus-N7; RPS5Apro::H2B-tdTomato roots (Fig. 3). The DR5 promoter marks auxin-responsive transcriptional sites (Ulmasov et al., 1997). As with RPS5Apro::tdTomato-LTI6b roots, whole nuclei of the root tip were observed both by CLSM and 2PEM (Fig. 3A, ClearSee). Higher-contrast images of nuclei were obtained by 2PEM than by CLSM, as observed for the plasma membrane (Fig. 3B). The fluorescence of 3×Venus-N7 was observed around the quiescent center in the ClearSee-treated root tip (Fig. 3A, DR5). The expression pattern driven by DR5rev in fixed root tips was consistent with that of live root tips (Fig. 3A, fixed, live), indicating that the proper expression pattern was not affected by the clearing process of paraformaldehyde (PFA) fixation followed by ClearSee treatment. Movie 1 shows reconstructed xz-stacks in live and ClearSee-treated DR5rev::3xVenus-N7; RPS5Apro::H2B-tdTomato roots by CLSM with 488 nm and 561 nm excitation and by 2PEM with 950 nm excitation. This movie shows that ClearSee allows overall cross-sections of root tips to be obtained optically, without sectioning of the specimen. These results demonstrate the advantage of the greatly improved transparency achieved by ClearSee treatment for deep imaging and optical sectioning of root tips. We also performed ClearSee treatment for weak expression markers in Arabidopsis roots. As shown in Fig. S2, the FPs were more strongly detected in ClearSee-treated samples (Fig. 3A-C). Consistent with this finding, ClearSee-treated SCMpro::SCM-mGFP5 and SCRpro::GFP-SCR roots showed strong GFP fluorescence (Movie 2). These results indicated that ClearSee is also useful for imaging of weak expression markers. Next, we performed whole-leaf imaging.
The leaf is a challenging organ for deep imaging because it is composed of multiple cell types, such as epidermal, palisade mesophyll, spongy mesophyll, vascular bundle and guard cells (Littlejohn et al., 2014). As described above, the leaf also contains various components with different refractive indexes, such as cell walls, cytoplasm and air spaces. In addition, different cell types in a leaf exhibit different shapes, orientations and organelle densities for efficient light absorption in chloroplasts by internal reflection (Vogelmann, 1986). Therefore, light scattering from these different types of cells and the strong autofluorescence from chloroplasts make deep imaging difficult. As the leaf margins grow, the expression of DR5::GFP is detected at the apex of the leaf margin (Bilsborough et al., 2011). Movie 3 shows the ClearSee-treated leaf margin of DR5rev::3xVenus-N7; RPS5Apro::H2B-tdTomato. We obtained images from 76 z-stacks with 1.0 µm intervals. In the ClearSee-treated leaf, reporter expression under DR5rev was clearly detected even at the cellular level in the whole leaf. The fluorescence signals of Venus and tdTomato were detected only in the outer layer of the live leaf margin of DR5rev::3xVenus-N7; RPS5Apro::H2B-tdTomato, whereas the expression pattern driven by DR5rev in the upper leaf margin was consistent between live and ClearSee-treated leaves (Movie 4). These results suggested that ClearSee preserves specific gene expression, such as that of auxin-responsive genes, at the cellular level in whole tissues. We next obtained images from 100 z-stacks with 1.0 µm intervals using the UBQ10pro::H2B-mClover leaf. Fig. 4A and B show xy and xz maximum-intensity projections in the fixed UBQ10pro::H2B-mClover leaf without ClearSee treatment. The nuclei were only observed up to 50 µm depth even by 2PEM. Fig. 4C and D show xy and xz maximum-intensity projections in the ClearSee-treated UBQ10pro::H2B-mClover leaf. As shown in Fig. 2A, nuclei were clearly observed in the epidermis and vascular bundles. Although the signal intensity decreased with depth, CLSM detected nuclei in the epidermis on the opposite side from the objective lens to 100 µm depth (Fig. 4C). As observed for root tips, 2PEM showed higher contrast than CLSM (Fig. 4D).
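The xy and xz maximum-intensity projections used in these figures reduce a z-stack to a single image by keeping the brightest voxel along one axis. A minimal numpy sketch (array sizes are illustrative; this is not the authors' processing code):

```python
import numpy as np

def max_projections(stack):
    """Maximum-intensity projections of a z-stack with shape (nz, ny, nx).
    xy collapses the z axis; xz collapses the y axis."""
    stack = np.asarray(stack)
    xy = stack.max(axis=0)  # shape (ny, nx): top-down view
    xz = stack.max(axis=1)  # shape (nz, nx): optical cross-section view
    return xy, xz

# e.g. 100 optical sections at 1.0 µm z-intervals, 512×512 pixels each
stack = np.random.rand(100, 512, 512)
xy, xz = max_projections(stack)
print(xy.shape, xz.shape)  # (512, 512) (100, 512)
```

The same reduction, applied slice-wise, is what microscope software does when it reconstructs the xz views shown in Figs 3 and 4.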
To determine whether ClearSee allows visualization of subcellular components in addition to nuclei and plasma membrane markers, we performed ClearSee treatment of 35Spro::mt-YFP and 35Spro::GFP-mTalin leaves, in which the mitochondria and actin cytoskeleton are labeled, respectively (Nelson et al., 2007; Oikawa et al., 2003). The localization patterns of mt-YFP and GFP-mTalin were similar in fixed and ClearSee-treated leaves.

Fig. 3. Comparison of imaging penetration for CLSM and 2PEM in ClearSee-treated Arabidopsis root tips. (A) DR5rev::3xVenus-N7 (green); RPS5Apro::H2B-tdTomato (magenta) root treated with ClearSee for 4 days (ClearSee), or after (fixed) and before (live) fixation without ClearSee treatment. Optical xy and xz sections were generated from 150 z-stack images with 1.0 µm intervals by CLSM with 488 nm and 561 nm excitation (confocal) and 2PEM with 950 nm excitation (two-photon). Beneath are cross-sections at the positions indicated by the colored lines (1, transition zone; 2, meristematic zone). The top of the xz section images faces the objective lens.
To assess the possibility of post-staining in ClearSee-treated tissues, we stained the cell wall with Calcofluor White in ClearSeetreated leaves. We obtained images from 256 z-stacks with 1.0 µm intervals. As shown in Fig. 4E, the cell wall was stained with Calcofluor White even in the mesophyll cells, while maintaining the fluorescence of mClover. The stomatal pores were also observed by xz optical cross-section (Fig. 4E, right). In addition, we stained the nuclei with Hoechst 33342 in ClearSee-treated leaves. We obtained images from 144 z-stacks with 1.0 µm intervals. As shown in Fig. S6, the nuclei were stained with Hoechst 33342 even in the central mesophyll cells. These results indicated that ClearSee is compatible with staining by chemical dyes.
Visualization of pistil interior by whole imaging
We next evaluated fluorescence imaging of the ClearSee-treated pistil. Sexual reproduction occurs in female reproductive organs within the pistil, concealed by multiple cell layers; hence, it is difficult to observe these important events because of the complex internal structure (Crawford et al., 2007; Cheung et al., 2010). We obtained images from 401 z-stacks with 1.0 µm intervals of the fixed UBQ10pro::H2B-mClover pistil (Fig. S7A). The nuclei were observed only in the epidermal cells of the pistil. We obtained images from 410 z-stacks with 1.0 µm intervals in the ClearSee-treated UBQ10pro::H2B-mClover pistil (Fig. 5A). The stigmatic papillae are elongated cells with a large nucleus. The style showed a dense structure in spite of penetration by pollen tubes. In the ovary, the transmitting tract showed a sparse structure, caused by programmed cell death (Crawford et al., 2007). The ovules were connected to the margin of the septum. Thus, the precise structure of the pistil was clearly observed after ClearSee treatment without sectioning of the specimen. Movie 5, which shows reconstructed xz-stacks in the ClearSee-treated UBQ10pro::H2B-mClover pistil, illustrates how ClearSee reveals the complicated internal structure of the pistil and the journey of the pollen tube from the stigmatic papilla to the ovule through the transmitting tract.
Next, we performed multicolor imaging of pollen tubes. Previously, pollen tubes have been specifically labeled by staining with Aniline Blue (Cheung et al., 2010), but different genotypes of pollen tubes are indistinguishable with this method. Recently, we performed multicolor imaging by 2PEM using transgenic plants expressing five different FPs, which are simultaneously excited by 2PEM at 980 nm (Mizuta et al., 2015). The pistil was pollinated with pollen from LAT52pro::mTFP1 and LAT52pro::Venus, and then fixed with 4% PFA 6 h after pollination. The pollinated pistil was treated with ClearSee.v2 [which differs in that #07 is reduced to 5% (w/v)] for 4 weeks. After ClearSee treatment, we obtained images from 60 z-stacks with 6.0 µm intervals. Entry of each pollen tube into each ovule was observed within the whole pistil. Discharge of the pollen tube contents was also detected in the ovules (Fig. 5B, asterisks). Pollen tubes expressing mTFP1 and Venus were clearly distinguished, indicating that ClearSee differentiated pollen tubes of distinct genotypes within the pistil. In xz optical sections, the position of pollen tubes in the transmitting tract could be observed. In the case of a fixed pistil without ClearSee treatment, the pollen tubes were not detected within the pistil, even by 2PEM (Fig. S7B). Thus, ClearSee is useful for multicolor imaging of different genotypes, ecotypes and gene expression in deep, complex plant tissues. In addition, Fig. 6 shows multicolor pistil imaging after treatment for 5 months with ClearSee.v2. The pistil was pollinated with pollen from LAT52pro::mTFP1, LAT52pro::sGFP, LAT52pro::Venus and LAT52pro::mApple. Spectral imaging by 2PEM with 990 nm excitation showed that each of the four FPs was clearly distinguishable. Notably, each pollen tube color was observed even after treatment with ClearSee.v2 for 5 months. This raises the possibility of long-term storage of plant tissues treated with ClearSee.
Visualization of phloem by whole-seedling imaging

The vascular system extends throughout the entire plant body to supply not only water and nutrients but also signaling molecules (Notaguchi and Okamoto, 2015). Multiscale imaging from the subcellular to the whole-plant level is required to assist with understanding the functioning of the vascular system. However, the vasculature is an internal tissue and is therefore difficult to observe with conventional microscopy. The vascular system consists of multiple tissues, such as phloem and xylem (Turner and Sieburth, 2003). Previously, the phloem has been labeled with GUS for staining of whole leaves and seedlings, but GUS-stained images show low resolution at the subcellular level (Bauby et al., 2007). The phloem has also been labeled with SUC2pro::RCI2A-mCitrine, which allows high-resolution imaging even at the subcellular level (Thompson and Wolniak, 2008). However, it is difficult to observe the phloem of whole plants by fluorescence imaging, as described above. Therefore, we evaluated whole-seedling imaging for visualization of phloem distribution using SUC2pro::RCI2A-mCitrine lines treated with clearing solution.
The seedlings with cotyledons were fixed with 4% PFA and then cleared with ClearSee for 7 days. Movie 6 shows z-stack images of SUC2pro::RCI2A-mCitrine by CLSM. The phloem distribution from the root to the cotyledons was clearly visualized (Movie 6, green), and the spiral secondary wall thickening of xylem vessels was also observed in bright-field images. Fig. 7A-E shows whole-plant images of SUC2pro::RCI2A-mCitrine obtained by 2PEM. We obtained the merged image for a 5×10 xy tiling array from 67 z-stacks with 10 µm intervals using a 25× objective lens. As shown in Fig. 7A, phloem patterning labeled with SUC2pro::RCI2A-mCitrine was observed in the whole seedling. Fig. 7B-E shows enlarged images from Fig. 7A at the same resolution as Fig. 7A. The phloem was parallel to spirally thickened xylem vessels in the root (Fig. 7B-E, arrowheads). The phloem branched from the root into each cotyledon (Fig. 7C). The venation pattern in the cotyledon was also observed in the ClearSee-treated seedling (Fig. 7B), but not in the fixed seedling (Fig. S8). Fig. 7F,G show ClearSee-treated seedlings of the SUC2pro::RCI2A-mCitrine line with rosette leaves. Phloem extension into the cotyledons, and subsequently into rosette leaves, was observed (Fig. 7F, arrow). Thus, phloem development patterning was clearly observed after ClearSee treatment. Although clearing takes longer compared with seedlings, ClearSee diminished chlorophyll autofluorescence in adult plants after bolting (Movie 7).

Fig. 6. ClearSee is applicable for long-term storage. Pistil pollinated with LAT52pro::mTFP1, LAT52pro::sGFP, LAT52pro::Venus and LAT52pro::mApple pollen and treated with ClearSee for 5 months. Maximum-intensity projection for xy sections was generated from 96 z-stack images with 3.0 µm intervals by 2PEM with 990 nm excitation. Images were acquired in sequential bandwidths of 8 nm spanning the wavelength range 460-648 nm to generate a lambda stack containing 19 images. Scale bar: 50 µm.
Taken together, these results showed that ClearSee is applicable for whole-plant imaging.
ClearSee is applicable to other plant species
To explore the applicability of ClearSee for other plant species, we cleared the gametophyte of the moss Physcomitrella patens. Although the moss protonema, which is the initial stage after spore germination, and the gametophore leaf cells are suitable for cellular and subcellular observation owing to their single-layered structure, observation of the apical region of the gametophore is difficult because of the complicated structure and autofluorescence. Fig. 8 shows the gametophore in the living and ClearSee-treated H2B-mRFP line, which was generated by inserting mRFP into the H2B locus in the wild type. In the living gametophore, strong chlorophyll autofluorescence was observed in the gametophore leaf cells (Fig. 8A, live, autofluorescence). Thus, the structure in the apical region of the gametophore was concealed for both fluorescence and bright-field observations (Fig. 8B, live). By contrast, the intensity of chlorophyll autofluorescence was decreased in the ClearSee-treated gametophore (Fig. 8A, ClearSee, autofluorescence). The H2B-mRFP signal was clearly observed even in the apical region of the gametophore, as well as in the gametophore leaf cells, following ClearSee treatment (Fig. 8B, ClearSee, H2B-mRFP). These results suggest that the ClearSee clearing method is not limited to angiosperm tissues but is also suitable for non-vascular plant tissues while maintaining the stability of FPs.
DISCUSSION
Fig. 7. Phloem patterning in the whole seedling. (A-E) SUC2pro::RCI2A-mCitrine seedling treated with ClearSee for 7 days. Maximum-intensity projection for xy view was generated from 67 z-stack images with 10 µm intervals by 2PEM with 950 nm excitation. Boxed regions in A are magnified in B-E. (F,G) Reconstituted 3D image of seedling with rosette leaves expressing SUC2pro::RCI2A-mCitrine after ClearSee treatment for 7 days. Arrowheads indicate spiral xylem vessels. Arrow indicates extension of phloem into rosette leaf from root. Scale bars: 1 mm in A; 100 µm in B-E.

We developed ClearSee as a clearing reagent for plant tissues to allow deep imaging. Plant tissues are difficult samples for deep imaging because chlorophyll and other cellular contents absorb light, and the complex geometry, including air spaces within tissues such as the leaf and pistil, diffracts light by refractive index mismatch. ClearSee rapidly diminishes chlorophyll autofluorescence and substitutes a solution of high refractive index (ClearSee, 1.410; ClearSee.v2, 1.395) throughout the whole plant body. The method is applicable to a variety of organs, such as the leaf, root, pistil and seedling of A. thaliana and the moss P. patens. Moreover, ClearSee allows deep imaging of the whole leaf and root, even by CLSM. This finding is advantageous for many researchers because CLSM is more commonly used than 2PEM. In A. thaliana, the thickness of the root and leaf is ∼100 µm and ∼150 µm, respectively, and therefore CLSM with ClearSee should be applicable to these depths in other plant tissues. Nevertheless, 2PEM provides higher resolution and signal-to-noise ratio in ClearSee-treated samples, especially along the z-axis. Higher z resolution is important for 3D reconstruction and optical transverse sectioning. Conventional mechanical sectioning is laborious and time-consuming because fixation and embedding of samples are needed and optimization is tissue and species dependent.
In addition, obtaining the desired section planes may be difficult. Fluorescence microscopy with ClearSee enables suitable z-stack images for 3D reconstruction to be obtained, so that images of the desired regions and planes, in any orientation, can be generated by optical sectioning. In addition, 2PEM with ClearSee permits deeper imaging, as shown by whole-pistil (∼400 µm) and whole-seedling (∼670 µm) imaging. Deep imaging by 2PEM with ClearSee raises the possibility of whole-plant imaging.
ClearSee diminishes chlorophyll autofluorescence, but the fate of autofluorescence derived from other cellular contents remains unclear. The sources of autofluorescence in the emission range 500-600 nm include phenols, flavins, polyacetylene and isoquinoline in the vacuole, chloroplasts and cell wall (Müller et al., 2013). In mammals, an aminoalcohol identified by chemical screening diminishes the color of heme in the blood (Tainaka et al., 2014). This property allows whole-body imaging of mice to be performed with CUBIC, which includes the aminoalcohol. Additional screening to identify chemical reagents that clear the autofluorescence remaining after ClearSee treatment would permit clearer and deeper imaging in plant tissues.
In the present study, we evaluated the utility of seven FPs (mTFP1, sGFP, mClover, Venus, mCitrine, tdTomato and mApple) and fusion proteins (free FPs, nuclear localization signal, histone and membrane proteins). This versatility of ClearSee could enable the analysis of morphology and cell patterning together with multiple gene expression during development. The potential application of ClearSee as a substitute for GUS staining was demonstrated. GUS staining requires optimization of the staining conditions depending on the tissue and the promoter of interest, and staining diffuses from around the exact expression site. In addition, normal DR5 expression was observed with ClearSee, which suggests that hormonal or environmental responses are maintained after ClearSee treatment. Given this applicability of ClearSee, we traced the growth of pollen tubes of different genotypes in the pistil after pollination by labeling with different FPs. Previously, pollen tube guidance within the pistil has been mainly studied using Aniline Blue staining. Aniline Blue clearly stains the pollen tube, but all pollen tubes are stained identically. By contrast, following ClearSee treatment, multicolor imaging can be used to study gene expression in tissues such as the pistil for analysis of cell-cell communication during male-female interactions and between different genotypes and ecotypes.

Fig. 8. Clearing of a leafy gametophore of Physcomitrella patens with ClearSee. A leafy gametophore of the H2B-mRFP line of P. patens treated with ClearSee for 4 days. Images were collected in the ranges of 570-668 nm for H2B-mRFP and 672-701 nm for autofluorescence with 561 nm excitation by CLSM. (A) Maximum-intensity projections were generated from 325 z-stack images with 1.0 µm intervals for living and ClearSee-treated gametophores. (B) Optical slice of the apical region of a gametophore covered with juvenile gametophore leaves. Scale bars: 100 µm.
To date, phloem development has mainly been studied using mechanical sectioning for anatomical observations and/or GUS staining for gene expression (Bonke et al., 2003). However, it is difficult to trace the continuity of vascular strands from mechanical cross-sections and analyze gene expression at the cellular level from whole-mount GUS staining. In ClearSee-treated seedlings, we performed whole-plant imaging to observe 3D structure from micro to macro scales. By obtaining merged images with a 25× objective lens, we could observe the vascular strands throughout the plant body even at the cellular level. Recently, it was suggested that vascular systems have a role in long-distance signaling in response to environmental changes by transferring mobile molecules, such as hormones, peptides and RNA (Notaguchi and Okamoto, 2015). ClearSee will be a useful technique for the study of such long-distance signaling in response to localized changes as it enables whole-plant imaging at the cellular level.
Given the successful clearing of moss tissue by ClearSee, the reagent may be applicable to a wide range of plant species. We also demonstrated that the application of ClearSee is not limited to transgenic plants with FP markers. The applicability of staining with chemical dyes showed that ClearSee could also be used for deep imaging in plant species that are not amenable to transgenic approaches. Moreover, ClearSee is compatible with post-treatment staining with chemical dyes, suggesting that it will permit the incorporation of chemical dyes together with FP markers in transgenic plants. In the case of the pistil, we attempted to use a version with less detergent, ClearSee.v2. Although clearing with ClearSee.v2 required a longer treatment time than with ClearSee, we obtained images with improved clarity. Therefore, the concentration of the individual ClearSee components should be optimized for the specific plant species or tissues under investigation for improved image clarity and depth.
MATERIALS AND METHODS

Plant materials and growth conditions

A. thaliana seeds were sown on plates containing half-strength Murashige and Skoog salts (Duchefa Biochemie, Haarlem, The Netherlands), 0.05% MES-KOH (pH 5.8), 1× Gamborg's vitamin solution (Sigma) and 1% agar. The plates were incubated in a growth chamber at 22°C under continuous lighting after cold treatment at 4°C for 2-3 days. Two-week-old seedlings were transferred to soil (Sakata no Tane; Sakata Seed, Yokohama, Japan) and grown at 22°C under continuous lighting.
The H2B-mRFP line of the moss Physcomitrella patens, which was generated by inserting mRFP into the H2B locus in the Gransden 2004 wild-type strain (Rensing et al., 2008), was used. The fragmented protonemata were cultured on BCDAT medium for 4-5 weeks under white light at 25°C and developed into leafy gametophores (Nishiyama et al., 2000).
Chemical screening
The first screening was performed using a microplate reader (EnSpire; PerkinElmer) with rosette leaves from A. thaliana. Leaves were fixed with 4% (w/v) PFA for 120 min in PBS under vacuum. Fixed leaves were washed in PBS and incubated with 400 µl of the screening chemical solutions (Table S1) in 96-well plates. After 7 days of incubation, 200 µl were transferred to new 96-well plates and chlorophyll fluorescence was measured at 680 nm emission with 415 nm excitation.
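One plausible way to process such plate-reader readings is to express each compound's 680 nm signal (chlorophyll extracted into the solution) as a fold change over a control well and rank the compounds. The values and the PBS control below are hypothetical, not the study's data; the paper's actual scoring is given in its supplementary tables.

```python
# Hypothetical plate-reader readings (arbitrary units) at 680 nm emission /
# 415 nm excitation; compound IDs follow the paper's numbering style.
readings = {"#04": 812.0, "#07": 655.0, "#12": 240.0, "#19": 530.0}
control = 95.0  # e.g. a PBS-only well (assumed)

def rank_by_extraction(readings, control):
    """Rank compounds by chlorophyll signal extracted into the solution,
    expressed as fold change over the control well (highest first)."""
    fold = {k: v / control for k, v in readings.items()}
    return sorted(fold.items(), key=lambda kv: kv[1], reverse=True)

for compound, fc in rank_by_extraction(readings, control):
    print(f"{compound}: {fc:.1f}x over control")
```

Whether a high solution signal indicates better clearing depends on the assay design; here it is taken as a proxy for pigment released from the leaf.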
The fluorescence stability of Venus in chemical solutions was measured with a microplate reader. To prepare the recombinant Venus protein, the full-length coding region of Venus was cloned into the pCold I expression vector (Takara). The recombinant Venus protein was expressed in Escherichia coli strain Rosetta-gami2 (DE3) pLysS (Novagen). After induction with 1 mM isopropyl-β-D-thiogalactopyranoside at 15°C overnight, cells were harvested and lysed in 20 mM phosphate buffer containing 500 mM NaCl, 5 mM imidazole, 1 mM 2-mercaptoethanol, and cOmplete Protease Inhibitor Cocktails (Roche). After sonication and centrifugation, the supernatants were collected. Recombinant Venus was incubated in chemical solutions for 24 h and the fluorescence intensity was measured at 515 nm emission with 485 nm excitation. The refractive index of ClearSee was measured by a digital refractometer (AR200; Reichert).
ClearSee protocol
ClearSee solutions were prepared by mixing xylitol powder [#04; final 10% (w/v)], sodium deoxycholate [#07; final 15% (w/v)] and urea [#19; final 25% (w/v)] in water. Seedlings, leaves and pistils of A. thaliana and gametophores of P. patens were fixed with 4% (w/v) PFA for 30-120 min (seedlings, 30 min; leaves, 120 min; pistil or gametophores, 60 min) in PBS under vacuum (∼690 mmHg) at room temperature. Fixed tissues were washed twice for 1 min each in PBS and cleared with ClearSee at room temperature for 4 days to 4 weeks or more, depending on tissue type. The minimum incubation times for clearing were 4 days for leaves, roots and moss, 7 days for seedlings, 2 weeks for pistils, and 4 weeks for mature stems. In the case of pistils, incubation for 4 weeks improved clarity. ClearSee-treated samples could be stored at room temperature for at least 5 months. For post-staining, cleared tissues were stained with Calcofluor White (final 100 µg/ml) in ClearSee solution for 1 h, and Hoechst 33342 (final 10 µg/ml) in ClearSee solution overnight. After staining, tissues were washed in ClearSee for 1 h.
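Because ClearSee's composition is specified in w/v percentages (grams of solute per 100 ml of solution), scaling a batch is simple arithmetic. This helper is an illustrative convenience, not part of the published protocol; the 5% (w/v) sodium deoxycholate figure for ClearSee.v2 comes from the Results section above.

```python
# ClearSee composition from the protocol: 10% (w/v) xylitol (#04),
# 15% (w/v) sodium deoxycholate (#07), 25% (w/v) urea (#19).
CLEARSEE_WV = {"xylitol": 0.10, "sodium deoxycholate": 0.15, "urea": 0.25}
# ClearSee.v2 lowers sodium deoxycholate to 5% (w/v).
CLEARSEE_V2_WV = {**CLEARSEE_WV, "sodium deoxycholate": 0.05}

def grams_needed(volume_ml, composition):
    """Grams of each powder to dissolve in water to make volume_ml of
    solution (w/v fraction = grams per ml)."""
    return {name: frac * volume_ml for name, frac in composition.items()}

print(grams_needed(50.0, CLEARSEE_WV))
# 50 ml batch: 5 g xylitol, 7.5 g sodium deoxycholate, 12.5 g urea
```

The same function covers ClearSee.v2 by passing `CLEARSEE_V2_WV`.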
Microscopy settings
For screening of chemical reagents and deep imaging, we used three microscope systems. Settings are detailed in the supplementary Materials and Methods.
In the endoplasmic reticulum (ER), a protein quality control system facilitates the efficient folding of newly synthesised proteins. In this system, a series of N-linked glycan intermediates displayed on the protein surface serve as quality tags. The ER folding-sensor enzyme UDP-glucose:glycoprotein glucosyltransferase (UGGT) acts as a gatekeeper in the ER quality control system by specifically catalysing monoglucosylation onto incompletely folded glycoproteins, thereby enabling them to interact with lectin–chaperone complexes. Here we characterise the dynamic structure of this enzyme. Our crystallographic data demonstrate that the sensor region is composed of four thioredoxin-like domains followed by a β-rich domain, which are arranged into a C-shaped structure with a large central cavity, while the C-terminal catalytic domain undergoes a ligand-dependent conformational alteration. Furthermore, small-angle X-ray scattering, cryo-electron microscopy and high-speed atomic force microscopy have demonstrated that UGGT has a flexible modular structure in which the smaller catalytic domain is tethered to the larger folding-sensor region with variable spatial arrangements. These findings provide structural insights into the working mechanism whereby UGGT operates as a folding-sensor against a variety of glycoprotein substrates through its flexible modular structure possessing extended hydrophobic surfaces for the recognition of unfolded substrates.
Results and Discussion
Crystal structures of the folding-sensor region and catalytic domain of UGGT. We determined the crystal structure of the N-terminal folding-sensor region of UGGT (UGGT N , residues 29-1142) at a 3.1-Å resolution ( Fig. 1a and Supplemental Tables S1-3). Beyond expectations based on our previous bioinformatic analysis 12 , the current crystallographic data revealed that UGGT N was composed of four Trx-like domains (designated as Trx1-4), followed by a β-rich domain with a unique topology (Fig. 1b). The overall structure of UGGT N displayed a C-shaped structure containing a large central cavity with 60 × 80 × 120 Å dimensions. The N-terminal Trx1 domain showed a unique Trx-like fold in which the four-helix subdomain was not inserted between β3 and β4 but preceded the α1-helix (Supplemental Fig. S1a). In contrast, Trx2 and Trx3 domains exhibited a typical Trx-like fold comprising a four-or three-stranded β-sheet surrounded by six α-helices, in which the four-helix subdomain was inserted between β3 and β4 (Supplemental Fig. S1b,c). The crystal structure of the Trx3 domain from Thermomyces dupontii was essentially identical to the isolated domain structure from Chaetomium thermophilum, which, however, indicated that the C-terminal α6-helix was disordered in the presence of a detergent molecule as a crystallising agent occupying a potential substrate-binding hydrophobic surface 12 . In the present T. dupontii UGGT N structure, the α6-helix was ordered in the detergent-free condition. The Trx4 domain represented a typical Trx-like fold comprising a four-stranded β-sheet surrounded by six α-helices, which, however, exhibited an unusual topology (Supplemental Fig. S1d). Namely, in the four-helix subdomain, one core α-helix and β-strand are encoded by the N-terminal segment (residues 275-410), while three core β-strands and α-helix are encoded by the C-terminal segment (residues 897-950) (Fig. 1b). 
Such uncommon domain architecture was also observed in the β-rich domain comprising six β-strands (Supplemental Fig. S1e), which are encoded by three discrete segments (residues 29-39, 231-244 and 959-1036) (Fig. 1b). Although we hypothesised that UGGT possesses a β-strand-rich domain comprising residues 942-1149 based on the bioinformatic analysis 12 , only the N-terminal subdomain comprising residues 959-1036 gave unambiguous electron density (Supplemental Fig. S1e and f), suggesting motional freedom in this domain. In the central cavity of the UGGT N C-shaped structure, remarkable hydrophobic patches were found (Supplemental Fig. S2), suggesting their possible involvement in folding-sensor function.
We determined the crystal structure at a 1.40-Å resolution of the C-terminal catalytic domain (CAT, residues 1190-1480) in complex with a UDP-glucose donor substrate and Ca 2+ (Fig. 2a and Supplemental Table S4), both of which are required for the enzymatic activity 3 . In the CAZy classification, the CAT domain belongs to the GT24 family, which is supposed to share structural similarity with the GT8 family 14 . This was confirmed by the present crystal structure showing a GT-A fold comprising nine β-strands and 12 α-helices. The UDP-glucose and Ca 2+ ligands were accommodated in the active site containing a DXD motif (D1294-A1295-D1296) (Fig. 2b), in which the ligand-binding residues are essentially identical across species and shared with the retaining GT8 glycosyltransferases 14 (Supplemental Fig. S3), suggesting that the enzymatic mechanism of UGGT is evolutionarily conserved. Furthermore, we determined the crystal structure of the CAT domain in complex with UDP and Ca 2+ at a 1.35-Å resolution (Fig. 2c, Supplemental Fig. S4 and Supplemental Table S4). In this crystal structure, a Tris molecule occupied the site corresponding to the glucose moiety of UDP-glucose. In comparison with the UDP-glucose-bound complex, the 1379-1387 and 1427-1438 loops, which contain Asn1386 and Asn1430 and lie close to the glucose- or Tris-binding site, undergo significant structural changes upon UDP binding (Supplemental Fig. S4b), suggesting a mechanism for release of the monoglucosylated product. In both crystal structures, a remarkable hydrophobic patch was found around the active site (Supplemental Fig. S5), implying its possible contribution to the folding-sensing mechanism of the catalytic domain. An affinity labelling experiment suggested that the catalytic domain is involved in interaction with a hydrophobic aglycon 15 .
Overall structure of UGGT. To obtain information on its overall structure, we performed small-angle X-ray scattering (SAXS) experiments of full-length UGGT (UGGT FL ) together with UGGT N in solution. The radius of gyration (Rg), the maximum dimension (D max ) and the apparent molecular mass were estimated to be 45 ± 1 Å, 150 Å and 164 ± 6 kDa for UGGT FL and 47 ± 1 Å, 155 Å and 146 ± 2.3 kDa for UGGT N , respectively. These data indicate their monomeric structures in solution (Supplemental Figs S6,S7 and Table S3). However, the estimated Rg and D max values of UGGT N were significantly different from those estimated from the crystal structure ( Fig. 1a), that is, approximately 36.4 and 120 Å, respectively, although the crystal structure contained several disordered segments including the C-terminal part of the β-rich domain. Based on the SAXS data, we constructed three-dimensional shape models of UGGT FL and UGGT N illustrating the anisotropic elongated C-shaped structures (Fig. 3). The UGGT N crystal structure nicely fitted into the shape models. Although comparison between the UGGT FL and UGGT N models indicated that the CAT domain is located in the space surrounded by the Trx2, Trx3 and Trx4 domains in the UGGT N model, the crystal structure of the CAT domain could not be adequately fitted with the putative site in terms of the matching of volume density (Fig. 3a), suggesting that the CAT domain is considerably mobile in solution. Besides the catalytic domain, an extra protrusion was found around the Trx2 domain, suggesting its mobility (Fig. 3a,b). Taken together, these results suggest that UGGT has significant conformational heterogeneity because of the flexible nature of the Trx2 and CAT domains in solution.
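The comparison above between the solution Rg and the value estimated from the crystal structure rests on the definition of the radius of gyration: Rg² is the mean squared distance of the constituent points from their centroid. A minimal equal-weight sketch follows (real calculations from atomic coordinates weight by atomic mass or scattering length; the cube coordinates below are purely illustrative, not UGGT data):

```python
from math import sqrt

def radius_of_gyration(coords):
    """Rg of a set of equally weighted 3D points:
    Rg^2 = <|r - r_cm|^2>, the mean squared distance from the centroid."""
    n = len(coords)
    cx = sum(p[0] for p in coords) / n
    cy = sum(p[1] for p in coords) / n
    cz = sum(p[2] for p in coords) / n
    msd = sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2
              for p in coords) / n
    return sqrt(msd)

# Toy example: the eight corners of a 40 x 40 x 40 A cube.
corners = [(x, y, z) for x in (0, 40) for y in (0, 40) for z in (0, 40)]
print(round(radius_of_gyration(corners), 2))  # → 34.64
```

A compact crystal structure typically yields a smaller Rg than an elongated or flexible solution ensemble, which is the discrepancy the SAXS comparison above exploits.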
To collect conformational snapshots of UGGT, we performed electron microscopy (EM) of UGGT FL . Approximately 18,000 particles were picked from the 297 cryo-EM images and subjected to 2D classification. Intriguingly, the obtained 2D class averages showed significant particle heterogeneity (Fig. 4a). The 2D class data were sorted into four 3D classes with almost equal populations (20-33%), indicating the structural variation of UGGT. Owing to the conformational heterogeneity and the consequent limited-resolution images, the crystal structure of UGGT N could not be unambiguously fitted into the EM maps. Therefore, we performed domain mapping using a monoclonal antibody directed against the UGGT segment 29-468, corresponding to parts of the β-rich, Trx1 and Trx4 domains (Supplemental Fig. S8). Based on the negative-stain EM data of UGGT FL complexed with the Fab fragment of this antibody, we successfully identified the respective domains in the EM map. Concomitantly, the EM data together with dot blot analysis showed that the UGGT antibody specifically recognises the Trx4 domain. Similar to the UGGT N crystal structure, the Trx4 domain is located at one edge of the C-shaped structure in the EM image (Fig. 4b and Supplemental Fig. S9). The negative-stain EM data demonstrated that UGGT FL exhibits a cradle-like C-shaped structure with a central cavity. Whereas most of the individual parts of the UGGT N crystal structure could be nicely fitted into the EM map, the part of the β-rich domain disordered in the crystal structure remained invisible even though the EM images were of the intact protein. An extra density was observed in the cavity surrounded by the Trx1-Trx3 domains, indicating that the CAT domain would be located there (Fig. 4b). However, the crystal structure of the CAT domain did not perfectly match the extra EM map volume.
Consequently, these negative-stain EM data suggest that the C-terminal part of the β-rich and CAT domains as well as the linker connecting them are quite flexible and therefore invisible. Using information on the domain mapping, the UGGT N crystal structure can be fitted into the four classes of the cryo-EM maps (Fig. 4c), confirming that the UGGT FL possesses a large central cavity. In the cryo-EM images, the Trx1 and Trx4 as well as the N-terminal part of β-rich domains, which were tightly associated with each other in the crystal structure (Fig. 1a), were observed with variable spatial arrangements, whereas the Trx2 and Trx3 domains were more extensively dislocated, precluding unambiguous mapping of the Trx2 domain in these EM images. Similar to the negative-stain EM data (Fig. 4b), the cryo-EM data provide no interpretable density corresponding to the CAT domain in the EM map (Fig. 4c). On the basis of the EM data together with the SAXS, we concluded that UGGT possesses a high degree of motional freedom of the CAT domain with respect to the folding-sensor region, which also exhibits significant conformational variability due to the conformational dynamics of its modular structure.
Visualisation of the flexible modular structure of UGGT. We performed high-speed atomic force microscopy (HS-AFM) to characterise the dynamic nature of UGGT further. The AFM images of UGGT exhibited one larger lobe and one smaller one, with their dimensions concordant with the SAXS, cryo-EM and crystallographic data (Supplemental Fig. S10). When the N-terminally His 6 -tagged UGGT was immobilised onto a Ni 2+ -treated mica surface, the smaller lobe dynamically moved around the larger lobe, the latter being the less mobile (Fig. 5a and Supplemental Video S1). In contrast, when the C-terminally tagged construct was used, the position of the larger lobe dynamically fluctuated (Supplemental Video S2). These results clearly indicate that the observed larger and smaller lobes of UGGT correspond to the N-terminal folding-sensor region and the C-terminal catalytic domain, respectively. These observations were consistent with the bioinformatic-based prediction that the N- and C-lobes are connected via an unstructured linker (Fig. 1b) 12 . In the HS-AFM images, the relative positions of the N- and C-lobes were thus variable but not randomly distributed (Fig. 5b and Supplemental Fig. S11). The most frequently observed distance between the centres of the two lobes was estimated to be 75 Å (Fig. 5c). This distance was comparable to previous biochemical data indicating that UGGT can transfer glucose to N-glycans positioned at least 40 Å from the unstructured regions 7 .
Upon the intentional deformation of immobilised UGGT using the AFM tip, the N-lobe was disrupted into at least four domains ( Fig. 5d and Supplemental Video S3), consistent with the crystallographic data suggesting that UGGT N consists of the four Trx-like domains tightly associated with the β-rich domain (Fig. 1a). After removal of the force applied using the AFM tip, the disrupted N-lobe spontaneously recovered to the large globular particle, indicating the reversible nature of the domain assembly of the N-lobe (Supplemental Video S3).
Conclusion
Here, we characterised the overall structure and dynamic properties of UGGT using an integrative approach combining X-ray crystallography, SAXS, cryo-EM and HS-AFM. Our results reveal that UGGT has a modular structure in which the 35-kDa catalytic domain is tethered to the 125-kDa folding-sensor region. The N-terminal folding-sensor region is composed of the four Trx-like domains and the β-rich domain assembled into a C-shaped structure with structural plasticity, harbouring a central cavity displaying hydrophobic patches that putatively accommodates incompletely folded glycoproteins. These findings provide structural insights into the working mechanism whereby UGGT operates as a folding-sensor against a variety of glycoprotein substrates through its flexible modular structure possessing extended hydrophobic surfaces for the recognition of unfolded substrates.
Methods
Protein expression and purification. The artificial codon-optimised UGGT FL gene (residues 29-1480) from T. dupontii, a thermophilic fungus, was designed using the genomic DNA database (Talth1p4_002475, http://genome.fungalgenomics.ca) and purchased from Genscript (Japan). The UGGT FL and N-terminal folding-sensor region (UGGT N , residues 29-1142) genes were subcloned into the BamHI and SalI sites of a modified pCold-GST vector 12 . Expression plasmids containing N-or C-terminally His 6 -tagged UGGT FL genes were constructed using the inverse PCR method with KOD plus DNA polymerase (TOYOBO). Domain-specific Met-labelled UGGT N mutants were created by L36M/I89M/V118M (β-Trx1*), L313M/L355M/I383M (Trx4*), I448M/L752M/L808M (Trx2-3*) and I931M/L966M/I1038M (Trx4*-β) mutations for protein chain tracing of UGGT N crystal structure determination. A series of UGGT N fragments were prepared for epitope mapping of monoclonal antibodies directed against UGGT (see Supplementary information, Fig. S8). The expression and purification of these recombinant proteins were performed in accordance with a protocol used for the crystallisation of the third Trx-like domain of C. thermophilum as previously described 12 . The resultant N-or C-terminally His 6 -tagged UGGT FL proteins contained 'GSHHHHHHGSHM' or 'HHHHHH' sequences at the N-or C-terminus, respectively, after removing the GST tag.
The gene encoding the catalytic domain of UGGT (CAT, residues 1190-1480) was amplified by PCR and subcloned into the BamHI and SalI sites of a modified pCold-I vector (Takara Bio Inc.), in which the factor Xa site was replaced with the tobacco etch virus (TEV) protease recognition site. The CAT domain plasmid was introduced into Escherichia coli BL21-CodonPlus (DE3)-RIL (Agilent Technologies). The recombinant CAT domain was produced as an inclusion body in E. coli and subjected to oxidative refolding. The harvested cells were resuspended in buffer containing 50 mM Tris-HCl (pH 7.5) and 150 mM NaCl and lysed by sonication. The obtained inclusion bodies were extensively washed with the resuspension buffer containing 1 mM EDTA and 2% Triton X-100 and subsequently solubilised in 6 M guanidinium chloride, 50 mM Tris-HCl (pH 8.0) and 10 mM dithiothreitol. The solubilised protein was refolded by dilution (to a protein concentration of 0.2 mg/mL) in 50 mM Tris-HCl (pH 7.5), 10 mM CaCl 2 , 400 mM L-arginine, 5 mM reduced glutathione and 0.5 mM oxidised glutathione at 4 °C for 12 h. The unconcentrated refolded protein was dialysed against a large amount of buffer containing 20 mM Tris-HCl (pH 7.5), 2 mM CaCl 2 and 150 mM NaCl to remove excess L-arginine. This buffer exchange procedure at low protein concentration was critical for the successful refolding of the CAT domain. After the dialysis, the refolded His 6 -tagged CAT domain was purified and concentrated using cOmplete His-Tag Purification Resin (Roche). Dialysis buffers containing 10 and 500 mM imidazole were used for column washing and as an elution buffer, respectively. Next, the His 6 -tag on the CAT domain was removed by TEV protease and dialysed against 20 mM Tris-HCl (pH 8.0) and 2 mM CaCl 2 . Finally, the untagged CAT domain was purified on a Resource Q anion exchange column (GE Healthcare) with a 0-0.5 M NaCl gradient.
Crystallisation, X-ray data collection and structure determination. The SeMet-substituted CAT domain protein was concentrated to 4.0 mg/mL in 20 mM Tris-HCl (pH 8.0), 150 mM NaCl, 2 mM CaCl 2 and 5 mM UDP-Glc and used for crystallisation. Optimised crystals were obtained by a hanging-drop vapour diffusion method in 24% PEG 3350 and 100 mM Tris-HCl (pH 9.0) at 20 °C after a few days. For X-ray diffraction data collection, the crystals were cryoprotected with a crystallisation buffer containing 24% PEG 3350, 100 mM Tris-HCl (pH 9.0), 2 mM CaCl 2 , 5 mM UDP-Glc and 15% glycerol and flash-cooled in liquid nitrogen. The UDP-Glc-bound CAT domain crystal was soaked in the crystallisation buffer containing an excess amount of UDP (10 mM), giving rise to the UDP-bound complex. For crystallisation of UGGT N proteins, the SeMet-substituted UGGT N was concentrated to 10.0 mg/mL in 20 mM Tris-HCl (pH 8.0), 150 mM NaCl and 2 mM CaCl 2 . The UGGT N variants (β-Trx1*, Trx4*, Trx2-3* and Trx4-β*) were also dissolved and concentrated under the same conditions. Among the UGGT N variants, the Trx4-β* mutant was the only one for which a high-quality single crystal could not be obtained; all the remaining UGGT N proteins were successfully crystallised (under the conditions summarised in Tables S1-3). Complete data sets for the CAT domain and UGGT N were collected, indexed, integrated and scaled using MOSFLM 16 , HKL2000 17 and XDS 18 .
The crystals of the CAT domain complexed with UDP-Glc or UDP alone belonged to the space group P2 1 2 1 2 1 with one molecule per asymmetric unit and diffracted up to resolutions of 1.40 and 1.35 Å, respectively. The crystal structure of the UDP-Glc-bound CAT domain was solved by the single-wavelength anomalous dispersion method with a crystal of the SeMet-substituted protein, while the UDP-bound form was determined by molecular replacement using the UDP-Glc-bound structure as a search model (regarding crystallographic parameters and refinement statistics, see Tables S1-S4). The initial phases were determined with the SHELX C/D/E programs 19 . The obtained electron density map was clear enough to be interpreted and the initial coordinates were built automatically using ARP/wARP 20 . The obtained UGGT N crystals belonged to the apparent space group P6 2 22, and the structure was solved by the single- and multi-wavelength anomalous dispersion methods with SeMet- and Pt-labelled crystals, respectively (Supplemental Table S2). Based on the obtained electron density map in conjunction with the methionine marking approach, namely guided by anomalous signals from SeMet-labelled β-Trx1*, Trx2-3* and Trx4* crystals (Supplemental Table S2), the initial coordinates of UGGT N were manually built. However, merohedral twinning of UGGT N was suspected during refinement because the refined coordinates had an R work of 32% and an R free of 37% in the P6 2 22 space group, even at an advanced stage of model building. Therefore, the diffraction data were reprocessed in the lower-symmetry space group, and twinned refinement in the P3 2 12 space group eventually improved the refinement statistics dramatically, to R work /R free = 23.2%/27.8%. Manual model fitting to the electron density maps was performed using COOT 21 . The refinement procedure was performed with phenix.refine 22 and REFMAC5 23 .
The stereochemical quality of the final model was assessed using PROCHECK 24 . The molecular graphics were made using UCSF Chimera 25 and PyMOL (http://www.pymol.org/).

Small-angle X-ray scattering. The untagged forms of UGGT FL and UGGT N were used for the measurements of small-angle X-ray scattering (SAXS). The SAXS measurements were performed using the Nano-Viewer diffractometer system equipped with a MicroMax-007 X-ray generator (RIGAKU), with a Cu target (λ = 1.5418 Å) and PILATUS 200 K (DECTRIS). All samples were prepared in 10 mM Tris-HCl (pH 7.7) containing 150 mM NaCl and 2 mM CaCl 2 . The scattering profiles were collected from sample solutions at eight different concentrations ranging from 2 to 5 mg/mL at 23 °C. Ovalbumin (45 kDa; Sigma-Aldrich) was also measured and used as a standard protein to estimate the apparent molecular weight of UGGT. The exposure time was 30 min for each sample. The observed two-dimensional image data were circularly averaged and then the profile of the buffer solution was subtracted. The concentration dependence of the Rg and the forward scattering intensity normalised by the weight per volume (mg/mL), I(0)/conc., were used for the SAXS analysis, in which the small-angle regions (0.0129-0.0285 Å −1 for UGGT FL and 0.0129-0.0313 Å −1 for UGGT N ) were subjected to Guinier plotting (Supplemental Figs S6 and S7). The P(r) functions of UGGT FL and UGGT N were calculated from the SAXS profile extrapolated to a protein concentration of zero by using GNOM software 26,27 (see Table S5 for the obtained parameters).
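The Guinier plotting described above extracts Rg and I(0) from a linear fit of ln I(q) against q² over the low-q region, via I(q) = I(0)·exp(−q²Rg²/3). A minimal stdlib-only sketch with a synthetic, noise-free profile (the 45-Å Rg reported for UGGT FL is used as ground truth here; this is not the measured data, and real analyses use dedicated software such as GNOM):

```python
from math import exp, log, sqrt

def guinier_fit(q, intensity):
    """Estimate Rg and I(0) from the Guinier approximation
    I(q) = I(0) * exp(-q^2 * Rg^2 / 3) by a least-squares line
    through the points (q^2, ln I(q)) over the supplied low-q range."""
    x = [qi * qi for qi in q]
    y = [log(i) for i in intensity]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    return sqrt(-3.0 * slope), exp(intercept)

# Synthetic profile for a particle with Rg = 45 A, sampled over the same
# small-angle window used for UGGT FL (q*Rg stays below the ~1.3 limit
# within which the Guinier approximation is considered valid).
q = [0.0129 + i * (0.0285 - 0.0129) / 19 for i in range(20)]
i_q = [1000.0 * exp(-(qi ** 2) * 45.0 ** 2 / 3.0) for qi in q]
rg, i0 = guinier_fit(q, i_q)
print(round(rg, 1), round(i0))  # → 45.0 1000
```

With noise-free data the fit recovers the input Rg exactly; with experimental data the fitted range must be restricted so that q·Rg ≲ 1.3, as done for the small-angle regions quoted above.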
Ab initio shape modelling was performed using DAMMIN 28 without structural restrictions such as point symmetry and particle anisometry. The 10 independently calculated models were averaged using DAMAVER 29 . Using the average model as a start model, we finally refined the shape model using DAMMIN. The refinement procedures were independently performed three times to check the reproducibility. The representative model is shown in Fig. 3a and b. The structural model of UGGT N obtained from the crystallographic analysis was superposed onto the shape models of UGGT FL and UGGT N by using SUPCOMB 29 .

Electron microscopy. For the cryo-EM, untagged UGGT FL protein was used. An aliquot of 2.5 μL was applied on R1.2/1.3 holey carbon film on a molybdenum grid (Quantifoil Micro Tools GmbH) pre-treated by glow-discharging. Plunge-freezing of the specimen was performed with Vitrobot Mark-IV (FEI Company). The frozen grid was kept at liquid nitrogen temperature and loaded into a JEM2200FS electron microscope equipped with a 200-kV field emission electron source and an omega-type energy filter (JEOL Ltd.) using a Gatan 914 cryo-specimen holder (Gatan Inc.). A total of 297 images were collected on a DE20 direct detector camera (Direct Electron LP) at a detector magnification of 93,023 with an energy slit width of 20 eV using a low-dose mode. The image size was 0.69 Å per pixel on the camera. The images were processed with Relion 1.4 software 30 after motion correction using the DE_process_frames.py script provided by the manufacturer. In Relion, the contrast transfer function was estimated after binning the images twice. Then, 33,195 particle images were collected from the 297 images using the implemented auto-picking programme. The particle images were subjected to 2D classification after sorting with cross-correlation coefficients.
Three-dimensional structures were reconstructed from good 2D classes consisting of 17,762 particle images, using initial models of UGGT generated from subtomogram averaging of electron tomography performed with the same data acquisition conditions as mentioned above. For domain mapping, specimens of untagged UGGT FL protein were reacted with an excess amount of the Fab fragment of the UGGT-specific monoclonal antibody (regarding the preparation and characterisation of this antibody, see the Supplemental methods and Fig. S8) for 2 hours at 4 °C. The antibody-reacted specimens were applied onto lab-made carbon-coated EM grids that had been glow-discharged beforehand. After removing excess solution with filter paper, the grids were stained with 2% uranyl acetate solution and dried after removing the staining solution with filter paper. The stained grids were observed, and the images were collected and processed under the same data acquisition conditions and with the same procedures as mentioned above.
High-speed atomic force microscopy. For HS-AFM of UGGT FL , N- or C-terminally His 6 -tagged UGGT proteins were used. We used a laboratory-constructed high-speed AFM apparatus 31 with cantilevers (7-μm long, 2-μm wide and 90-nm thick) operated at room temperature. Typical values of the spring constant, resonant frequency and quality factor of the cantilever in aqueous solution were ~0.2 N/m, ~0.8 MHz and ~2, respectively. In the AFM imaging, the free and set-point oscillation amplitudes were approximately 1.5 nm and 90% of the former, respectively. The N- or C-terminally His 6 -tagged UGGT proteins were immobilised onto a Ni 2+ -treated mica surface, with the His 6 tag binding to the Ni 2+ ions on the negatively charged mica, in 10 mM Tris-HCl (pH 7.5), 150 mM NaCl and 2 mM CaCl 2 . The HS-AFM images were taken at a frame rate of 10 fps.
Note
While this paper was under the peer review process, structural characterisation was reported for UGGT FL from three different species, Drosophila melanogaster (fly), Penicillium chrysogenum (mesophilic fungus) and Chaetomium thermophilum (thermophilic fungus), by two independent research groups 32,33 . The SAXS and EM data of fly and mesophilic fungal UGGTs showed similar C-shaped structures with conformational heterogeneity, as observed in our thermophilic fungal UGGT. Regarding the other thermophilic fungal UGGT, crystal structures of UGGT FL at a resolution of 2.8-4.3 Å with proteolytic cleavage at the linker connecting the N-terminal folding-sensor region and the CAT domain were determined, in addition to SAXS-and EM-derived structural models.
Coordinates and structure factors of our crystal structures of UGGT have been deposited in the PDB: UGGT N (5Y7O) and CAT domain (UDP-Glc-bound form [5H18] and UDP-bound form [5Y7F]).
"year": 2017,
"sha1": "07975034f35f521c143f19f268279770a43ae85a",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-12283-w.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9de4fe63858933e6e485b8cf062c5dc85d52c82b",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Perioperative Intravenous Amino Acid Infusion in Major Urologic Surgery
Post-operative acute kidney injury (PO-AKI) is a serious complication that may occur after major abdominal surgery. Perioperative intravenous administration of amino acids (AAs) has been shown to improve kidney function and may help prevent PO-AKI. The aim of this study was to establish whether the perioperative infusion of AAs may reduce the incidence of PO-AKI in patients undergoing major urological minimally invasive surgery. Of a total of 331 patients, the first 169 received perioperative crystalloid fluids and the following 162 received perioperative AA infusions. The incidence of PO-AKI was much higher in the crystalloid group than in the AA group (34 vs. 17, p = 0.022), owing to a lower incidence of KDIGO stage I and II injury in the AA group (14 vs. 30, p = 0.016). Patients in the AA group who developed PO-AKI presented more risk factors than those who did not (2 (2-4) vs. 1 (1-2), p = 0.031), with a ROC-derived cut-off of 3 risk factors (p = 0.007, sensitivity 47%, specificity 83%). Hospital length of stay was longer in the crystalloid group (p < 0.05), with consequent savings in hospital costs for the AA group. Perioperative AA infusion may help reduce the incidence of PO-AKI after major urological minimally invasive surgery.
Introduction
Post-operative acute kidney injury (PO-AKI) is defined as a decrease in kidney function occurring within seven days from surgery and is one of the most frequent post-operative complications after major abdominal surgery. In major laparoscopic urologic surgery, PO-AKIs have been described to occur in 15.5% of cases [1]. PO-AKIs, diagnosed according to the KDIGO classification [2,3], are associated with an increased risk of short-term adverse outcomes, including dialysis, cardiovascular events, lung injury, delirium and infection; all these adverse events can lead to increased long-term morbidity and mortality [4]. PO-AKIs commonly have a multifactorial etiology, with pre-operative, intra-operative and post-operative factors increasing PO-AKI risk. The most important pathophysiologic mechanisms responsible for PO-AKIs involve alterations in kidney microcirculation, an increased oxygen demand, hypoperfusion and systemic inflammatory reactions to surgical intervention [5,6].
In recent years, minimally invasive surgeries such as laparoscopic and robotic surgery have been increasingly used by urologists worldwide [7]. Depending on the type of surgery, these two approaches are mostly performed with the patient lying in the Trendelenburg position and by applying intra-abdominal pressure throughout the induction of pneumoperitoneum. The latter condition has been recently reported as a risk factor for PO-AKI [8]. Reducing the intra-abdominal pressure and the time in the Trendelenburg position has been adopted to reduce the risk of post-operative complications, especially PO-AKIs [1,9-11].
The occurrence of PO-AKIs is strongly associated with the risk of mortality, co-occurrence of other post-operative complications, increased length of hospital stay and progressive chronic kidney disease [4]. This, together with the limited treatment options for PO-AKI, demands great attention to identifying means of preventing it [5].
Protein intake in animal models has been proven to be a protective factor for AKIs due to a direct increase in renal blood flow [12]. Moreover, plasma amino acid levels are able to stabilize renal hemodynamics by afferent artery dilatation, which may increase the glomerular filtration rate by 25 to 60% [13]. In addition, peri-operative supplementation of amino acids in patients undergoing cardiac surgery has been proven to have beneficial effects on AKIs by improving the estimated glomerular filtration rate (eGFR) and urine output [14].
The aim of the present study was to evaluate whether the perioperative administration of intravenous amino acids can reduce the occurrence of PO-AKIs in patients undergoing major laparoscopic urological surgery.
Study Design
This was a before-after clinical study conducted at the Galliera Hospital of Genoa from January 2022 to May 2023. All patients signed an informed consent form on personal data storage, and the local ethics committee approved the study (7/2019 id: 4378, amendment 2).
Three hundred and thirty-one patients older than 18 years undergoing any major urological minimally invasive surgery were consecutively studied. The presence of pre-operative severe chronic kidney disease (eGFR < 35 mL/min/1.73 m 2 ), regardless of the main cause, was the only exclusion criterion. Patients who developed a PO-AKI due to obstructive uropathy were also excluded.
Standard laparoscopies were performed on 164 patients and robot-assisted surgeries on 166 patients, following the indications of the European Association of Urology (EAU) guidelines. All patients were treated following a standardized multidisciplinary Enhanced Recovery After Surgery (ERAS) protocol as previously described [1]. Patients undergoing surgery from January to September 2022 (crystalloid group, n = 169) were treated with standard intravenous fluid administration, and a group of them had also been included in a previous study [1]. Those undergoing surgery from October 2022 to June 2023 (AA group, n = 162) received infusion of intravenous amino acids. Patients of both groups were treated by the same surgical team.
For each patient, the following anamnestic data were recorded: age, sex, body mass index (BMI), American Society of Anesthesiologists (ASA) score, preoperative serum creatinine and hemoglobin. Furthermore, the following comorbidities were evaluated to identify the presence of risk factors for developing PO-AKI: age > 75 years, female sex, hypertension, hyperlipidemia, liver diseases, peripheral vascular diseases, smoking history, diabetes, anemia and cardiovascular diseases [15]. The cumulative number of risk factors per patient was calculated. The data collected intraoperatively were duration of surgery, estimated blood loss, mean arterial pressure and fluid administration.
Intraoperative Procedure
At admittance to the operating room, after positioning of a large-caliber peripheral venous access (16 G), all patients in the AA group received a continuous infusion of Sintamin 10% at 100 mL/h, started before anesthesia induction. The intravenous infusion of Sintamin was continued at the same rate for the entire duration of surgery and was reduced to 50 mL/h after recovery from anesthesia, continuing throughout post-operative day 1. Additional crystalloids were administered as clinically needed, depending on the patient's hemodynamics and intraoperative blood loss, always following the restrictive goal-directed fluid therapy imposed by the ERAS protocol [1].
The crystalloid group was managed with the same restrictive goal-directed fluid therapy imposed by the ERAS protocol, using mostly Ringer's lactate as the intravenous crystalloid. Post-operatively, the patients of the crystalloid group continued the infusion of Ringer's lactate at 50 mL/h until the end of post-operative day 1.
All patients started drinking water 2 h after extubation and were allowed free oral water intake 6 h later.
The intraoperative abdominal pressure induced by pneumoperitoneum was set at 10 mmHg and maintained using an AirSeal Intelligent Flow System ® (ConMed, Utica, NY, USA) or a LexionSystem ® (Lexion Medical, St Paul, MN, USA) throughout the whole procedure, in both laparoscopic and robot-assisted surgery.
Outcome Measures
Primary outcomes were the incidence and gravity of PO-AKI, defined according to the Kidney Disease Improving Global Outcomes criteria based on creatinine values and urinary output variations [2].
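As a rough illustration of how the creatinine arm of the KDIGO criteria referenced here operates, the staging can be sketched as a small function. This is a simplified sketch: the urine-output criteria and the renal-replacement-therapy criterion for stage 3 are omitted, and the thresholds are the generic KDIGO ones rather than anything specific to this study.

```python
def kdigo_stage_by_creatinine(baseline_scr, current_scr):
    """Return the KDIGO AKI stage (0-3) from serum creatinine in mg/dL.
    Implements only the creatinine arm of the KDIGO criteria; the
    urine-output arm and the RRT criterion for stage 3 are omitted."""
    ratio = current_scr / baseline_scr
    if ratio >= 3.0 or current_scr >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or (current_scr - baseline_scr) >= 0.3:
        return 1
    return 0

print(kdigo_stage_by_creatinine(1.0, 1.4))  # → 1 (absolute rise >= 0.3 mg/dL)
print(kdigo_stage_by_creatinine(1.0, 2.5))  # → 2 (2.0-2.9x baseline)
```

Note that stage 1 can be reached either by a relative rise (1.5-1.9 times baseline) or by an absolute rise of at least 0.3 mg/dL, which is why both conditions appear in the same branch.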
Secondary outcomes were admittance to the intensive care unit, in-hospital length of stay, 30-day re-admission and any kind of complication. The safety of amino acid administration was evaluated by monitoring for adverse reactions and for an increase in intra-operative blood loss in partial nephrectomies. A cost-effectiveness assessment was conducted by comparing administration of amino acids with Ringer's lactate.
Statistical Analysis
By assuming an effect size = 0.20 (Cohen's h, i.e. the difference between the arcsine-square-root-transformed PO-AKI proportions of the two patient groups before/after the change in treatment), alpha = 0.05 and power (1 − Beta) = 0.8, a minimum overall sample size of 196 patients was required.
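The quoted sample-size figure follows the standard normal-approximation formula for comparing two proportions at Cohen's effect size h. A stdlib-only sketch of the calculation with the stated parameters (this reproduces the textbook formula, not necessarily the exact software the authors used):

```python
from statistics import NormalDist
from math import asin, sqrt

def cohens_h(p1, p2):
    """Cohen's effect size for two proportions:
    h = 2*asin(sqrt(p1)) - 2*asin(sqrt(p2))."""
    return 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))

def sample_size(h, alpha=0.05, power=0.8):
    """Normal-approximation sample size for a two-sided test of two
    proportions at effect size h:
    n = (z_{1-alpha/2} + z_{power})**2 / h**2."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) ** 2 / h ** 2

n = sample_size(0.20)
print(round(n))  # → 196
```

With h = 0.20, alpha = 0.05 and power = 0.8, the formula evaluates to approximately 196, matching the minimum sample size quoted in the text.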
Continuous variables are expressed as medians with interquartile ranges (IQRs) and categorical variables as percentages (%). The Shapiro-Wilk test was used to evaluate the normality of the distribution of continuous variables. The Mann-Whitney U-test or Fisher's exact test was used to evaluate differences between groups for continuous or categorical variables, respectively. The cumulative probability of freedom from PO-AKI was calculated.
Logistic regression was used to identify independent variables potentially associated with PO-AKI assumed as the dependent variable.
Statistical significance was assumed at a two-tailed p < 0.05. Statistical analyses were performed using SPSS, version 27.0 (SPSS, Chicago, IL, USA).
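The analyses were run in SPSS; purely as an illustration of the categorical comparison above, a two-sided Fisher's exact p-value for a 2×2 table can be computed from first principles with the point-probability method (the convention also used by R's fisher.test; other software may differ slightly in its two-sided definition):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]],
    summing all hypergeometric outcomes no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def prob(x):  # hypergeometric P(top-left cell = x) given fixed margins
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = prob(a)
    lo = max(0, col1 - (n - row1))   # smallest feasible top-left count
    hi = min(row1, col1)             # largest feasible top-left count
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))
```

For example, comparing 34 PO-AKIs out of 169 patients with 17 out of 162 yields a p-value below 0.05, consistent in direction with the group comparison reported in the Results.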
Results
Patients' characteristics, type of surgery and preoperative data did not differ significantly between groups, except for sex (p = 0.028). There were no differences in the duration of surgery, intraoperative mean arterial pressure or intraoperative blood loss between groups (Table 1). The total number of PO-AKIs was significantly higher in the crystalloid than in the AA group (34 vs. 17, p = 0.022), with a significantly lower incidence of KDIGO 1 and 2 PO-AKI in the AA than in the crystalloid group (14 vs. 30, p = 0.016), whereas no difference was observed in the incidence of KDIGO 3 PO-AKIs (3 vs. 5, p = 0.724). Considering the overall population of the crystalloid and AA groups, no difference was found in the cumulative number of PO-AKI risk factors (1 (1-2) vs. 1 (1-2), p = 0.310). The median number of risk factors was not significantly different between the patients who developed PO-AKI and those who did not (2 (1-2) vs. 1 (1-2), p = 0.196) (Table 2). Within the crystalloid group, there was no significant difference in the median number of risk factors between patients who developed PO-AKI and those who did not (1 (0-2) vs. 1 (1-2), p = 0.995). In contrast, within the AA group, the patients who developed PO-AKI had a significantly higher number of risk factors than those who did not (2 (2-4) vs. 1 (1-2), p = 0.031). The best cut-off for the number of risk factors to identify PO-AKI, determined by ROC analysis, was ≥3 (AUROC: 0.701, p = 0.007, CI: 0.553-0.850, accuracy 55%, sensitivity 47%, specificity 83%).
In females, the occurrence of PO-AKI was not significantly different between treatment groups (4/21 vs. 3/32, p = 0.477). Direct logistic regression was used to assess the impact of a number of factors on the likelihood of developing PO-AKI. The model contained six independent variables (intravenous amino acid infusion, sex, ASA, duration of surgery, estimated blood loss and intra-operative fluid administration). The full model containing all predictors was statistically significant, χ2 = 6.553, p = 0.010. Only intravenous amino acid infusion made a unique statistically significant contribution to the model, with an OR = 0.480 (CI: 0.250-0.923, p = 0.028), controlling for all other factors in the model (Table 3). In-hospital length of stay was higher in the crystalloid group than in the AA group (respectively, 58 vs. 34 days, p = 0.007, and 4 (3-5) vs. 4 (4-6) days, p = 0.034). The cumulative number of days of hospitalization was 923 for the 169 patients in the crystalloid group and 752 for the 162 patients in the AA group, a reduction of 171 days and a net estimated saving in hospital costs of EUR 68,400 over eight months. These savings clearly exceeded the additional cost of administering amino acids instead of crystalloids during hospitalization, which was estimated at about EUR 1156 for the whole 8-month period (based on a cost per unit of EUR 0.40 for Ringer's lactate and EUR 2.78 for Sintamin). Furthermore, considering the observed median hospital stay of 4 days, the reduction in the cumulative duration of hospitalization by 171 days may have allowed 42 more patients to undergo oncological surgery over the same time period.
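The savings arithmetic above can be reconstructed in a few lines. Note that the per-bed-day cost of EUR 400 is not stated explicitly in the text; it is inferred from EUR 68,400 / 171 days, so it is an assumption of this sketch rather than a reported figure.

```python
# Hedged reconstruction of the savings arithmetic reported in the Results.
days_crystalloid = 923   # cumulative hospital days, 169 crystalloid patients
days_aa = 752            # cumulative hospital days, 162 AA patients

saved_days = days_crystalloid - days_aa          # 171 bed-days saved
cost_per_bed_day = 68400 / saved_days            # EUR 400/day, inferred
saving_eur = saved_days * cost_per_bed_day       # EUR 68,400

median_stay_days = 4
extra_patients = saved_days // median_stay_days  # ~42 additional surgeries
```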
No differences were found in the 30-day readmission rate; all readmissions were due to surgical complications (urinary fistula, intra-abdominal blood-serum collection, urinoma, lymphocele).
Discussion
The main findings of this before-after clinical study are that the perioperative administration of amino acids reduced (1) the incidence of mild-to-moderate PO-AKI and (2) the in-hospital length of stay and its related costs.
A possible explanation of our results could be the normal hyperemic response of the kidneys to a protein load, which enhances the elimination of nitrogenous wastes. Indeed, eating a high-protein meal is known to have a marked influence on renal perfusion by causing vasodilatation of the afferent arterioles and a reduction in vascular resistance through local receptors (the glutamate N-methyl-D-aspartate receptor) and humoral factors (nitric oxide, insulin, glucagon, prostaglandins, etc.), ultimately increasing renal blood flow and the estimated glomerular filtration rate (eGFR) by up to 35% from baseline [2,13,16].
In agreement with these assumptions, studies in animal models showed that the increase in renal blood flow after a protein meal can protect the kidneys against acute ischemic insults [7,8]. Likewise, a randomized controlled trial in 22 adult cardiac surgery patients demonstrated that an amino acid infusion started immediately after surgery can significantly increase renal blood flow and the GFR and optimize renal oxygen consumption in this population [17].
In the present study, a significantly lower number of PO-AKIs was detected in the AA than in the crystalloid group, owing to a reduced occurrence of mild-to-moderate AKIs in KDIGO classes 1 and 2.
We interpreted this result by considering that some of our patients possibly had pre-existing subclinical renal damage at hospital admittance, with a reduced renal functional reserve, despite the absence of any rise in baseline serum creatinine [18]. Such a condition may increase the susceptibility of the kidneys to new insults and PO-AKI.
The perioperative increase in renal blood flow, and consequently in the eGFR, induced by amino acid infusion is only possible in kidneys with a sufficient residual recruitable nephron mass. In fact, patients with fewer than three risk factors seemed to benefit more from perioperative amino acid infusion. Therefore, in the case of severe pre-existing structural nephropathy, amino acid infusion may not be able to reduce the incidence of PO-AKI. This interpretation is supported by the reduction in the incidence of KDIGO 1 and 2, but not KDIGO 3, PO-AKI in the AA group.
There was a statistically significant difference in sex distribution between the crystalloid and AA groups. Recent animal studies have focused on sex differences in AKI and, although with some conflicting data, have described an increased susceptibility to AKI in males due to testosterone, which increases endoplasmic reticulum stress [18,19].
A reduction in PO-AKI leads to a faster enhanced recovery after surgery and a shorter in-hospital length of stay, which may translate into global economic savings and the possibility of increasing the monthly number of surgeries.
Finally, it should be emphasized that amino acid infusion increases renal perfusion without increasing intraoperative blood loss, even in off-clamp partial nephrectomies.
The present study has limitations. First, it was a single-center, before-after design without randomization. Second, consecutive patients underwent major urologic minimally invasive surgery without considering the impact of the different types of surgery on renal function; nevertheless, the absolute and relative numbers of surgery types were similar between groups, with the exception of radical prostatectomy. Third, the low occurrence of KDIGO 3 PO-AKI might have affected the results. Fourth, this study was not designed to investigate the underlying mechanisms of the efficacy of intravenous amino acid infusion, and basic experiments and larger clinical studies are needed to support our results. Fifth, the lack of determination of fractional sodium excretion and urine sediments might have obscured chronic kidney conditions. Sixth, the effects of medications affecting the renin-angiotensin system were not investigated.
Conclusions
The results of the present study suggest that perioperative intravenous amino acid infusion in major minimally invasive urological surgery is safe and might be helpful in reducing the incidence of PO-AKI, the hospital length of stay and the related hospital costs, thus potentially shortening pre-operative waiting lists. Further randomized trials focusing on selected surgical interventions are needed to better define the efficacy and safety of perioperative intravenous amino acid infusion for the prevention of PO-AKIs and other complications.

Author Contributions: C.B., A.D.D., A.B. and F.D. contributed to the study design, data collection and analysis, and the writing of the manuscript; F.G., F.M.V., M.M., M.E. and F.C. (Fabio Campodonico) contributed to the data collection, data analysis and reading and checking of the manuscript; G.C. contributed to the graphical abstract and writing and checking the manuscript; A.C., F.C. (Francesco Corradi) and C.I. contributed to the study design, data analysis, results interpretation and writing of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Regione Liguria (7/2019 id number: 4378, amendment 2).

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Table 1 .
Baseline characteristics of patients and pre-operative and intra-operative data.
BMI, body mass index; ASA, American Society of Anesthesiologists; eGFR, estimated glomerular filtration rate. Data are medians with interquartile ranges (IQRs) or absolute numbers with percentages (%).
Table 2 .
Outcomes and post-operative data.
KDIGO, Kidney Disease: Improving Global Outcomes; eGFR, estimated glomerular filtration rate.Data are medians with interquartile range (IQR) or absolute numbers with percentage (%), unless otherwise specified.
Table 3 .
Multiple logistic regression analysis of predictors of PO-AKI.